Making reproducible research as natural as breathing

Peter Charlton is a PhD student at King’s College London working as part of the Hospital of the Future (HotF) project. The overall aim of the HotF project is to provide early identification of hospital patients who are deteriorating. Peter’s work focuses on using wearable sensors to continuously assess patients’ health.

One of the key aims of the HotF project is to develop a technique to continuously monitor a patient’s “respiratory rate”: how often they breathe. Respiratory rate often changes early in the progression of a deterioration, giving advance warning of a severe event such as a heart attack. However, it is currently measured by hand, by counting the number of times a patient breathes in a set period of time. This approach is time-consuming, inaccurate, and only provides intermittent measurements. The alternative approach, which I’m working on, is to estimate respiratory rate from a small, unobtrusive, wearable sensor.

Wearable sensors are currently routinely used to monitor heart rate and blood oxygenation levels. It turns out that the signals which provide these measurements are subtly influenced by respiration, as demonstrated below. If these subtle changes can be extracted reliably, then we could monitor respiratory rate “for free”, without the need for any additional sensors. This may provide all-important information on changes in a patient’s health, allowing clinicians to identify deteriorating patients earlier.

[Figure: a heart rate signal recorded by a wearable sensor]

The heart rate is clearly visible in this signal since each spike corresponds to a heart beat. The spikes also vary in height with each of the four breaths. These subtle changes can be used to estimate respiratory rate.
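To make that idea concrete, here is a minimal sketch of one common family of techniques: detect each heart-beat spike, treat the series of spike heights as a surrogate respiratory signal, and take its dominant frequency. This is a generic illustration in Python, not one of the published methods, and the function name and parameter choices are my own:

```python
import numpy as np
from scipy.signal import find_peaks

def estimate_resp_rate(signal, fs):
    """Estimate respiratory rate (breaths/min) from the amplitude
    modulation of heart-beat spikes in a pulse signal sampled at fs Hz."""
    # 1. Detect heart-beat spikes (min spacing assumes heart rate < 180 bpm)
    peaks, _ = find_peaks(signal, distance=int(0.33 * fs))
    t_peaks = peaks / fs
    amps = signal[peaks]
    # 2. Resample the irregular spike-height series onto a uniform 4 Hz grid
    fs_r = 4.0
    t_uniform = np.arange(t_peaks[0], t_peaks[-1], 1 / fs_r)
    resp = np.interp(t_uniform, t_peaks, amps)
    resp -= resp.mean()          # remove the DC offset
    # 3. Pick the dominant frequency in a plausible respiratory band (4-45 bpm)
    freqs = np.fft.rfftfreq(len(resp), 1 / fs_r)
    power = np.abs(np.fft.rfft(resp)) ** 2
    band = (freqs >= 4 / 60) & (freqs <= 45 / 60)
    return freqs[band][np.argmax(power[band])] * 60
```

Fed a synthetic pulse signal whose spike heights are modulated at 15 breaths/min, this returns an estimate close to 15. Real signals are far messier, which is exactly why so many competing refinements exist.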

So what’s all this got to do with reproducible research? Well, over the past few decades over 100 papers have been written describing methods for estimating respiratory rate electronically from signals that are already monitored by wearable sensors. If you read them (it takes a long time), you find that hundreds of methods have been described. The key questions are: which method is the best, and is it good enough to use in clinical practice? Answering these questions can be a daunting task given how many different methods there are. Very few of the methods are publicly available, so to answer these questions you’d have to implement each of the methods yourself. Even once you have done this, you’d need to try them out on some data. Collecting this data is no easy task. Altogether, reproducing scientists’ previous work on this problem is quite difficult.
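At its core, such a comparison boils down to running every candidate method over the same recordings and ranking the methods by their error against the manually counted reference rate. A sketch of that harness (the method and dataset layout here are hypothetical, invented for illustration, not the actual assessment framework):

```python
import numpy as np

def benchmark(methods, dataset):
    """Rank respiratory-rate estimation methods by mean absolute error
    (breaths/min) against a manually counted reference rate.

    methods: {name: callable(signal, fs) -> estimated rate}
    dataset: list of records, each with "signal", "fs", and "rr_ref" keys
    """
    mae = {}
    for name, estimate in methods.items():
        errors = [abs(estimate(rec["signal"], rec["fs"]) - rec["rr_ref"])
                  for rec in dataset]
        mae[name] = float(np.mean(errors))
    # Best (lowest-error) method first
    return sorted(mae.items(), key=lambda kv: kv[1])
```

The hard part isn’t this loop; it’s implementing the hundreds of methods that plug into it and collecting reference data to test them on.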

I’m hoping that this won’t be such a problem in the future. We have recently implemented many of the methods, collected a benchmark dataset on which to test the methods, and reported the results. All of this is publicly available. What’s more, you can download it all for free, from the methods, to the data, to the article describing the results. So in a few clicks you can catch up, reproduce our research, and start making progress yourself, even producing methods like this:

[Animation: a respiratory signal estimated from the heart rate signal]

Well, nearly … I’ve written a tutorial on the methods, which is due to be published in a textbook soon. This work can be reproduced exactly. Since then, we have extended the range of publicly available resources by adding more methods and a new benchmark dataset. This most recent work can’t be reproduced exactly, since we had to make a few changes before making it publicly available. I intend to make future work on this topic fully reproducible so that researchers can build on our work. Who knows, perhaps this will contribute towards earlier identification of deteriorating patients in the future.

Many robotic arms make light work

[Photo: David Lloyd]

Dr David Lloyd is a Clinical Research Fellow at King’s College London, working as part of the iFIND project. The overall aim of the intelligent Fetal Imaging and Diagnosis (iFIND) project is to combine innovative technologies into a clinical ultrasound system that will lead to a radical change in the way fetal screening is performed.

One of the goals of the iFIND project is to produce an antenatal screening system that uses multiple ultrasound probes at the same time. There are lots of potential advantages to this – for example, we could combine the images from two probes to see more of the baby at once, or provide a more detailed picture of one part of the baby. With iFIND, our hope is to have several separate 3D ultrasound probes working simultaneously, giving us the opportunity to see more of the baby, in more detail, than ever before.

The problem is, how do we control a number of ultrasound probes at the same time? I’ve yet to meet anyone who can scan with two probes at the same time, and several people trying to scan one patient sounds like a bit of a crowd! There is a solution though, and it’s something the team here at iFIND are working hard to develop: robotic arms.

Sounds pretty cool doesn’t it? Get robots to do the scans! But let’s stop and think about this for a minute. We need to make a robotic arm that can not just hold an ultrasound probe, but can twist, flex, rotate and extend, just like a human arm, to get all the views necessary to visualise the baby. Then we need to give it “eyes”: something to tell it not just what it is seeing now, but where and how to move to see other parts of the baby. It also needs to know exactly how hard to press, and we need to make sure it has thorough safety mechanisms built in. Perhaps it’s a tougher challenge than it sounds.

David with Jackie Matthew, sonographer & research training fellow, each holding a probe to simultaneously scan a phantom fetus

However, as I’ve learnt, no problem is insurmountable for the team at iFIND, and indeed our dedicated robotics group are designing an arm that can do just that. The first step is to record in detail what humans do when they perform a scan, and that’s exactly what we do with our participants. Each dedicated iFIND ultrasound scan we perform records not only the imaging data, but also the exact position, force and torque (twisting force) of the ultrasound probe throughout the scan.
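A single record from such a scan might look something like this. The field names and layout below are hypothetical, made up for illustration; the iFIND team’s actual data format isn’t described here:

```python
import math
from dataclasses import dataclass

@dataclass
class ProbeSample:
    """One time-stamped reading from an instrumented ultrasound probe.
    (Hypothetical field names, for illustration only.)"""
    t: float                        # seconds since the scan started
    position: tuple[float, ...]     # probe tip (x, y, z), metres
    orientation: tuple[float, ...]  # probe orientation quaternion (w, x, y, z)
    force: tuple[float, ...]        # contact force (Fx, Fy, Fz), newtons
    torque: tuple[float, ...]       # twisting force (Tx, Ty, Tz), newton-metres

    def force_magnitude(self) -> float:
        """Total contact force pressed into the skin, in newtons."""
        return math.sqrt(sum(f * f for f in self.force))
```

A sequence of samples like this, recorded many times per second alongside the images, is what would let a robotic arm replay (and eventually adapt) the motions and pressures a human sonographer uses.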

The video below shows an example: on the left, we can see how the sonographer is moving the ultrasound probe across the abdomen of one of our volunteers; the colours under the probe show how much pressure they applied to the skin. The right panel shows the ultrasound images so we know exactly what they could see at the time.

[Video: probe force map across the abdomen (left) and the corresponding ultrasound images (right)]

We hope to collect information from all 500 of our participants, and will use it to instruct the robotic arms how to perform the ultrasound scans automatically, just like a person would.

Another problem the team have to think about is far simpler, but perhaps just as important: aesthetics. The arms we design need to look and feel just as gentle and safe as we are designing them to be. So whilst we are collecting all the important data to help develop the technology, we are also learning from participants, asking how they would feel being scanned by a robotic arm rather than a person, and what we could do to make them more comfortable with the idea.

So: our goal is to produce a robotic arm that has the dexterity and sensitivity of a human being, knows how to perform a fetal ultrasound scan (well, actually several of them), and doesn’t look scary. And they also have to talk to each other.

Maybe we’ll leave that for another blog…

Read previous posts about the iFIND project written by David Lloyd.