
Dr David Lloyd is a Clinical Research Fellow at King’s College London, working as part of the iFIND project. The overall aim of the intelligent Fetal Imaging and Diagnosis project is to combine innovative technologies into a clinical ultrasound system that will lead to a radical change in the way fetal screening is performed.
One of the goals of the iFIND project is to produce an antenatal screening system that uses multiple ultrasound probes at the same time. There are lots of potential advantages to this – for example, we could combine the images from two probes to see more of the baby at once, or provide a more detailed picture of one part of the baby. With iFIND, our hope is to have several separate 3D ultrasound probes working simultaneously, giving us the opportunity to see more of the baby, in more detail, than ever before.
The problem is, how do we control a number of ultrasound probes at the same time? I’ve yet to meet anyone who can scan with two probes at the same time, and several people trying to scan one patient sounds like a bit of a crowd! There is a solution though, and it’s something the team here at iFIND are working hard to develop: robotic arms.
Sounds pretty cool, doesn’t it? Get robots to do the scans! But let’s stop and think about this for a minute. We need to make a robotic arm that can not only hold an ultrasound probe, but twist, flex, rotate and extend, just like a human arm, to get all the views necessary to visualise the baby. Then we need to give it “eyes”: something to tell it not just what it is seeing now, but where and how to move to see other parts of the baby. It also needs to know exactly how hard to press, and we need to make sure it has thorough safety mechanisms built in. Perhaps it’s a tougher challenge than it sounds.

However, as I’ve learnt, no problem is insurmountable for the team at iFIND, and indeed our dedicated robotics group are designing an arm that can do just that. The first step is to record in detail what humans do when they perform a scan, and that’s exactly what we do with our participants. Each iFIND ultrasound scan we perform records not only the imaging data, but also the exact position, force and torque (twisting force) of the ultrasound probe throughout the scan.
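To make that idea a little more concrete, here is a minimal sketch (in Python) of what one time-stamped probe reading might look like. The field names, units and structure are purely my own illustration of the kind of data described above, not the actual iFIND recording format:

```python
from dataclasses import dataclass

@dataclass
class ProbeSample:
    """One time-stamped probe reading; all field names are illustrative only."""
    t: float                                          # seconds since scan start
    position_mm: tuple[float, float, float]           # probe position in the tracker frame
    orientation_q: tuple[float, float, float, float]  # orientation as a quaternion (w, x, y, z)
    force_n: tuple[float, float, float]               # contact force on the skin (newtons)
    torque_nm: tuple[float, float, float]             # twisting force about the probe axes (newton-metres)
    frame_index: int                                  # which ultrasound image was captured at time t

# A scan is then just a time-ordered list of these samples,
# synchronised with the stream of ultrasound image frames.
scan_log: list[ProbeSample] = []
scan_log.append(ProbeSample(0.0, (12.1, 48.0, 3.2), (1.0, 0.0, 0.0, 0.0),
                            (0.0, 0.0, 4.5), (0.0, 0.01, 0.0), 0))
```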
The video below shows an example: on the left, we can see how the sonographer is moving the ultrasound probe across the abdomen of one of our volunteers; the colours under the probe show how much pressure is being applied to the skin. The right panel shows the ultrasound images, so we know exactly what the sonographer could see at the time.
We hope to collect this information from all 500 of our participants, and will use it to teach the robotic arms how to perform ultrasound scans automatically, just as a person would.
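As a rough illustration of how a recorded human scan could drive a robot, the sketch below resamples a logged probe path onto the fixed rate a controller would need. Everything here (the function name, the rate, the units) is my own assumption, and a real system would also have to handle orientation and contact force, which this toy example omits:

```python
import numpy as np

def resample_path(times, positions, rate_hz=100.0):
    """Resample a recorded probe path onto a fixed control rate.

    times:     (N,) array of seconds, as logged during a human scan
    positions: (N, 3) array of probe positions in mm
    Returns evenly spaced (time, position) targets that a robot
    controller could track.
    """
    t_uniform = np.arange(times[0], times[-1], 1.0 / rate_hz)
    targets = np.column_stack(
        [np.interp(t_uniform, times, positions[:, axis]) for axis in range(3)]
    )
    return t_uniform, targets

# Example: three logged waypoints resampled into 100 Hz targets.
t = np.array([0.0, 0.5, 1.0])
p = np.array([[0.0, 0.0, 0.0], [10.0, 2.0, 0.5], [20.0, 4.0, 1.0]])
t_out, p_out = resample_path(t, p)
```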
Another problem the team have to think about is far simpler, but perhaps just as important: aesthetics. The arms we design need to look and feel just as gentle and safe as we are designing them to be. So whilst we are collecting all the important data to help develop the technology, we are also asking our participants how they would feel being scanned by a robotic arm rather than by a person, and what we could do to make them more comfortable with the idea.
So: our goal is to produce a robotic arm (well, actually several of them) that has the dexterity and sensitivity of a human being, knows how to perform a fetal ultrasound, and doesn’t look scary. And they also have to talk to each other.
Maybe we’ll leave that for another blog…