BME CAPSTONE, SPRING 2019
MACHINE LEARNING GUIDED ULTRASOUND CANNULATION, 
SPONSORED BY US DEPARTMENT OF DEFENSE, SOF’20
E. CYNOR, M. FOLKERTS, L. SCHERGER, A. SOEWITO, R. UPADHYAYA

In fall 2018, the Department of Defense asked our team to collaborate on its mission to enable non-experts to use ultrasound to guide cannulation (the insertion of a needle into an artery to deliver medication). The project’s goal was to improve the accessibility and efficiency of battlefront emergency care. 

In January 2019, I observed an open-heart surgery on an infant to understand how ultrasound-guided cannulation works in practice today. Observing the surgery made me realize how difficult cannulation is, even for an expert in a controlled operating room. The doctor has to simultaneously hold the ultrasound steady with one hand, keep the skin taut with the other, and use their “third hand” to insert the needle. On top of that, the surgeon has to constantly pan their vision back to the blurry ultrasound screen, which is used to locate the artery. After six failed cannulation attempts during this surgery, the doctor successfully inserted the needle, but its image disappeared from the screen, so they couldn’t tell how to angle the needle to avoid going too deep.

In addition to the surgery observation, we conducted physician interviews to gather feedback on the experience of using ultrasound to guide cannulation. Three common problems echoed throughout our interviews: blurry imaging, needle disappearance, and an awkward physical setup. To address these three concerns, we created a three-part solution:

1. Vessel identification: an AI model that recognizes the artery in the blurry ultrasound image.
2. Needle tracking: line detection that keeps the needle located on screen during insertion (a minimal sketch of this idea follows the list).
3. Image stabilizer: a hardware attachment that mounts the screen to the transducer, steadying the image and freeing a hand.
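
To make the needle-tracking idea concrete, below is a minimal, hypothetical sketch assuming the needle shaft shows up as a roughly straight bright line in the B-mode image. It uses OpenCV’s probabilistic Hough transform; the detect_needle function, its thresholds, and the synthetic test frame are illustrative assumptions, not our actual implementation.

import cv2
import numpy as np

def detect_needle(frame):
    """Return the longest detected line segment (x1, y1, x2, y2), or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) if frame.ndim == 3 else frame
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)   # suppress speckle noise
    edges = cv2.Canny(blurred, 50, 150)           # edge map of the B-mode frame
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=10)
    if lines is None:
        return None
    # The needle shaft is usually the dominant straight feature,
    # so keep the longest segment.
    return max((l[0] for l in lines),
               key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))

# Quick check on a synthetic frame with a bright diagonal "needle"
frame = np.zeros((256, 256), dtype=np.uint8)
cv2.line(frame, (30, 200), (180, 60), color=255, thickness=2)
print(detect_needle(frame))

Keeping the longest detected segment is a simple heuristic for picking out the shaft; a real system would also need to smooth detections across frames.
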
To realize these three solutions, we needed models for data collection and testing, since we couldn’t just cannulate ourselves (this would require sticking needles into our own arteries, which the IRB wouldn’t be too stoked about). We designed a model for practicing cannulation using chicken breast, balloons, gelatin, and straws. We went through several rounds of ideation before arriving at a data collection setup that let us capture hundreds of cross-sectional images of our own arteries with a handheld ultrasound. Inspired by a car phone mount, we attached the screen to the transducer to consolidate the field of view to one area and free a hand. The improved setup helped us take over 200 images of our model, which we annotated to teach ourselves how to recognize the artery and find the optimal angle for needle insertion. We then realized we could use these images to train an AI system to guide non-experts through cannulation. During this design process, I learned how helpful it is not only to observe the problem but also to experience it, so that the solution rests on empathy rather than assumptions.
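
As a rough illustration of how a couple hundred annotated images could seed such a system, here is a minimal, hypothetical training sketch in PyTorch. The tiny VesselNet classifier, the random stand-in “dataset,” and all hyperparameters are assumptions made for the example; they do not reproduce our actual model or data.

import torch
import torch.nn as nn

class VesselNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(16 * 16 * 16, 2)  # two classes: artery patch vs. background

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Stand-in for the ~200 annotated images: random 64x64 grayscale patches with labels.
images = torch.rand(200, 1, 64, 64)
labels = torch.randint(0, 2, (200,))

model = VesselNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):  # full-batch training, just to show the loop
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")

A patch classifier like this is only one way to frame the problem; segmenting the vessel cross section would be a natural alternative.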

My role in this project was designing the hardware attachment, capturing and annotating vessel images to build the algorithm, and interviewing physicians about cannulation. I also created schematics and idea maps to illustrate our ideas for presentations. 

Our objective was to improve the user’s experience of cannulation using an ultrasound imager, which we achieved by creating an AI model and a hardware attachment to help guide cannulation. This was demonstrated during our final presentation, where our professors attempted a cannulation on an ultrasound model using our technology. With further testing, the AI model’s accuracy can be improved.

The following images come from my individual lab notebook. They outline some of the steps in prototyping, including brainstorming, solution synthesis, and design analysis.



Solution synthesis mapping to address the three design objectives:
vessel identification, line detection for needle tracking, and image stabilizer


Decision making to address our technical and non-technical considerations


Phone stand ideation: physical design needs




Concept schematic and plans for physical prototype



Physical phone stand to make mechanical stabilization easier


Engineering and cost analysis to determine high-throughput manufacturing capability