Piotr Jarosik, M.Sc., Eng.

Department of Information and Computational Science (ZIiNO)
Division of Computational Materials Science (PMKIM)
Position: doctoral student
Telephone: (+48) 22 826 12 81, ext. 412
Room: 414
E-mail: pjarosik

Conference papers
1. Jarosik P., Lewandowski M., The feasibility of deep learning algorithms integration on a GPU-based ultrasound research scanner, 2017 IEEE International Ultrasonics Symposium, 2017-09-06/09-09, Washington, DC (US), DOI: 10.1109/ULTSYM.2017.8091750, pp. 1-4, 2017

Ultrasound medical diagnostics is a real-time modality that relies on a doctor's interpretation of images. To date, automated Computer-Aided Diagnosis tools have not been widely applied to ultrasound imaging. Emerging methods in Artificial Intelligence, namely deep learning, have given rise to new applications in medical imaging modalities. The objective of this work was to show the feasibility of implementing deep learning algorithms directly on a research scanner with GPU software beamforming. We implemented and evaluated two deep neural network architectures as part of the signal processing pipeline on the ultrasound research platform USPlatform (us4us Ltd., Poland). The USPlatform is equipped with a GPU cluster, enabling full software-based channel data processing as well as the integration of open-source deep learning frameworks. The first neural model (S-4-2) is a classical convolutional network for one-class classification of baby body parts; we propose a simple 6-layer network for this task. The model was trained and evaluated on a dataset of 786 ultrasound images of a fetal training phantom. The second model (Gu-net) is a fully convolutional neural network for brachial plexus localisation. It uses a ‘U-net’-like architecture to compute the overall probability of target detection and a probability mask of possible target locations. The model was trained and evaluated on 5640 ultrasound B-mode frames. Both training and inference were performed on a multi-GPU (Nvidia Titan X) cluster integrated with the platform. As performance metrics we used: accuracy, as the percentage of correct answers in classification; the Dice coefficient, for object detection; and the mean and standard deviation of the model's response time. The ‘S-4-2’ model achieved 96% classification accuracy with a response time of 3 ms (334 predictions/s); this simple model makes accurate predictions in a short time.
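The reported figures combine classification accuracy (percentage of correct answers) with throughput implied by the mean response time. A minimal sketch of how such metrics can be computed (the function names and array-based evaluation are illustrative assumptions, not code from the paper):

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Percentage of correct answers over a set of predictions."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return 100.0 * np.mean(y_true == y_pred)

def throughput(mean_response_time_s):
    """Predictions per second implied by a mean per-frame response time."""
    return 1.0 / mean_response_time_s

# e.g. a 3 ms mean response time corresponds to roughly 333 predictions/s
```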
The ‘Gu-net’ model achieved a 0.64 Dice coefficient for object detection and 76% accuracy in classifying the target's presence, with a response time of 15 ms (65 predictions/s). The brachial plexus detection task is more challenging and requires further work to find the right solution. The results show that deep learning methods can be successfully applied to ultrasound image analysis and integrated on a single advanced research platform.
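The Dice coefficient used to score segmentation overlap is the ratio 2|A∩B| / (|A| + |B|) between the predicted and ground-truth binary masks. A minimal sketch (the function signature and the epsilon smoothing term are illustrative assumptions):

```python
import numpy as np

def dice_coefficient(mask_true, mask_pred, eps=1e-7):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a = np.asarray(mask_true, dtype=bool)
    b = np.asarray(mask_pred, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    # eps avoids division by zero when both masks are empty
    return 2.0 * intersection / (a.sum() + b.sum() + eps)
```

A perfect overlap yields a value close to 1, while disjoint masks yield 0; the reported 0.64 lies between these extremes.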


Keywords: Ultrasonic imaging, Neural networks, Convolution, Machine learning, Image segmentation, Kernel

Jarosik P. - IPPT PAN
Lewandowski M. - IPPT PAN