Institute of Fundamental Technological Research
Polish Academy of Sciences

Partners

Michael Andre

University of California (US)

Recent publications
1.  Byra M., Han A., Boehringer A.S., Zhang Y.N., O'Brien Jr W.D., Erdman Jr J.W., Loomba R., Sirlin C.B., Andre M., Liver fat assessment in multiview sonography using transfer learning with convolutional neural networks, Journal of Ultrasound in Medicine, ISSN: 0278-4297, DOI: 10.1002/jum.15693, pp.1-10, 2021

Abstract:
Objectives - To develop and evaluate deep learning models devised for liver fat assessment based on ultrasound (US) images acquired from four different liver views: transverse plane (hepatic veins at the confluence with the inferior vena cava, right portal vein, right posterior portal vein) and sagittal plane (liver/kidney). Methods - US images (four separate views) were acquired from 135 participants with known or suspected nonalcoholic fatty liver disease. Proton density fat fraction (PDFF) values derived from chemical shift-encoded magnetic resonance imaging served as ground truth. Transfer learning with a deep convolutional neural network (CNN) was applied to develop models for diagnosis of fatty liver (PDFF ≥ 5%), diagnosis of advanced steatosis (PDFF ≥ 10%), and PDFF quantification for each liver view separately. In addition, an ensemble model based on all four liver view models was investigated. Diagnostic performance was assessed using the area under the receiver operating characteristics curve (AUC), and quantification was assessed using the Spearman correlation coefficient (SCC). Results - The most accurate single view was the right posterior portal vein, with an SCC of 0.78 for quantifying PDFF and AUC values of 0.90 (PDFF ≥ 5%) and 0.79 (PDFF ≥ 10%). The ensemble of models achieved an SCC of 0.81 and AUCs of 0.91 (PDFF ≥ 5%) and 0.86 (PDFF ≥ 10%). Conclusion - Deep learning-based analysis of US images from different liver views can help assess liver fat.
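The abstract does not specify how the four per-view models are combined into the ensemble; a minimal sketch, assuming a simple mean over per-view probabilities (the view names and probability values below are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical per-view model outputs for one patient: probability of
# fatty liver (PDFF >= 5%) predicted from each of the four liver views.
view_probs = {
    "hepatic_veins": 0.82,
    "right_portal_vein": 0.74,
    "right_posterior_portal_vein": 0.91,
    "liver_kidney": 0.66,
}

def ensemble_probability(per_view: dict) -> float:
    """Combine per-view probabilities with a simple mean ensemble."""
    return float(np.mean(list(per_view.values())))

p = ensemble_probability(view_probs)
print(round(p, 4))  # -> 0.7825
```

Averaging probabilities is only one common ensembling choice; weighted averaging or stacking a meta-classifier on the per-view outputs would follow the same pattern.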

Keywords:
attention mechanism, convolutional neural networks, deep learning, nonalcoholic fatty liver disease, proton density fat fraction, ultrasound images

Affiliations:
Byra M. - IPPT PAN
Han A. - University of Illinois at Urbana-Champaign (US)
Boehringer A.S. - University of California (US)
Zhang Y.N. - University of California (US)
O'Brien Jr W.D. - other affiliation
Erdman Jr J.W. - University of Illinois at Urbana-Champaign (US)
Loomba R. - University of California (US)
Sirlin C.B. - University of California (US)
Andre M. - University of California (US)
2.  Han A., Byra M., Heba E., Andre M.P., Erdman J.W.Jr., Loomba R., Sirlin C.B., O'Brien W.D.Jr., Noninvasive diagnosis of nonalcoholic fatty liver disease and quantification of liver fat with radiofrequency ultrasound data using one-dimensional convolutional neural networks, Radiology, ISSN: 0033-8419, DOI: 10.1148/radiol.2020191160, Vol.295, No.2, pp.342-350, 2020

Abstract:
Background: Radiofrequency ultrasound data from the liver contain rich information about liver microstructure and composition. Deep learning might exploit such information to assess nonalcoholic fatty liver disease (NAFLD). Purpose: To develop and evaluate deep learning algorithms that use radiofrequency data for NAFLD assessment, with MRI-derived proton density fat fraction (PDFF) as the reference. Materials and Methods: A HIPAA-compliant secondary analysis of a single-center prospective study was performed for adult participants with NAFLD and control participants without liver disease. Participants in the parent study were recruited between February 2012 and March 2014 and underwent same-day US and MRI of the liver. Participants were randomly divided into an equal number of training and test groups. The training group was used to develop two algorithms via cross-validation: a classifier to diagnose NAFLD (MRI PDFF ≥ 5%) and a fat fraction estimator to predict MRI PDFF. Both algorithms used one-dimensional convolutional neural networks. The test group was used to evaluate the classifier for sensitivity, specificity, positive predictive value, negative predictive value, and accuracy and to evaluate the estimator for correlation, bias, limits of agreement, and linearity between predicted fat fraction and MRI PDFF. Results: A total of 204 participants were analyzed; 140 had NAFLD (mean age, 52 years ± 14 [standard deviation]; 82 women) and 64 were control participants (mean age, 46 years ± 21; 42 women). In the test group, the classifier provided 96% (95% confidence interval [CI]: 90%, 99%) (98 of 102) accuracy for NAFLD diagnosis (sensitivity, 97% [95% CI: 90%, 100%], 68 of 70; specificity, 94% [95% CI: 79%, 99%], 30 of 32; positive predictive value, 97% [95% CI: 90%, 99%], 68 of 70; negative predictive value, 94% [95% CI: 79%, 98%], 30 of 32). The estimator-predicted fat fraction correlated with MRI PDFF (Pearson r = 0.85). The mean bias was 0.8% (P = .08), and 95% limits of agreement were -7.6% to 9.1%. The predicted fat fraction was linear with an MRI PDFF of 18% or less (r = 0.89, slope = 1.1, intercept = 1.3) and nonlinear with an MRI PDFF greater than 18%. Conclusion: Deep learning algorithms using radiofrequency ultrasound data are accurate for diagnosis of nonalcoholic fatty liver disease and hepatic fat fraction quantification when other causes of steatosis are excluded.
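The building block of the one-dimensional CNNs described above is a 1-D convolution applied along a radiofrequency line. A minimal numpy sketch of that operation (the kernel here is random; in the paper the weights are learned):

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (cross-correlation) along an RF line."""
    n = len(signal) - len(kernel) + 1
    return np.array([signal[i:i + len(kernel)] @ kernel for i in range(n)])

# Toy RF line and an untrained 1-D kernel (illustrative sizes only).
rf_line = rng.standard_normal(1024)
kernel = rng.standard_normal(16)

features = np.maximum(conv1d(rf_line, kernel), 0.0)  # ReLU activation
pooled = features.mean()                             # global average pooling
print(features.shape)  # -> (1009,)
```

A full network stacks many such filters and layers before a final dense classifier or regressor; this sketch only shows the core convolution-activation-pooling step.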

Affiliations:
Han A. - University of Illinois at Urbana-Champaign (US)
Byra M. - IPPT PAN
Heba E. - other affiliation
Andre M.P. - University of California (US)
Erdman J.W.Jr. - University of Illinois at Urbana-Champaign (US)
Loomba R. - University of California (US)
Sirlin C.B. - University of California (US)
3.  Byra M., Jarosik P., Szubert A., Galperine M., Ojeda-Fournier H., Olson L., Comstock Ch., Andre M., Breast mass segmentation in ultrasound with selective kernel U-Net convolutional neural network, Biomedical Signal Processing and Control, ISSN: 1746-8094, DOI: 10.1016/j.bspc.2020.102027, Vol.61, pp.102027-1-10, 2020

Abstract:
In this work, we propose a deep learning method for breast mass segmentation in ultrasound (US). Variations in breast mass size and image characteristics make the automatic segmentation difficult. To address this issue, we developed a selective kernel (SK) U-Net convolutional neural network. The aim of the SKs was to adjust the network's receptive fields via an attention mechanism, and fuse feature maps extracted with dilated and conventional convolutions. The proposed method was developed and evaluated using US images collected from 882 breast masses. Moreover, we used three datasets of US images collected at different medical centers for testing (893 US images). On our test set of 150 US images, the SK-U-Net achieved a mean Dice score of 0.826, and outperformed the regular U-Net, which had a Dice score of 0.778. When evaluated on three separate datasets, the proposed method yielded mean Dice scores ranging from 0.646 to 0.780. Additional fine-tuning of our better-performing model with data collected at different centers improved mean Dice scores by ~6%. SK-U-Net utilized both dilated and regular convolutions to process US images. We found a strong correlation, Spearman's rank coefficient of 0.7, between the utilization of dilated convolutions and breast mass size in the case of the network's expansion path. Our study shows the usefulness of deep learning methods for breast mass segmentation. SK-U-Net implementation and pre-trained weights can be found at github.com/mbyr/bus_seg.
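The selective-kernel fusion described above can be sketched as follows. This is a simplified variant with one scalar attention weight per branch; the actual SK block in the paper applies per-channel attention, and the projection matrix here is random rather than learned:

```python
import numpy as np

def selective_kernel_fuse(feat_regular, feat_dilated, w):
    """Fuse feature maps from a regular and a dilated convolution branch
    with attention, in the spirit of selective kernels: global-pool the
    summed maps, project with a (here random, normally learned) matrix w,
    softmax over the two branches, then take the weighted sum."""
    summed = feat_regular + feat_dilated        # (C, H, W)
    desc = summed.mean(axis=(1, 2))             # global average pool -> (C,)
    logits = w @ desc                           # (2,): one logit per branch
    a = np.exp(logits - logits.max())
    a /= a.sum()                                # softmax over branches
    return a[0] * feat_regular + a[1] * feat_dilated

rng = np.random.default_rng(1)
fr = rng.standard_normal((8, 4, 4))   # regular-convolution features
fd = rng.standard_normal((8, 4, 4))   # dilated-convolution features
w = rng.standard_normal((2, 8))       # illustrative projection weights
fused = selective_kernel_fuse(fr, fd, w)
print(fused.shape)  # -> (8, 4, 4)
```

With zero projection weights the softmax is uniform and the result reduces to the plain average of the two branches, which makes the attention mechanism easy to sanity-check.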

Keywords:
attention mechanism, breast mass segmentation, convolutional neural networks, deep learning, receptive field, ultrasound imaging

Affiliations:
Byra M. - IPPT PAN
Jarosik P. - other affiliation
Szubert A. - other affiliation
Galperine M. - other affiliation
Ojeda-Fournier H. - University of California (US)
Olson L. - University of California (US)
Comstock Ch. - Memorial Sloan-Kettering Cancer Center (US)
Andre M. - University of California (US)
4.  Byra M., Hentzen E., Du J., Andre M., Chang E.Y., Shah S., Assessing the performance of morphologic and echogenic features in median nerve ultrasound for carpal tunnel syndrome diagnosis, Journal of Ultrasound in Medicine, ISSN: 0278-4297, DOI: 10.1002/jum.15201, Vol.39, No.6, pp.1165-1174, 2020

Abstract:
Objectives: To assess the feasibility of using ultrasound (US) image features related to the median nerve echogenicity and shape for carpal tunnel syndrome (CTS) diagnosis. Methods: In 31 participants (21 healthy participants and 10 patients with CTS), US images were collected with a 30-MHz transducer from median nerves at the wrist crease in 2 configurations: a neutral position and with wrist extension. Various morphologic features, including the cross-sectional area (CSA), were calculated to assess the nerve shape. Carpal tunnel syndrome commonly results in loss of visualization of the nerve fascicular pattern on US images. To assess this phenomenon, we developed a nerve-tissue contrast index (NTI) method. The NTI is a ratio of average brightness levels of surrounding tissue and the median nerve, both calculated on the basis of a US image. The area under the curve (AUC) from a receiver operating characteristic curve analysis and t test were used to assess the usefulness of the features for differentiation of patients with CTS from control participants. Results: We obtained significant differences in the CSA and NTI parameters between the patients with CTS and control participants (P < .01), with the corresponding highest AUC values equal to 0.885 and 0.938, respectively. For the remaining investigated morphologic features, the AUC values were less than 0.685, and the differences in means between the patients and control participants were not statistically significant (P > .10). The wrist configuration had no impact on differences in average parameter values (P > .09). Conclusions: Patients with CTS can be differentiated from healthy individuals on the basis of the median nerve CSA and echogenicity. Carpal tunnel syndrome is not manifested in a change of the median nerve shape that could be related to circularity or contour variability.
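The nerve-tissue contrast index described above is a ratio of two mean brightness levels. A minimal sketch, assuming the nerve and surrounding-tissue regions are given as binary masks (the mask layout below is a toy example, not clinical data):

```python
import numpy as np

def nerve_tissue_contrast(image, nerve_mask, tissue_mask):
    """Nerve-tissue contrast index (NTI): ratio of the mean brightness of
    the surrounding tissue to the mean brightness of the median nerve,
    both taken from the same B-mode image. Mask handling is illustrative;
    the paper defines NTI as the ratio of the two average brightness
    levels."""
    return image[tissue_mask].mean() / image[nerve_mask].mean()

# Toy 2x2 "image": bright surrounding tissue, dark (hypoechoic) nerve.
img = np.array([[200.0, 200.0],
                [50.0, 50.0]])
nerve = np.array([[False, False],
                  [True, True]])
tissue = ~nerve
print(nerve_tissue_contrast(img, nerve, tissue))  # -> 4.0
```

A higher NTI corresponds to a nerve that appears darker relative to its surroundings, which is the loss-of-fascicular-pattern phenomenon the index is meant to capture.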

Keywords:
carpal tunnel syndrome, cross-sectional area, echogenicity, median nerve, morphologic features, ultrasound

Affiliations:
Byra M. - IPPT PAN
Hentzen E. - other affiliation
Du J. - University of California (US)
Andre M. - University of California (US)
Chang E.Y. - University of California (US)
Shah S. - University of California (US)
5.  Byra M., Galperin M., Ojeda-Fournier H., Olson L., O Boyle M., Comstock C., Andre M., Breast mass classification in sonography with transfer learning using a deep convolutional neural network and color conversion, Medical Physics, ISSN: 0094-2405, DOI: 10.1002/mp.13361, Vol.46, No.2, pp.746-755, 2019

Abstract:
Purpose: We propose a deep learning-based approach to breast mass classification in sonography and compare it with the assessment of four experienced radiologists employing the Breast Imaging Reporting and Data System (BI-RADS) 4th edition lexicon and assessment protocol. Methods: Several transfer learning techniques are employed to develop classifiers based on a set of 882 ultrasound images of breast masses. Additionally, we introduce the concept of a matching layer. The aim of this layer is to rescale pixel intensities of the grayscale ultrasound images and convert those images to red, green, blue (RGB) to more efficiently utilize the discriminative power of the convolutional neural network pretrained on the ImageNet dataset. We present how this conversion can be determined during fine-tuning using back-propagation. Next, we compare the performance of the transfer learning techniques with and without the color conversion. To show the usefulness of our approach, we additionally evaluate it using two publicly available datasets. Results: Color conversion increased the areas under the receiver operating curve for each transfer learning method. For the better-performing approach utilizing the fine-tuning and the matching layer, the area under the curve was equal to 0.936 on a test set of 150 cases. The areas under the curves for the radiologists reading the same set of cases ranged from 0.806 to 0.882. In the case of the two separate datasets, utilizing the proposed approach we achieved areas under the curve of around 0.890. Conclusions: The concept of the matching layer is generalizable and can be used to improve the overall performance of the transfer learning techniques using deep convolutional neural networks. When fully developed as a clinical tool, the methods proposed in this paper have the potential to help radiologists with breast mass classification in ultrasound.
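The matching layer maps each grayscale intensity to three channel values so the image fits an ImageNet-pretrained network. A minimal sketch, assuming a per-channel affine mapping (the exact parameterization in the paper may differ, and the weight and bias values here are illustrative stand-ins for parameters that would be learned via back-propagation):

```python
import numpy as np

def matching_layer(gray, weights, biases):
    """Convert a grayscale image (H, W) to three channels (H, W, 3) with
    a per-channel affine transform. In the paper the conversion is
    learned during fine-tuning; here the parameters are fixed
    illustrative values."""
    return gray[..., None] * weights + biases  # broadcast to (H, W, 3)

gray = np.linspace(0.0, 1.0, 16).reshape(4, 4)
w = np.array([1.0, 0.8, 0.6])   # hypothetical per-channel scales
b = np.array([0.0, 0.1, 0.2])   # hypothetical per-channel offsets
rgb = matching_layer(gray, w, b)
print(rgb.shape)  # -> (4, 4, 3)
```

Because the mapping is differentiable, its parameters can be updated jointly with the CNN weights during fine-tuning, which is what makes the conversion learnable rather than a fixed colormap.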

Keywords:
BI-RADS, breast mass classification, convolutional neural networks, transfer learning, ultrasound imaging

Affiliations:
Byra M. - IPPT PAN
Galperin M. - Almen Laboratories, Inc. (US)
Ojeda-Fournier H. - University of California (US)
Olson L. - University of California (US)
O Boyle M. - University of California (US)
Comstock C. - Memorial Sloan-Kettering Cancer Center (US)
Andre M. - University of California (US)
6.  Byra M., Wan L., Wong J.H., Du J., Shah S.B., Andre M.P., Chang E.Y., Quantitative ultrasound and B-mode image texture features correlate with collagen and myelin content in human ulnar nerve fascicles, ULTRASOUND IN MEDICINE AND BIOLOGY, ISSN: 0301-5629, DOI: 10.1016/j.ultrasmedbio.2019.02.019, Vol.45, No.7, pp.1830-1840, 2019

Abstract:
We investigate the usefulness of quantitative ultrasound and B-mode texture features for characterization of ulnar nerve fascicles. Ultrasound data were acquired from cadaveric specimens using a nominal 30-MHz probe. Next, the nerves were extracted to prepare histology sections. Eighty-five fascicles were matched between the B-mode images and the histology sections. For each fascicle image, we selected an intra-fascicular region of interest. We used histology sections to determine features related to the concentration of collagen and myelin and ultrasound data to calculate the backscatter coefficient (–24.89 ± 8.31 dB), attenuation coefficient (0.92 ± 0.04 dB/cm-MHz), Nakagami parameter (1.01 ± 0.18) and entropy (6.92 ± 0.83), as well as B-mode texture features obtained via the gray-level co-occurrence matrix algorithm. Significant Spearman rank correlations with the combined collagen and myelin concentration were obtained for the backscatter coefficient (R = –0.68), entropy (R = –0.51) and several texture features. Our study indicates that quantitative ultrasound may potentially provide information on structural components of nerve fascicles.
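The gray-level co-occurrence matrix (GLCM) underlying the texture features counts how often pairs of gray levels occur at a fixed pixel offset. A minimal sketch for a single horizontal offset (quantization to a few gray levels is assumed to have already been done):

```python
import numpy as np

def glcm(image, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy):
    entry (i, j) counts how often gray level i is followed by level j
    at that offset."""
    h, w = image.shape
    m = np.zeros((levels, levels), dtype=int)
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m

# Toy 2-level image; real B-mode images are quantized to more levels.
img = np.array([[0, 0, 1],
                [1, 1, 0]])
m = glcm(img, levels=2)
print(m)  # -> [[1 1]
          #     [1 1]]
```

Texture features such as contrast, homogeneity, and GLCM entropy are then computed from the normalized matrix; in practice a library routine (e.g. scikit-image's `graycomatrix`) would replace this loop.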

Keywords:
nerve, quantitative ultrasound, high frequency, histology, pattern recognition, texture analysis

Affiliations:
Byra M. - IPPT PAN
Wan L. - University of California (US)
Wong J.H. - University of California (US)
Du J. - University of California (US)
Shah SB. - University of California (US)
Andre M.P. - University of California (US)
Chang E.Y. - University of California (US)

Conference abstracts
1.  Byra M., Wong J., Shah S., Han A., O Brien W., Du J., Chang E., Andre M., High-frequency quantitative ultrasound and B-mode analysis for characterization of peripheral nerves including carpal tunnel syndrome, ASA, 178th Meeting of the Acoustical Society of America, 2019-12-02/12-06, San Diego (US), DOI: 10.1121/1.5136729, Vol.146, No.4, pp.2809-2809, 2019

Abstract:
We investigated the use of high-frequency quantitative ultrasound (QUS) and B-mode texture features to characterize ulnar and median nerve fascicles using a clinical scanner (Vevo MD) and a 30-MHz center-frequency probe. US correlation with histology was first investigated in the ulnar nerve in situ in cadaveric specimens. Eighty-five fascicles were matched between the B-mode images and the histology sections. Collagen and myelin concentrations were quantified from trichrome labeling, and the backscatter coefficient (-24.89 ± 8.31 dB), attenuation coefficient (0.92 ± 0.04 dB/cm MHz), Nakagami parameter (1.01 ± 0.18) and entropy (6.92 ± 0.83) were calculated from ultrasound data. B-mode texture features were obtained via the gray-level co-occurrence matrix algorithm. The combined collagen and myelin concentration was significantly correlated with the backscatter coefficient (R = -0.68), entropy (R = -0.51), and several texture features. For the median nerve, we measured backscatter and morphology in 10 patients with carpal tunnel syndrome and 21 healthy volunteers. Significant differences (P < 0.01) between patients and controls and AUC values of 0.89–0.94 for QUS biomarkers were observed. Our study indicates that QUS may potentially provide useful information on structural components of even very small nerves (2 × 4 mm) and fascicles for diagnosing and monitoring injury, and surgical planning.

Affiliations:
Byra M. - IPPT PAN
Wong J. - University of California (US)
Shah S. - University of California (US)
Han A. - University of Illinois at Urbana-Champaign (US)
O Brien W. - University of Illinois at Urbana-Champaign (US)
Du J. - University of California (US)
Chang E. - University of California (US)
Andre M. - University of California (US)
2.  Byra M., Han A., Boehringer A., Zhang Y., Erdman J., Loomba R., Valasek M., Sirlin C., O Brien W., Andre M., Quantitative liver fat fraction measurement by multi-view sonography using deep learning and attention maps, ASA, 178th Meeting of the Acoustical Society of America, 2019-12-02/12-06, San Diego (US), DOI: 10.1121/1.5136936, Vol.146, No.4, pp.2809-1, 2019

Abstract:
Qualitative sonography is used to assess nonalcoholic fatty liver disease (NAFLD), an important health issue worldwide. We used B-mode image deep-learning to objectively assess NAFLD in 4 views of the liver (hepatic veins at confluence with inferior vena cava, right portal vein, right posterior portal vein and liver/kidney) in 135 patients with known or suspected NAFLD. Transfer learning with a deep convolutional neural network (CNN) was applied for quantifying fat fraction and diagnosing fatty liver (≥ 5%) using contemporaneous MRI-PDFF as ground truth. Single and multi-view learning approaches were compared. Class activation mapping generated attention maps to highlight regions important for deep learning-based recognition. The most accurate single view was hepatic veins, with area under the receiver operating characteristic curve (AUC) of 0.86 and Spearman’s rank correlation coefficient of 0.65. A multi-view ensemble of deep-learning models trained for each view separately improved AUC (0.93) and correlation coefficient (0.76). Attention maps highlighted regions known to be used by radiologists in their qualitative assessment, e.g., hepatic vein-parenchyma interface and liver-kidney interface. Machine learning of four liver views can automatically and objectively assess liver fat. Class activation mapping suggests that the CNN focuses on similar features as radiologists. [No. R01DK106419.]
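Class activation mapping, as used above to highlight image regions driving the prediction, is a weighted sum of the last convolutional feature maps, with weights taken from the classifier layer for the target class. A minimal sketch with random stand-in values for the learned quantities:

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights):
    """Class activation map (CAM): weighted sum of the final convolutional
    feature maps, weighted by the fully connected layer's weights for the
    target class. feature_maps: (C, H, W); fc_weights: (C,)."""
    return np.tensordot(fc_weights, feature_maps, axes=1)  # -> (H, W)

rng = np.random.default_rng(2)
fmaps = rng.standard_normal((16, 7, 7))  # illustrative feature maps
wts = rng.standard_normal(16)            # illustrative class weights
cam = class_activation_map(fmaps, wts)
print(cam.shape)  # -> (7, 7)
```

The low-resolution map is then upsampled to the input image size and overlaid on the B-mode image, which is how the hepatic vein-parenchyma and liver-kidney interfaces become visible as attention hot spots.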

Affiliations:
Byra M. - IPPT PAN
Han A. - University of Illinois at Urbana-Champaign (US)
Boehringer A. - University of California (US)
Zhang Y. - University of California (US)
Erdman J. - University of Illinois at Urbana-Champaign (US)
Loomba R. - University of California (US)
Valasek M. - University of California (US)
Sirlin C. - University of California (US)
O Brien W. - University of Illinois at Urbana-Champaign (US)
Andre M. - University of California (US)
3.  Byra M., Galperin M., Ojeda-Fournier H., Olson L., O Boyle M., Comstock C., Andre M., Comparison of deep learning and classical breast mass classification methods in ultrasound, ASA, 178th Meeting of the Acoustical Society of America, 2019-12-02/12-06, San Diego (US), DOI: 10.1121/1.5136937, Vol.146, No.4, pp.2864-1, 2019

Abstract:
We developed breast mass classification methods based on deep convolutional neural networks (CNNs) and morphological features (MF), then compared them with the assessments of four experienced radiologists employing the BI-RADS protocol. The classification models were developed based on 882 clinical ultrasound B-mode images of masses with confirmed findings and regions of interest indicating mass areas. Various transfer learning techniques, including fine-tuning of a pre-trained CNN, were investigated to develop deep learning models. A matching layer technique was applied to convert gray-scale images to red, green, blue to efficiently utilize the discriminative power of the pre-trained model. For the classical approach, we calculated MF related to breast mass shape (e.g., height-width ratio, circularity) and then trained binary classifiers. We additionally evaluated both approaches using two publicly available US datasets. Several statistical measures (area under the receiver operating curve [AUC], sensitivity and specificity) were used to assess the classification performance on a test set of 150 cases. The matching layer significantly increased the AUC from 0.895 to 0.936, while radiologists' AUCs ranged from 0.806 to 0.882. This study shows that both deep learning and classical models achieve high performance. When developed as a clinical tool, the methods examined in this study have the potential to aid radiologists in accurate breast mass classification with ultrasound.
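Two of the morphological features named above can be computed directly from a binary mass mask. A minimal sketch, with a crude boundary-pixel count standing in for a proper perimeter estimate (the paper's exact feature definitions may differ):

```python
import numpy as np

def morphological_features(mask):
    """Height-width ratio of the bounding box and circularity
    4*pi*area / perimeter^2, computed from a binary mass mask. The
    perimeter is estimated here as the number of boundary pixels,
    a rough illustrative approximation."""
    ys, xs = np.nonzero(mask)
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    area = mask.sum()
    # Boundary pixels: in the mask but with at least one 4-neighbor outside.
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = (mask & ~interior).sum()
    circularity = 4 * np.pi * area / perimeter ** 2
    return height / width, circularity

# Toy mask: a digitized disk of radius 4 on a 9x9 grid.
yy, xx = np.mgrid[:9, :9]
disk = (yy - 4) ** 2 + (xx - 4) ** 2 <= 16
hw, circ = morphological_features(disk)
print(hw)  # -> 1.0 (the disk's bounding box is square)
```

Features like these feed a conventional binary classifier (e.g. logistic regression or an SVM), which is the "classical" arm compared against the CNN models in the abstract.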

Affiliations:
Byra M. - IPPT PAN
Galperin M. - Almen Laboratories, Inc. (US)
Ojeda-Fournier H. - University of California (US)
Olson L. - University of California (US)
O Boyle M. - University of California (US)
Comstock C. - Memorial Sloan-Kettering Cancer Center (US)
Andre M. - University of California (US)
