Published in Vol 6, No 5 (2022): May

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/34830.
Machine Learning Decision Support for Detecting Lipohypertrophy With Bedside Ultrasound: Proof-of-Concept Study


Original Paper

1Data Science Program, University of British Columbia, Vancouver, BC, Canada

2Gerontology and Diabetes Research Laboratory, University of British Columbia, Vancouver, BC, Canada

3Division of Endocrinology, Department of Medicine, University of British Columbia, Vancouver, BC, Canada

4Division of Geriatric Medicine, Department of Medicine, University of British Columbia, Vancouver, BC, Canada

5Centre for Hip Health and Mobility, Vancouver, BC, Canada

*these authors contributed equally

Corresponding Author:

Kenneth Madden, MSc, MD

Division of Geriatric Medicine

Department of Medicine

University of British Columbia

Gordon and Leslie Diamond Health Care Centre

2775 Laurel Street

Vancouver, BC, V5Z 1M9

Canada

Phone: 1 604 875 5706

Email: Kenneth.Madden@ubc.ca


Background: The most common dermatological complication of insulin therapy is lipohypertrophy.

Objective: As a proof of concept, we built and tested an automated model using a convolutional neural network (CNN) to detect the presence of lipohypertrophy in ultrasound images.

Methods: Ultrasound images were obtained in a blinded fashion using a portable GE LOGIQ e machine with an L8-18I-D probe (5-18 MHz; GE Healthcare). The data were split into train, validation, and test splits of 70%, 15%, and 15%, respectively. Given the small size of the data set, image augmentation techniques were used to expand the size of the training set and improve the model’s generalizability. To compare the performance of the different architectures, the team considered the accuracy and recall of the models when tested on our test set.

Results: The DenseNet CNN architecture was found to have the highest accuracy (76%) and recall (76%) in detecting lipohypertrophy in ultrasound images compared to other CNN architectures. Additional work showed that the YOLOv5m object detection model could be used to help detect the approximate location of lipohypertrophy in ultrasound images identified as containing lipohypertrophy by the DenseNet CNN.

Conclusions: We were able to demonstrate the ability of machine learning approaches to automate the process of detecting and locating lipohypertrophy.

JMIR Form Res 2022;6(5):e34830

doi:10.2196/34830




Introduction

The most common dermatological complication of insulin therapy for glycemic control in diabetes is lipohypertrophy, which has a prevalence ranging from approximately 25% to 65% in the literature [1,2]. These lesions are characterized by fibrosis, decreased vascularity, and adipose hypertrophy [3] and are likely due to both inflammation and the trophic properties of insulin [4]. These lesions have clinical effects that reach far beyond the skin—some previous works have shown that lipohypertrophy alters insulin absorption, resulting in poor glycemic control and high glycemic variability in persons with diabetes [5-7]. Avoidance of lipohypertrophic sites has also been shown to improve glycated hemoglobin levels, and current practice recommends the evaluation of these lesions based on either visual inspection or palpation [8,9]. More recent work has developed clear criteria for detecting lipohypertrophy with ultrasound and has shown that approximately half of these lesions are not detectable by palpation [10,11]. These findings have led to the suggestion that bedside ultrasound can be used as an adjunct to palpation [10], but there are significant barriers to implementing this in standard diabetes clinics, since ultrasound imaging is familiar to and used by only a small group of diabetes educators and physicians.

The development of machine learning techniques to predict masses in ultrasound images has been an ongoing effort in clinical practice for the past few decades. To assist physicians in diagnosing disease, many scholars have applied techniques such as regression, decision trees, Naive Bayesian classifiers, and neural networks to patients’ ultrasound imaging data [12]. Further, many studies involving ultrasound images have attempted to preprocess the images to extract features. Previous work by Chiao et al [13] demonstrated that convolutional neural networks (CNNs) applied to ultrasound images outperform radiomic models in predicting breast cancer tumors. Other recent work has shown success in classifying liver masses into 1 of 5 categories with 84% accuracy using a CNN model [14]. Recent work on complex image augmentation approaches has shown that using generative adversarial networks to generate images and enlarge the data set improves the performance of the eventual model [15], and many such studies [16,17] have confirmed that even minimal transformations such as flipping images can result in higher prediction accuracy.

In an effort to improve the accessibility and efficiency of this method of detection, we have, as a proof of concept, developed a supervised machine learning algorithm based on a CNN to detect lipohypertrophy in ultrasound images, along with a web-based application that deploys the trained models to predict the presence or absence of lipohypertrophy in ultrasound images.


Methods

Recruitment

All images were obtained from research participants who were enrolled in a diabetes education program at an academic center and who had an unknown lipohypertrophy status between July 2015 and March 2017 as part of a previous study of this condition [10]. All research participants were above 19 years of age, had a diagnosis of type 1 or type 2 diabetes mellitus, and were currently being treated with a minimum of 1 insulin injection daily or an insulin pump for at least 2 years. Participants were excluded if they were prescribed a systemic glucocorticoid or a glucagon-like peptide-1 agonist, or if they had a nonlipodystrophic dermatological condition extending to the insulin injection site area. Each image was categorized as positive (lipohypertrophy present) or negative (no lipohypertrophy present) by a radiologist in a blinded fashion as per previously published criteria [10]. Ultrasound images were obtained in a blinded fashion using a portable GE LOGIQ e machine with an L8-18I-D probe (5-18 MHz; GE Healthcare).

Ethical Considerations

All research participants gave written consent, and our study protocol received approval by the Human Subjects Committee of the University of British Columbia (H20-03979).

Data Splits

Before beginning any model training, the data were split into training (70%), validation (15%), and test (15%) sets, followed by a preprocessing step of manually removing borders from the nonannotated versions of the images. We included all types of diabetes as 1 set and did not differentiate between patients when splitting, as the histology of these lesions has been found to be independent of the source of insulin or mode of administration [18,19]. In fact, insulin-induced lipohypertrophy does not show any histological specificity, closely resembles hypertrophic cellulite [20], and appears identical to fat nodules due to other etiologies such as corticosteroids [21] or electromagnetic fields [22]. The lesions have been shown to be the direct result of the hypertrophic effects of administered insulin, with no evidence for a pathogenic role for the insulin antibodies found in type 1 diabetes [23].
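The 70%/15%/15% split described above can be sketched as follows; the function name, fixed seed, and total image count of 353 (218 negative plus 135 positive) reflect our interpretation, not the study's actual code:

```python
import random

def split_indices(n, train_frac=0.70, val_frac=0.15, seed=42):
    """Shuffle image indices and partition them into 70/15/15
    training, validation, and test sets."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])

# 353 images in total: 218 negative + 135 positive
train, val, test = split_indices(353)
print(len(train), len(val), len(test))  # 247 52 54
```

Splitting before any training or augmentation, as done here, keeps the held-out test images untouched by the augmentation pipeline.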

Image Transformation and Model Development

Given the small size of the data set, image augmentation techniques were used to expand the training set and improve the model’s generalizability. A variety of classic transformations [16,17] were tested, and the model’s performance on each augmented data set was documented at this stage (Figure 1). The augmenting transformations that led to the best performance were random vertical and horizontal flipping, randomly changing the brightness between –0.1 and 0.1, and randomly changing the contrast between 0 and 1, each applied with a probability of 50%. The images in the data set varied in size from 300×300 up to 460×500 pixels. As a result, after the above transformations, all images were cropped to a standard common size of 300×300 pixels. An example of a transformed image is shown in Figure 1. The augmented data were then used to train a CNN model using transfer learning, a technique that starts from models pretrained on thousands of images and then retrains the entire network on our comparatively smaller data set. Based on our literature review, the transfer learning architectures we chose to investigate were the following: VGG16, ResNet50, DenseNet169, and InceptionV3 [24]. Each model was trained on our data set in separate experiments, using techniques to optimize the parameters of the model and maximize its ability to learn. To compare the performance of the different architectures, the team considered the accuracy and recall scores of the models when tested on our test set.
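A minimal NumPy sketch of these augmentations, assuming grayscale images scaled to [0, 1]; the exact brightness/contrast arithmetic and the center-crop strategy are our interpretation for illustration, not the study's code:

```python
import numpy as np

def augment(img, rng):
    """Apply random vertical/horizontal flips and brightness/contrast
    jitter, each with 50% probability. `img` is a float array in [0, 1]."""
    if rng.random() < 0.5:
        img = np.fliplr(img)                     # horizontal flip
    if rng.random() < 0.5:
        img = np.flipud(img)                     # vertical flip
    if rng.random() < 0.5:
        img = img + rng.uniform(-0.1, 0.1)       # brightness shift
    if rng.random() < 0.5:
        img = img * rng.uniform(0.0, 1.0)        # contrast scaling
    return np.clip(img, 0.0, 1.0)

def center_crop(img, size=300):
    """Crop every image to the common 300x300 size."""
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

rng = np.random.default_rng(0)
img = rng.random((460, 500))                     # largest image size in the set
out = center_crop(augment(img, rng), 300)
print(out.shape)  # (300, 300)
```

In practice a library such as torchvision or Albumentations [27] provides equivalent, GPU-friendly versions of these transformations.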

Figure 1. Final image transformations included random vertical and horizontal flipping and random brightness and contrast adjustment.

Object Detection

In addition, we wanted to implement object detection in our pipeline, giving users the opportunity to visually identify the location of the lipohypertrophy detected by our model. To implement object detection using a popular framework called YOLOv5 [25,26], the team created bounding boxes around the location of the lipohypertrophy masses on the positive training images, using the annotated ultrasound images as a guide. Next, using the YOLOv5 framework, the YOLOv5m model was trained for 200 epochs with an image size of 320×320 pixels (as this was what the application programming interface allowed) and a batch size of 8.
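Preparing those bounding boxes can be illustrated as follows: YOLOv5 expects one text line per box in normalized `class x_center y_center width height` format. The pixel coordinates in the example are hypothetical:

```python
def to_yolo_label(x_min, y_min, x_max, y_max, img_w, img_h, cls=0):
    """Convert a pixel-coordinate bounding box to YOLOv5's normalized
    `class x_center y_center width height` label format."""
    xc = (x_min + x_max) / 2 / img_w
    yc = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{cls} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# Hypothetical box around a lipohypertrophy mass on a 320x320 image
print(to_yolo_label(80, 100, 200, 220, 320, 320))
# 0 0.437500 0.500000 0.375000 0.375000
```

One such label file accompanies each positive training image; negative images have no label file.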


Results

Our images were obtained from a total of 103 participants, of whom 8% were diagnosed with type 1 and 92% were diagnosed with type 2 diabetes (Table 1). Our data set included 218 negative images (no lipohypertrophy present) and 135 positive images (lipohypertrophy present). Examples are shown in Figure 2.

Each of the potential models (VGG16, ResNet50, DenseNet169, and InceptionV3) was investigated by training it in separate experiments using our augmented data set.

Table 1. Research participant characteristics (N=103).

Age (years), mean (SE): 75.0 (11.8)
BMI (kg/m²), mean (SE): 28.3 (6.1)
Participants with type 1 diabetes, n: 8
Number of years on insulin, mean (SE): 9.4 (11.5)
Duration of diabetes (years), mean (SE): 20.7 (6.1)
Glycated hemoglobin (%), mean (SE): 8.0 (1.1)
Total daily dose (units), mean (SE): 48.6 (42.9)
Daily doses, n (range): 2 (1-6)
Figure 2. Some examples of images found in our data set. The top row displays negative images (no lipohypertrophy present), and the bottom row displays positive images (lipohypertrophy present), where the yellow annotations indicate the exact area of the mass. The yellow annotations are only for the reader; the images that the model was trained on were unmarked.

As shown in Table 2, all models were able to achieve accuracy scores higher than 0.60 when tested on a holdout sample. When comparing performance of the various models, DenseNet demonstrated the highest accuracy score (0.76), the highest recall or sensitivity score (0.76), and the highest specificity score (0.49), indicating an overall better performance than Inception, VGG16, or ResNet. In addition to better performance, DenseNet also demonstrated a relatively small computational size (30 MB) compared to the other models (Inception, 100 MB; ResNet, 99 MB; VGG16, 547 MB).

With respect to the object detection implementation, the YOLOv5m model was able to identify the specific location of lipohypertrophy in test cases, as demonstrated in Figure 3. YOLOv5m was able to accurately draw bounding boxes around lipohypertrophy sites in ultrasound images, which can help a clinician verify the results of our models. As shown in Figure 4, YOLOv5m demonstrated an F1 score of 0.78 at a confidence value of 0.41.

All 4 models (ResNet, VGG16, Inception, and DenseNet) were tested on a holdout sample to produce these accuracy, recall or sensitivity, and specificity results.

Table 2. Model accuracy scores, recall or sensitivity scores, and specificity scores.

Model: accuracy / recall (sensitivity) / specificity
DenseNet: 0.76 / 0.76 / 0.49
Inception: 0.74 / 0.52 / 0.33
VGG16: 0.65 / 0.19 / 0.12
ResNet: 0.61 / 0 / 0
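The three reported metrics follow directly from a binary confusion matrix, as sketched below; the counts in the example are illustrative only and are not the study's actual confusion matrix:

```python
def metrics(tp, fp, tn, fn):
    """Accuracy, recall (sensitivity), and specificity from a binary
    confusion matrix, with positive = lipohypertrophy present."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    recall = tp / (tp + fn) if tp + fn else 0.0          # sensitivity
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return accuracy, recall, specificity

# Hypothetical counts on a 54-image holdout sample
acc, rec, spec = metrics(tp=16, fp=9, tn=24, fn=5)
print(round(acc, 2), round(rec, 2), round(spec, 2))
```

Recall is emphasized here because a missed lipohypertrophy site (a false negative) is the costlier error in this clinical setting.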
Figure 3. Our final object detection model results on a test sample reveal promising outcomes. The top row indicates the true location of lipohypertrophy, and the bottom row indicates the model’s predicted location. The number on the red box indicates the model’s confidence.
Figure 4. Our results from the YOLOv5m object detection model showcase a successful initial attempt, as shown by our precision (a). Our best F1 score (b) is around 0.78 with a confidence value of about 0.4109. Any higher confidence value causes our recall (c) to suffer dramatically, which was the focus of our optimization.

Discussion

Principal Results

As a proof of concept, we were able to demonstrate the ability of a supervised machine learning algorithm to detect lipohypertrophy in ultrasound images using a CNN, and we were able to deploy this algorithm through a web-based application to make accurate predictions on the presence or absence of lipohypertrophy in ultrasound images obtained at the point of care. The DenseNet transfer learning architecture outperformed the other architectures tested, suggesting it would be the most appropriate choice to automate the process of detecting and locating lipohypertrophy, a common dermatological complication of insulin injections.

Comparison With Prior Works

Prediction of masses in ultrasound images using machine learning techniques has been an ongoing effort in clinical practice for the past few decades. To assist physicians in diagnosing disease, many scholars have applied techniques such as regression, decision trees, Naive Bayesian classifiers, and neural networks to patients’ ultrasound imaging data [12]. Further, similar to this study, many investigators have used preprocessing techniques to extract features. In fact, Chiao et al [13] demonstrated that CNNs using ultrasound images perform better than other methods (such as radiomic models) in predicting breast cancer tumors. Another recent study showed considerable success in classifying liver masses into 1 of 5 categories with 84% accuracy, using a CNN model [14]. To our knowledge, this is the first attempt to use CNN techniques to automate the detection of lipohypertrophy, demonstrating the considerable performance of our DenseNet model both in terms of test accuracy and recall (Table 2).

Recent research has explored various complex image augmentation techniques, such as generative models, to enlarge data sets [15]; congruent with those results, we found that traditional transformations improved model performance. Furthermore, other studies [16,17] also confirmed that minimal transformations such as flipping the images led to higher prediction accuracy in their applications. DenseNet has also proved successful in similar deep learning applications using small data sets [27], which we suspect is due to its ability to reduce the number of parameters in a model.

Limitations

Although our project has demonstrated in principle that machine learning can be used to detect lipohypertrophy, there are some key limitations that should be addressed before it can be used in a clinical setting. Given the small size of our data set, more images need to be incorporated into the model before it can be used to direct patient care. Moreover, even after the addition of new images, an auditing process should be developed to ensure that our machine learning model does not propagate any biases that could cause harm to specific patient populations.

Conclusions

Previous clinical studies of lipohypertrophy have demonstrated quite a high prevalence of this condition (greater than half). More importantly, they have demonstrated a significant burden of subclinical lesions in patients with diabetes [10]. This is clinically important both due to the alterations in insulin absorption with injection proximate to a lipohypertrophic lesion [5-7] and the fact that the only treatment for this condition is avoidance [28]. Although our proof-of-concept study was limited by the fact that our model was based on a small number of images, we have successfully demonstrated the development of a model that can automatically detect lipohypertrophy in patients with diabetes. Although more work needs to be done, future studies of models developed on larger image data sets could allow for the development of a rapid, noninvasive, bedside test for subclinical lipohypertrophy that could easily be used by health care professionals unfamiliar with the use of ultrasound technology.

Acknowledgments

This work was supported by the Allan M McGavin Foundation. The funder had no role in the production of the manuscript.

Authors' Contributions

JK collected the data. EB, TB, LH, JR, and XY analyzed the data and wrote the manuscript. GM and KM designed the study and wrote the manuscript. KM takes responsibility for the contents of this paper.

Conflicts of Interest

None declared.

  1. Vardar B, Kizilci S. Incidence of lipohypertrophy in diabetic patients and a study of influencing factors. Diabetes Res Clin Pract 2007 Aug;77(2):231-236. [CrossRef] [Medline]
  2. Blanco M, Hernández MT, Strauss KW, Amaya M. Prevalence and risk factors of lipohypertrophy in insulin-injecting patients with diabetes. Diabetes Metab 2013 Oct;39(5):445-453. [CrossRef] [Medline]
  3. Fujikura J, Fujimoto M, Yasue S, Noguchi M, Masuzaki H, Hosoda K, et al. Insulin-induced lipohypertrophy: report of a case with histopathology. Endocr J 2005 Oct;52(5):623-628 [FREE Full text] [CrossRef] [Medline]
  4. Atlan-Gepner C, Bongrand P, Farnarier C, Xerri L, Choux R, Gauthier JF, et al. Insulin-induced lipoatrophy in type I diabetes: a possible tumor necrosis factor-alpha-mediated dedifferentiation of adipocytes. Diabetes Care 1996 Nov;19(11):1283-1285. [CrossRef] [Medline]
  5. Johansson UB, Amsberg S, Hannerz L, Wredling R, Adamson U, Arnqvist HJ, et al. Impaired absorption of insulin aspart from lipohypertrophic injection sites. Diabetes Care 2005 Aug;28(8):2025-2027. [CrossRef] [Medline]
  6. Thow JC, Johnson AB, Marsden S, Taylor R, Home PD. Morphology of palpably abnormal injection sites and effects on absorption of isophane (NPH) insulin. Diabet Med 1990 Nov;7(9):795-799. [CrossRef] [Medline]
  7. Young RJ, Hannan WJ, Frier BM, Steel JM, Duncan LJ. Diabetic lipohypertrophy delays insulin absorption. Diabetes Care 1984;7(5):479-480. [CrossRef] [Medline]
  8. Grassi G, Scuntero P, Trepiccioni R, Marubbi F, Strauss K. Optimizing insulin injection technique and its effect on blood glucose control. J Clin Transl Endocrinol 2014 Dec;1(4):145-150 [FREE Full text] [CrossRef] [Medline]
  9. Kordonouri O, Lauterborn R, Deiss D. Lipohypertrophy in young patients with type 1 diabetes. Diabetes Care 2002 Mar;25(3):634. [CrossRef] [Medline]
  10. Kapeluto JE, Paty BW, Chang SD, Meneilly GS. Ultrasound detection of insulin-induced lipohypertrophy in Type 1 and Type 2 diabetes. Diabet Med 2018 Oct;35(10):1383-1390. [CrossRef] [Medline]
  11. Kapeluto J, Paty BW, Chang SD, Eddy C, Meneilly G. Criteria for the detection of insulin-induced lipohypertrophy using ultrasonography. CJD 2015 Dec 01;39(6):534 [FREE Full text] [CrossRef]
  12. Huang Q, Zhang F, Li X. Machine learning in ultrasound computer-aided diagnostic systems: a survey. Biomed Res Int 2018 Mar 04;2018:1-10 [FREE Full text] [CrossRef] [Medline]
  13. Chiao J, Chen K, Liao KY, Hsieh P, Zhang G, Huang T. Detection and classification the breast tumors using mask R-CNN on sonograms. Medicine (Baltimore) 2019 May;98(19):e15200 [FREE Full text] [CrossRef] [Medline]
  14. Yasaka K, Akai H, Abe O, Kiryu S. Deep learning with convolutional neural network for differentiation of liver masses at dynamic contrast-enhanced CT: a preliminary study. Radiology 2018 Mar;286(3):887-896. [CrossRef] [Medline]
  15. Al-Dhabyani W, Gomaa M, Khaled H, Fahmy A. Deep learning approaches for data augmentation and classification of breast masses using ultrasound images. IJACSA 2019;10(5) [FREE Full text] [CrossRef]
  16. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, et al. Corrigendum: Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017 Dec 28;546(7660):686. [CrossRef] [Medline]
  17. Loey M, Manogaran G, Khalifa NEM. A deep transfer learning model with classical data augmentation and CGAN to detect COVID-19 from chest CT radiography digital images. Neural Comput Appl 2020 Oct 26:1-13 [FREE Full text] [CrossRef] [Medline]
  18. Richardson T, Kerr D. Skin-related complications of insulin therapy: epidemiology and emerging management strategies. Am J Clin Dermatol 2003;4(10):661-667. [CrossRef] [Medline]
  19. Hauner H, Stockamp B, Haastert B. Prevalence of lipohypertrophy in insulin-treated diabetic patients and predisposing factors. Exp Clin Endocrinol Diabetes 1996;104(2):106-110. [CrossRef] [Medline]
  20. Quatresooz P, Xhauflaire-Uhoda E, Piérard-Franchimont C, Piérard GE. Cellulite histopathology and related mechanobiology. Int J Cosmet Sci 2006 Jun;28(3):207-210. [CrossRef] [Medline]
  21. Flagothier C, Piérard GE, Quatresooz P. Cutaneous myospherulosis and membranous lipodystrophy: extensive presentation in a patient with severe steroid-induced dermal atrophy. J Eur Acad Dermatol Venereol 2006 Apr;20(4):457-460. [CrossRef] [Medline]
  22. Flagothier C, Quatresooz P, Pierard G. [Electromagnetic lipolysis and semicircular lipoatrophy of the thighs]. Ann Dermatol Venereol 2006;133(6-7):577-580. [CrossRef] [Medline]
  23. Raile K, Noelle V, Landgraf R, Schwarz HP. Insulin antibodies are associated with lipoatrophy but also with lipohypertrophy in children and adolescents with type 1 diabetes. Exp Clin Endocrinol Diabetes 2001;109(8):393-396. [CrossRef] [Medline]
  24. Zhang H, Han L, Chen K, Peng Y, Lin J. Diagnostic efficiency of the breast ultrasound computer-aided prediction model based on convolutional neural network in breast cancer. J Digit Imaging 2020 Oct;33(5):1218-1223 [FREE Full text] [CrossRef] [Medline]
  25. Luo Y, Zhang Y, Sun X, Dai H, Chen X. Intelligent solutions in chest abnormality detection based on YOLOv5 and ResNet50. J Healthc Eng 2021;2021:2267635 [FREE Full text] [CrossRef] [Medline]
  26. Aly GH, Marey M, El-Sayed SA, Tolba MF. YOLO based breast masses detection and classification in full-field digital mammograms. Comput Methods Programs Biomed 2021 Mar;200:105823. [CrossRef] [Medline]
  27. Buslaev A, Iglovikov VI, Khvedchenya E, Parinov A, Druzhinin M, Kalinin AA. Albumentations: fast and flexible image augmentations. Information 2020 Feb 24;11(2):125. [CrossRef]
  28. Smith M, Clapham L, Strauss K. UK lipohypertrophy interventional study. Diabetes Res Clin Pract 2017 Apr;126:248-253. [CrossRef] [Medline]


Abbreviations
CNN: convolutional neural network


Edited by A Mavragani; submitted 09.11.21; peer-reviewed by X Wang, CI Sartorão Filho; comments to author 20.01.22; revised version received 14.03.22; accepted 09.04.22; published 06.05.22

Copyright

©Ela Bandari, Tomas Beuzen, Lara Habashy, Javairia Raza, Xudong Yang, Jordanna Kapeluto, Graydon Meneilly, Kenneth Madden. Originally published in JMIR Formative Research (https://formative.jmir.org), 06.05.2022.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication on https://formative.jmir.org, as well as this copyright and license information must be included.