Development of a Noninvasive Blood Glucose Monitoring System Prototype: Pilot Study


Original Paper

1Department of Information Technology, Kennesaw State University, Marietta, GA, United States

2Department of Software Engineering and Game Development, Kennesaw State University, Marietta, GA, United States

3Department of Exercise Science and Sport Management, Kennesaw State University, Kennesaw, GA, United States

4Department of Computer Science, Marquette University, Milwaukee, WI, United States

Corresponding Author:

Maria Valero, PhD

Department of Information Technology

Kennesaw State University

680 Arntson Drive

Suite J312

Marietta, GA, 30060

United States

Phone: 1 470 578 4552

Email: mvalero2@kennesaw.edu


Background: Diabetes mellitus is a severe disease characterized by high blood glucose levels resulting from dysregulation of the hormone insulin. Diabetes is managed through physical activity and dietary modification and requires careful monitoring of blood glucose concentration. Blood glucose concentration is typically monitored throughout the day by analyzing a sample of blood drawn from a finger prick using a commercially available glucometer. However, this process is invasive and painful, and leads to a risk of infection. Therefore, there is an urgent need for noninvasive, inexpensive, novel platforms for continuous blood sugar monitoring.

Objective: This study aimed to describe a pilot test of the accuracy of a noninvasive glucose monitoring prototype that uses laser technology based on near-infrared spectroscopy.

Methods: Our system is based on a Raspberry Pi, a portable camera (Raspberry Pi camera), and a visible light laser. The Raspberry Pi camera captures a set of images as the visible light laser passes through skin tissue. The glucose concentration is estimated by an artificial neural network model using the absorption and scattering of light in the skin tissue. The prototype was developed using TensorFlow, Keras, and Python code. A pilot study was run with 8 volunteers who used the prototype on their fingers and ears. Blood glucose values obtained with the prototype were compared with readings from commercially available glucometers to estimate accuracy.

Results: When using images from the finger, the accuracy of the prototype is 79%. For images taken from the ear, the accuracy drops to 62%. Though the current data set is limited, these results are encouraging. However, three main limitations need to be addressed in future studies of the prototype: (1) increasing the size of the database to improve the robustness of the artificial neural network model; (2) analyzing the impact of external factors such as skin color, skin thickness, and ambient temperature on the current prototype; and (3) improving the prototype enclosure to make it suitable for easy finger and ear placement.

Conclusions: Our pilot study demonstrates that blood glucose concentration can be estimated using a small hardware prototype that uses infrared images of human tissue. Although more studies need to be conducted to overcome limitations, this pilot study shows that an affordable device can be used to avoid the use of blood and multiple finger pricks for blood glucose monitoring in the diabetic population.

JMIR Form Res 2022;6(8):e38664

doi:10.2196/38664

Keywords



Background

Diabetes affects approximately one out of every 10 people in the United States [1]. Its prevalence has increased from 23.4 million Americans in 2015 to 30.3 million in 2021 and continues to rise at an alarming rate [2].

Successful management of diabetes involves monitoring blood glucose levels multiple times per day. The standard method for monitoring blood glucose concentration is through the use of a glucometer [3]. This device determines glucose concentration from a droplet of blood obtained from a finger prick or a laboratory blood draw. Taking repeated finger pricks over the course of a day is painful and creates a risk of infection at the collection site [4]. Therefore, noninvasive methods are an attractive alternative; however, those available today have several limitations.

Three main types of noninvasive glucose monitoring devices are currently available: (1) noninvasive optical glucose monitoring (NIO-GM), which estimates glucose optically; (2) noninvasive fluid sampling (NIFS-GM), which estimates glucose from a fluid sample; and (3) minimally invasive devices (MI-GM), which use a sensor inserted into the subcutaneous tissue [5]. Figure 1 illustrates an example of each type of noninvasive and minimally invasive blood glucose monitoring.

Figure 1. Examples of (A) NIO-GM (adapted from Lubinski et al [6]), (B) MI-GM (adapted from Sjö [7], published under Creative Commons Attribution-Share Alike 4.0 International License [8]), and (C) NIFS-GM (adapted from Park et al [9], published under Creative Commons Attribution NonCommercial License 4.0 International License [10]). MI-GM: minimally invasive device; NIFS-GM: noninvasive fluid sampling; NIO-GM: noninvasive optical glucose monitoring.

NIO-GM estimates glucose concentration from energy absorption, reflection, or scattering of a light beam directed through the tissue [11]. These devices have the advantage of being both portable and inexpensive. NIO-GM technology includes fluorescence spectroscopy, which may lead to toxicity from fluorophores [12,13]; Raman spectroscopy, criticized for its lengthy spectral acquisition time and poor signal-to-noise ratio [14,15]; photoacoustic spectroscopy, which introduces noise from its sensitivity to environmental factors [15,16]; optical coherence tomography, which is overly sensitive to skin temperature [17]; and occlusion spectroscopy, known to result in signal drift [18]. In contrast, we have developed a NIO-GM device using near-infrared absorption spectroscopy, which is more practical and cost-efficient than those described above [19-23].

Objectives

Here, we describe the development of a novel noninvasive glucose monitoring system that uses the computing power of sensors and Internet of Things devices to continuously analyze blood glucose from a microcomputer and a sensor embedded within a clip positioned on the finger or ear. The prototype uses infrared spectroscopy to create images of the rotational and vibrational transitions of chemical bonds within the glucose molecule, and incident light reflection to measure their corresponding fluctuation. The images are converted into an array list, which is used to provide entries for an artificial neural network (ANN) to create an estimate of blood glucose concentration. The prototype is easy to use and is paired with a mobile app for free-living environments. Figure 2 shows an overview of the proposed system.

Figure 2. Overview of the proposed noninvasive blood glucose monitoring system.

Physical Theory

Our prototype detects blood glucose concentration using noninvasive absorption spectroscopy optical glucose monitoring [24]. It is based on the Beer-Lambert law of absorption shown in equation 1 [24]:

I = I0 · e^(−εlc) (1)

where I0 is the initial light intensity (W/cm2), I is the light intensity at depth l within the absorbing medium in W/cm2, l is the absorption depth within the medium in centimeters, ε is the molar extinction coefficient in L/(mmol cm), and c is the concentration of absorbing molecules in mmol/L. The product of ε and c is proportional to the absorption coefficient (µa).

The concentration of absorbing molecules can be derived from the above equation. However, other blood components and absorbing tissue components also affect the amount of light absorbed. As a result, the total absorption coefficient is the summation of the absorption coefficients of all the absorbing components [25]. Therefore, to minimize the absorption due to all the other components, the wavelength of the light source should be chosen so that the light is strongly absorbed by glucose while blood and tissue components remain mostly transparent to it.
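As a minimal numerical illustration of equation 1 (a sketch in Python; the extinction coefficient and path length below are placeholder values, not calibrated constants from our device), the concentration of the absorbing species can be recovered by inverting the Beer-Lambert law:

```python
import math

def glucose_concentration(i0, i, epsilon, path_cm):
    """Invert the Beer-Lambert law, I = I0 * exp(-epsilon * l * c),
    to estimate the concentration c of the absorbing species.

    i0, i    -- incident and transmitted light intensity (same units, eg, W/cm^2)
    epsilon  -- molar extinction coefficient, L/(mmol cm)
    path_cm  -- absorption path length through the tissue, cm
    Returns the concentration in mmol/L.
    """
    return math.log(i0 / i) / (epsilon * path_cm)

# Hypothetical example values, for illustration only:
print(glucose_concentration(i0=1.0, i=0.92, epsilon=0.012, path_cm=1.2))
```

In practice, the prototype does not invert the law analytically; the ANN model described below learns the mapping from the captured images to the glucose range.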

Hardware Configuration

We used Internet of Things technologies, which offer useful computing power at low energy consumption, together with a Raspberry Pi camera to build the glucose-monitoring prototype [26]. The Raspberry Pi camera captures the images, while the laser provides the light whose absorption through the tissue is measured. The specifications of the laser can be found in Table 1.

A small clip that can be positioned on a finger or earlobe holds the laser on the top half and the camera on the bottom. Figure 3 depicts the elements of the prototype (Raspberry Pi, camera, and laser light). The prototype has been named GlucoCheck.

The Raspberry Pi camera captures one image every 8 seconds over 2 minutes, for a total of 15 images. Brightness and contrast levels are set to 70 cycles/degree, camera ISO sensitivity is set to 800, and resolution is set to 640 × 480. Figures 4 and 5 show the prototype attached to the finger and ear, respectively.
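A minimal capture loop of this kind could be sketched as follows (illustrative only, using the Python picamera library; the output path and the exact brightness/contrast mapping are assumptions rather than the prototype's production code):

```python
import time
from picamera import PiCamera

camera = PiCamera()
camera.resolution = (640, 480)   # resolution used by the prototype
camera.iso = 800                 # camera ISO sensitivity
camera.brightness = 70           # brightness level
camera.contrast = 70             # contrast level

# Capture one frame every 8 seconds over 2 minutes (15 images in total).
for i in range(15):
    camera.capture("/home/pi/glucocheck/frame_%02d.jpg" % i)  # placeholder path
    time.sleep(8)

camera.close()
```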

The materials for the GlucoCheck prototype cost approximately US $79-$154 in 2022, depending on the availability of chips, which has been an ongoing issue in recent months. Computer boards are typically abundant, but the 2022 chip shortage inflated prices compared with previous years.

Table 1. Light laser specifications.
Specification | Value
Brand | Icstation
Model number | KY-008 5mW Red Laser Transmitter
Module | Infrared
Part number | 276810
Working voltage | 5 mW
Wavelength | >650 nm
Size | 24 × 15 mm (0.94 × 0.59 inches; length × width)
Figure 3. GlucoCheck device.
Figure 4. Prototype clipped to the finger.
Figure 5. Prototype clipped to the ear.

ANN Model

Due to the large number of images used by our prototype, we use a convolutional neural network (CNN/ConvNet) approach. The convolutional layer is the first layer of a CNN and is the main building block that handles most of the computational work. We imported the necessary libraries, including TensorFlow, Keras, MobileNetV2, Matplotlib, and NumPy. The image data set was preprocessed and converted into arrays, then stored in a list format with assigned labels. Finally, the images were appended to a single data array with a corresponding label array, and data augmentation techniques, including cropping, zooming, height and width shifts, and horizontal flipping, were used to train our model.
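The image-to-array conversion step can be sketched as follows (a simplified illustration; the directory layout, label names, and image size are assumptions, not the exact data pipeline):

```python
import os
import numpy as np
from tensorflow.keras.preprocessing.image import load_img, img_to_array

DATA_DIR = "dataset/finger"                    # hypothetical layout: one subfolder per glucose range
CATEGORIES = ["85-95", "96-110", "111-125"]    # glucose ranges in mg/dL

data, labels = [], []
for category in CATEGORIES:
    folder = os.path.join(DATA_DIR, category)
    for fname in os.listdir(folder):
        img = load_img(os.path.join(folder, fname), target_size=(224, 224))
        data.append(img_to_array(img) / 255.0)  # scale pixel values to [0, 1]
        labels.append(category)

data = np.array(data, dtype="float32")   # single data array
labels = np.array(labels)                # corresponding label array
```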

Our base model, MobileNetV2, is a lightweight, 53-layer deep CNN used to improve the classification of images with a limited data set. The next step was to build the head model, which sits on top of the base model. The first layer of the head is the activation layer, which uses the rectified linear unit (ReLu) activation function [27]. The ReLu is a piecewise linear function that outputs the input directly if it is positive; otherwise, it outputs zero. It has become the default activation function for many types of neural networks because a model that uses it is easier to train and often achieves better performance [28]. The next layer is the pooling layer, which down-samples the feature maps and is applied to each slice of the 3D volume. The fully connected layer, which involves flattening, is the final step: the entire pooled feature map matrix is transformed into a single column, which is then supplied to the neural network for processing. We put these attributes together to build the model using the fully connected layers and classified the output with a softmax activation function. The model was trained using the Adam optimizer with 20 epochs, a batch size of 1, an initial learning rate of 1e-4, and a dropout rate of 0.5. The next step was to train and test the model: 80% of the data was used for training and 20% for testing. Figure 6 shows the ANN used for our glucose estimation process.
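A Keras sketch of this base-plus-head architecture and the reported training settings is shown below (a simplified illustration under assumed input dimensions and dense-layer sizes; it mirrors the layer sequence described above rather than reproducing the exact production code):

```python
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras import layers, models, optimizers

NUM_CLASSES = 3   # glucose ranges: 85-95, 96-110, and 111-125 mg/dL

# Base model: lightweight MobileNetV2 used as a feature extractor.
base = MobileNetV2(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

# Head model: activation, pooling, flattening, fully connected layers, and softmax output.
x = layers.Activation("relu")(base.output)
x = layers.AveragePooling2D(pool_size=(7, 7))(x)
x = layers.Flatten()(x)
x = layers.Dense(128, activation="relu")(x)
x = layers.Dropout(0.5)(x)                      # 0.5 dropout, as reported
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs=base.input, outputs=outputs)
model.compile(optimizer=optimizers.Adam(learning_rate=1e-4),  # Adam, initial learning rate 1e-4
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Training as reported: 20 epochs, batch size of 1, 80/20 train/test split.
# model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=20, batch_size=1)
```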

Figure 6. Artificial neural network model used for glucose estimation. ReLu: rectified linear unit.

Cloud Integration for Real-time Measures

The glucose concentration obtained from the ANN model is sent to the cloud using HTTPS. Next, we configure an InfluxDB [29] database in the cloud to store the data. InfluxDB is an open-source time-series database developed by the company InfluxData. It is written in the Go programming language for storing and retrieving time series data in fields such as operations monitoring, application metrics, Internet of Things sensor data, and real-time analytics. InfluxDB is flexible enough to store data from each subject separately using tags. The integration with the cloud uses the Raspberry Pi, which is connected in real time, and the computed values are displayed on a mobile app for the user.
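A write operation of this kind can be sketched with the influxdb-client Python package for InfluxDB 2.x (the URL, token, organization, bucket, and tag names below are placeholders, not the actual deployment values):

```python
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Placeholder connection settings for the cloud-hosted InfluxDB instance.
client = InfluxDBClient(url="https://cloud.example.com:8086",
                        token="REPLACE_WITH_TOKEN",
                        org="glucocheck-org")
write_api = client.write_api(write_options=SYNCHRONOUS)

# Each estimate is stored as a time-series point, tagged per subject
# so that data from each participant can be queried separately.
point = (Point("glucose_reading")
         .tag("subject", "S01")
         .field("estimated_mg_dl", 104.0))

write_api.write(bucket="glucocheck", record=point)
client.close()
```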

Model Testing

Glucose data from 8 individuals were used to train and test the model. Each participant was asked to fast for one hour following an unstructured meal prior to the testing visit. Blood glucose concentration was measured using a commercially available glucometer (FORA 6 Connect BG50 Blood Glucose Starter Testing [17]), according to manufacturer instructions. The GlucoCheck prototype was used to capture images from each participant at two positions: the index finger and the earlobe. As mentioned previously, 80% of the data was used for training the model and 20% for testing. The LabelBinarizer module of the Python library sklearn was used to convert the glucose categories (85-95 mg/dL, 96-110 mg/dL, 111-125 mg/dL) into a binary label format stored in an array associated with the corresponding image data. Data augmentation (cropping, zooming, height and width shifts, horizontal flipping) was used to enlarge our data set for training and testing the model. The data were then passed to our model for glucose estimation. Separate models were developed for images from the finger and images from the earlobe. Figure 7 illustrates the workflow of the protocol.
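The labeling, splitting, and augmentation steps can be sketched as follows (assuming the data and labels arrays from the earlier preprocessing sketch; the specific augmentation ranges are illustrative):

```python
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# One-hot (binary) encoding of the three glucose ranges (85-95, 96-110, 111-125 mg/dL).
binarizer = LabelBinarizer()
labels_onehot = binarizer.fit_transform(labels)

# 80% of the data for training, 20% for testing.
x_train, x_test, y_train, y_test = train_test_split(
    data, labels_onehot, test_size=0.20, stratify=labels_onehot, random_state=42)

# Data augmentation: zooming, height and width shifts, horizontal flipping.
augmenter = ImageDataGenerator(zoom_range=0.15,
                               width_shift_range=0.2,
                               height_shift_range=0.2,
                               horizontal_flip=True)

# The augmented stream is then used to fit the model, for example:
# model.fit(augmenter.flow(x_train, y_train, batch_size=1),
#           validation_data=(x_test, y_test), epochs=20)
```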

Figure 7. Method workflow.

Ethical Considerations

For this pilot study, the following ethical considerations were in place. First, the Institutional Review Board of Kennesaw State University approved the study (IRB-FY22-318). In addition, participation in the study was voluntary. Participants were free to opt in or out of the study at any time. Informed consent was required to inform the participant about the study’s purpose, risks, and funding before they agreed or declined to join. Finally, any personally identifiable data were anonymized and kept confidential for the research group.


Experimental Data

Figure 8 shows images collected from a finger. The images were taken after the finger prick at seconds 8 (top left), 16 (top right), 24 (bottom left), and 32 (bottom right). Figure 9 shows images collected from an earlobe at seconds 8, 16, 24, and 32 after the finger prick.

All the images were then appended to a single data array with a corresponding label array. We then performed data augmentation (cropping, zooming, height and width shifts, horizontal flipping), which allowed us to expand the variety of data available for training the model, since our initial data set was small. The data were then passed to our model for glucose estimation.

Figure 8. Fingertip images collected from volunteers.
Figure 9. Ear/earlobe images collected from volunteers.

Accuracy Evaluation

The accuracy of the model was assessed with a confusion matrix, which illustrates the proportion of images that were correctly classified. Blood glucose values were grouped as 111-125 mg/dL, 85-95 mg/dL, and 96-110 mg/dL, shown along the x and y axes.
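The matrix and the overall accuracy can be computed from the held-out test set roughly as follows (a sketch; the variable names follow the earlier snippets and are assumptions):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score

# Predict the glucose range for each held-out test image.
pred_classes = np.argmax(model.predict(x_test), axis=1)
true_classes = np.argmax(y_test, axis=1)

# Rows correspond to the reference ranges and columns to the predicted ranges.
print(confusion_matrix(true_classes, pred_classes))
print("accuracy:", accuracy_score(true_classes, pred_classes))
```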

Figure 10 shows the confusion matrix for the glucose estimates made with the prototype worn on the finger and indicates a 79% accuracy for the ANN model. The ANN model classified 8 images correctly and 4 images incorrectly in the 111-125 mg/dL category. For the 85-95 mg/dL category, 18 images were correctly classified and 0 images were classified incorrectly. All 3 images in the 96-110 mg/dL category were incorrectly classified; this poor performance for that range is due to the limited data set for these values.

Figure 11 shows the results of the ANN model for the ear image data set, which achieved around 62% accuracy. The model classified 5 images correctly and 4 images incorrectly in the 111-125 mg/dL category. In addition, 6 images were correctly classified and 0 images were classified incorrectly in the 85-95 mg/dL category. Finally, 2 images were correctly classified and 4 images were incorrectly classified in the 96-110 mg/dL category.

Figure 10. Confusion matrix of finger artificial neural network model. The x-axis refers to the correct estimates, while the y-axis shows incorrect estimates. The unit for all x and y values is mg/dL.
Figure 11. Confusion matrix for ear artificial neural network model. The unit for all x and y values is mg/dL.

Mobile App

Our mobile app “GlucoCheck” is connected to our cloud InfluxDB database and provides continuous glucose monitoring and history data for users. Users can review their current glucose measurement and also view a chart of their previous measurements, allowing them to track glucose variation over a specific period of time. Figure 12 shows the initial screen on the app (left) and the display of glucose readings from the prototype (right).

Users may also enter readings from a glucometer into the app to track and compare measurements from other devices, as illustrated in Figure 13.
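On the app side, the reading history can be retrieved from InfluxDB with a Flux query issued through the same Python client (a sketch; the bucket, measurement, and tag names match the placeholders used earlier and are assumptions):

```python
from influxdb_client import InfluxDBClient

client = InfluxDBClient(url="https://cloud.example.com:8086",
                        token="REPLACE_WITH_TOKEN",
                        org="glucocheck-org")

# Flux query: last 7 days of readings for one subject, oldest first.
flux = '''
from(bucket: "glucocheck")
  |> range(start: -7d)
  |> filter(fn: (r) => r._measurement == "glucose_reading" and r.subject == "S01")
  |> sort(columns: ["_time"])
'''

for table in client.query_api().query(flux):
    for record in table.records:
        print(record.get_time(), record.get_value(), "mg/dL")

client.close()
```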

Figure 12. Mobile app interface showing blood glucose level.
Figure 13. Option to enter glucose level manually. GlucoCheck readings (left) can be compared with other glucometer measures, entered by the user manually (right).

Principal Findings

Here we detail and test a novel NIO-GM prototype that relies on an ANN and camera-based technology and is paired with a user-friendly mobile app. Results indicate that these optical techniques and machine learning methodologies can estimate blood glucose from light transmitted through and absorbed by the skin tissue. GlucoCheck achieved an acceptable 79% accuracy when images from the finger were analyzed and 62% accuracy for images from the earlobe position.

Table 2 compares GlucoCheck with previously tested techniques. The performance of GlucoCheck is comparable with that reported in other studies, and it has advantages over other technologies. The use of an integrated computer board (Raspberry Pi) and integration with the cloud gives GlucoCheck the unique ability to display values in real time via a mobile app. Additionally, the optional earlobe position of GlucoCheck is unique and allows the device to be developed as an earring.

Table 2. Comparison of this study with previous work.
Study | Body part | Technique | Number of subjects | Accuracy | Real-time | Mobile app | Year
This study (GlucoCheck) | Finger/earlobe | Binary format of image and convolutional neural network | 8 | 79% | Yes | Yes | 2022
[19] | Finger | Infrared-multivariate calibration model | 3 | N/A^a | No | No | 1992
[20] | Finger | Histogram and artificial neural network | 514 | 90% | No | Yes | 2019
[21] | Oral mucosa | Attenuated total reflection and hollow fibers | 131/414 | 86.3% | No | No | 2018
[22] | Finger and wrist | Reflected optical signal | 12 | Correlation of 0.86 | No | No | 2019
[23] | Forearm | Spectra analysis of tissue light path | 1 | 87.5% | No | No | 2003

^a N/A: not applicable.

Limitations

Future research is needed to address several limitations in the development of a more reliable noninvasive blood glucose prototype based on light. First, a large amount of data is needed to train machine learning and deep learning models for complicated tasks. Collecting data from people with diabetes is often time-consuming and expensive compared with other data collection tasks. Consequently, many studies face a shortage of data during their research cycles [30-35]. In this preliminary work, we used data augmentation techniques to generate additional data points from our preliminary data set. Additional data will be needed for the ANN model to detect the exact glucose value instead of a range.

Second, depending on the type of radiation used, a viable NIO-GM must account for differences in skin pigmentation, surface roughness, skin thickness, breathing artifacts, blood flow, body movements, and ambient temperature [36]. Accurate measurement of the absorption and scattering properties of human skin remains challenging in biomedical optics and biomedical engineering [37]. Similarly, skin roughness and pigmentation can affect light distribution as it propagates through the skin [38]. These factors must be addressed in future technology. Finally, the prototype enclosure design must be comfortable and usable to be effective.

Conclusion

In this paper, we have presented a noninvasive glucose monitoring system that leverages the computational power of Internet of Things devices and can be used for diabetes management. The prototype is based on images taken from the finger or ear and does not require blood samples. An ANN model was used to classify and estimate blood glucose concentrations from the images. When using images from the finger, the accuracy of GlucoCheck was 79%. For images taken from the ear, the accuracy decreased to 62%. Though the current data set is limited, these results are encouraging. Future studies are needed to address three main limitations: (1) the size of the database (by expanding the data collection process); (2) the prototype enclosure design (by working with biomedical and hardware engineers); and (3) the external factors (by analyzing the impact of skin color, skin thickness, and ambient temperature, among others). If successful, this prototype will be an attractive, life-changing technology for people with diabetes.

Acknowledgments

The authors greatly appreciate the reviewers who contributed to improving the manuscript. They also would like to acknowledge the College of Computing and Software Engineering and the Office of Research at Kennesaw State University for funding this project.

Conflicts of Interest

None declared.

  1. Diabetes. USA Health.   URL: https://www.usahealthsystem.com/specialties/diabetes [accessed 2022-03-01]
  2. Diabetes. Centers for Disease Control and Prevention.   URL: https://www.cdc.gov/diabetes/ [accessed 2022-07-14]
  3. Salacinski AJ, Alford M, Drevets K, Hart S, Hunt BE. Validity and reliability of a glucometer against industry reference standards. J Diabetes Sci Technol 2014 Jan;8(1):95-99 [FREE Full text] [CrossRef] [Medline]
  4. Farage MA, Miller KW, Berardesca E, Maibach HI. Clinical implications of aging skin: cutaneous disorders in the elderly. Am J Clin Dermatol 2009;10(2):73-86. [CrossRef] [Medline]
  5. Shang T, Zhang JY, Thomas A, Arnold MA, Vetter BN, Heinemann L, et al. Products for monitoring glucose levels in the human body with noninvasive optical, noninvasive fluid sampling, or minimally invasive technologies. J Diabetes Sci Technol 2022 Jan;16(1):168-214 [FREE Full text] [CrossRef] [Medline]
  6. Lubinski T, Janik S, Mäntele W, Kaluza M, Plotka B. Blood glucose control without pricking. A novel infrared spectroscopic technique for diabetes patients. Blood. 2020.   URL: https://analyticalscience.wiley.com/do/10.1002/was.00060006#was.auth.LubinskiT [accessed 2022-07-14]
  7. Sjö. Blood glucose meter FreeStyle Libre from Abbott. Wikimedia Commons. 2016 Feb 20.   URL: https://commons.wikimedia.org/wiki/File:BGM_twopart.JPG
  8. Attribution-ShareAlike 4.0 International (CC BY-SA 4.0). Creative Commons.   URL: https://creativecommons.org/licenses/by-sa/4.0/
  9. Park J, Kim J, Kim S, Cheong WH, Jang J, Park Y, et al. Soft, smart contact lenses with integrations of wireless circuits, glucose sensors, and displays. Sci Adv 2018 Jan;4(1):eaap9841 [FREE Full text] [CrossRef] [Medline]
  10. Attribution-NonCommercial 4.0 International (CC BY-NC 4.0). Creative Commons.   URL: https://creativecommons.org/licenses/by-nc/4.0/
  11. Pitzer KR, Desai S, Dunn T, Edelman S, Jayalakshmi Y, Kennedy J, et al. Detection of hypoglycemia with the GlucoWatch biographer. Diabetes Care 2001 May;24(5):881-885. [CrossRef] [Medline]
  12. Pickup JC, Khan F, Zhi Z, Coulter J, Birch DJS. Fluorescence intensity- and lifetime-based glucose sensing using glucose/galactose-binding protein. J Diabetes Sci Technol 2013 Jan 01;7(1):62-71 [FREE Full text] [CrossRef] [Medline]
  13. Hull EL, Matter NI, Olson BP, Ediger MN, Magee AJ, Way JF, et al. Noninvasive skin fluorescence spectroscopy for detection of abnormal glucose tolerance. J Clin Transl Endocrinol 2014 Sep;1(3):92-99 [FREE Full text] [CrossRef] [Medline]
  14. Enejder AMK, Scecina TG, Oh J, Hunter M, Shih W, Sasic S, et al. Raman spectroscopy for noninvasive glucose measurements. J Biomed Opt 2005;10(3):031114 [FREE Full text] [CrossRef] [Medline]
  15. Vashist SK. Non-invasive glucose monitoring technology in diabetes management: a review. Anal Chim Acta 2012 Oct 31;750:16-27. [CrossRef] [Medline]
  16. Pai PP, Sanki PK, Sahoo SK, De A, Bhattacharya S, Banerjee S. Cloud computing-based non-invasive glucose monitoring for diabetic care. IEEE Trans Circuits Syst I 2018 Feb;65(2):663-676. [CrossRef]
  17. Haxha S, Jhoja J. Optical based noninvasive glucose monitoring sensor prototype. IEEE Photonics J 2016 Dec;8(6):1-11. [CrossRef]
  18. Amir O, Weinstein D, Zilberman S, Less M, Perl-Treves D, Primack H, et al. Continuous noninvasive glucose monitoring technology based on 'occlusion spectroscopy'. J Diabetes Sci Technol 2007 Jul;1(4):463-469 [FREE Full text] [CrossRef] [Medline]
  19. Robinson MR, Eaton RP, Haaland DM, Koepp GW, Thomas EV, Stallard BR, et al. Noninvasive glucose monitoring in diabetic patients: a preliminary evaluation. Clin Chem 1992 Sep;38(9):1618-1622. [Medline]
  20. Alarcón-Paredes A, Francisco-García V, Guzmán-Guzmán IP, Cantillo-Negrete J, Cuevas-Valencia RE, Alonso-Silverio GA. An IoT-based non-invasive glucose level monitoring system using Raspberry Pi. Applied Sciences 2019 Jul 28;9(15):3046. [CrossRef]
  21. Kasahara R, Kino S, Soyama S, Matsuura Y. Noninvasive glucose monitoring using mid-infrared absorption spectroscopy based on a few wavenumbers. Biomed Opt Express 2018 Jan 01;9(1):289-302 [FREE Full text] [CrossRef] [Medline]
  22. Rachim VP, Chung WY. Wearable-band type visible-near infrared optical biosensor for non-invasive blood glucose monitoring. Sensors and Actuators B: Chemical 2019 May;286:173-180. [CrossRef]
  23. Maruo K, Tsurugi M, Chin J, Ota T, Arimoto H, Yamada Y, et al. Noninvasive blood glucose assay using a newly developed near-infrared system. IEEE J Select Topics Quantum Electron 2003 Mar;9(2):322-330. [CrossRef]
  24. Singh K, Sandhu G, Lark B, Sud S. Molar extinction coefficients of some carbohydrates in aqueous solutions. Pramana - J Phys 2002 Mar;58(3):521-528. [CrossRef]
  25. Jacques SL. Optical properties of biological tissues: a review. Phys Med Biol 2013 Jun 07;58(11):R37-R61. [CrossRef] [Medline]
  26. Raspberry Pi.   URL: https://www.raspberrypi.com [accessed 2022-07-14]
  27. Ramachandran P, Zoph B, Le Q. Searching for activation functions. ArXiv. Preprint posted online on October 16, 2017   URL: https://arxiv.org/abs/1710.05941 [accessed 2022-07-14]
  28. Maier A, Syben C, Lasser T, Riess C. A gentle introduction to deep learning in medical image processing. Z Med Phys 2019 May;29(2):86-101 [FREE Full text] [CrossRef] [Medline]
  29. InfluxData.   URL: https://www.influxdata.com/ [accessed 2022-07-14]
  30. Pham T, Tran T, Phung D, Venkatesh S. Predicting healthcare trajectories from medical records: A deep learning approach. J Biomed Inform 2017 May;69:218-229 [FREE Full text] [CrossRef] [Medline]
  31. Lekha S, M S. Real-time non-invasive detection and classification of diabetes using modified convolution neural network. IEEE J Biomed Health Inform 2018 Sep;22(5):1630-1636. [CrossRef] [Medline]
  32. G S, R V, KP S. Diabetes detection using deep learning algorithms. ICT Express 2018 Dec;4(4):243-246. [CrossRef]
  33. Spänig S, Emberger-Klein A, Sowa J, Canbay A, Menrad K, Heider D. The virtual doctor: An interactive clinical-decision-support system based on deep learning for non-invasive prediction of diabetes. Artif Intell Med 2019 Sep;100:101706. [CrossRef] [Medline]
  34. Faruqui SHA, Du Y, Meka R, Alaeddini A, Li C, Shirinkam S, et al. Development of a deep learning model for dynamic forecasting of blood glucose level for type 2 diabetes mellitus: Secondary analysis of a randomized controlled trial. JMIR mHealth uHealth 2019 Nov 01;7(11):e14452 [FREE Full text] [CrossRef] [Medline]
  35. Martinsson J, Schliep A, Eliasson B, Mogren O. Blood glucose prediction with variance estimation using recurrent neural networks. J Healthc Inform Res 2020 Mar;4(1):1-18 [FREE Full text] [CrossRef] [Medline]
  36. Ferrante do Amaral CE, Wolf B. Current development in non-invasive glucose monitoring. Med Eng Phys 2008 Jun;30(5):541-549. [CrossRef] [Medline]
  37. van Gemert MJ, Jacques S, Sterenborg H, Star W. Skin optics. IEEE Trans Biomed Eng 1989 Dec;36(12):1146-1154. [CrossRef] [Medline]
  38. Lu JQ, Hu X, Dong K. Modeling of the rough-interface effect on a converging light beam propagating in a skin tissue phantom. Appl Opt 2000 Nov 01;39(31):5890-5897. [CrossRef] [Medline]


ANN: artificial neural network
CNN: convolutional neural network
MI-GM: minimally invasive device
NIFS-GM: noninvasive fluid sampling
NIO-GM: noninvasive optical glucose monitoring
ReLu: rectified linear unit


Edited by A Mavragani; submitted 12.04.22; peer-reviewed by M Khokhar, K Adapa; comments to author 28.05.22; revised version received 06.07.22; accepted 09.07.22; published 26.08.22

Copyright

©Maria Valero, Priyanka Pola, Oluwaseyi Falaiye, Katherine H Ingram, Liang Zhao, Hossain Shahriar, Sheikh Iqbal Ahamed. Originally published in JMIR Formative Research (https://formative.jmir.org), 26.08.2022.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication on https://formative.jmir.org, as well as this copyright and license information must be included.