Published on 14.03.2023 in Vol 7 (2023)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/45355.
Predicting Pain in People With Sickle Cell Disease in the Day Hospital Using the Commercial Wearable Apple Watch: Feasibility Study


Original Paper

1Duke Sickle Cell Comprehensive Care Unit, Department of Medicine, Division of Hematology, Duke University Hospital, Durham, NC, United States

2Brody School of Medicine, East Carolina University, Greenville, NC, United States

3Department of Pediatric Hematology, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, Netherlands

4Department of Engineering Sciences and Applied Mathematics, Northwestern University, Evanston, IL, United States

5Joan & Sanford I Weill Medical College, Cornell University, New York, NY, United States

6University of North Carolina at Chapel Hill, Chapel Hill, NC, United States

Corresponding Author:

Rebecca Sofia Stojancic, BA, BSc

Duke Sickle Cell Comprehensive Care Unit

Department of Medicine, Division of Hematology

Duke University Hospital

40 Duke Medicine Cir

Clinic 2N

Durham, NC, 27710

United States

Phone: 1 919 684 0628

Email: rsstojan@ncsu.edu


Background: Sickle cell disease (SCD) is a genetic red blood cell disorder associated with severe complications including chronic anemia, stroke, and vaso-occlusive crises (VOCs). VOCs are unpredictable, difficult to treat, and the leading cause of hospitalization. Recent efforts have focused on the use of mobile health technology to develop algorithms to predict pain in people with sickle cell disease. Combining the data collection abilities of a consumer wearable, such as the Apple Watch, and machine learning techniques may help us better understand the pain experience and find trends to predict pain from VOCs.

Objective: The aim of this study is to (1) determine the feasibility of using the Apple Watch to predict the pain scores in people with sickle cell disease admitted to the Duke University SCD Day Hospital, referred to as the Day Hospital, and (2) build and evaluate machine learning algorithms to predict the pain scores of VOCs with the Apple Watch.

Methods: Following approval of the institutional review board, patients with sickle cell disease, older than 18 years, and admitted to the Day Hospital for a VOC between July 2021 and September 2021 were approached to participate in the study. Participants were provided with an Apple Watch Series 3, which was worn for the duration of their visit. Data collected from the Apple Watch included heart rate, heart rate variability (calculated), and calories. Pain scores and vital signs were collected from the electronic medical record. Data were analyzed using 3 different machine learning models: multinomial logistic regression, gradient boosting, and random forest, as well as 2 null models, to assess the accuracy of pain score prediction. The evaluation metrics considered were accuracy (F1-score), area under the receiver operating characteristic curve, and root-mean-square error (RMSE).

Results: We enrolled 20 patients with sickle cell disease, all of whom identified as Black or African American; 12 (60%) were female and 8 (40%) were male. There were 14 (70%) individuals diagnosed with hemoglobin type SS. The median age of the population was 35.5 (IQR 30-41) years. The median time each individual spent wearing the Apple Watch was 2 hours and 17 minutes, and a total of 15,683 data points were collected across the population. All models outperformed the null models, and the best-performing model was the random forest model, which predicted pain scores with an accuracy of 84.5% and an RMSE of 0.84.

Conclusions: The strong performance of the model in all metrics validates the feasibility of using data collected from a noninvasive device, the Apple Watch, to predict pain scores during VOCs. This novel, feasible, and low-cost approach could benefit clinicians and individuals with sickle cell disease in the treatment of VOCs.

JMIR Form Res 2023;7:e45355

doi:10.2196/45355




Sickle cell disease (SCD) is an inherited monogenic disorder that affects millions of individuals across the world and is estimated to affect 300,000 newborns every year [1-3]. Sickled red blood cells adhere to other cells, building up in blood vessels and blocking blood flow to tissues. This process, known as a vaso-occlusive crisis (VOC), triggers a complex cascade of vaso-occlusion, inflammation, and ischemia, ultimately resulting in complications such as acute pain [4]. VOCs are often referred to simply as “pain crises” and frequently do not have a specific cause [5]. Shah et al studied over 8000 individuals with sickle cell disease over a 3-year period and reported that each patient averaged 3.3 VOCs per year [6].

VOCs are associated with a decreased health-related quality of life and are a significant source of morbidity for individuals with sickle cell disease; they are the most common cause of hospitalization in SCD [5,7]. Treatment for VOCs is currently limited to analgesics such as opioids, which are given in proportion to the reported level of pain [8]. Although most patients manage their pain at home, if a VOC cannot be controlled, hospitalization is required to administer intravenous analgesics and fluids. Ultimately, due to the frequency and unpredictability of VOCs, health care usage and costs are high for patients with sickle cell disease [9]. The ability to objectively determine the timing and intensity of VOCs may improve pain management and lead to an increased health-related quality of life, as well as lower resource usage, in patients living with sickle cell disease.

Recent efforts to better understand pain include using machine learning techniques to analyze pain and associated physiological data. Machine learning uses data and analytics to predict outcomes, allowing computers to perform tasks without explicit instructions. Machine learning models have been applied to SCD and non-SCD pain–related research to visualize how pain indicators relate to subjective pain [10-12]. Health care studies involving machine learning have evaluated patients suffering from chronic and postoperative pain using heart rate variability (HRV), brain activity, and clinical data to create complex multivariable models that attempt to predict pain levels [10]. Using these variables, researchers built models that successfully predicted self-reported or postoperative pain intensity, but the methods used to collect data were expensive and difficult to perform on a large scale.

Even as this research continues, a gap exists in the use of sustainable, cost-effective methods to better understand pain in SCD. We believe that consumer wearable devices, in particular smartwatches, may be a way to fill that gap. Consumer wearable devices are increasing in popularity globally and can be an affordable way to gather large amounts of continuous, real-time biometric data both in and out of a clinical setting [13,14]. Biometric data collected by consumer wearables can include heart rate (HR), HRV, step count, calories burned, and oxygen saturation. Our research team previously evaluated data collected from the consumer wearable Microsoft Band 2 to assess whether subjective pain scores could be predicted in individuals living with sickle cell disease. That study predicted subjective pain scores using the data collected from the Microsoft Band 2, pain score data, and a regression machine learning model, achieving a correlation of 0.706 in adult patients during their time in the Day Hospital for a painful VOC [6]. The Microsoft Band 2 provided robust data but had a limited battery life of approximately 6 hours and has since been discontinued. To continue our research into the feasibility of using consumer wearables as a sustainable and cost-effective method to better understand pain, we adopted the Apple Watch based on its popularity and global acceptance, as well as the robust data collection of Apple Health Kit and the open access to raw collected data. This study evaluated the performance of various machine learning models on data collected from the Apple Watch to predict reported pain scores in individuals with sickle cell disease experiencing VOCs.

The aim of this study is to (1) determine the feasibility of using the Apple Watch to predict the pain scores in people with sickle cell disease admitted to the Day Hospital and (2) build and evaluate machine learning algorithms to predict the pain scores of VOCs with the Apple Watch.


Data Collection

Following approval from the Duke Institutional Review Board, patients meeting the inclusion criteria and entering the Day Hospital with a VOC between July 2021 and September 2021 were approached and consented. Included patients had a confirmed SCD diagnosis, were 18 years of age or older, and were admitted with a primary diagnosis of VOC. The study team provided participants with an Apple Watch Series 3 to be worn for the duration of their visit. The Apple Watch was attached to the wrist of the participant and placed in “Other” exercise mode. Exercise mode allowed for more continuous collection of HR and other Apple Health Kit data, collecting HR data every 5 seconds [15]. This allowed us to retrieve the maximum amount of HR data the Apple Watch could record while participants were enrolled in the study. Biometric data collected via the Apple Watch included HR, active calories burned, and basal calories burned. Pain scores and vital sign variables, including blood pressure, pulse, and temperature, as well as demographics, including age, SCD genotype, sex, and ethnicity, were extracted from the electronic medical records (EMRs). Patients completed the study upon discharge following pain management, the closing of the Day Hospital, or transfer to the emergency room. Data from the Apple Watch were extracted from Apple Health Kit, exported as XML files, converted to XLSX, and analyzed using Python (version 3.9.6; Python Software Foundation).
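The XML extraction step can be sketched as below. This is an illustrative reconstruction, not the study's actual code: the record type names follow Apple's HealthKit identifiers, and the inline sample records (timestamps, values) are hypothetical stand-ins for a real Apple Health export.

```python
import xml.etree.ElementTree as ET
from io import StringIO

# Hypothetical miniature export; real Apple Health exports contain one
# <Record> element per measurement, with type, startDate, and value attributes.
SAMPLE_XML = """<HealthData>
  <Record type="HKQuantityTypeIdentifierHeartRate"
          startDate="2021-07-01 10:00:05 -0400" value="82"/>
  <Record type="HKQuantityTypeIdentifierHeartRate"
          startDate="2021-07-01 10:00:10 -0400" value="84"/>
  <Record type="HKQuantityTypeIdentifierActiveEnergyBurned"
          startDate="2021-07-01 10:00:10 -0400" value="0.12"/>
</HealthData>"""

def extract_records(xml_source, record_type):
    """Return (startDate, value) pairs for one HealthKit record type."""
    root = ET.parse(xml_source).getroot()
    return [(r.get("startDate"), float(r.get("value")))
            for r in root.iter("Record")
            if r.get("type") == record_type]

heart_rates = extract_records(StringIO(SAMPLE_XML),
                              "HKQuantityTypeIdentifierHeartRate")
```

In practice, `ET.parse` would be pointed at the exported XML file rather than an in-memory string.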

Outcomes

Pain scores, HR data, calculated HRV, and calories burned were combined using the minimum absolute time difference between time stamps to create a cohesive data set. To expand the usable data set, we assumed each pain score remained unchanged for up to ±15 minutes around the time it was recorded.
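A merging step like this can be sketched with pandas; the data frames and values below are hypothetical, and `merge_asof` with a 15-minute tolerance stands in for whatever matching logic the authors actually implemented.

```python
import pandas as pd

# Hypothetical watch samples and one nurse-recorded pain score.
hr = pd.DataFrame({
    "time": pd.to_datetime(["2021-07-01 10:00:05", "2021-07-01 10:10:00",
                            "2021-07-01 10:40:00"]),
    "heart_rate": [82, 88, 79],
})
pain = pd.DataFrame({
    "time": pd.to_datetime(["2021-07-01 10:08:00"]),
    "pain_score": [7],
})

# direction="nearest" matches each HR sample to the pain score with the
# minimum absolute time difference; tolerance drops matches beyond ±15 min.
merged = pd.merge_asof(hr.sort_values("time"), pain.sort_values("time"),
                       on="time", direction="nearest",
                       tolerance=pd.Timedelta("15min"))
labeled = merged.dropna(subset=["pain_score"])
```

Here the 10:40 sample falls outside the ±15-minute window and is dropped, mirroring the paper's assumption that a pain score only labels nearby samples.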

We expanded our data set by calculating HRV from the HR data, based on existing evidence of HRV's relationship to pain [16]. Classically, HRV is calculated from electrocardiography data. In the absence of electrocardiography data, we instead used fluctuations in the HR to calculate HRV, which is also called pulse rate variability. Previous studies have shown that pulse rate variability and HRV are significantly correlated, with closely matching measured values [17]. There are multiple metrics that represent HRV [18]; we chose the root-mean-square of successive differences (RMSSD), considering normal heartbeats between 70 and 110 beats per minute. The time differences between successive normal heartbeats were recorded over 5-minute windows; the successive differences were calculated and squared, the results were averaged, and the square root of the average was taken.
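The calculation described above can be sketched as follows. This is a minimal reconstruction of the described RMSSD-style computation, not the study's actual code; the sample heart rates are hypothetical, and windowing into 5-minute segments is left to the caller.

```python
import numpy as np

def rmssd_from_hr(hr_bpm, lo=70.0, hi=110.0):
    """RMSSD-style pulse rate variability from heart-rate samples (bpm).

    Keeps samples in the "normal" range [lo, hi] bpm, converts each to an
    inter-beat interval in ms (60000 / bpm), squares the successive
    differences, averages them, and takes the square root.
    """
    hr = np.asarray(hr_bpm, dtype=float)
    hr = hr[(hr >= lo) & (hr <= hi)]      # filter to normal heartbeats
    ibi_ms = 60000.0 / hr                 # inter-beat intervals in ms
    diffs = np.diff(ibi_ms)               # successive differences
    return float(np.sqrt(np.mean(diffs ** 2)))

# A steady heart rate has zero variability; fluctuation raises RMSSD.
steady = rmssd_from_hr([80, 80, 80])
varied = rmssd_from_hr([80, 100, 80])
```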

Analysis

Considering the discrete pain values as distinct classes, 3 classification models were fit to the data: multinomial logistic regression, gradient boosting, and random forest (Figure 1). The machine learning models were trained with half of the data set, and the other half was then used for testing. The performance of these models was compared to 2 basic models, called “null models,” which used no biometric measures in their predictions. The 2 null models, mode and random, predicted pain scores based on the frequency of the scores in the training set. The mode model assumed that the future pain score would be equal to the most common pain score in the data set, whereas the random model assumed the probability of each pain score to be equal to the frequency with which the score appeared in the data set. It should be noted that these null models are of no clinical significance but were used as a comparison to assess the validity and accuracy of our classification models: if the models we created were no better than the null models, there would be no validity in using them. As our evaluation metrics, we considered accuracy, micro-averaged F1-score, area under the receiver operating characteristic curve, and root-mean-square error (RMSE; Textbox 1). Except for RMSE, the higher the metric, the better the model. We also used cross-validation to further validate the strength of the models: through multiple iterations, cross-validation allowed us to use all the data to assess model performance. For the training-testing split to be a good representation of the overall data, including the class imbalance, we used stratified 10-fold cross-validation [19].
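Assuming a scikit-learn workflow (the paper's actual code is not published here), the model-versus-null-model comparison with stratified 10-fold cross-validation can be sketched on synthetic data; the features and classes below are hypothetical stand-ins for the watch-derived variables and pain scores.

```python
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic stand-in for the watch features (HR, HRV, calories) and
# discrete pain-score classes; the study's real data are not reproduced.
X, y = make_classification(n_samples=600, n_features=4, n_informative=3,
                           n_redundant=0, n_classes=3, random_state=0)

# Stratified folds preserve the class balance in every train-test split.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

models = {
    # "mode" null model: always predict the most common training class.
    "null_mode": DummyClassifier(strategy="most_frequent"),
    # "random" null model: sample classes at their training frequencies.
    "null_random": DummyClassifier(strategy="stratified", random_state=0),
    "random_forest": RandomForestClassifier(random_state=0),
}

scores = {name: cross_val_score(m, X, y, cv=cv, scoring="accuracy").mean()
          for name, m in models.items()}
```

Any fitted model that cannot beat the two dummy baselines carries no predictive signal, which is exactly the check the null models provide in the paper.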

Figure 1. Random forest classification model: a tree-based algorithm [20].

Accuracy

  • Refers to the proportion of correct data points predicted by the machine learning algorithm out of all data points.

Micro-averaging

  • A way to adapt per-class statistical measures to deal with class imbalance by pooling the counts (true positives, false positives, and false negatives) across all classes before computing the measure. (“1” is considered a perfect model.)

F1-score

  • The harmonic mean of precision and recall, balancing false positives against false negatives. (“1” is considered a perfect model.)

Area under the receiver operating characteristic curve

  • Determines how well our model distinguishes between different pain score classes. In our model, each numerical pain score is a class. (“1” is considered a perfect model.)

Root-mean-square error

  • Refers to how far the true values are from the values predicted by our model. Larger values indicate a greater distance between predicted and true values.
Textbox 1. Definitions table of the used metrics to evaluate the performance of each model.
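The metrics in Textbox 1 can be computed directly; the label arrays below are hypothetical. One useful property in the single-label multiclass setting used here is that micro-averaged F1 reduces to overall accuracy, consistent with the near-identical accuracy and F1-score values reported for each model in Table 2.

```python
import numpy as np

# Hypothetical true and predicted pain scores (discrete classes).
true = np.array([5, 6, 7, 8, 7, 6])
pred = np.array([5, 6, 8, 8, 6, 6])

accuracy = float(np.mean(true == pred))

# Micro-averaging pools counts over all classes; in single-label
# multiclass data every error is one false positive and one false
# negative, so micro-averaged F1 equals overall accuracy.
tp = int(np.sum(true == pred))
fp = fn = int(np.sum(true != pred))
micro_f1 = 2 * tp / (2 * tp + fp + fn)

# RMSE treats the pain scale as ordinal: larger misses cost more.
rmse = float(np.sqrt(np.mean((true - pred) ** 2)))
```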

Ethics Approval

The study protocol was approved by the institutional review board of Duke University Medical Center (IRB Pro00068979). All study participants signed consent prior to study participation, and no compensation was provided. Identifiable personal information was not collected in this study, and all data were kept confidential according to the internal data security policy. Data were only accessible to authorized researchers.


Our study population included 20 patients, including 12 (60%) females. All participants had a confirmed diagnosis of SCD, including 14 (70%) individuals with hemoglobin type SS, 5 (25%) with hemoglobin type SC, and 1 (5%) with hemoglobin type SOArab. All participants identified as Black or African American. A detailed breakdown of the collected data across the sample population is included (Table 1). The median age was 35.5 (IQR 30-41) years. The included patients wore the Apple Watch for an average of 2 hours and 17 minutes.

All models outperformed the 2 null models created, and the random forest model significantly outperformed all other models, followed by the gradient boosting model (Table 2). The scatter plots in Figure 2 show that the models were not able to predict some pain scores (0-2) and worked best for certain scores (5-8). This was due to the absence of patients reporting very low pain scores, given that the data were collected from patients during a VOC, which created a class imbalance in the data. A comparison of each model is found in Figure 3, and it is evident that even the worst-performing model that uses biometric data, the multinomial logistic regression model, is stronger than the null models. Figure 4 shows the mean and SD of cross-validation accuracy for the 3 models over the 10 folds. The SD for all 3 models is fairly small, indicating that the models are likely to perform similarly on an independent data set.

Table 1. Additional information on the collected data across the sample population.

Characteristic | Median (IQR)
Number of data points per patient | 980 (672 to 1282)
Time spent wearing Apple Watch | 2 hours 11 minutes (1 hour 31 minutes to 2 hours 50 minutes)
Age of patients, years | 35.5 (30 to 41)
Pain score on entry to Day Hospital | 8 (7.5 to 8.5)
Table 2. The performance of each model including 2 null models.
Prediction model | Accuracy (%) | Micro-averaged F1-score | AUC^a | RMSE^b
Null model 1: random | 23.83 | 0.24 | 0.5 | 1.77
Null model 2: mode | 32.92 | 0.33 | 0.5 | 1.32
Multinomial logistic regression fit | 37.72 | 0.37 | 0.68 | 1.30
Gradient boosting fit | 69.06 | 0.69 | 0.92 | 1.10
Random forest fit | 84.52 | 0.85 | 0.98 | 0.84

^a AUC: area under the receiver operating characteristic curve.

^b RMSE: root-mean-square error.

Figure 2. Scatter plots for the 3 models (multinomial logistic regression, gradient boosting, and random forest). The size of the marker represents the number of data points on the corresponding grid point. The straight line along with the shaded region represents predicted pain score = true pain score ± 1.
Figure 3. Bar graph comparison of the evaluation metrics for each model along with 2 null models. AUC: area under the receiver operating characteristic curve; RMSE: root-mean-square error.
Figure 4. Bar graph comparison of the 10-fold cross-validation accuracy for the 3 machine learning models. The error line is the SD of the accuracies achieved at each fold.

Principal Results

In this study, we were able to show the feasibility of using the Apple Watch Series 3 to collect biometric data during treatment for VOCs in adults with sickle cell disease, and that the data collected, along with pain scores recorded in the EMR, could be used to build accurate machine learning models. The random forest model was our best-performing machine learning model, with an accuracy of 84.5%. Considering all the measures, our analyses showed the success of using biometric data collected from a consumer wearable, the Apple Watch, in dependable and accurate machine learning pain prediction models.

These results, and those from our previous study with the Microsoft Band 2 [6], continue to demonstrate the potential of consumer wearables for pain prediction during VOCs in individuals with sickle cell disease. In the previous study, using the Microsoft Band 2 and a mobile app for data collection, we achieved a pain prediction accuracy of 72.9% using a regression machine learning model. A primary difference between the studies, aside from the device used, was the method of pain score collection. This study used only the nurse-recorded pain scores from the EMR, which meant that the pain scores were discrete rather than continuous variables, so classification models were the best fit and resulted in higher accuracy in pain score prediction. A review of machine learning and pain studies by Matsangidou et al showed the prevalence of this research, reviewing 26 papers published between 2015 and 2021 on pain and machine learning [21]. Many of the studies reviewed used machine learning to classify or predict the manifestation of pain in conditions such as osteoarthritis, spinal cord injury, ankylosing spondylitis, and various types of back pain. These studies were all able to predict pain with accuracies above 50% in their best-performing machine learning models, with some achieving accuracies of 90% [21]. The multitude of papers available on machine learning and pain prediction shows that this is a promising and growing field. However, we continue to find a lack of studies, outside our own efforts, using the combination of biometric data collected from wearables and machine learning to predict pain in SCD.

Importance of the Work

Predictive pain tools have the potential to help people living with sickle cell disease and their medical teams notice trends in their symptoms and pain. There exists a positive correlation between anxiety around pain and pain severity, which can lead to higher pain levels in people with sickle cell disease due to increased anxiety around their expected symptoms [22]. People with sickle cell disease have very high readmission rates associated with excessive costs [23,24]. Treating pain early on could interrupt the pain cascade [25] and reduce the need for further intervention by medical providers. A tool that both validates their pain and potentially predicts future pain may lower anxiety by giving more information surrounding the standardized pain score, enabling preventative treatments for pain. This may also result in someone living with sickle cell disease coming into the hospital or primary care to receive treatment for pain earlier. Additionally, SCD primarily affects people of color, who have a history of being mistreated or undertreated by the health care system in the United States [26,27]. Prediction models can help validate and support a patient's own experience and can provide them with a voice in a space where they may have felt like they have less of one. All of the above may result in individuals with sickle cell disease and hospitals alike saving money on treatment, admission, and readmission for inpatient care.

Strengths and Limitations

Our results from the machine learning models are very promising and could significantly improve the treatment of pain from VOCs. A key strength in our methods is that the biometric data collected came from a consumer wearable device, the Apple Watch, and led to accurate predictive machine learning models. This means that data can be collected noninvasively and passively but used to create clinically relevant information.

Overall, 3 classification machine learning models were used to compare and evaluate their ability to predict pain scores with the data set, and they were also compared to 2 null models. This further strengthens our results by comparing the trained machine learning models not only to each other but also to null models that used no biometric data. In having all 3 trained models outperform the null models, we provided a check that our prediction results using the trained models were valid. In comparing the 3 trained models, we determined that 1 had the greatest success: the random forest model. Our cross-validation analyses suggest that our models, including the random forest model, will perform similarly on an independent data set; however, external validation using other data sets is necessary to determine the reproducibility and generalizability of this model for all patients with sickle cell disease living in and outside the United States.

The study included limitations that should be discussed. The participant pool was created via a single-center Day Hospital, which is an option for those who are experiencing high levels of pain but is not available in most hospitals. All participants were treated for pain management based on individual pain plans and had high levels of pain upon enrollment into the study. This led to the majority of reported pain scores from participants being within the 5-8 range on the 11-point pain scale (0-10), with no scores reported in the 0-2 range. We used micro-averaging (Textbox 1) to take this class imbalance into account. Capturing a full range of reported scores, by including data from participants who are not experiencing pain or severe pain, would help address the class imbalance. We also had a small sample size of 20 patients, which resulted in less data for the machine learning algorithms. Even with our limited sample size, we were able to create machine learning models that performed well, and better than the null models, in all metrics.

In future research, related to sample size, we plan to expand our available data set in both the number of patients enrolled and the length of time collecting data from the wearable device, to include periods both during VOC pain crises and outpatient periods when patients are not in significant pain. By collecting large quantities of data from a larger population, we can further train and evaluate our machine learning models for pain prediction and remove the class imbalance seen in this data set. The positive outcomes of this research support the use of consumer wearables in the health care system; still, several difficulties have led to limited adoption [28]. One reason is the number of restrictions around the implementation of consumer wearables in research studies or clinics. Deploying such consumer wearable initiatives requires substantial time and financial resources. There also still exists some stigma around the implementation of consumer wearables in clinics [29], even with their Food and Drug Administration approval. However, with the COVID-19 pandemic illuminating the need for remote monitoring services and other ways of managing the health care of so many people, these tools have become more accepted for managing a wide variety of conditions. Every year, more insurance companies are providing billable reimbursement not only for the consumer wearable itself but also for the provider team's time spent managing these devices in the health care system. With these improvements, we hope to see these tools gain greater usage, not only in chronic pain but in other areas as well.

Conclusions

Given our results in this study, machine learning can use biometric data from the Apple Watch to become a tool to predict pain scores but will require further validation. Collected information via consumer wearables can be beneficial to patients, clinicians, and hospitals, due to its ability to provide a voice to patients’ symptoms, give clinicians an additional tool for pain reference, and potentially reduce the resource burden on hospitals.

Data Availability

The data sets generated and analyzed during this study are available from the corresponding author on reasonable request.

Conflicts of Interest

NS is a consultant, researcher, and speaker at Global Blood Therapeutics and also a consultant at Novartis, Forma Therapeutics, Agios Pharmaceuticals, and Emmaus Pharmaceuticals.

  1. Piel FB, Hay SI, Gupta S, Weatherall DJ, Williams TN. Global burden of sickle cell anaemia in children under five, 2010-2050: modelling based on demographics, excess mortality, and interventions. PLoS Med 2013;10(7):e1001484 [FREE Full text] [CrossRef] [Medline]
  2. Ware RE, de Montalembert M, Tshilolo L, Abboud MR. Sickle cell disease. Lancet 2017 Jul;390(10091):311-323. [CrossRef]
  3. Wastnedge E, Waters D, Patel S, Morrison K, Goh MY, Adeloye D, et al. The global burden of sickle cell disease in children under five years of age: a systematic review and meta-analysis. J Glob Health 2018 Dec;8(2):021103 [FREE Full text] [CrossRef] [Medline]
  4. Puri L, Nottage KA, Hankins JS, Anghelescu DL. State of the art management of acute vaso-occlusive pain in sickle cell disease. Paediatr Drugs 2018 Feb;20(1):29-42. [CrossRef] [Medline]
  5. Uwaezuoke SN, Ayuk AC, Ndu IK, Eneh CI, Mbanefo NR, Ezenwosu OU. Vaso-occlusive crisis in sickle cell disease: current paradigm on pain management. J Pain Res 2018;11:3141-3150. [CrossRef]
  6. Johnson A, Yang F, Gollarahalli S, Banerjee T, Abrams D, Jonassaint J, et al. Use of mobile health apps and wearable technology to assess changes and predict pain during treatment of acute pain in sickle cell disease: feasibility study. JMIR Mhealth Uhealth 2019;7(12):e13671. [CrossRef]
  7. Mehta SR, Afenyi-Annan A, Byrns PJ, Lottenberg R. Opportunities to improve outcomes in sickle cell disease. Am Fam Physician 2006;74(2):303-310 [FREE Full text] [Medline]
  8. Yawn BP, John-Sowah J. Management of sickle cell disease: recommendations from the 2014 expert panel report. Am Fam Physician 2015;92(12):1069-1076. [CrossRef]
  9. Adam SS, Flahiff CM, Kamble S, Telen MJ, Reed SD, De Castro LM. Depression, quality of life, and medical resource utilization in sickle cell disease. Blood Adv 2017;1(23):1983-1992. [CrossRef] [Medline]
  10. Lee J, Mawla I, Kim J, Loggia ML, Ortiz A, Jung C, et al. Machine learning-based prediction of clinical pain using multimodal neuroimaging and autonomic metrics. Pain 2018;160(3):550-560. [CrossRef]
  11. Ji Y, Chalacheva P, Rosen CL, DeBaun MR, Coates TD, Khoo MCK. Identifying elevated risk for future pain crises in sickle-cell disease using photoplethysmogram patterns measured during sleep: a machine learning approach. Front Digit Health 2021;3:714741. [CrossRef]
  12. Khalaf M, Hussain A, Keight R, Al-Jumeily D, Fergus P, Keenan R, et al. Machine learning approaches to the application of disease modifying therapy for sickle cell using classification models. Neurocomputing 2017 Mar;228:154-164. [CrossRef]
  13. Kim KJ, Shin DH. An acceptance model for smart watches: implications for the adoption of future wearable technology. Internet Res 2015;25(4):527-541. [CrossRef] [Medline]
  14. Laricchia F. Smartwatches—statistics and facts. Statista. 2022.   URL: https://www.statista.com/topics/4762/smartwatches/#topicHeader__wrapper [accessed 2023-02-03]
  15. Khushhal A, Nichols S, Evans W, Gleadall-Siddall DO, Page R, O'Doherty AF, et al. Validity and reliability of the Apple Watch for measuring heart rate during exercise. Sports Med Int Open 2017;1(6):E206-E211. [CrossRef]
  16. Koenig J, Jarczok MN, Ellis RJ, Hillecke TK, Thayer JF. Heart rate variability and experimentally induced pain in healthy adults: a systematic review. Eur J Pain 2014;18(3):301-314. [CrossRef]
  17. Dehkordi P, Garde A, Karlen W, Wensley D, Ansermino JM, Dumont GA. Pulse rate variability compared with heart rate variability in children with and without sleep disordered breathing. 2013 Presented at: 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC); July 3-7, 2013; Osaka, Japan. [CrossRef]
  18. Shaffer F, Ginsberg JP. An overview of heart rate variability metrics and norms. Front Public Health 2017;5:258. [CrossRef]
  19. Géron A. Hands-on Machine Learning With Scikit-Learn, Keras, and TensorFlow. 2nd ed. Sebastopol, CA: O'Reilly Media, Inc; 2019:848.
  20. Misra S, Li H, He J. Chapter 9—Noninvasive fracture characterization based on the classification of sonic wave travel times. In: Machine Learning for Subsurface Characterization. Houston, TX: Gulf Professional Publishing; 2020:243-287.
  21. Matsangidou M, Liampas A, Pittara M, Pattichi CS, Zis P. Machine learning in pain medicine: an up-to-date systematic review. Pain Ther 2021;10(2):1067-1084. [CrossRef]
  22. Koyama T, McHaffie JG, Laurienti PJ, Coghill RC. The subjective experience of pain: where expectations become reality. Proc Natl Acad Sci 2005;102(36):12950-12955. [CrossRef]
  23. Shah N, Bhor M, Xie L, Paulose J, Yuce H. Sickle cell disease complications: prevalence and resource utilization. PLoS ONE 2019;14(7):e0214355. [CrossRef]
  24. Kumar V, Chaudhary N, Achebe MM. Epidemiology and predictors of all-cause 30-day readmission in patients with sickle cell crisis. Sci Rep 2020;10(1):2082. [CrossRef]
  25. McClish DK, Smith WR, Dahman BA, Levenson JL, Roberts JD, Penberthy LT, et al. Pain site frequency and location in sickle cell disease: the PiSCES project. Pain 2009;145(1-2):246-251. [CrossRef] [Medline]
  26. Kennedy BR, Mathis CC, Woods AK. African Americans and their distrust of the health care system: healthcare for diverse populations. J Cult Divers 2007;14(2):56-60. [Medline]
  27. Nelson SC, Hackman HW. Race matters: perceptions of race and racism in a sickle cell center. Pediatr Blood Cancer 2013;60(3):451-454. [CrossRef]
  28. Gao Y, Li H, Luo Y. An empirical study of wearable technology acceptance in healthcare. Ind Manag Data Syst 2015;115(9):1704-1723. [CrossRef]
  29. Hunkin H, King DL, Zajac IT. Perceived acceptability of wearable devices for the treatment of mental health problems. J Clin Psychol 2020;76(6):987-1003. [CrossRef]


EMR: electronic medical record
HR: heart rate
HRV: heart rate variability
RMSE: root-mean-square error
SCD: sickle cell disease
VOC: vaso-occlusive crisis


Edited by A Mavragani; submitted 26.12.22; peer-reviewed by S Badawy, G Liu; comments to author 09.01.23; revised version received 30.01.23; accepted 30.01.23; published 14.03.23

Copyright

©Rebecca Sofia Stojancic, Arvind Subramaniam, Caroline Vuong, Kumar Utkarsh, Nuran Golbasi, Olivia Fernandez, Nirmish Shah. Originally published in JMIR Formative Research (https://formative.jmir.org), 14.03.2023.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication on https://formative.jmir.org, as well as this copyright and license information must be included.