Background: Older adults are at an increased risk of falls with the consequent impacts on the health of the individual and health expenditure for the population. Smartwatch apps have been developed to detect a fall, but their sensitivity and specificity have not been subjected to blinded assessment nor have the factors that influence the effectiveness of fall detection been fully identified.
Objective: This study aims to assess accuracy metrics for a novel fall detection smartwatch algorithm.
Methods: We performed a cross-sectional study of 22 healthy adults comparing the detection of induced forward, side (left and right), and backward falls and near falls provided by a smartwatch threshold-based algorithm, with a video record of induced falls serving as the gold standard; a blinded assessor compared the two. Three different smartwatches with two different operating systems were used. There were 226 falls: 64 were backward, 51 forward, 55 left sided, and 56 right sided.
Results: The overall smartwatch app sensitivity for falls was 77%, the specificity was 99%, the false-positive rate was 1.7%, and the false-negative rate was 16.4%. The positive and negative predictive values were 98% and 84%, respectively, and the accuracy was 89%. There were 249 near falls: the sensitivity was 89%, the specificity was 100%, there were no false positives, the false-negative rate was 11%, the positive predictive value was 100%, the negative predictive value was 83%, and the accuracy was 93%.
Conclusions: Falls were more likely to be detected if the fall was on the same side as the wrist with the smartwatch. There was a trend toward some smartwatches and operating systems having superior sensitivity, but these did not reach statistical significance. The effectiveness data and modifying factors pertaining to this smartwatch app can serve as a reference point for other similar smartwatch apps.
The risk of falling increases with age. Approximately 30% of people older than 65 years and living in the community have a fall at least once a year, with an increase of 5% each year. The incidence is even higher in those living in aged care facilities [ ]. This is a major public health problem leading to injuries [ , ], loss of quality of life [ , ], loss of independence [ ], placement in assisted-living facilities [ , ], and premature mortality [ ]. Fall-related injuries represent 21% of total health care expenses due to injuries [ ] and between 0.85% and 1.5% of total health care expenditure [ ]. Lying on the floor for a long time after a fall has been associated with serious consequences: a greater likelihood of hospitalization, decline in activities of daily living, placement into long-term care, and mortality [ , ].
Assistive technologies such as call alarm systems and personal emergency response systems are increasingly available. The same holds true for wearables, defined as devices that can be worn or are in contact with human skin to continuously and closely monitor an individual’s activities without interrupting or limiting the user’s motions. These systems are cost-effective in reducing hospital admissions when used within emergency response systems [ , ]. However, they are not always used by consumers, in part due to difficulties activating them, including cognitive impairment at the time of, or prior to, the fall [ ].
There is increasing interest in using sensor systems embedded in smartwatches for health care purposes [ , ]. This is particularly the case with fall detection. Although there are several fall detection devices and apps, none to our knowledge have been subjected to a blinded study to evaluate effectiveness, particularly across a variety of smartwatches and smartphones using different operating systems. This study aims to address these issues.
The procedures followed in this study were conducted according to the principles of the World Medical Association Declaration of Helsinki and were approved by the University of New South Wales and St Vincent’s Hospital Human Research Ethics Committee jointly (16/229). The study was independently audited.
This is a cross-sectional blinded study comparing the fall detection classification provided by a smartwatch algorithm with a reference standard’s classification, in this case, a video record of induced falls.
A total of 22 volunteer participants deemed to be medically healthy were recruited after satisfying all the inclusion and exclusion criteria. Participants were recruited by distribution of a leaflet on the university campus and compensated for their time. The inclusion criteria were males/females older than 18 years willing and able to provide written informed consent prior to initiation of any study-related procedures. Participants were excluded if they had any of the following: disability that may prevent them from completing the study (eg, severe illness), being suspected of or having a known allergy to any components of the smartwatch, having any injury or medical condition that would be adversely affected by an induced fall, and being pregnant.
Smartwatch Threshold Algorithm
This study used a threshold-based algorithm programmed for different smartwatches. The threshold-based algorithm running on the smartwatch app uses threshold values, or settings, to automatically detect a fall. The smartwatch accelerometer operates at 2 kHz, with the app's algorithm collecting data every 0.01 seconds. The algorithm follows strict rules for the three phases of a fall, described below. The algorithm was supplied by My Medic Watch.
T1 is defined as the time during which the smartwatch is moving toward the ground (fall time), recording a low acceleration of less than 1 g. T2 is the time during which the smartwatch hits the ground, recording a very high positive acceleration for a short period. T3 is the time during which the smartwatch is “almost” immobile on the ground for a long period. These threshold values are optimized in the app according to the particular smartwatch and the wearer's body morphology, including weight and height. Optimization was performed during the test falls.
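As a minimal sketch, the three-phase rule can be expressed as a scan over accelerometer-magnitude samples. The specific threshold values and window lengths below are illustrative assumptions, not the proprietary settings of the My Medic Watch app:

```python
# Sketch of a three-phase (T1 -> T2 -> T3) threshold fall detector.
# Thresholds and durations are hypothetical, chosen only for illustration.

def detect_fall(samples, dt=0.01,
                free_fall_g=1.0,    # T1: acceleration below this while falling
                impact_g=3.0,       # T2: a short spike above this on impact
                still_g=0.2,        # T3: allowed deviation from 1 g while immobile
                min_fall_s=0.2, min_still_s=2.0):
    """Return True if the magnitude trace (in g) shows fall -> impact -> stillness."""
    i, n = 0, len(samples)
    while i < n:
        # Phase T1: sustained low acceleration while moving toward the ground
        start = i
        while i < n and samples[i] < free_fall_g:
            i += 1
        if (i - start) * dt < min_fall_s:
            i = max(i, start + 1)   # too short to be a fall; keep scanning
            continue
        # Phase T2: a very high positive acceleration for a short period
        if i < n and samples[i] >= impact_g:
            i += 1
            # Phase T3: near-immobility (magnitude close to 1 g) for a long period
            still = 0
            while i < n and abs(samples[i] - 1.0) < still_g:
                still += 1
                i += 1
            if still * dt >= min_still_s:
                return True
        i += 1
    return False
```

A trace of standing (about 1 g), a free-fall dip, an impact spike, and then stillness would trigger this detector, while ordinary walking noise around 1 g would not.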
A near fall is recognized when all, or any one, of the accelerometer values comes close to one of the thresholds, as depicted in the figure. We arbitrarily defined “close” as within 20% below the fall threshold value.
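The 20% rule amounts to scaling each fall threshold by 0.8 to obtain a near-fall band. A small illustrative sketch for a single (hypothetical) impact threshold:

```python
# Near-fall recognition sketch: an event counts as "close" when the measured
# value comes within 20% below the fall threshold. The threshold value is a
# hypothetical assumption, not the app's actual setting.

FALL_IMPACT_G = 3.0    # hypothetical T2 impact threshold (in g)
NEAR_FACTOR = 0.80     # "close" = no more than 20% below the fall threshold

def classify_impact(peak_g):
    """Classify a peak acceleration against fall and near-fall thresholds."""
    if peak_g >= FALL_IMPACT_G:
        return "fall"
    if peak_g >= FALL_IMPACT_G * NEAR_FACTOR:   # within 20% of the threshold
        return "near fall"
    return "no event"
```

For example, with these numbers a 2.5 g peak falls inside the near-fall band (2.4 g to 3.0 g), while a 3.5 g peak is classified as a fall.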
Participants were randomly assigned to have either smartwatch model A or model C on one wrist and model B or no device on the other wrist. Models A and C ran one operating system, while model B ran a different operating system. Every smartwatch contained the fall detection app, which was programmed to detect and record falls and was paired with a smartphone located at the study site. The same app was used for each model. The smartwatches and smartphones used one of two operating systems: Android or iOS. Two smartwatches were connected to iOS and one to Android. The versions of iOS and Android were the latest available at the time of the test, and the operating system versions on the smartphone and smartwatch were the same for all participants. The smartphones were linked to the smartwatches (according to the operating system) to communicate stored data of the time-stamped recorded episodes to secure cloud servers; these data were then compared to the video-recorded events.
Before starting the trial, participants were placed in a crash mat-protected area, the smartwatches were placed on the participants' wrists, and a helmet was provided to be worn during the tests; no other safety devices were used. Once the trial started, the smartwatch app was set to monitoring mode and two rounds of four falls were induced in the blindfolded participants. A fall was defined as an event that results in a person coming to rest inadvertently on the ground, floor, or other lower level. A nonfall was defined as any event occurring while both the smartwatch app and the video record were active, excluding a fall or near fall (defined later). In every round, a forward fall, a right-side fall, a left-side fall, and a backward fall were induced by pushing the participant while standing. The method of fall induction was the same for all participants and was executed by the same person. The participants were told of the impending direction of the push. Each assessment took approximately 5 minutes and comprised 8 falls: 2 backward, 2 forward, 2 right, and 2 left. Additionally, up to 3 test falls were performed before the first round to ensure the participants felt comfortable with the procedure; test falls were not included in the analyses. Further, prior to the test falls and between falls, the participants wore the smartwatches and walked around freely. Near falls, where the participant took one or more steps in the direction of the push without falling, were also recorded, as there is some evidence that they may presage a fall. This definition is in accord with the traditional definition as applied to this experimental scenario: “a stumble event or loss of balance that would result in a fall if sufficient recovery mechanisms were not activated” [ ]. Importantly, the fall-triggering settings were optimized for each participant during the test falls.
A non–near fall was defined as any event occurring while both the smartwatch app and the video record were active but excluding a fall or near fall.
During the fall, the algorithm collected the acceleration data and the time of the fall. Data were collected in three phases: “prefall” (preparation and walking to the crash mat, several minutes), beginning as soon as the smartwatches were on the participants' wrists; “induced fall” (8 falls over around 5 minutes); and “postfall” (walking back from the crash mat to the area where the smartwatches were removed). In addition, the falls were recorded by built-in motion-detecting cameras (recording at 50 frames/second) available at the study site, the National Facility for Human Robot Interaction Research, University of New South Wales. Motion detection data were used to indicate when a fall was observed. The video of the falls also contained a timestamp that was compared with the falls detected by the smartwatch app. The video-recorded event was used as the reference standard against which the falls detected by the smartwatches were compared.
After all the falls had been induced, the smartwatches and safety equipment were removed, and participants were observed for approximately 10 minutes: the heart rate, blood pressure, and symptoms (if any) were assessed.
To perform the analysis of the falls, data were first retrieved from video records of the built-in motion-detecting cameras and coded as a fall or near fall by the authors and a person independent of the conduct of the study. Where there was disagreement, a majority opinion was taken. These codes were then compared independently by an external person with data retrieved from a fall detection database built to register the falls detected by the smartwatch algorithm. Each fall was classified as a true positive if the smartwatch app detected a fall at the time when the event was recorded on the video, a false positive if the smartwatch detected a fall event that was not recorded on the video, a false negative if the smartwatch did not detect a fall event recorded on the video, and a true negative if neither the smartwatch nor the video recorded a fall. Near falls were analyzed similarly. Results were computed for sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, positive predictive value, negative predictive value, and accuracy. CIs for sensitivity, specificity, and accuracy are “exact” Clopper-Pearson CIs; CIs for the likelihood ratios were calculated using the log method. To compare fall and near fall detection by smartwatch model and by direction of fall, only sensitivity data were used, with chi-square tests and a significance level of P<.05. Further data are available on request. Sample size calculations were not formally performed beyond an approximate anticipated number of 20 to 25 participants that could be accommodated given the constraints of study site availability and personnel time.
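The metrics above follow directly from a 2x2 confusion matrix. The following standard-library-only sketch (our own illustration, not the software actually used in the study) computes the point estimates and an exact Clopper-Pearson interval; the counts used in the usage lines are the fall results reported later (174 true positives, 3 false positives, 52 false negatives, 265 true negatives):

```python
from math import comb

def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic-test metrics from a 2x2 confusion matrix."""
    sens = tp / (tp + fn)                    # sensitivity (true-positive rate)
    spec = tn / (tn + fp)                    # specificity (true-negative rate)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),               # positive predictive value
        "npv": tn / (tn + fn),               # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "lr+": sens / (1 - spec),            # positive likelihood ratio
        "lr-": (1 - sens) / spec,            # negative likelihood ratio
    }

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided CI for a binomial proportion k/n."""
    def bisect(f, target, increasing):
        lo, hi = 0.0, 1.0
        for _ in range(50):                  # binary search on a monotone tail
            mid = (lo + hi) / 2
            if (f(mid) < target) == increasing:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    lower = 0.0 if k == 0 else bisect(
        lambda p: 1 - binom_cdf(k - 1, n, p), alpha / 2, increasing=True)
    upper = 1.0 if k == n else bisect(
        lambda p: binom_cdf(k, n, p), alpha / 2, increasing=False)
    return lower, upper

m = diagnostic_metrics(tp=174, fp=3, fn=52, tn=265)
# sensitivity 174/226 = 76.99%, accuracy 439/494 = 88.87%
low, high = clopper_pearson(174, 226)        # roughly (0.71, 0.82)
```

Running this on the study's fall counts reproduces the published point estimates and the sensitivity CI to two decimal places.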
Characteristics of the Participants and the Falls
A total of 22 participants were enrolled in the study: 14 (63%) females and 8 (36%) males; 20 (91%) completed the whole procedure. Two (9%) females abandoned the study during the process: one after a soft tissue injury and the other for unstated reasons. An average of 7.2 falls was performed for each participant; however, one of the participants withdrew from the study after having performed 5 sets of 8 falls, and another after having performed 1 set of 8 falls. Of the 226 induced falls, 64 were backward, 51 were forward, 55 were left sided, and 56 were right sided. Two participants reported postfall self-limiting symptoms associated with soft tissue injuries; 1 required medication and physiotherapy, and their symptoms resolved after 6 weeks.
Demographic characteristics of the participants are shown in the table below. With regard to BMI, 1 (6%) female was classified as underweight, 1 male and 1 female (9%) were classified as overweight, and 1 (6%) male was classified as obese.
| Gender | Age (years) | Height (cm) | Weight (kg) |
|---|---|---|---|
Overall Performance of the Algorithm
A total of 12 participants wore two smartwatches, with the model A device on one wrist and model B on the other; 10 participants wore only one smartwatch, model C, on one wrist. The overall performance of the algorithm, disregarding the model of the smartwatch, is detailed in the tables below, which also present the results of near fall detection and the associated statistics. There was no difference in the performance of the algorithm according to wrist when both were used. The overall test outcomes are summarized in the following section.
In general, the direction of the fall or near fall did not significantly influence sensitivity. Nonetheless, there was a trend toward better detection of backward falls: of the 64 backward falls, 11 were false negatives, giving a sensitivity of 82%, versus the 51 forward falls with 12 false negatives, giving a sensitivity of 76%. Further, there was a significant difference in fall detection if the fall was to the same side versus the opposite side of the wrist wearing the smartwatch (left-sided and right-sided sensitivities combined: 92.5% vs 76.3%; P=.009); the same held true for near falls. When the fall was to the same side as the wrist with the smartwatch, sensitivity was 95% for left-sided falls (55 with 3 false negatives) and 89% for right-sided falls (56 with 11 false negatives); when the fall was to the opposite side, sensitivity was 84% for left-sided falls (55 with 9 false negatives) and 80% for right-sided falls (56 with 11 false negatives).
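The same-side versus opposite-side comparison rests on a chi-square test of detected versus missed falls in two groups. A minimal 2x2 Pearson chi-square (1 df, no continuity correction) needs only the standard library; the counts in the usage line are illustrative, not the study's exact tabulation:

```python
from math import erfc, sqrt

# Illustrative 2x2 chi-square test for comparing detection rates between two
# groups (e.g. same-side vs opposite-side falls). The example counts are
# hypothetical, not the study's tabulated data.

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square (1 df, no continuity correction) for the table
    [[a, b], [c, d]], e.g. rows = fall side, columns = detected/missed."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = erfc(sqrt(chi2 / 2))   # survival function of chi-square with 1 df
    return chi2, p

chi2, p = chi_square_2x2(100, 11, 85, 26)   # hypothetical detected/missed counts
```

For a chi-square variate with 1 degree of freedom, the tail probability equals `erfc(sqrt(x/2))`, which avoids pulling in a statistics package for this one test.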
| True fall status | Test result: negative (nonfall), n | Test result: positive (fall), n | Total, n |
|---|---|---|---|
| Nonfall | 265 (true negative) | 3 (false positive, 1.7%) | 268 |
| Fall | 52 (false negative, 16.4%) | 174 (true positive) | 226 |
| Statistic | Value (95% CI) |
|---|---|
| Sensitivity (%) | 76.99 (70.95-82.31) |
| Specificity (%) | 98.88 (96.76-99.77) |
| Positive likelihood ratio | 68.78 (22.27-212.39) |
| Negative likelihood ratio | 0.23 (0.18-0.30) |
| Positive predictive value (%) | 98.31 (94.95-99.44) |
| Negative predictive value (%) | 83.60 (80.05-86.61) |
| Accuracy (%) | 88.87 (85.76-91.50) |
| True near fall status | Test result: negative (non–near fall), n | Test result: positive (near fall), n | Total, n |
|---|---|---|---|
| Non–near fall (includes falls and nonfalls) | 343 (true negative) | 0 (false positive) | 343 |
| Near fall | 43 (false negative, 11.1%) | 206 (true positive) | 249 |
| Statistic | Value (95% CI) |
|---|---|
| Sensitivity (%) | 88.86 (85.29-91.82) |
| Specificity (%) | 100 (98.23-100) |
| Positive likelihood ratio | N/A (no false positives) |
| Negative likelihood ratio | 0.11 (0.08-0.15) |
| Positive predictive value (%) | 100 |
| Negative predictive value (%) | 82.73 (78.33-86.39) |
| Accuracy (%) | 92.74 (90.34-94.69) |

N/A: not applicable.
Performance by Smartwatch Model
The numbers of responses for each smartwatch model were A=186, B=186, and C=122. Model A was used 173 times on the left wrist and 13 times on the right wrist. As shown in the table below, there were differences among the models in sensitivity and specificity, but none were significant; this was also true of the operating system. Similar results were found for near falls.
| Model | Sensitivity, % (95% CI) | Specificity, % (95% CI) |
|---|---|---|
| A | 78.8 (68.6-86.9) | 99 (94.6-100) |
| B | 71.8 (61-81) | 98 (93-99.8) |
| C | 82.1 (96.6-91.1) | 100 (94.6-100) |
The primary goal of this study was to evaluate the validity of an algorithm programmed in commercially available smartwatches to detect induced falls. Our study found that the algorithm had an overall sensitivity of 77% and specificity of 99%. The false-positive rate was very low at 1.7%, while the false-negative rate was 16.4%. The positive and negative predictive values were 98% and 84%, respectively, while the accuracy was 89%. Falls were more likely to be detected if the fall was on the same side as the wrist with the smartwatch. Similar results were found for near falls. There was a trend toward some smartwatches having superior sensitivity, though neither this nor the operating system reached statistical significance.
Several studies have been conducted to assess the performance of wearable devices for fall detection, mostly by using smartphones or other specialized self-created wearable devices [- ]. However, only a few of these studies have been performed using commercially available smartwatches [ - ]. In addition, this study is the only one to assess the performance of a fall detection algorithm in different commercially available smartwatches with different operating systems using a video recording system as a gold standard and using blinded data analysis.
The fall detection algorithm was threshold based, programmed to send an alert once a predetermined threshold had been breached. Threshold-based algorithms, as opposed to pattern recognition methods, are preferred on smartphone operating systems due to the restrictions on the computing and storage capabilities of the devices. Indeed, pattern recognition methods are costly and require large-scale data analysis, access to databases, and long training periods.
Casilari and Oviedo-Jiménez tested different algorithms with an LG W110 model R smartwatch, finding that fall detection performance depends on the algorithm used. However, there were only 4 participants with a total of 40 falls. Sensitivity ranged from 70% to 100% and specificity from 80% to 100%, depending on the type of fall. Mauldin et al [ ] studied three different pattern recognition algorithms based on Naive Bayes (NB), support vector machine (SVM), and deep learning models using a Microsoft Band 2 smartwatch. In this context, the algorithm tested in our study performed better than their NB and SVM models in sensitivity and precision; compared with their deep learning model, our algorithm performed better in precision but not in sensitivity. Mauldin et al [ ] also reported testing an Android Wear-based, commercially available fall detection app (Rightminder) released on the Google Play store. The sensitivity was only 50%, and no technical details of this app are publicly available.
Further, these studies used small groups of participants (3-7) performing several falls each (up to 10 per side). In our experience in laboratory settings, the dynamics of the falls are affected by repetition, as participants tend to fall in the same way. We minimized this effect by having a larger number of participants (N=22) repeat each fall only twice per side. Furthermore, the previous studies asked the participants to fall rather than having them fall as a result of being pushed unexpectedly by another person, as was done in our study; given its spontaneity, this approach more accurately reflects a true fall. The differing protocol designs make it impossible to compare these studies accurately against one another.
Our findings suggest that the performance of the algorithm differs among devices of various brands. Indeed, the combined performance of the brand A and C smartwatches on sensitivity and false-negative rates was higher than that of the brand B smartwatch, whereas the brand B smartwatch's precision, and thus its false-positive rate, was better than those of the brand A and C devices. This is probably related to differences in the operating systems. Medrano et al explain that in current smartphone operating systems such as Android and iOS, it is difficult to configure specific sampling rates. As the sampling frequencies in the two systems differ, the performance of the algorithm will likely be influenced by the operating system used. Moreover, Fudickar et al [ ] investigated the impact of the accelerometer sampling frequency on the performance of different threshold-based algorithms on smartphones, concluding that a detection system must deal with the polling frequency of the accelerometer sensors embedded in the device. No studies have addressed this issue on smartwatches; however, the situation is likely the same.
Additionally, our study found that the performance of the algorithm could be strongly dependent on the smartwatch model. According to Silva et al, the performance of a fall detection algorithm can be affected by the quality of the sensors embedded in the device, and, as the manufacturer can change the sensors over time, the performance of the algorithm will also depend on the smartwatch model [ ]. This could explain the differences we found between the smartwatch models tested, making comparison with other studies difficult if they have not used the same smartwatch device and model.
It has been previously reported that the direction of the fall affects the performance of the algorithm used in smartwatches [ , ]. In this context, the performance of the algorithm is largely dependent on which side the fall occurred relative to the smartwatch. Our algorithm performs better when the fall occurs on the same side as the wrist wearing the smartwatch than when it occurs on the opposite side, a tendency observed regardless of the smartwatch model. Mauldin et al [ ] found similar performance in the three pattern recognition models they tested. Casilari and Oviedo-Jiménez [ ] reported only an overall result for side falls; therefore, it is not possible to know whether they found the same tendency.
Regarding backward falls, Mauldin et al found that their algorithm models had poor performance indices in this direction, thought to be a consequence of less wrist movement in backward falls compared to other fall directions. However, our algorithm performed best on backward falls, suggesting that the intensity of the wrist movement or the impact does not affect the algorithm in this fall direction.
Finally, another factor that could affect the performance of the algorithm in detecting falls in different directions is the participant's body habitus. It has been proposed that height and weight could affect the performance of the algorithm; thus, implementing personalized settings according to participants' characteristics is one way to improve the algorithm's sensitivity. To address these issues of body habitus and smartwatch model, we deliberately adjusted the algorithm settings during the test falls. This likely contributed to the positive results and should be considered in future studies.
Our study has some limitations. First, there was a relatively small number of participants though not in comparison with other published studies. Second, not all participants wore a smartwatch on each arm, potentially influencing the results. However, only 1 participant was wearing one smartwatch; the results were essentially unchanged with that participant’s data removed. Third, our participants were healthy in contradistinction to the older adult population who would most likely be using the app. Nonetheless, inducing falls in such participants would expose them to considerable risk.
Despite these reservations, the smartwatch app performed well in comparison to studies of other apps, under more rigorous conditions and with more stringent analyses, yielding an accuracy of 89%. Indeed, the field of physical activity sensors generally accepts an accuracy of 70% to 80%. Our future research will focus on investigating the performance of the algorithm in different smartwatch models using personalized settings. Moreover, head-to-head studies of fall detection devices in smartwatches using real-world participants and settings are likely to improve the available evidence concerning the effectiveness of these devices for consumers such as older adults and for regulatory or licensing bodies.
The authors would like to acknowledge the National Facility for Human Robot Interaction Research, University of New South Wales, and Michael Gratton. The authors would also like to acknowledge Francisco Fleming in his role as a research assistant and Serge Lauriou in his role as an advisor to My Medic Watch Pty Ltd.
BB and SGF contributed to the concept and design of the study. All authors were involved in the implementation of the study and data collection as well as analyses. All authors contributed to the writing of the manuscript. The final version of the paper has been seen and approved by all authors.
Conflicts of Interest
BB reports grants from My Medic Watch during the conduct of the study. In addition, BB has patent AU2017338619 with royalties paid, patent CA3039538 with royalties paid, patent CN109843171 with royalties paid, patent EP3522782 with royalties paid, patent JP2020504806 with royalties paid, patent KR1020190058618 with royalties paid, and patent US20200051688 with royalties paid. BB is a scientific advisor to My Medic Watch Pty Ltd. EB reports grants from My Medic Watch during the conduct of the study. In addition, EB has patent AU2017338619 with royalties paid, patent CA3039538 with royalties paid, patent CN109843171 with royalties paid, patent EP3522782 with royalties paid, patent JP2020504806 with royalties paid, patent KR1020190058618 with royalties paid, and patent US20200051688 with royalties paid. EB is the Director of My Medic Watch Pty Ltd. SGF has nothing to disclose. My Medic Watch provided unrestricted funds to cover the infrastructure costs of the study: ethics submission, research assistant for participant logistics and data collection, and statistician for data analyses.
- Peel NM. Epidemiology of falls in older age. Can J Aging 2011 Mar;30(1):7-19. [CrossRef] [Medline]
- Rubenstein LZ. Falls in older people: epidemiology, risk factors and strategies for prevention. Age Ageing 2006 Sep;35 Suppl 2:ii37-ii41. [CrossRef] [Medline]
- Hartholt KA, van Beeck EF, Polinder S, van der Velde N, van Lieshout EMM, Panneman MJM, et al. Societal consequences of falls in the older population: injuries, healthcare costs, and long-term reduced quality of life. J Trauma 2011 Sep;71(3):748-753. [CrossRef] [Medline]
- Tinetti ME, Liu WL, Claus EB. Predictors and prognosis of inability to get up after falls among elderly persons. JAMA 1993 Jan 06;269(1):65-70. [Medline]
- Fleming J, Brayne C, Cambridge City over-75s Cohort (CC75C) study collaboration. Inability to get up after falling, subsequent time on floor, and summoning help: prospective cohort study in people over 90. BMJ 2008 Nov 17;337:a2227 [FREE Full text] [CrossRef] [Medline]
- Heinrich S, Rapp K, Rissmann U, Becker C, König HH. Cost of falls in old age: a systematic review. Osteoporos Int 2010 Jun;21(6):891-902. [CrossRef] [Medline]
- Gao W, Emaminejad S, Nyein HYY, Challa S, Chen K, Peck A, et al. Fully integrated wearable sensor arrays for multiplexed in situ perspiration analysis. Nature 2016 Jan 28;529(7587):509-514 [FREE Full text] [CrossRef] [Medline]
- Roush RE, Teasdale TA, Murphy JN, Kirk MS. Impact of a personal emergency response system on hospital utilization by community-residing elders. South Med J 1995 Sep;88(9):917-922. [CrossRef] [Medline]
- Bernstein M. "Low-tech" personal emergency response systems reduce costs and improve outcomes. Manag Care Q 2000;8(1):38-43. [Medline]
- Reeder B, David A. Health at hand: a systematic review of smart watch uses for health and wellness. J Biomed Inform 2016 Oct;63:269-276 [FREE Full text] [CrossRef] [Medline]
- Lu T, Fu C, Ma MH, Fang C, Turner AM. Healthcare applications of smart watches. A systematic review. Appl Clin Inform 2016 Sep 14;7(3):850-869 [FREE Full text] [CrossRef] [Medline]
- Srygley JM, Herman T, Giladi N, Hausdorff JM. Self-report of missteps in older adults: a valid proxy of fall risk? Arch Phys Med Rehabil 2009 May;90(5):786-792 [FREE Full text] [CrossRef] [Medline]
- Casilari E, Luque R, Morón MJ. Analysis of Android device-based solutions for fall detection. Sensors (Basel) 2015 Jul 23;15(8):17827-17894 [FREE Full text] [CrossRef] [Medline]
- Lapierre N, Neubauer N, Miguel-Cruz A, Rios Rincon A, Liu L, Rousseau J. The state of knowledge on technologies and their use for fall detection: a scoping review. Int J Med Inform 2018 Mar;111:58-71. [CrossRef] [Medline]
- Pannurat N, Thiemjarus S, Nantajeewarawat E. Automatic fall monitoring: a review. Sensors (Basel) 2014 Jul 18;14(7):12900-12936 [FREE Full text] [CrossRef] [Medline]
- Casilari E, Oviedo-Jiménez MA. Automatic fall detection system based on the combined use of a smartphone and a smartwatch. PLoS One 2015;10(11):e0140929 [FREE Full text] [CrossRef] [Medline]
- Maglogiannis I, Ioannou C, Spyroglou G, Tsanakas P. Fall detection using commodity smart watch and smart phone. In: Artificial Intelligence Applications and Innovations 10th IFIP WG 12.5 International Conference, AIAI 2014, Rhodes, Greece, September 19-21, 2014. Proceedings. Berlin, Heidelberg: Springer; 2014:70-78.
- Mauldin TR, Canby ME, Metsis V, Ngu AHH, Rivera CC. SmartFall: a smartwatch-based fall detection system using deep learning. Sensors (Basel) 2018 Oct 09;18(10):3363 [FREE Full text] [CrossRef] [Medline]
- Sukreep S, Elgazzar K, Chu H, Mongkolnam P, Nukoolkit C. iWatch: a fall and activity recognition system using smart devices. Int J Comp Commun Eng 2019;8(1):18-31. [CrossRef]
- Medrano C, Igual R, Plaza I, Castro M. Detecting falls as novelties in acceleration patterns acquired with smartphones. PLoS One 2014;9(4):e94811 [FREE Full text] [CrossRef] [Medline]
- Fudickar S, Lindemann A, Schnor B. Threshold-based fall detection on smart phones. In: Proceedings of the International Conference on Health Informatics. 2014 Presented at: HEALTHINF; March 3-6, 2014; Loire Valley, France p. 303-309. [CrossRef]
- Silva M, Teixeira PM, Abrantes F, Sousa F. Design and evaluation of a fall detection algorithm on mobile phone platform. In: Gabrielli S, Elias D, Kahol K, editors. Ambient Media and Systems Second International ICST Conference, AMBI-SYS 2011, Porto, Portugal, March 24-25, 2011, Revised Selected Papers. Berlin, Heidelberg: Springer; 2011:28-35.
- Sposaro F, Tyson G. iFall: an Android application for fall monitoring and response. Annu Int Conf IEEE Eng Med Biol Soc 2009;2009:6119-6122. [CrossRef] [Medline]
- Awais M, Palmerini L, Bourke AK, Ihlen EAF, Helbostad JL, Chiari L. Performance evaluation of state of the art systems for physical activity classification of older subjects using Inertial Sensors in a Real Life Scenario: a benchmark study. Sensors (Basel) 2016 Dec 11;16(12):2105 [FREE Full text] [CrossRef] [Medline]
NB: Naive Bayes
SVM: support vector machine
Edited by G Eysenbach; submitted 02.05.21; peer-reviewed by HL Tam, E Sadeghi-Demneh, B Chaudhry; comments to author 30.07.21; revised version received 23.09.21; accepted 19.12.21; published 21.03.22.

Copyright
©Bruce Brew, Steven G Faux, Elizabeth Blanchard. Originally published in JMIR Formative Research (https://formative.jmir.org), 21.03.2022.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication on https://formative.jmir.org, as well as this copyright and license information must be included.