Published on 10.11.2020 in Vol 4, No 11 (2020): November

Preprints (earlier versions) of this paper are available at https://www.medrxiv.org/content/10.1101/2020.09.05.20164731v1.
Diagnosing Chronic Obstructive Airway Disease on a Smartphone Using Patient-Reported Symptoms and Cough Analysis: Diagnostic Accuracy Study


Original Paper

1Joondalup Health Campus, Perth, Australia

2School of Nursing, Midwifery and Paramedicine, Curtin University, Perth, Australia

3Partnering in Health Innovations Research Group, Joondalup Health Campus, Perth, Australia

4Genesis Care Sleep and Respiratory, Perth, Australia

5Bear Statistics, Perth, Australia

6ResApp Health, Brisbane, Australia

7School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Australia

Corresponding Author:

Paul Porter, MBBS, FRACP

Joondalup Health Campus

Suite 204, Joondalup Health Campus

Perth, 6027

Australia

Phone: 61 0412484747

Email: phporter@iinet.net.au


Background: Rapid and accurate diagnosis of chronic obstructive pulmonary disease (COPD) is problematic in acute care settings, particularly in the presence of infective comorbidities.

Objective: The aim of this study was to develop a rapid smartphone-based algorithm for the detection of COPD in the presence or absence of acute respiratory infection and evaluate diagnostic accuracy on an independent validation set.

Methods: Participants aged 40 to 75 years with or without symptoms of respiratory disease who had no chronic respiratory condition apart from COPD, chronic bronchitis, or emphysema were recruited into the study. The algorithm analyzed 5 cough sounds and 4 patient-reported clinical symptoms, providing a diagnosis in less than 1 minute. Clinical diagnoses were determined by a specialist physician using all available case notes, including spirometry where available.

Results: The algorithm demonstrated high positive percent agreement (PPA) and negative percent agreement (NPA) with clinical diagnosis for COPD in the total cohort (N=252; PPA=93.8%, NPA=77.0%, area under the curve [AUC]=0.95), in participants with pneumonia or infective exacerbations of COPD (n=117; PPA=86.7%, NPA=80.5%, AUC=0.93), and in participants without an infective comorbidity (n=135; PPA=100.0%, NPA=74.0%, AUC=0.97). In those who had their COPD confirmed by spirometry (n=229), PPA was 100.0% and NPA was 77.0%, with an AUC of 0.97.

Conclusions: The algorithm demonstrated high agreement with clinical diagnosis and rapidly detected COPD in participants presenting with or without other infective lung illnesses. The algorithm can be installed on a smartphone to provide bedside diagnosis of COPD in acute care settings, inform treatment regimens, and identify those at increased risk of mortality due to seasonal or other respiratory ailments.

Trial Registration: Australian New Zealand Clinical Trials Registry ACTRN12618001521213; http://www.anzctr.org.au/Trial/Registration/TrialReview.aspx?id=375939

JMIR Form Res 2020;4(11):e24587

doi:10.2196/24587

Keywords


Introduction
Chronic obstructive pulmonary disease (COPD) is the fourth leading cause of mortality, affecting more than 384 million individuals worldwide [1]. It is characterized by airflow limitation and a progressive decline in lung function [2]. The population prevalence of COPD via spirometry screening is reported to be 9% to 26% in those older than 40 years [3]. It is estimated that 80% of people with COPD are undiagnosed [4], and up to 60% of those with a diagnosis of COPD have been found to be misdiagnosed upon subsequent spirometry [5,6]. Moreover, 30% to 60% of patients who have been diagnosed by a physician as having COPD have not undergone spirometry testing [7]. In a study of 533 patients with COPD, 15% of those with spirometry tests did not show obstruction and 45% did not fulfill quality criteria [8].

COPD should be considered in patients who present with dyspnea, chronic cough, sputum production, or recurrent lower respiratory tract infections and in patients who have been exposed to tobacco or air pollution. Airflow limitation, demonstrated by a forced expiratory volume in the first second to forced vital capacity (FEV1/FVC) ratio of <0.7 on postbronchodilator spirometry, is considered diagnostic of COPD according to criteria stipulated by the Global Initiative for Chronic Obstructive Lung Disease (GOLD) [2]. The severity of airflow limitation in COPD can be classified by the degree of reduction in FEV1 as a percentage of the predicted value [2]. However, spirometry is not routinely used in emergency departments or primary care settings due to inexperience, time constraints, and availability of equipment [9]. Further, the COPD remote patient monitoring devices (spirometers and oximeters) with the most technological promise and compatibility with daily living are expensive and of limited use [10].
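To make the diagnostic threshold and severity grading above concrete, the following minimal Python sketch applies the GOLD fixed-ratio rule and severity categories to a post-bronchodilator spirometry result. It is illustrative only and is not part of the study software; the function and field names are assumptions.

```python
# Illustrative sketch of the GOLD fixed-ratio criterion and severity grading
# described above; names and structure are assumptions, not the study's code.

def gold_assessment(fev1_l: float, fvc_l: float, fev1_pct_predicted: float) -> dict:
    """Classify post-bronchodilator spirometry using the GOLD fixed-ratio rule."""
    obstructed = (fev1_l / fvc_l) < 0.70  # airflow limitation per GOLD
    if not obstructed:
        return {"copd": False, "gold_stage": None}

    # Severity graded by FEV1 as a percentage of the predicted value
    if fev1_pct_predicted >= 80:
        stage = "GOLD 1 (mild)"
    elif fev1_pct_predicted >= 50:
        stage = "GOLD 2 (moderate)"
    elif fev1_pct_predicted >= 30:
        stage = "GOLD 3 (severe)"
    else:
        stage = "GOLD 4 (very severe)"
    return {"copd": True, "gold_stage": stage}


# Example: FEV1 1.6 L, FVC 2.9 L, FEV1 at 55% of predicted -> COPD, GOLD 2
print(gold_assessment(1.6, 2.9, 55.0))
```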

Early and accurate diagnosis of COPD is imperative to ensure initiation of correct treatment, particularly as evidence suggests that the incipient stages represent a period of rapid decline in lung function, during which cessation of smoking and targeted intervention may be of value [11]. Rapid identification and management of COPD is important in acute care settings, as there is a heightened risk of mortality from respiratory infections such as seasonal influenza [12]. SARS-CoV-2 has a reported case fatality rate of 1.4% for patients without comorbid conditions versus 8.0% for those with chronic respiratory conditions [13].

Screening for COPD in primary care settings using spirometry in asymptomatic patients has not been found to be efficient, as high numbers of patients need to be screened to detect any cases [14,15]. Screening questionnaires, such as the COPD diagnostic questionnaire (CDQ), have performed poorly in an asymptomatic cohort in the primary care setting [16]. We propose that the best use of an algorithm for screening is in a scenario in which patients present to a health care facility with symptoms, as this has a higher pretest probability of case detection.

We previously demonstrated high diagnostic agreement of an automated algorithm with clinical diagnoses for pediatric respiratory diseases, including croup, asthma, bronchiolitis, and pneumonia. The algorithm also accurately separated upper from lower respiratory tract conditions [17]. The technology, which has regulatory approval, is similar to that used in speech recognition software and combines lower airway audio data transmitted during cough events and simple patient-reported clinical symptoms to derive the diagnostic probability output [18]. As the lower airway is open to the outside during a cough, sounds are transmitted through the mouth and can be recorded. In this way, it is similar to traditional auscultation; however, much higher bandwidth is achievable using our method, as the chest wall no longer reduces sound transmission. We recorded audio using a standard smartphone, and the built-in diagnostic algorithm provided a rapid result without requiring clinical examination or additional diagnostic tests.

In this paper, we describe the development and evaluate the accuracy of an algorithm for diagnosing COPD in a cohort of mixed respiratory disorders, including acute respiratory infections. The intended use population is patients who present to health settings with suspected respiratory illness.

Methods
Study Population and Setting

Between January 2016 and March 2019, a convenience study sample was obtained by prospectively recruiting participants from the emergency department, low-acuity ambulatory care, and inpatient wards of a large general hospital in Western Australia and from the consulting rooms of a respiratory physician.

This diagnostic accuracy study is part of a more extensive development program (Breathe Easy; Australian New Zealand Clinical Trials Registry ACTRN12618001521213). Patients were approached if they presented to a participating site with signs or symptoms of respiratory disease or to specialist rooms for a lung function test. Patients with no discernible symptoms of respiratory disease were also recruited. Patients were excluded if they were on ventilatory support, had a terminal disease, were medically unstable, had structural upper airway disease, or had a medical contraindication to providing a voluntary cough (eg, severe respiratory distress; eye, chest, or abdominal surgery within 3 months; history of pneumothorax). Patients with uncontrolled heart failure or cardiomyopathy, neuromuscular disease, or lobectomy or pneumonectomy were also excluded. From this cohort, only those aged 40 to 75 years were enrolled in the COPD development program.

Written informed consent was obtained from all participants, and the study was approved by a human research ethics committee (Reference No. 1501). There were no adverse events reported. The study did not interfere with clinical care and all treatment decisions were at the discretion of the treating physician.

Index Test (Software Algorithm)

The development of the mathematical techniques used to derive the algorithm has been described in depth elsewhere [17-20]. Briefly, an independent training cohort (N=564) was used to obtain clinical data and cough samples (from which mathematical features were extracted). In developing the algorithm, selected features were weighted and combined to build various continuous classifier models used to determine the probability of a COPD diagnosis (reference test). The probability output of the algorithm represents the specific, weighted combination of features. Multiple clinical symptoms and audio characteristics were examined and combined, with the goal to minimize the number of inputs and to use patient-reported symptoms rather than clinically determined signs, vital signs, or investigations. Each input added to the overall accuracy and discriminatory clinical ability of the algorithm. The optimal model and corresponding probability decision threshold were selected using a receiver operating characteristic (ROC) curve, with due consideration given to achieving a balance of positive percent agreement (PPA) and negative percent agreement (NPA) [18]. Once the optimal model was developed, it was locked from further development and prospectively tested for accuracy on an independent testing set.
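The specific features and classifier are not detailed here, but the general workflow described above (weighting and combining cough-derived audio features with patient-reported inputs into a continuous probability output, then locking a decision threshold chosen from the ROC curve to balance PPA and NPA) can be sketched as follows. This is a hypothetical illustration on synthetic data using a logistic regression stand-in; the feature names, data, and model family are assumptions, not the study's implementation.

```python
# Hypothetical sketch of the general classifier-development workflow described
# above. Synthetic data and a logistic regression stand-in are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)

# Synthetic training matrix: cough-derived audio features plus the 4 reported
# inputs (age, pack-years, acute cough, fever); y = reference COPD diagnosis.
n = 564  # size quoted for the independent training cohort
X = np.column_stack([
    rng.normal(size=n),            # e.g. a cough spectral feature (assumed)
    rng.normal(size=n),            # e.g. a cough duration feature (assumed)
    rng.uniform(40, 75, size=n),   # age (years)
    rng.exponential(10, size=n),   # smoking pack-years
    rng.integers(0, 2, size=n),    # acute cough reported (0/1)
    rng.integers(0, 2, size=n),    # fever reported (0/1)
])
y = rng.integers(0, 2, size=n)     # placeholder reference labels

model = LogisticRegression(max_iter=1000).fit(X, y)
prob = model.predict_proba(X)[:, 1]          # continuous probability output

# Choose the decision threshold where sensitivity and specificity are balanced,
# then lock it before prospective testing on an independent set.
fpr, tpr, thresholds = roc_curve(y, prob)
balanced = np.argmin(np.abs(tpr - (1 - fpr)))
decision_threshold = thresholds[balanced]
print(f"locked decision threshold: {decision_threshold:.3f}")
```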

Audio data were obtained from 5 coughs using a smartphone (iPhone 6; Apple Inc) held approximately 50 cm away from the participant at a 45° angle to the direction of the airflow. Recordings were undertaken in standard clinical environments; however, we took care to avoid other people’s coughs and voices. The cough recording was obtained within 30 minutes of the physical examination of the patient to ensure the clinical features had not changed. If the participant was unable to provide 5 coughs that were recognized by the cough detection software or if the cough recording became corrupted, the participant was excluded from further analysis.

The following 4 clinical symptoms were selected for inclusion in the tested model: participant age, smoking pack-years, and participant-reported presence of acute cough or fever during this illness. One smoking pack-year was defined as 20 cigarettes or 20 g of tobacco smoked each day over 1 year [21]. Where the clinical symptoms were partially unknown, the algorithm did not return a response.
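For clarity, the pack-year definition above reduces to a simple calculation; the helper below is an illustrative sketch, and its name and interface are assumptions.

```python
# Illustrative helper for the pack-year definition above
# (20 cigarettes, or 20 g of tobacco, per day for 1 year = 1 pack-year).
def pack_years(cigarettes_per_day: float, years_smoked: float) -> float:
    return (cigarettes_per_day / 20.0) * years_smoked

# Example: 30 cigarettes per day for 20 years -> 30.0 pack-years
print(pack_years(30, 20))
```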

Reference Test (Clinical Diagnosis or Spirometry)

A full medical assessment was performed on all participants at the time of enrollment, including history and clinical examination. Diagnostic tests were ordered by the treating clinician independently of the study and results were available to researchers.

A specialist physician assigned a clinical diagnosis to each participant based on a review of their medical file, including discharge diagnosis, all outpatient and inpatient notations, and radiology and laboratory results. The same clinical diagnosis definitions (Table 1) were employed in both the testing set (described here) and in the training set used for algorithm development.

Spirometry was performed according to standard methodology [2,22].

Table 1. Clinical diagnosis definitions.
COPDa: Respiratory symptoms consistent with COPD and a history of smoking (>10 pack-years) or environmental exposure, AND either (1) if spirometry was performed, FEV1/FVCb <0.7 on the best test (after bronchodilator), OR (2) if spirometry was not performed, a previous physician diagnosis of COPD.
COPD (infectious exacerbation): ALL OF: met the COPD case definition (as above); worsening symptoms of SOBc or cough; signs and symptoms of acute respiratory tract infection.
Acute LRTId: New lower respiratory tract symptoms (SOB, cough, chest pain for <1 week) and acute fever, AND either (1) for pneumonia, new consolidation on CXRe or CTf, OR (2) for LRTI, infiltrate but no consolidation on CXR, or CXR not performed.
No lower airway disease: No lung disease and spirometry results within normal parameters (FEV1/FVC ≥0.7 on best test).

aCOPD: chronic obstructive pulmonary disease.

bFEV1/FVC: forced expiratory volume in the first second to forced vital capacity.

cSOB: shortness of breath.

dLRTI: lower respiratory tract infection.

eCXR: chest x-ray.

fCT: computed tomography.

Analysis Population

Diagnostic accuracy tests were performed for 4 groups using an independent test set of participants. The same inclusion and exclusion criteria were used for both training and test sets (Table 2).

After a clinical diagnosis was assigned to all participants, the database was locked and the software algorithm was run by an independent researcher to ensure blinding was maintained. Each participant’s cough sound data and clinical diagnosis were only used once in the prospective test.

Table 2. Analysis groups.
Group 1: COPDa total cohortb. Role: to determine the presence or absence of COPD. Included participants with: COPD with and without acute lower respiratory tract infections (pneumonia and LRTIc); chronic bronchitis, emphysema, or chronic asthma (with and without acute lower respiratory tract infections, such as pneumonia and LRTI); no underlying COPD with acute lower respiratory tract infections (pneumonia and LRTI); or no lower airway disease. Excluded participants with physician-diagnosed episodic asthma who were experiencing an isolated acute exacerbation or physician-diagnosed restrictive lung disease.
Group 2A: COPD with infectious comorbidity. Role: to determine the presence or absence of COPD when participants with COPD also have an acute LRTI. All of group 1, excluding participants with COPD without LRTI.
Group 2B: COPD without infectious comorbidity. Role: to determine the presence or absence of COPD when participants with COPD do not have an acute LRTI. All of group 1, excluding participants with COPD with LRTI.
Group 3: COPD confirmed by spirometry. Role: to determine the presence or absence of spirometry-confirmed COPD. Group 1, excluding those whose COPD was not confirmed by spirometry.

aCOPD: chronic obstructive pulmonary disease.

bFrom the total cohort (group 1), groups 2A, 2B, and 3 were derived.

cLRTI: lower respiratory tract infection.

Statistical Analysis

Power calculations were derived as follows. Based on an expected positive and negative percent agreement greater than 85% from the training program, to obtain a superiority end point of 75% (lower bound 95% confidence interval of maximum width within 0.10), a minimum of 48 cases were required.

PPA is defined as the percentage of participants with a positive index test result for a specified condition who also have a positive reference standard for the same condition. NPA is the percentage of participants who return negative results for both tests.

The primary study end point was defined as the PPA and the NPA of the index test with the reference standard, with 95% confidence intervals calculated using the Clopper-Pearson method. The probability of positive clinical diagnosis was calculated for each participant by the final classifier model and was used as the decision threshold in the derived ROC curve.
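As an illustration of these agreement metrics and the exact confidence intervals named above, the following Python sketch computes a proportion with its two-sided Clopper-Pearson interval. The helper names are assumptions, and the example counts are taken from the total-cohort row later reported in Table 5.

```python
# Minimal sketch of PPA/NPA with exact (Clopper-Pearson) confidence intervals
# as described above; helper names are illustrative assumptions.
from scipy.stats import beta

def clopper_pearson(k: int, n: int, alpha: float = 0.05):
    """Exact two-sided binomial CI for k successes out of n trials."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

def percent_agreement(agree: int, total: int):
    """PPA or NPA as a proportion with its 95% Clopper-Pearson CI."""
    return agree / total, clopper_pearson(agree, total)

# Example using the total-cohort counts reported in Table 5:
# PPA = 61/65 index-positive among reference-positive,
# NPA = 144/187 index-negative among reference-negative.
print("PPA:", percent_agreement(61, 65))
print("NPA:", percent_agreement(144, 187))
```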

Results
In the prospective testing set, 270 participants met inclusion criteria for and were enrolled in the COPD diagnostic study. Of these, 153 were from the hospital emergency department or inpatient wards, and 117 were respiratory outpatients or from the ambulatory acute care unit.

A total of 252 participants provided a valid index and reference test (Figure 1); 2 were excluded because the clinical diagnosis was recorded as unsure. The mean age of the participants was 59.7 (SD 9.2) years, and 148 of the 252 (58.7%) participants were women. Those with COPD were older than those without COPD (65.5 vs 57.8 years; P<.001), although the sex proportion did not differ with the diagnosis. Of the 252 participants analyzed, 215 (85.3%) had at least one of the following respiratory symptoms: acute, chronic, or productive cough; fever; rhinorrhea; shortness of breath; wheeze; or hoarse voice. Participant characteristics are shown in Table 3, including spirometry results where available.

Figure 1. The flow of participants through the study. COPD: chronic obstructive pulmonary disease.
Table 3. Participant characteristics. Data include all participants in analyzed groups (COPD positive and negative). Values are listed in column order: COPDa total cohort (group 1, N=252); COPD with infectious comorbidity (group 2A, n=117); COPD without infectious comorbidity (group 2B, n=135); COPD confirmed by spirometry (group 3, n=229).
Age, mean (SD): 59.7 (9.2); 60.6 (9.1); 59.0 (9.1); 59.0 (9.1)
BMI, mean (SD): 28.8 (7.3); 29.0 (7.9); 28.6 (6.7); 29.2 (7.3)
FEV1b, mean (SD): 2.3 (1.0); 0.9 (0.2); 2.3 (1.0); 2.3 (1.0)
FVCc, mean (SD): 3.2 (1.1); 1.9 (0.4); 3.3 (1.1); 3.2 (1.1)
FEV1/FVC, mean (SD): 69.1 (16.2); 46.1 (11.3); 70.5 (15.4); 69.1 (16.2)
Predicted FEV1, mean (SD): 81.2 (28.8); 34.3 (12.3); 84.0 (27.0); 81.2 (28.8)
Predicted FVC, mean (SD): 90.7 (22.2); 57.6 (14.0); 92.7 (21.0); 90.7 (22.2)
Predicted FEV1/FVC, mean (SD): 83.2 (21.9); 58.4 (14.0); 85.3 (21.1); 83.2 (21.9)
Acute cough, n (%)
  No: 136 (54.0); 22 (18.8); 114 (84.4); 129 (56.3)
  Yes: 116 (46.0); 95 (81.2); 21 (15.6); 100 (43.7)
Fever, n (%)
  No: 126 (58.1); 39 (33.3); 87 (87.0); 114 (58.8)
  Yes: 91 (41.9); 78 (66.7); 13 (13.0); 80 (41.2)
Rhinorrhea, n (%)
  No: 116 (53.7); 61 (52.1); 55 (55.6); 101 (52.3)
  Yes: 100 (46.3); 56 (47.9); 44 (44.4); 92 (47.7)
Wheeze, n (%)
  No: 145 (66.8); 84 (71.8); 61 (61.0); 134 (69.1)
  Yes: 72 (33.2); 33 (28.2); 39 (39.0); 60 (30.9)

aCOPD: chronic obstructive pulmonary disease.

bFEV1: forced expiratory volume in the first second.

cFVC: forced vital capacity.

For cases where spirometry (n=229) was used to confirm the presence or absence of COPD, the mean age of participants was 59.0 (SD 9.1) years and 80 (65.0%) participants were women, with FEV1 measurements as shown in Table 4. The COPD-negative group included 6 patients with chronic fixed asthma who had an FEV1 below 80%.

Table 4. Spirometry-derived FEV1 (GOLD severity categories) in participants with and without COPD [2].
Percent predicted FEV1a (GOLDb severity category): COPDc positive, n (%); COPD negative, n (%)
<30.0% (GOLD 4: very severe): 5 (12); 0 (0.0)
30.0% to 49.9% (GOLD 3: severe): 17 (40); 2 (2)
50.0% to 79.9% (GOLD 2: moderate): 16 (38); 4 (5)
≥80.0% (GOLD 1: mild): 4 (10); 75 (93)
Total: 42 (100); 81 (100)

aFEV1: forced expiratory volume in the first second.

bGOLD: Global Initiative for Chronic Obstructive Lung Disease.

cCOPD: chronic obstructive pulmonary disease.

The calculated PPA and NPA of the algorithm with clinical diagnosis and area under the curve (AUC) are shown in Table 5. ROC curves for each test group are shown in Figure 2.

Although the algorithm was developed to discriminate based on GOLD criteria, we repeated the analysis using lower limit of normal (LLN) thresholds to diagnose COPD. Test performance in the COPD confirmed by spirometry group (n=229) returned a PPA of 100% (95% CI 90.75%-100.0%) and an NPA of 75.4% (95% CI 68.65%-81.32%).

Table 5. PPA, NPA, and calculated AUC of the algorithm (index test) compared with clinical diagnosis (reference test).
Group: PPAa, % (95% CI), n/N; NPAb, % (95% CI), n/N; AUCc (95% CI)
Group 1: COPDd total cohort (n=252): PPA 93.8 (85.0-98.3), 61/65; NPA 77.0 (70.3-82.8), 144/187; AUC 0.95 (0.9-1.0)
Group 2A: COPD with infectious comorbidity: PPA 86.7 (69.3-96.2), 26/30; NPA 80.5 (70.6-88.2), 70/87; AUC 0.93 (0.9-1.0)
Group 2B: COPD without infectious comorbidity: PPA 100 (90.0-100.0), 35/35; NPA 74.0 (64.3-82.3), 74/100; AUC 0.97 (0.9-1.0)
Group 3: COPD confirmed by spirometry: PPA 100 (91.6-100.0), 42/42; NPA 77.0 (70.3-82.8), 144/187; AUC 0.97 (0.9-1.0)

aPPA: positive percent agreement.

bNPA: negative percent agreement.

cAUC: area under the curve.

dCOPD: chronic obstructive pulmonary disease.

Figure 2. Receiver operating characteristic curve and AUC for (A) COPD total cohort (group 1), AUC=0.95 (95% CI 0.92-0.98); (B) COPD with infectious comorbidity (group 2A), AUC=0.93 (95% CI 0.88-0.98); (C) COPD without infectious comorbidity (group 2B), AUC=0.974 (95% CI 0.95-1.00); (D) COPD diagnosed by spirometry group (group 3), AUC=0.973 (95% CI 0.95-1.00). AUC: area under the curve; COPD: chronic obstructive pulmonary disease.
Discussion
We have described a simple, rapid diagnostic test for COPD that demonstrates high agreement with clinical diagnosis in the acute setting. Diagnostic agreement of the software algorithm with clinical diagnosis of COPD showed a PPA of 93.8% and an NPA of 77.0%. Agreement was maintained when the patient had an acute respiratory infection (PPA of 86.7% and NPA of 80.5%). Notably, the index test retained high diagnostic agreement in cases of spirometry-confirmed COPD (PPA of 100.0% and NPA of 77.0%).

Population and primary care surveys have demonstrated that mild (FEV1 ≥80% of predicted) and moderate (FEV1 50%-80% of predicted) airflow limitation is seldom diagnosed by clinicians [23,24]. In our study, 20 out of 42 (48%) participants with clinically diagnosed COPD had only mild or moderate airflow limitation (Table 4). This group represents those who would benefit most from this algorithm, both because of new treatment possibilities and because they are frequently underdiagnosed.

We used the GOLD criteria for COPD diagnosis (FEV1/FVC <0.7) when developing our algorithms, although COPD can also be defined using the LLN. When calculated using the LLN thresholds, test performance was not significantly different from values obtained using the GOLD criteria. It should be noted that, as our model was developed to recognize COPD diagnosed using the GOLD criteria, we would expect a lower performance when the diagnostic criteria are changed.

In many European countries, spirometry is available in acute and primary care settings [8]. However, uptake of the test is limited, leading to underdiagnosis or misdiagnosis of patients [6]. Several barriers to using spirometry in primary and acute care settings have been reported, including expense and limitations in access, expertise, and time [25]. Alternative testing methods have been developed. A meta-analysis of the CDQ in ever-smokers in 4 studies had a pooled sensitivity of 64.5% (95% CI 59.9%-68.8%) and a specificity of 65.2% (52.9%-75.8%) [16]. Another study recruiting current and former smokers over 40 years from the general population demonstrated moderate sensitivity and specificity of the CDQ (74% and 72%, respectively), the COPD Population Screener (56% and 90%, respectively), and the Lung Function Questionnaire (79% and 68%, respectively) [26]. An analysis from 3 studies of handheld flow meters showed a sensitivity of 79.9% (95% CI 74.2%-84.7%) and a specificity of 84.4% (95% CI 68.9%-93.0%) [16]. In a scenario comparable to our study, when the CDQ was performed on symptomatic patients in primary care, the AUC was 0.65, sensitivity was 89.2% and 65.8%, and specificity was 24.4% and 54.0% for participants at low risk and high risk of having COPD, respectively [27]. The performance of our software algorithm exceeds that of the currently available COPD screening questionnaires, outperforms the sensitivity of handheld flow meters with comparable specificity, and demonstrates high agreement with the gold standard (spirometry) in under one minute. This algorithm is intended to be used as a stand-alone device, allowing for real-time diagnosis. As it is easy to operate and requires no physical patient contact, infection risk is minimized.

We envisage that the algorithm could be used as an initial screening test in acute care settings for patients who present with nonspecific respiratory symptoms. A positive result could be used to guide immediate care in the acute setting. As the test is delivered via smartphone, it could be applied in person or during a telehealth consultation. A formal diagnosis of COPD requires confirmation by spirometry, the gold standard tool for COPD diagnosis [2]. Confirmatory spirometry could be performed during subsequent specialist follow-up.

In this study, we were able to accurately identify the presence or absence of COPD in patients with lower respiratory tract infections, including pneumonia. In these situations, spirometry can be difficult to perform adequately, and an initial diagnostic test will help detect COPD in acutely unwell patients and identify those individuals most at risk of developing complications. Individuals with COPD are known to experience more frequent complications and higher mortality rates due to seasonal illnesses such as influenza [12]. More recently, a meta-analysis examining the risk of severe outcomes from SARS-CoV-2 infection (admission to the intensive care unit, mechanical ventilation, or death) showed a greater than fivefold increase in the risk of severe disease in patients with coexistent COPD [28]. We recommend that all patients with COPD with a suspected infection be carefully monitored in view of this increased risk. The diagnosis of COPD in patients presenting with SARS-CoV-2 or similar respiratory infections would allow more focused therapeutic pathways and guide health care resources to this at-risk group.

There are several limitations to this study. Our study population was recruited in an urban setting and had smoking-related COPD. The generalizability of these results to COPD of differing etiologies and in other settings requires confirmation. The tests were performed by trained research personnel in controlled environments, although we would consider the device less onerous to use than spirometry. The cough recording can be affected by background noise and positioning of the device, although the program will alert the user if background noise is excessive. The population recruited reflects the intended age range of use. However, as expected, those with diagnosed COPD were slightly older than those without COPD, and it will be important to replicate this study using an older control group.

The COPD diagnostic algorithm described in this study is used in combination with a suite of other respiratory diagnostic algorithms developed in the Breathe Easy program, including tests for asthma, pneumonia, and lower respiratory tract disease [17]. The software provides a diagnostic output for each condition simultaneously every time it is used. Having independent decision algorithms for asthma, COPD, and pneumonia is particularly important due to the considerable clinical overlap between the conditions.

In conclusion, the algorithm was able to accurately identify COPD, even in the presence of infection. The algorithm operates as a stand-alone tool and provides a rapid result. It may find application in the acute care setting as a screening tool to alert clinicians to the presence of COPD, allowing for more rapid, targeted, and appropriate management.

Conflicts of Interest

PP, SC, and UA are scientific advisors for ResApp Health. PP and UA are shareholders in ResApp Health. UA was ResApp Health's chief scientist. ResApp Health is an Australian publicly listed company commercializing the technology under license from the University of Queensland, where UA is employed. UA is a named inventor of the University of Queensland technology. VP and JW are employees of ResApp Health. NB received consultancy fees for statistical analysis. JB, CS, FP, and PD declare no competing interests.

  1. Adeloye D, Chua S, Lee C, Basquill C, Papana A, Theodoratou E, Global Health Epidemiology Reference Group (GHERG). Global and regional estimates of COPD prevalence: Systematic review and meta-analysis. J Glob Health 2015 Dec;5(2):020415 [FREE Full text] [CrossRef] [Medline]
  2. Global Initiative for Chronic Obstructive Lung Disease. Global Strategy for the Diagnosis, Management, and Prevention of Chronic Obstructive Pulmonary Disease: 2020 Report. GOLD COPD.   URL: https://goldcopd.org/gold-reports/ [accessed 2019-03-31] [WebCite Cache]
  3. Robitaille C, Dajczman E, Hirsch AM, Small D, Ernst P, Porubska D, et al. Implementation of a targeted screening program to detect airflow obstruction suggestive of chronic obstructive pulmonary disease within a presurgical screening clinic. Can Respir J 2015;22(4):209-214 [FREE Full text] [CrossRef] [Medline]
  4. Bednarek M, Maciejewski J, Wozniak M, Kuca P, Zielinski J. Prevalence, severity and underdiagnosis of COPD in the primary care setting. Thorax 2008 May;63(5):402-407. [CrossRef] [Medline]
  5. Gershon AS, Thiruchelvam D, Chapman KR, Aaron SD, Stanbrook MB, Bourbeau J, Canadian Respiratory Research Network. Health Services Burden of Undiagnosed and Overdiagnosed COPD. Chest 2018 Jun;153(6):1336-1346. [CrossRef] [Medline]
  6. Ho T, Cusack RP, Chaudhary N, Satia I, Kurmi OP. Under- and over-diagnosis of COPD: a global perspective. Breathe (Sheff) 2019 Mar;15(1):24-35 [FREE Full text] [CrossRef] [Medline]
  7. Diab N, Gershon AS, Sin DD, Tan WC, Bourbeau J, Boulet L, et al. Underdiagnosis and Overdiagnosis of Chronic Obstructive Pulmonary Disease. Am J Respir Crit Care Med 2018 Nov 01;198(9):1130-1139. [CrossRef] [Medline]
  8. Arne M, Lisspers K, Ställberg B, Boman G, Hedenström H, Janson C, et al. How often is diagnosis of COPD confirmed with spirometry? Respir Med 2010 Apr;104(4):550-556 [FREE Full text] [CrossRef] [Medline]
  9. Heffler E, Crimi C, Mancuso S, Campisi R, Puggioni F, Brussino L, et al. Misdiagnosis of asthma and COPD and underuse of spirometry in primary care unselected patients. Respir Med 2018 Sep;142:48-52 [FREE Full text] [CrossRef] [Medline]
  10. Fan KG, Mandel J, Agnihotri P, Tai-Seale M. Remote Patient Monitoring Technologies for Predicting Chronic Obstructive Pulmonary Disease Exacerbations: Review and Comparison. JMIR Mhealth Uhealth 2020 May 21;8(5):e16147 [FREE Full text] [CrossRef] [Medline]
  11. Csikesz NG, Gartman EJ. New developments in the assessment of COPD: early diagnosis is key. Int J Chron Obstruct Pulmon Dis 2014;9:277-286 [FREE Full text] [CrossRef] [Medline]
  12. Walker TA, Waite B, Thompson MG, McArthur C, Wong C, Baker MG, et al. Risk of Severe Influenza Among Adults With Chronic Medical Conditions. J Infect Dis 2020 Jan 02;221(2):183-190. [CrossRef] [Medline]
  13. Report of the WHO-China Joint Mission on coronavirus disease 2019 (COVID-19). World Health Organization.   URL: https://www.who.int/publications-detail/report-of-the-who-china-joint-mission-on-coronavirus-disease-2019-(covid-19) [accessed 2020-04-03]
  14. Lin K, Watkins B, Johnson T, Rodriguez JA, Barton MB, US Preventive Services Task Force. Screening for chronic obstructive pulmonary disease using spirometry: summary of the evidence for the U.S. Preventive Services Task Force. Ann Intern Med 2008 Apr 01;148(7):535-543. [CrossRef] [Medline]
  15. Haroon SM, Jordan RE, O'Beirne-Elliman J, Adab P. Effectiveness of case finding strategies for COPD in primary care: a systematic review and meta-analysis. NPJ Prim Care Respir Med 2015 Aug 27;25:15056 [FREE Full text] [CrossRef] [Medline]
  16. Haroon S, Jordan R, Takwoingi Y, Adab P. Diagnostic accuracy of screening tests for COPD: a systematic review and meta-analysis. BMJ Open 2015 Oct 08;5(10):e008133. [CrossRef] [Medline]
  17. Porter P, Abeyratne U, Swarnkar V, Tan J, Ng T, Brisbane JM, et al. A prospective multicentre study testing the diagnostic accuracy of an automated cough sound centred analytic system for the identification of common respiratory disorders in children. Respir Res 2019 Jun 06;20(1):81 [FREE Full text] [CrossRef] [Medline]
  18. Abeyratne UR, Swarnkar V, Setyati A, Triasih R. Cough sound analysis can rapidly diagnose childhood pneumonia. Ann Biomed Eng 2013 Nov;41(11):2448-2462. [CrossRef] [Medline]
  19. Sharan RV, Abeyratne UR, Swarnkar VR, Porter P. Automatic Croup Diagnosis Using Cough Sound Recognition. IEEE Trans Biomed Eng 2019 Feb;66(2):485-495. [CrossRef] [Medline]
  20. Swarnkar V, Abeyratne U, Tan J, Ng TW, Brisbane JM, Choveaux J, et al. Stratifying asthma severity in children using cough sound analytic technology. J Asthma 2019 Nov 25:1-10. [CrossRef] [Medline]
  21. Prignot J. Quantification and chemical markers of tobacco-exposure. Eur J Respir Dis 1987 Jan;70(1):1-7. [Medline]
  22. Miller MR, Hankinson J, Brusasco V, Burgos F, Casaburi R, Coates A, ATS/ERS Task Force. Standardisation of spirometry. Eur Respir J 2005 Aug;26(2):319-338. [CrossRef] [Medline]
  23. Lindberg A, Bjerg A, Bjerg-Bäcklund A, Rönmark E, Larsson L, Lundbäck B. Prevalence and underdiagnosis of COPD by disease severity and the attributable fraction of smoking Report from the Obstructive Lung Disease in Northern Sweden Studies. Respir Med 2006 Feb;100(2):264-272 [FREE Full text] [CrossRef] [Medline]
  24. Sandelowsky H, Ställberg B, Nager A, Hasselström J. The prevalence of undiagnosed chronic obstructive pulmonary disease in a primary care population with respiratory tract infections - a case finding study. BMC Fam Pract 2011 Nov 03;12:122 [FREE Full text] [CrossRef] [Medline]
  25. Walters JAE, Hansen E, Mudge P, Johns DP, Walters EH, Wood-Baker R. Barriers to the use of spirometry in general practice. Aust Fam Physician 2005 Mar;34(3):201-203. [Medline]
  26. Spyratos D, Haidich A, Chloros D, Michalopoulou D, Sichletidis L. Comparison of Three Screening Questionnaires for Chronic Obstructive Pulmonary Disease in the Primary Care. Respiration 2017;93(2):83-89 [FREE Full text] [CrossRef] [Medline]
  27. Kotz D, Nelemans P, van Schayck CP, Wesseling GJ. External validation of a COPD diagnostic questionnaire. Eur Respir J 2008 Feb;31(2):298-303. [CrossRef] [Medline]
  28. Lippi G, Henry BM. Chronic obstructive pulmonary disease is associated with severe coronavirus disease 2019 (COVID-19). Respir Med 2020 Jun;167:105941 [FREE Full text] [CrossRef] [Medline]

Abbreviations
AUC: area under the curve
CDQ: chronic obstructive pulmonary disease diagnostic questionnaire
COPD: chronic obstructive pulmonary disease
FEV1: forced expiratory volume in the first second
FVC: forced vital capacity
GOLD: Global Initiative for Chronic Obstructive Lung Disease
LLN: lower limit of normal
NPA: negative percent agreement
PPA: positive percent agreement
ROC: receiver operating characteristic


Edited by M Focsa, G Eysenbach; submitted 25.09.20; peer-reviewed by M Duplaga, Z Su; comments to author 19.10.20; revised version received 23.10.20; accepted 25.10.20; published 10.11.20

Copyright

©Paul Porter, Scott Claxton, Joanna Brisbane, Natasha Bear, Javan Wood, Vesa Peltonen, Phillip Della, Fiona Purdie, Claire Smith, Udantha Abeyratne. Originally published in JMIR Formative Research (http://formative.jmir.org), 10.11.2020.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication on http://formative.jmir.org, as well as this copyright and license information must be included.