Telehealth Adoption and Discontinuation by US Hospitals: Results From 2 Quasi-Natural Experiments

Background: Prior US hospital telehealth (video visit) studies have focused on describing factors that influence telehealth adoption or on performance effects for specific patient segments, hospital systems, or geographic regions. To our knowledge, a larger-scale, national-level (US) study of the causal impacts of hospital telehealth adoption and discontinuation has yet to be conducted.
Objective: The aim of this study is to understand the causal impact of US hospital telehealth adoption or discontinuation on hospital performance from 2016 to 2018.
Methods: We analyzed the impacts of telehealth adoption or discontinuation by US hospitals on emergency department visits, total ambulatory visits (minus emergency department visits), outpatient services revenue, total facility expenses, and total hospital revenue for the 2016-2018 period. We specifically focused on performance effects for hospitals that switched from not having telehealth to adopting it, or vice versa, during this period, thus exploiting 2 quasi-natural experiments. We applied a difference-in-differences research design to each of the 2 main analyses, comparing hospitals that made a telehealth change with groups of hospitals with similar characteristics that did not, which established a counterfactual. To appropriately match hospitals between treatment and control groups, we applied propensity score matching. Our primary data were from the American Hospital Association Annual Survey and the Healthcare Cost Report Information System. Several control variables were obtained from additional sources, including the Area Health Resource File and the Federal Communications Commission.
Results: We found that telehealth adoption by US hospitals during the 2016-2018 period resulted in, on average, an increased number of total ambulatory visits (P=.008), increased total facility expenses (P<.001), and increased hospital revenue (P=.004) compared with the control group. We found that telehealth discontinuation during the same period resulted in, on average, decreased outpatient services revenue (P=.02) compared with the control group.
Conclusions: Our findings suggest that telehealth adoption increases use but has mixed impacts on performance, given that both costs and revenue increase. However, once telehealth is offered, removing it can have a negative impact on performance, implying that returning to prior performance levels after removal may be challenging.


Introduction

Background
Telehealth, in the form of video visits between health care providers and patients (telehealth, henceforth), is used by hospitals and their affiliated clinics to maintain or improve access to postdischarge follow-up, continuity of care, and care for nonurgent issues [1-5]. Although a number of studies have evaluated the impacts of telehealth on outcomes [2,6-10], such studies have primarily focused on either the determinants of telehealth adoption [11] or the effects of telehealth for patient populations limited to specific hospital systems or regions [4,12-14]. Larger-scale studies exploiting national-level natural variation in telehealth adoption, as well as discontinuation, over multiple years have yet to be conducted.
Overall, although many view telehealth with optimism, we do not yet fully understand its impact on hospital-level outcomes when it is adopted or, in the face of challenges, discontinued. Thus, this study seeks to understand such impacts, including those of telehealth discontinuation, an aspect of telehealth that has not yet been considered in the literature. Regarding challenges that may lead to discontinuation, telehealth is known to be especially difficult to sustain and integrate with workflows designed for in-person interactions [7] and can result in variable outcomes [15,16]. Particular challenges for hospitals offering telehealth include prioritizing the success of telehealth; engaging providers, patients, and leaders; and pursuing continuous improvement [17]. Telehealth is often initially viewed with optimism, yet many clinicians have stopped using it after only a few visits [17]. Mitigating such issues requires deliberate efforts to create protocols, develop appropriate scheduling techniques, and formalize an understanding of when telehealth is and is not appropriate [18]; left unaddressed, these gaps can lead to significant challenges, resistance, or program failure. Furthermore, telehealth has been found to have mixed effects on costs [3,19]. When telehealth substitutes for expensive in-person encounters, such as visits to the emergency department (ED) or inpatient admissions, it can be cost-effective [20,21]. However, when offering video-based consultations to patients, it is also possible that increased access to health care increases provider costs and the number of visits requested by patients, which can result in less revenue, especially if telehealth is reimbursed at a lower rate than in-person visits [3].
Finally, telehealth is a particularly interesting case because it can be technically easy to adopt or discontinue, especially when using a vendor-supported or cloud-based system, but, as discussed previously, it can simultaneously create significant and costly workflow challenges [8,22]. Telehealth is widely recognized as an opportunity to enhance access to care, but inadequate identification and management of barriers can doom telehealth pilots [17,23]. Furthermore, given the variety of factors that may influence telehealth adoption, use, and potential discontinuation, several factors, including hospital and regional characteristics, must be controlled for. Thus, this study comprehensively examines both telehealth adoption and discontinuation in the United States from 2016 to 2018 through analysis of 2 quasi-natural experiments (ie, one for adoption and one for discontinuation), while controlling for several potential confounding variables. We also conduct robustness checks to validate our findings.

Implications
Our primary findings are as follows: (1) telehealth adoption by US hospitals during the period studied resulted in increased ambulatory visits, increased facility expenses, and increased hospital revenue in comparison with the control group, and (2) telehealth discontinuation resulted in decreased outpatient services revenue in comparison with the control group. The implications are that adopting telehealth increases use of ambulatory services, which implies greater access, but these findings also suggest that profit performance will likely be mixed. Furthermore, removing telehealth once offered can negatively affect future performance, implying that performance levels likely will not simply return to what they were before telehealth was adopted and then subsequently discontinued. Further implications are discussed later.

Overview
To address our research objectives, we analyzed the impact of telehealth adoption or discontinuation by US hospitals from 2016 to 2018 using difference-in-differences estimation of 2 quasi-natural experiments: (1) US hospital telehealth adoption during the period considered and (2) US hospital telehealth discontinuation during the same period. We specifically considered impacts of telehealth adoption or discontinuation during this period on ED visits, total ambulatory visits (minus ED visits), outpatient services revenue, total facility expenses, and hospital revenue (a more detailed description of these dependent variables is available in Multimedia Appendix 1).

Data
Data on which US hospitals offered, adopted, or discontinued telehealth were obtained from the American Hospital Association (AHA) Annual Survey for 2016-2018 (although data quality may be a concern, prior studies such as that by Adler-Milstein et al [11] have found the AHA data to be highly consistent with the Healthcare Information and Management Systems Society data set, suggesting high data quality). Outcome data for ED visits, total ambulatory visits, outpatient services revenue, total facility expenses, and hospital revenue per US hospital were obtained from the 2016-2018 AHA Annual Survey and the AHA's version of the Centers for Medicare & Medicaid Services Healthcare Cost Report Information System (HCRIS) data. Covariates used for propensity score matching and controls were obtained from the AHA data sets; US county-level data from the Area Health Resource File; the Area Deprivation Index (ADI) sourced from BroadStreet; health rankings data from the University of Wisconsin Population Health Institute; and supplementary county-level broadband speed data from the Federal Communications Commission. We included several controls from these sources to account for rival explanations. Controls and covariates were derived from a literature review [24-30]. Tables 1 and 2 describe the relevant variables.

Statistical Analyses
We applied difference-in-differences (DID) estimation with propensity score matching at the firm (hospital) unit of analysis to understand the effect of telehealth adoption and discontinuation by US hospitals during the 2016-2018 period. We conducted 2 primary analyses that exploited 2 quasi-natural experiments. The first DID analysis focused on telehealth adoption and evaluated impacts on performance for hospitals that went from no telehealth to offering telehealth during this period. The second DID analysis focused on telehealth discontinuation and evaluated impacts for hospitals that went from offering telehealth to discontinuing it during this period. Control group selection and formation is discussed later in this section. This design followed other notable studies that assessed the impact of health information technology adoption and use on outcomes [28-31] as well as recommendations on effectively estimating causal effects by means of observational data [32,33]. This design is appropriate for estimating causal effects when pre- and posttreatment observational data are available, treatment and control groups with sufficiently balanced covariates and common pretreatment trends can be established, and exogenous shocks can be assumed to be consistent between groups [34].
For the telehealth adoption analysis, treatment hospitals are those that first did not offer telehealth but then offered telehealth in a subsequent year. As we have 3 years of data that include the telehealth video visit (yes or no) question, we restricted our focus to video visits for chronic conditions or postsurgical follow-up as opposed to also including consideration of telehealth related to remote patient monitoring and mental health and addiction as separately measured in the AHA Annual Survey. For all US hospitals surveyed by the AHA for this quasi-natural experiment, treatment hospitals are those that (1) did not offer telehealth in 2016 but started in 2017 or 2018 (group 1, n=71) or (2) did not offer telehealth in 2016 or 2017 but then started offering it in 2018 (group 2, n=14). Control hospitals are those that did not offer telehealth in all 3 years (n=50).
For the telehealth discontinuation analysis, treatment hospitals are those that offered telehealth but then discontinued it in a subsequent year. For this quasi-natural experiment, the treatment hospitals are those that (1) offered telehealth in 2016 but discontinued in 2017 or 2018 (group 1, n=12) or (2) offered telehealth in both 2016 and 2017 but discontinued in 2018 (group 2, n=80). Control hospitals are those that offered telehealth in all 3 years (n=432).
To balance the covariates between the treatment and control groups in each of these analyses, we applied propensity scoring and, subsequently, matching. Propensity scoring is applied by first determining the propensity of a hospital being in the treatment group, given observable covariates [35,36]. Then, to reduce selection bias, a matching technique is used to find control group participants (hospitals, in this case) that ultimately result in no observable significant covariate differences between treatment and control groups [35]. Similar to Oh et al [30] and Bao et al [29], we calculated propensity scores by means of logistic regression for each of the analyses (ie, for the adoption analysis and then again for the discontinuation analysis), as explained further in this section. Our covariates consisted of both hospital-level variables and county-level variables, with SEs clustered at the hospital level to account for repeated county-level observations for hospitals within the same county. The logistic regression analysis results for propensity scores are reported in Multimedia Appendix 1.
Using the predicted propensity score for each hospital, we applied one-to-many matching using both the propensity score and covariates (a one-to-one matching procedure was also tested, as reported in Multimedia Appendix 1, and the results were similar). Matched hospitals shared the same teaching, urban, and system status. In addition, we matched hospitals of similar sizes by restricting hospital size (total admissions plus visits) to a difference of no more than a factor of 1.5 and the difference in propensity scores to no more than 0.1. Therefore, for each treatment hospital, we had a cluster of hospitals as the control. For telehealth adoption, the result was a treatment group of 85 hospitals and a matched control group of 85 hospital clusters, with an average of 2 controls and a median of 1 control per cluster. For telehealth discontinuation, the result was a treatment group of 92 hospitals and a matched control group of 92 hospital clusters, with an average of 28 controls and a median of 17 controls per cluster. We used averaged outcomes (ED visits, total ambulatory visits, outpatient services revenue, total facility expenses, and hospital revenue) for each control cluster. Matching for hospitals in group 1 was based on the propensity score and covariates observed in 2016, and matching for group 2 was based on observations in 2017. Comparison of covariates between the treatment and control groups revealed no significant differences.
To obtain the propensity score, we conducted a logistic regression analysis using treatment group membership (1 for yes and 0 for no) as the dependent variable, once for the adoption analysis and once for the discontinuation analysis. We applied a collection of hospital- and county-level characteristics as the independent variables for each analysis, with the same control variables used in each propensity score model. Let p_it = P(hospital i is in the treatment group), modeled as ln(p_it / (1 − p_it)) = β_0 + β′_1 X_it, where β_0 is the constant and X_it represents factors that affect whether telehealth existed for the adoption analysis (1=hospital is in the treatment group and therefore adopted telehealth in 2017 or 2018) or was discontinued for the discontinuation analysis (1=hospital is in the treatment group and therefore discontinued telehealth in 2017 or 2018). β′_1 is the coefficient vector.
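As an illustration of this two-step procedure, the following sketch (in Python; the study does not publish code, and the function names and toy data are ours) fits a logistic propensity model and forms one-to-many matched control clusters using the calipers described above (hospital size within a factor of 1.5, propensity score within 0.1, exact match on teaching/urban/system stratum):

```python
# Illustrative sketch of the propensity model and matching rules described
# above; variable names and thresholds other than the stated calipers are
# hypothetical, not the study's.
import numpy as np

def fit_logit(X, y, iters=500, lr=0.1):
    """Fit ln(p/(1-p)) = b0 + b'X by gradient ascent on the log-likelihood."""
    Xb = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    b = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ b))        # current P(treated | X)
        b += lr * Xb.T @ (y - p) / len(y)        # score (gradient) step
    return b

def propensity(X, b):
    """Predicted propensity score per hospital."""
    Xb = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ b))

def match_controls(p_t, size_t, stratum_t, controls):
    """One-to-many match: same teaching/urban/system stratum, hospital size
    within a factor of 1.5, and propensity score within 0.1."""
    cluster = []
    for j, (p_c, size_c, stratum_c) in enumerate(controls):
        ratio = max(size_t, size_c) / min(size_t, size_c)
        if stratum_c == stratum_t and ratio <= 1.5 and abs(p_t - p_c) <= 0.1:
            cluster.append(j)
    return cluster
```

For example, a treated hospital with propensity score 0.5 and size 100 retains only those controls in its stratum that satisfy both calipers; outcome values for the retained cluster would then be averaged, as in the study.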
Next, identification of the change in ED visits, total ambulatory visits, outpatient services revenue, total facility expenses, and hospital revenue after telehealth adoption and discontinuation was derived through the following DID model, applied once to the adoption analysis and once to the discontinuation analysis. Note that when conducting the analyses, we combined hospitals from group 1 and group 2 as the treatment group. β_0 is the constant, β_1 is the effect of being in the treatment group, β_2 represents posttreatment periods, and β_3 is the treatment effect (ie, the DID effect), which is the expected additional difference between the treatment and control groups after treatment, beyond the difference in time trends. We included hospital fixed effects (μ_i) to address any time-invariant hospital heterogeneity and time fixed effects (ϑ_t) to address time trends. We performed the estimation using ordinary least squares [31]. The DID equation representing our model is as follows:

Y_it = β_0 + β_1 trt_i + β_2 post_t + β_3 (trt_i × post_t) + μ_i + ϑ_t + ε_it
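The 2×2 logic of the DID estimand can be illustrated numerically (a minimal sketch with invented numbers, omitting the study's covariates and fixed effects):

```python
# Minimal DID illustration; the four observations are invented for
# exposition and are not the study's data.
import numpy as np

def did_ols(y, treat, post):
    """OLS of y on an intercept, treat, post, and treat*post.
    The interaction coefficient (index 3) is the DID effect b3."""
    X = np.column_stack([np.ones(len(y)), treat, post, treat * post])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef
```

With a treated group moving from 10 to 15 and a control group moving from 8 to 10, the estimated β_3 is (15 − 10) − (10 − 8) = 3: the treated group's change net of the counterfactual trend supplied by the controls.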

Robustness
Threats to validity could include endogeneity of telehealth adoption and decision-making around discontinuation, especially if our sample was subject to selection bias. We addressed this concern by also conducting Heckman analyses. Furthermore, nonrandom market changes, after treatment, may differentially affect outcomes [37]. For instance, perhaps broadband infrastructure or household use of the Internet expanded or contracted at different rates between the control and treatment groups in or after 2017 or 2018. These threats were addressed with our propensity scoring and matching approach that included county-level maximum broadband speeds and household internet use as covariates in the logistic regression analysis, in addition to several other covariates considered when scoring and matching. For instance, we also included the ADI in our propensity score matching procedure to address regional economic states and, potentially, changes over time such as changes after treatment that are not fully addressed in a DID model. Overall, we included several hospital-level (eg, Case Mix Index, hospital size, and market competition) and county-level covariates (eg, maximum broadband speeds, household internet use, county health ranking, and ADI) to address a variety of potential threats to validity (eg, differences in broadband penetration affecting telehealth adoption or outcomes). Finally, we also tested whether outcomes change in the years after treatment to provide additional explanatory information.

Common Trends
For testing common trends, we plotted the averages of ED visits, total ambulatory visits, outpatient services revenue, total facility expenses, and hospital revenue for each of the groups at points in time (years) relative to when telehealth was adopted (Figure 1) or discontinued (Figure 2). Note that throughout the paper, the numbers of visits are shown in thousands, whereas expenses and revenue are shown in millions (US $).
To test the common trends assumption statistically for the adoption analysis, we also interacted pretreatment values with corresponding time dummies within the DID model (Figure 1). None of the coefficients were significant, suggesting that the trends are sufficiently common.
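A minimal version of this lead-term check can be sketched as follows (Python for illustration; the four invented observations share a common pretreatment slope of +2, so the treatment-group interaction coefficient recovered is 0, consistent with common trends):

```python
# Pretrend (lead-term) check: regress the outcome on a treatment-group
# indicator interacted with a pretreatment year dummy; under common trends
# the interaction coefficient is indistinguishable from zero.
# Data are invented for exposition.
import numpy as np

def pretrend_coef(y, treat, d_pre):
    """Return the treat x pretreatment-year-dummy interaction coefficient."""
    X = np.column_stack([np.ones(len(y)), treat, d_pre, treat * d_pre])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[3]
```

In practice, this is done for each pretreatment year and each of the 5 outcomes, and a significant lead coefficient would signal a pretrend violation.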
We repeated this test for the discontinuation analysis, interacting pretreatment values with corresponding time dummies within the DID model (Figure 2). Again, none of the coefficients were significant, suggesting that the trends are sufficiently common.

Figure 1. Common trends per outcome per year relative to when telehealth was adopted. EDVisits: emergency department visits; HospRev: hospital revenue; OutpatSerRev: outpatient services revenue; TotAmbVisits: total ambulatory visits; TotFacExp: total facility expenses.

Figure 2.
Common trends per outcome per year relative to when telehealth was discontinued. EDVisits: emergency department visits; HospRev: hospital revenue; OutpatSerRev: outpatient services revenue; TotAmbVisits: total ambulatory visits; TotFacExp: total facility expenses.

Estimations
The estimation results are reported below in Table 3 (for adoption) and Table 4 (for discontinuation); model analyses were conducted with R (The R Foundation for Statistical Computing). For brevity, control variables are not shown in the tables, but they were included in all regressions along with hospital and time fixed effects. The interaction term represents the DID effect: the expected additional difference between the treatment and control groups after treatment, after accounting for the difference in time trends and the baseline difference between the treatment and control groups.
For telehealth adoption, we found the DID interaction term for total ambulatory visits to be positive and significant (P=.008). This means that the expected value of total ambulatory visits was higher in the treatment group than in the control group, even after accounting for the time and group differences, as well as several covariates discussed earlier and in Multimedia Appendix 1. The average number of total ambulatory visits, as reported earlier, was 196 thousand (SD 209 thousand). Given that the DID coefficient is 24.53 thousand, this effect represents a significant increase in total ambulatory visits. Thus, we conclude that telehealth adoption resulted in more ambulatory visits for the adopting US hospitals during the period studied.
We further found the DID interaction term to be positive (P<.001) for the effect on total facility expenses. Thus, the expected value of total facility expenses was higher in the treatment group (ie, those that adopted telehealth) than in the control group (ie, similar hospitals that did not have, and did not adopt, telehealth during the same period). The average total facility expenses in our sample, as reported earlier, was (in millions) US $321.61 (SD US $342.3). The coefficient (in millions) is US $33.39 (P<.001), which represents a substantial average increase in the expenses when telehealth was adopted.
We also found the DID interaction term to be positive (P=.004) for the effect on hospital revenue, which suggests higher total revenue on average for those in the treatment group. The average total hospital revenue in our sample, as reported earlier, was (in millions) US $152.34. The coefficient (in millions) is US $32.60 (P=.004), which represents a substantial average increase in revenue when telehealth was adopted. However, we also note that this coefficient is slightly lower than the average increase in total facility expenses, suggesting that profits are likely to be negative or minimal when telehealth is first adopted.
The impact on ED visits (P=.36) was nonsignificant. The impact on outpatient services revenue was marginally significant (P=.01) and negative, suggesting that adoption led to at least a temporary drop in revenue, on average, in comparison with the control group.
For telehealth discontinuation, we found the DID interaction term (trt×post) to be significant and negative (P=.02) for the effect on outpatient services revenue. This means that the expected value of outpatient services revenue, ceteris paribus, was lower in the treatment group (ie, the group that discontinued telehealth) than in the control group, after accounting for the time trend and the assumed trend for the counterfactual. We also note that many control variables, as well as fixed effects to account for unobserved time-invariant heterogeneity, were included. The average outpatient services revenue in our sample, as reported earlier, was (in millions) US $247.07. The coefficient (in millions) is -US $65.37 (P=.02), which represents a substantial average drop in revenue compared with the control group when telehealth was discontinued.
We also found the DID interaction term to be negative and marginally significant (P=.09) for the effect on hospital revenue. This again means that the expected value, given all the aforementioned trends and variables, was lower in the treatment group than in the control group after treatment. The average hospital revenue for our sample, as reported earlier, was (in millions) US $188.32. Given that the coefficient (in millions) is -US $13.22, this represents a substantial potential average drop in total hospital revenue when telehealth was discontinued.
The impacts on ED visits (P=.10), total ambulatory visits (P=.28), and total facility expenses (P=.35) were nonsignificant.

Table 3 notes: f A total of 11 observations that did not report emergency department visits were omitted from the model for emergency department visits, which is why the n is 499 instead of 510. g A total of 8 observations that did not report outpatient services revenue were omitted from the model for outpatient services revenue, which is why the n is 502 instead of 510. h A total of 8 observations that did not report hospital revenue were omitted from the model for hospital revenue, which is why the n is 502 instead of 510.

Table 4 notes: f An observation that did not report emergency department visits was omitted from the model for emergency department visits, which is why the n is 551 instead of 552. g A total of 3 observations that did not report hospital revenue were omitted from the model for hospital revenue, which is why the n is 549 instead of 552.

Robustness Checks
We conducted additional tests to address potential endogeneity issues and threats to validity. First, hospital management, not a central regulatory authority, makes telehealth adoption and discontinuation decisions; thus, our sample has a potential self-selection endogeneity issue. To address this statistically, beyond the use of propensity score matching, we used a Heckman model [38,39]. The Heckman model consists of 2 stages and is designed to correct for nonrandom selection into the sample. The first stage models the self-selection decision, that is, whether a hospital adopts or discontinues telehealth. The second stage models the treatment effect while taking the selection decision into consideration by including the inverse Mills ratio calculated from the first stage. The results of this robustness check are available in Multimedia Appendix 1 and are consistent with our primary results.
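For reference, the correction term appended in the second stage is the inverse Mills ratio λ(z) = φ(z)/Φ(z), where z is the fitted index from the first-stage selection equation. A minimal sketch of the term itself (ours, for illustration; the study's implementation is not published):

```python
# Inverse Mills ratio used as the selection-correction regressor in the
# second Heckman stage; standard normal pdf/cdf via the math module only.
import math

def inverse_mills(z):
    """lambda(z) = phi(z) / Phi(z): the nonselection hazard evaluated at the
    first-stage index z, appended as a regressor in the outcome equation."""
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return pdf / cdf
```

A significant coefficient on this term in the second stage indicates that selection into (or out of) telehealth is correlated with the outcome, which is precisely the bias the correction absorbs.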
To test whether the outcomes were different for different years after treatment, we conducted 2-sample t tests using the 2018 data for group 1 (2 years after the treatment) versus the 2018 data of group 2 (1 year after the treatment). Recall that both group 1 and group 2 consist of treatment hospitals. Hospitals in group 1 are those that did not receive treatment in 2016, then received treatment in 2017 and 2018. Hospitals in group 2 are those that did not receive treatment in 2016 and 2017, then received treatment in 2018. The results are reported in Table 5 (for adoption) and Table 6 (for discontinuation).
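The comparison can be sketched as follows (Python for illustration; we show Welch's unequal-variance t statistic as one common choice, since the paper does not specify the t-test variant, and the sample values in the usage example are invented):

```python
# Two-sample comparison of outcome means between group 1 (2 years after
# treatment) and group 2 (1 year after treatment); illustrative sketch.
import numpy as np

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    return (a.mean() - b.mean()) / se
```

For instance, two samples with identical values yield t = 0, whereas widely separated samples yield a large |t|; the statistic would then be compared against the t distribution with the Welch-Satterthwaite degrees of freedom to obtain the P values reported in Tables 5 and 6.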
We observed that after telehealth was adopted, there was an upward trend in the number of visits, expenses, and revenue when comparing year 2 with year 1 after the treatment (Table 5), although none of the differences were significant.
We observed that after telehealth was discontinued, there was no significant difference for most of the outcome variables, except for ED visits and total facility expenses ( Table 6). For ED visits, we observed that the number of ED visits decreased further 2 years after the treatment compared with the previous year. The same trend of a further decrease 2 years after the treatment was found for total facility expenses.

Overview
This study assessed the impact of telehealth video visit consultation adoption or discontinuation by US hospitals from 2016 to 2018 through analysis of 2 quasi-natural experiments (ie, one for adoption and one for discontinuation). After conducting a number of robustness checks to validate our findings, we can conclude that, for this period, telehealth adoption resulted in an average increase in total ambulatory visits, total facility expenses, and hospital revenue in comparison with the control group of similar hospitals that neither offered, nor had adopted, telehealth services during this same period. Telehealth discontinuation resulted in an average reduction in outpatient services revenue compared with the control group of similar hospitals that did not discontinue telehealth during this period. Furthermore, in our robustness check, we found telehealth discontinuation to reduce total facility expenses over time, suggesting that telehealth investments are costly and cannot simply rely on existing communications infrastructure (ie, it is not the case that little to no additional costs are involved).

Principal Findings
First, we found that telehealth adoption for US hospitals from 2016 to 2018 resulted in increased visits, expenses, and revenue in comparison with the control group. These findings are similar to those of another study that found telehealth not only increased use (ie, resulted in more visits), but also increased costs [3]. However, this previous study focused on direct-to-consumer telehealth for a payer-based patient population in California as opposed to telehealth offered by hospitals throughout the United States. Thus, we contribute by demonstrating a similar trend at the national level and for hospital-based (provider-based) telehealth as opposed to payer-supported direct-to-consumer telehealth. The implications of our findings are that providers switching from not offering telehealth to offering telehealth can expect higher visit volumes but not necessarily significant increases in profits, especially given that the coefficient for increased expenses (US $33.39 million) is slightly higher than the coefficient for increased revenue (US $32.60 million) in our telehealth adoption results. The results make sense because it has been found that offering telehealth can increase provider workload [40], reduce workflow efficiency (at first) [23,41], and result in billing and payment issues [42]. Furthermore, given that payment parity laws are only now becoming more commonplace for telehealth and are still subject to significant variability [43], revenue from additional telehealth visits may be less than expected, especially if visits that were typically in person are now being replaced with video-based visits. Thus, telehealth adoption may provide more convenience for patients but may have mixed impacts on provider performance, likely requiring a significant investment by providers in overcoming barriers at least in the short term, as was also found in other telehealth studies such as those in the area of telestroke [44].
Second, we found that telehealth discontinuation had a negative impact on outpatient services revenue. The implication is that once telehealth is offered, performance may subsequently suffer if it is discontinued. Thus, careful thought must be given to what might happen with patient expectations once telehealth is offered, even if only for a short time. However, we also note that, although the observed decline in visit volume might be expected to be responsible for loss in revenue, we did not find a significant impact on total ambulatory visits in comparison with the control group when telehealth was discontinued. This means that the revenue loss may be attributable to a decline in other outpatient services such as wellness and prevention programs, observation programs, supplies, laboratory tests, or other services, which suggests a spillover effect. Future research could examine this effect in more detail to gain a deeper understanding of potential spillover effects between discontinuing a digital service and other outpatient services offered. Most importantly, spillover effects aside, our results demonstrate that offering a digital service may change expectations, which cannot simply be reverted if telehealth is then no longer offered in a future period.

Limitations
We note that this study is limited by the binary nature of the response variable in that telehealth is a yes or no variable rather than an extent-of-use or assimilation variable. We also note that our data predate the COVID-19 pandemic period, in which telehealth adoption and use first increased significantly but subsequently declined significantly [8,45]. Future research could consider whether the effects found in this study are consistent with the postpandemic period, once more data are available. This study is also limited by a lack of detail regarding the mechanisms that cause the effects we observed, which is also a significant opportunity for future research. Finally, our data are limited to the United States.

Additional Thoughts on Future Research
In addition to studying the spillover effects of telehealth adoption and discontinuation decisions, as well as determination of whether the effects found here remain consistent after the pandemic once more data are available, future research could consider price optimization for service channel differences such as in-person versus video visits and establish recommendations for optimal mixes of visit types, conditional on patient conditions and provider expertise. Given that the relationship among telehealth use, costs, and revenue is complex, uncertain, and mixed, more research is needed on service mix optimization.
We further note that our results are specific to US hospitals. Future research could consider whether these results are consistent with telehealth being adopted and discontinued in other countries and regions, as well as any unique conditions that may affect telehealth differently in other areas.
Finally, telehealth impacts, especially those of telehealth adoption, are likely to change over time. For instance, costs associated with telehealth may decrease in some ways as efficiencies are gained but increase in others, such as requiring more technical and scheduling staff to support a mix of in-person and telehealth visits. Therefore, an excellent area for future research will be a more fine-grained analysis of telehealth-specific costs over a longer period of time.

Conclusions
In conclusion, this study offers insights into the effects of telehealth adoption and discontinuation by US hospitals from 2016 to 2018. It is our hope that these results will inform health care providers, administrators, and policy makers regarding expected performance outcomes when telehealth adoption and discontinuation decisions are made.

Conflicts of Interest
None declared.