Original Paper
Abstract
Background: There is great interest in using artificial intelligence (AI) to screen for skin cancer. This is fueled by a rising incidence of skin cancer and an increasing scarcity of trained dermatologists. AI systems capable of identifying melanoma could save lives, enable immediate access to screenings, and reduce unnecessary care and health care costs. While such AI-based systems are useful from a public health perspective, past research has shown that individual patients are very hesitant about being examined by an AI system.
Objective: The aim of this study was two-fold: (1) to determine the relative importance of the provider (in-person physician, physician via teledermatology, AI, personalized AI), costs of screening (free, 10€, 25€, 40€; 1€=US $1.09), and waiting time (immediate, 1 day, 1 week, 4 weeks) as attributes contributing to patients’ choices of a particular mode of skin cancer screening; and (2) to investigate whether sociodemographic characteristics, especially age, were systematically related to participants’ individual choices.
Methods: A choice-based conjoint analysis was used to examine the acceptance of medical AI for a skin cancer screening from the patient’s perspective. Participants responded to 12 choice sets, each containing three screening variants, where each variant was described through the attributes of provider, costs, and waiting time. Furthermore, the impacts of sociodemographic characteristics (age, gender, income, job status, and educational background) on the choices were assessed.
Results: Among the 383 clicks on the survey link, a total of 126 (32.9%) respondents completed the online survey. The conjoint analysis showed that the three attributes contributed almost equally to the participants’ choices, with provider being the most important attribute. Inspecting the part worths of the conjoint attributes showed that treatment by a physician was the most preferred modality, followed by electronic consultation with a physician (teledermatology) and personalized AI; the lowest part worth was found for the nonpersonalized AI. Concerning the relationship between sociodemographic characteristics and relative importance, only age showed a significant positive association with the importance of the attribute provider (r=0.21, P=.02), in which younger participants put less importance on the provider than older participants. All other correlations were not significant.
Conclusions: This study adds to the growing body of research using choice-based experiments to investigate the acceptance of AI in health contexts. Future studies are needed to explore the reasons why AI is accepted or rejected and whether sociodemographic characteristics are associated with this decision.
doi:10.2196/46402
Introduction
Skin cancers are among the most commonly diagnosed groups of cancers worldwide, with more than 1.5 million new cases estimated in 2020 [ ]. Melanoma is the deadliest form of skin cancer. Based on demographic changes, more than 500,000 new cases of melanoma and almost 100,000 deaths from melanoma are expected worldwide by 2040 [ ]. As melanoma case numbers are expected to increase in the future, high-cost treatments will continue to strain already overburdened health care budgets. To combat the rising mortality rate of melanoma, early detection is critical. Currently, the German national treatment guidelines [ ] recommend skin cancer screening as a standardized full-body skin examination performed by dermatologists who have completed specialized training in the early detection of skin cancer. In addition, dermatologists should use dermoscopy to diagnose suspected skin cancer. Given the rising number of cases as well as the increasing scarcity of trained dermatologists [ - ], there has been substantial research into the feasibility of artificial intelligence (AI) to augment or replace traditional skin cancer screening regimens [ ].

AI describes machines (or computers) that mimic the cognitive functions associated with human thought, such as learning and problem-solving. These systems observe their surroundings and take action to reach their goals [
]. Further, AI can learn from images and subsequently provide an image-based diagnosis. Dermatology, as an image-based field of medicine, occupies a prominent position in the evolution of AI owing to the ability of these systems to classify skin lesions [ ].

Research into the technical quality of AI-based skin cancer screening technologies has shown that these systems achieve detection rates that are on par with or better than those of highly trained clinicians [
- ]. This highlights the great potential of AI for future skin cancer screening in the general population. As part of apps, AI systems offer immediate access to dermatological screening for all patients with mobile digital devices, enabling health care and treatment to be provided regardless of time and place and close to everyday life [ ]. Thus, AI systems capable of detecting melanoma and nonmelanoma skin cancer could avoid unnecessary care, reduce health care costs, offer solutions to the increasing scarcity of clinicians, and reduce the waiting times for an appointment and for a diagnosis [ - ]. However, there is a risk that some melanomas will be missed and treatment delayed if an app incorrectly reassures the user that their lesion is of low risk [ ].

Although the technical quality has improved, there is a growing awareness that patients do not generally accept the use of AI-based systems in health care settings. There is still no consistent definition of technology acceptance in the literature; terms such as “acceptability,” “acceptance,” and “adoption” are often employed in this context, sometimes interchangeably. Dillon and Morris [
] defined user acceptance “as the demonstrable willingness within a user group to employ IT [information technology] for the tasks it is designed to support.”

Khullar et al [
] conducted an online survey to examine patients’ perspectives on applications of AI in health care, showing that 31% of respondents reported being very uncomfortable and 40.5% somewhat uncomfortable with receiving a diagnosis from an AI algorithm that was accurate 90% of the time but incapable of explaining its rationale. Longoni et al [ ] demonstrated that consumers are very hesitant about being examined by an AI system and that consumers’ willingness to pay decreases when an equivalent service is performed by an AI system. Additionally, they concluded that patients’ perceived neglect of their uniqueness leads to more resistance to medical AI [ ].

Past research has also identified several factors that might impact patients’ preferences to use AI-based health care services. The European Commission [
] interviewed citizens of the 28 member states of the European Union (N=27,900) and concluded that younger participants with a high educational level are more likely to use online health care services. This finding was also replicated in oncology patients, where younger patients indicated higher acceptance of and a greater intention to use digital tools and apps to manage their cancer [ ]. The European Commission [ ] also found that the opinion on AI strongly depends on exposure to related information and knowledge. This relationship is also supported by a series of experiments showing that resistance to the utilization of medical AI is driven by the subjective difficulty of understanding algorithms [ ].

Concerning skin cancer, previous research has shown that patients are generally reluctant to use AI-based systems in the field of dermatology. Snoswell et al [
] examined consumer preference and willingness to pay for mobile teledermoscopy services in Australia using a discrete-choice experiment (N=199). They found that patients prefer a trained medical professional to be involved in their skin cancer screening and that patients are less willing to pay for teledermatology [ ]. However, Snoswell et al [ ] did not take into account sociodemographic factors that may have had an impact on the patients’ decisions. In a multicenter clinical study assessing the performance of automated diagnosis of melanoma with a self-completion questionnaire (N=65), Fink et al [ ] found that most patients agreed that computer-assisted diagnoses are trustworthy and may generally improve the diagnostic performance of physicians. However, participants rejected the idea of AI-based systems completely replacing physicians and instead strongly favored hybrid solutions in which diagnoses by a physician are supported by automated systems [ ].

To date, only three studies have directly addressed the question of which factors are associated with patients’ preferences regarding AI-based skin cancer screening [
- ]. Ghani et al [ ] studied public interest in teledermatology, which was found to be positively associated with younger age, higher educational attainment, and higher household income. Chang et al [ ] examined sociodemographic differences in teledermatology acceptability with a cross-sectional survey (N=13,996), showing that respondents who were interested in teledermatology were more frequently 18-39 years of age, men, college graduates, and tablet or smartphone users. Similarly, young age, male gender, a previous history of melanoma, and a higher educational level were significantly associated with a more positive attitude toward skin cancer–related apps [ ]. However, it is unclear whether these results from questionnaires can be replicated in choice-based experiments, which rely to a lesser degree on introspection and are thus one step closer to actual behavior [ ].

As described above, provider and costs for a skin cancer screening have high relevance for the user [
, , , ]. For this choice-based conjoint analysis, we further added the attribute waiting time for a diagnosis, because studies have shown a strong negative correlation between patient satisfaction and waiting time [ , ]. AI provides the opportunity to get a skin cancer screening immediately, without any waiting time [ ]. Due to the shortage of medical professionals, waiting time for a skin cancer diagnosis is also an important attribute for the user [ , ].

The aim of this study was two-fold, based on the following two research questions: (1) How important are the attributes provider, costs for screening, and waiting time for diagnosis for participants’ preference for a skin cancer screening? (2) Are sociodemographic characteristics, especially age, systematically related to participants’ relative importance scores for the various attributes?
Methods
Study Design
This cross-sectional study used a choice-based conjoint analysis to examine the acceptance of medical AI for a skin cancer screening from the user perspective. Conjoint analysis is a quantitative marketing research method that quantifies the value consumers place on the attributes of a product [
]. Respondents are asked to choose among two or more alternatives in each choice set, where each alternative is described in terms of several predefined attributes, each with different levels. Given a sufficient number of choices per respondent, it is then possible to statistically estimate the importance of each attribute and level for the choice in terms of part-worth utilities. This method offers a behavioral approach and is less susceptible to social desirability and other biases [ ].

This study systematically manipulated three attributes (provider, cost, and waiting time) for a hypothetical skin cancer screening. Participants were presented with 12 different choice sets one after another, each consisting of three different modes of skin cancer screening that were generated by combining different levels of the three attributes (see
for an example). The choice sets were generated by the conjointly algorithm using default settings [ ].

Survey
Before the survey was conducted, it was tested by three volunteers using the “think-aloud” method to identify any comprehensibility problems. For this purpose, the pretest participants had to speak their thoughts aloud while completing the survey [
].

The questionnaire started with informed consent, where participants were informed about the nature and scope of the survey and about the protection of their data. Before starting the questionnaire, participants completed the consent form and agreed to participate in the anonymous study. The participants then moved on to the choice-based conjoint task, which consisted of 12 different choice sets. For each choice set, the participant’s task was to indicate the skin cancer screening mode that they most preferred (ie, they selected one of the three options). After responding to the choice sets, participants were asked whether they had undergone a skin cancer screening in the last year and, if so, with which type of provider. Next, the sociodemographic characteristics (age, gender, education, employment status, income) were assessed. Finally, the survey asked again whether the data could be used for analysis in anonymized form, both in case respondents changed their minds during the course of the survey and to filter out people who just wanted to “click through” without seriously answering the questions.
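Choice data collected in this way are typically converted into part-worth utilities by fitting a multinomial (conditional) logit model to the observed choices. The following is a minimal sketch on simulated data, assuming a generic effect-coded design; it is not the paper's actual estimation code (conjointly performs this step internally):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical design matrix: 2,000 choice sets, 3 alternatives each,
# 2 effect-coded attributes (illustrative; not this study's design).
n_sets, n_alts, n_attrs = 2000, 3, 2
X = rng.choice([-1.0, 0.0, 1.0], size=(n_sets, n_alts, n_attrs))
true_beta = np.array([0.8, -0.5])

# Simulate choices: utility = deterministic part + Gumbel noise,
# which yields multinomial logit choice probabilities.
utility = X @ true_beta + rng.gumbel(size=(n_sets, n_alts))
choice = utility.argmax(axis=1)

def neg_log_lik(beta):
    v = X @ beta
    v = v - v.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(n_sets), choice]).sum()

# Maximum likelihood recovers the part-worth coefficients
beta_hat = minimize(neg_log_lik, np.zeros(n_attrs)).x
```

With enough choice sets per respondent, the estimated coefficients approach the utilities that generated the choices, which is what makes the attribute-importance calculations in the Results possible.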
Participants
Recruitment was based on a convenience sample drawn from the authors’ social environment; individuals were asked to participate in the open voluntary survey, which was shared with contacts via WhatsApp and Instagram. Standard procedures for conducting and reporting online surveys [
] were followed. Furthermore, conjointly’s default methods were used to identify and bar potential duplicate entries from the same user. Data were collected during the period of September 29, 2022, through October 20, 2022.

The link to the survey was clicked 383 times by unique site visitors. Of these potential respondents, 126 (32.9%) filled out the conjoint survey completely and gave their agreement for processing their data. In total, 220 (57.4%) respondents opened the link but did not complete the survey, and another 33 (8.6%) were disqualified from the study because they answered the survey several times. Three people (0.8%) did not give their agreement to process their data, and a single respondent (0.3%) was excluded because the survey was answered too quickly. Respondents took an average of 4.7 minutes to complete the survey.
provides an overview of the respondents’ sociodemographic characteristics. There was a relatively equal proportion of participants identifying as male and female. The average age of the participants was 37.6 years and the median age was 29 years.

Variable | Participants, n (%)
Gender
  Male | 58 (46.0)
  Female | 67 (53.2)
  Other | 1 (0.8)
Education
  Still a student | 1 (0.8)
  School-leaving qualification | 25 (19.8)
  Vocational qualification | 34 (27.0)
  University degree | 58 (46.0)
  Doctorate | 2 (1.6)
  Other degree | 3 (2.4)
  Not specified | 3 (2.4)
Employment status
  Elementary/high school student | 2 (1.6)
  University student | 29 (23.0)
  Apprentice | 11 (8.7)
  Employee | 58 (46.0)
  Civil servant | 6 (4.8)
  Self-employed | 8 (6.3)
  Not employed | 1 (0.8)
  Retired without income | 9 (7.1)
  Other | 1 (0.8)
  Not specified | 1 (0.8)
Monthly income (Euro; 1€=US $1.09)
  <250 | 5 (4.0)
  250-499 | 8 (6.3)
  500-999 | 17 (13.5)
  1000-1499 | 12 (9.5)
  1500-1999 | 13 (10.3)
  2000-2999 | 26 (20.6)
  3000-3999 | 16 (12.7)
  4000-4999 | 6 (4.8)
  >5000 | 13 (10.3)
  Not specified | 10 (7.9)
Ethical Considerations
Our online study was conducted in accordance with the American Psychological Association’s Ethical Principles of Psychologists and Code of Conduct. In particular, data collection was anonymous; harmless to participants; and did not involve deception, injury, or high levels of physical or emotional stress. In line with the 2023 guidelines of the German Research Foundation, formal ethical approval was not required because our study did not include aspects that would necessitate a statement, per subsection two of the “Information on proposals in the field of psychology” [
]. Informed consent was obtained from all participants after the purpose of the study and the data collection were outlined in the survey introduction. Participants indicated their consent by clicking a button. Study data and identifiers were anonymized during data collection and analysis to maintain confidentiality. No compensation was awarded to participants.

Results
provides an overview of the relative importance values of the attributes and the part-worth values of each level, as determined by conjointly, to answer the first research question [ ]. A treatment by a physician that is completely covered by insurance and has no waiting time emerged as the most preferred mode of screening. Overall, provider was the most important attribute, followed by costs and waiting time. For provider, the physician had a positive part worth and the AI system had a negative part worth, while both the personalized AI and teledermatology had near-zero part worths. For costs, full coverage by insurance had a positive part worth and a 40€ copayment had a clearly negative part worth, with the intermediate copayments in between. For waiting time, immediate results and a 1-day wait had positive part worths, a 1-week wait was near zero, and a 4-week wait had a clearly negative part worth.
Attribute | Part worth (95% CI) | Relative importance, % (95% CI)
Provider | | 38.6 (35.3 to 41.6)
  AIa | –0.15 (–0.17 to –0.13) |
  Personalized AI | –0.06 (–0.08 to –0.04) |
  Physician treatment | 0.21 (0.19 to 0.24) |
  Electronic consultation with physician (teledermatology) | 0.005 (–0.01 to 0.02) |
Costs for screeningb | | 31.6 (29.0 to 34.0)
  0€ (completely covered by health insurance) | 0.15 (0.13 to 0.16) |
  10€ copayment | 0.06 (0.06 to 0.07) |
  25€ copayment | –0.03 (–0.04 to –0.03) |
  40€ copayment | –0.18 (–0.19 to –0.16) |
Waiting time for diagnosis | | 29.8 (27.2 to 32.3)
  Immediate | 0.10 (0.09 to 0.11) |
  1 day | 0.08 (0.07 to 0.09) |
  1 week | –0.004 (–0.01 to 0.003) |
  4 weeks | –0.18 (–0.20 to –0.17) |
aAI: artificial intelligence.
b1€=US $1.09.
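Relative importance values can be related to part worths via the standard range method: each attribute's part-worth range is divided by the sum of all ranges. A sketch using the rounded part worths from the table above; note that the paper reports importances averaged across respondents, so the aggregate-level values computed here differ slightly from the table's figures:

```python
# Relative importance from part-worth ranges (range method).
# Part worths are the rounded aggregate values from the table above;
# the paper's reported importances are averaged per respondent,
# so these aggregate-level results differ slightly.
part_worths = {
    "provider": [-0.15, -0.06, 0.21, 0.005],
    "costs": [0.15, 0.06, -0.03, -0.18],
    "waiting_time": [0.10, 0.08, -0.004, -0.18],
}

ranges = {attr: max(v) - min(v) for attr, v in part_worths.items()}
total = sum(ranges.values())
importance = {attr: 100 * r / total for attr, r in ranges.items()}
```

The resulting ordering (provider > costs > waiting time) matches the table, which illustrates why the attribute with the widest spread of part worths dominates the choice.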
shows an overview of the relationships between sociodemographic characteristics and the relative importance scores to answer the second research question. We found a medium-sized positive relationship between age and the importance of provider. In addition, there were two nonsignificant trends: a positive trend between age and the importance of costs, and an inverse trend between income and the importance of costs. All other importance scores were not systematically related to sociodemographic variables ( ).
Sociodemographic characteristics | Relative importance
| Provider | Costs for screening | Waiting time for diagnosis
Age
  Coefficienta | 0.21 | 0.17 | 0.11
  P value | .02 | .05 | .25
Gender
  Coefficienta | –0.003 | –0.03 | 0.04
  P value | .97 | .71 | .60
Education
  Coefficienta | –0.04 | –0.06 | 0.07
  P value | .64 | .45 | .45
Employment status
  Coefficienta | 0.05 | –0.11 | –0.004
  P value | .53 | .22 | .96
Income
  Coefficienta | 0.02 | –0.17 | 0.11
  P value | .81 | .07 | .24
aPearson correlation coefficient.
Discussion
The aim of this study was to determine how important the attributes provider, costs, and waiting time are for users’ preference for skin cancer screening, and to investigate whether sociodemographic characteristics, especially age, are systematically related to participants’ individual importance scores. We found that provider, costs, and waiting time contributed almost equally to participants’ decisions, with provider being the most important attribute. While the physician was the most preferred level of this attribute, AI-based treatment was disliked, and a personalized AI had the same value for participants as teledermatology. Concerning the relationship between sociodemographic characteristics and relative importance, we found that only age showed a reliable positive association with provider, in which younger participants placed less importance on the provider than older participants. In the following, we discuss these findings in turn before discussing the limitations of the study and providing a general outlook.
Regarding the role of the provider in users’ decisions, other studies support our finding that patients are hesitant toward medical AI. Patients would rather forgo a treatment than be examined by an AI system, even if the AI system shows the same or better accuracy than a physician [
]. However, the same study also found that patients prefer personalized AI over nonpersonalized AI. Similarly, earlier discrete-choice experiments [ ] as well as surveys [ ] found that patients prefer a trained medical professional to be involved in their skin cancer screening; 41% of respondents were open to using AI as a standalone system for skin cancer screening and 94% were open to using it as a support system for physicians [ ]. Together, existing studies indicate that personalized AI and teledermatology are generally more accepted than nonpersonalized AI for skin cancer screening, while the physician remains the most preferred option.

Concerning the impact of age differences on the acceptance of AI in dermatology, our findings also support some earlier results [
- ]. Higher interest in using teledermatology [ , ] and in using skin cancer–related apps [ ] was associated with younger age. These results of cross-sectional studies back up our findings from the choice-based conjoint analysis. Based on these trends, it is conceivable that the acceptance of AI in skin cancer screening will rise in the future as digital natives, with their greater acceptance of AI, grow older.

Regarding income and educational factors, our findings do not align with those of previous studies. Ghani et al [
] concluded that higher educational attainment and a higher household income increased interest in using teledermatology. Chang et al [ ] came to similar conclusions, indicating that college graduates showed the greatest interest in teledermatology. In addition, Steeb et al [ ] showed that a high educational level was associated with a positive attitude toward skin cancer–related apps. While we were not able to show significant relationships to income and educational background, the smaller sample size of this study compared to those of earlier studies might explain this inconsistency.

Previous studies also identified gender differences in the acceptance of AI in skin cancer screening. Chang et al [
] came to the conclusion that men are more likely to use teledermatology than women. Steeb et al [ ] found similar results, in which male gender was significantly associated with a positive attitude toward skin cancer–related apps. However, the gender difference reported in earlier studies was not visible in our data. Again, this might be an effect of sample size, but it might also be that the gender differences reported in earlier questionnaire studies reflect differences in the technology self-concept [ ] rather than actual preferences.

Several aspects must be considered when interpreting our findings. First, the sample was not randomly selected but was based on a convenience sample. While a wide range of recruitment means were used, the results are likely not generalizable to the general public but rather are specific to highly educated young adults. Further research is needed with the target group. Although the sample size may not seem particularly large, a sensitivity analysis showed that it was in fact sufficient to detect a medium-sized correlation (r=0.28) with a power of 90% and an error rate of 5%. Second, some participants contacted us about the meaning of the attribute waiting time because they were unsure whether this pertained to the waiting time for a diagnosis or the waiting time for an appointment. Future studies should make this distinction more explicit to study possible differential effects of these two types of waiting times.
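The sensitivity analysis mentioned above can be reproduced with the Fisher z approximation: given the achieved sample size, the smallest Pearson correlation detectable with 90% power at a two-sided 5% error rate works out to roughly r=0.28. A sketch with the standard normal quantiles hardcoded:

```python
import math

# Minimal detectable Pearson correlation for the achieved sample size,
# via the Fisher z approximation (alpha = .05 two-sided, power = 90%).
n = 126
z_alpha = 1.959964  # two-sided 5% critical value
z_beta = 1.281552   # quantile for 90% power

# Fisher z of r is approximately normal with SD 1/sqrt(n - 3)
z_crit = (z_alpha + z_beta) / math.sqrt(n - 3)
r_min = math.tanh(z_crit)  # back-transform Fisher z to r, ~0.28
```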
Taken together, we believe that this study adds to the growing body of research using choice-based experiments to investigate the acceptance of AI in health contexts. This approach offers additional insights and is less susceptible to social desirability and other biases [
]. However, choice-based conjoint analysis only allows studying a small number of potential attributes at a time [ ]. Because we included personalized AI as a level of the attribute provider, our study built on the finding of Longoni et al [ ] that personalization increases patient acceptance of AI. In addition, we examined factors that may have an impact on patients’ decision-making, following the study of Snoswell et al [ ].

For the future, it could be interesting to add “AI as a physician support system” to the choice set [
]. It might also be interesting to find out whether patients who perceive themselves as more individualized are less accepting of AI [ ]. Additionally, it could be interesting to explore whether specialized knowledge about AI systems would increase patient acceptance [ ] and which other factors might influence patients’ acceptance. Ideally, this would rely not only on correlational evidence, as used here, but also on experimental evidence showing how preferences and importance scores may be altered. Variables such as income and educational background cannot be manipulated easily. Nevertheless, we believe that the magnitude of these effects provides some benchmarks for future studies that aim to use experimental methods to alter preferences.

In summary, while there have been technological advances in the effectiveness of AI for supporting skin cancer screening and health care more generally, we believe that the true potential of AI systems can only be realized if patients’ needs and demands are taken into account.
Acknowledgments
This research was conducted within the framework of the project “SAIL: SustAInable Lifecycle of Intelligent Socio-Technical Systems,” funded by the Ministry of Culture and Science of the State of North Rhine-Westphalia under grant NW21-059B. The sole responsibility for the content of this publication lies with the authors. Publication was funded by the Deutsche Forschungsgemeinschaft (DFG; German Research Foundation) under grant 490988677 and the University of Applied Sciences Bielefeld. Generative artificial intelligence was not used in any portion of the manuscript writing.
Data Availability
The data sets generated and analyzed during this study are available in the zenodo repository [
].

Authors' Contributions
IJ: study conceptualization, analysis, writing first draft, approval of final manuscript; MS: study conceptualization, analysis, approval of final manuscript; OW: study conceptualization, data collection, approval of final manuscript; GH: study conceptualization, data collection, analysis, approval of final manuscript.
Conflicts of Interest
None declared.
References
- Global burden of cutaneous melanoma in 2020 and projections to 2040. International Agency for Research on Cancer and World Health Organization. Mar 30, 2020. URL: https://www.iarc.who.int/wp-content/uploads/2022/03/pr311_E.pdf [accessed 2023-01-23]
- S3-Leitlinie Prävention von Hautkrebs. Arbeitsgemeinschaft der Wissenschaftlichen Medizinischen Fachgesellschaften. 2021. URL: https://register.awmf.org/assets/guidelines/032-052OLk_S3_Praevention-Hautkrebs_2021-09.pdf [accessed 2023-01-30]
- Bin KJ, Melo AAR, da Rocha JGMF, de Almeida RP, Cobello Junior V, Maia FL, et al. The impact of artificial intelligence on waiting time for medical care in an urgent care service for COVID-19: single-center prospective study. JMIR Form Res. Feb 01, 2022;6(2):e29012. [FREE Full text] [CrossRef] [Medline]
- Li X, Tian D, Li W, Dong B, Wang H, Yuan J, et al. Artificial intelligence-assisted reduction in patients' waiting time for outpatient process: a retrospective cohort study. BMC Health Serv Res. Mar 17, 2021;21(1):237. [FREE Full text] [CrossRef] [Medline]
- Chandler J, Woodhead P, Barker R, Payne R. BT20: Teledermatology reduces waiting times for skin cancer diagnosis. Br J Dermatol. Jul 05, 2022;187(S1):129-130. [CrossRef]
- Snoswell CL, Whitty JA, Caffery LJ, Kho J, Horsham C, Loescher LJ, et al. Consumer preference and willingness to pay for direct-to-consumer mobile teledermoscopy services in Australia. Dermatology. 2022;238(2):358-367. [CrossRef] [Medline]
- Li Z, Koban KC, Schenck TL, Giunta RE, Li Q, Sun Y. Artificial intelligence in dermatology image analysis: current developments and future trends. J Clin Med. Nov 18, 2022;11(22):6826. [FREE Full text] [CrossRef] [Medline]
- Liopyris K, Gregoriou S, Dias J, Stratigos AJ. Artificial intelligence in dermatology: challenges and perspectives. Dermatol Ther (Heidelb). Dec 2022;12(12):2637-2651. [FREE Full text] [CrossRef] [Medline]
- Goyal M, Knackstedt T, Yan S, Hassanpour S. Artificial intelligence-based image classification methods for diagnosis of skin cancer: challenges and opportunities. Comput Biol Med. Dec 2020;127:104065. [FREE Full text] [CrossRef] [Medline]
- Haenssle HA, Fink C, Schneiderbauer R, Toberer F, Buhl T, Blum A, Reader study level-I-level-II Groups; et al. Man against machine: diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists. Ann Oncol. Aug 01, 2018;29(8):1836-1842. [FREE Full text] [CrossRef] [Medline]
- Codella NCF, Nguyen Q, Pankanti S, Gutman DA, Helba B, Halpern AC, et al. Deep learning ensembles for melanoma recognition in dermoscopy images. IBM J Res Dev. Jul 2017;61(4/5):5:1-5:15. [CrossRef]
- Brinker TJ, Hekler A, Enk AH, Klode J, Hauschild A, Berking C, et al. Collaborators. Deep learning outperformed 136 of 157 dermatologists in a head-to-head dermoscopic melanoma image classification task. Eur J Cancer. May 2019;113:47-54. [FREE Full text] [CrossRef] [Medline]
- Cadario R, Longoni C, Morewedge CK. Understanding, explaining, and utilizing medical artificial intelligence. Nat Hum Behav. Dec 2021;5(12):1636-1642. [CrossRef] [Medline]
- Chuchu N, Takwoingi Y, Dinnes J, Matin RN, Bassett O, Moreau JF, et al. Cochrane Skin Cancer Diagnostic Test Accuracy Group. Smartphone applications for triaging adults with skin lesions that are suspicious for melanoma. Cochrane Database Syst Rev. Dec 04, 2018;12(12):CD013192. [FREE Full text] [CrossRef] [Medline]
- Dillon A, Morris M. User acceptance of information technology: theories and models. Ann Rev Inf Sci Technol. 1996;31:3-32. [FREE Full text] [CrossRef]
- Khullar D, Casalino LP, Qian Y, Lu Y, Krumholz HM, Aneja S. Perspectives of patients about artificial intelligence in health care. JAMA Netw Open. May 02, 2022;5(5):e2210309. [FREE Full text] [CrossRef] [Medline]
- Longoni C, Bonezzi A, Morewedge CK. Resistance to medical artificial intelligence. J Consum Res. 2019;46(4):629-650. [CrossRef]
- Attitudes towards the impact of digitisation and automation on daily life. European Commission. May 10, 2017. URL: https://digital-strategy.ec.europa.eu/en/news/attitudes-towards-impact-digitisation-and-automation-daily-life [accessed 2023-01-30]
- Kessel KA, Vogel MM, Kessel C, Bier H, Biedermann T, Friess H, et al. Mobile health in oncology: a patient survey about app-assisted cancer care. JMIR Mhealth Uhealth. Jun 14, 2017;5(6):e81. [FREE Full text] [CrossRef] [Medline]
- Fink C, Uhlmann L, Hofmann M, Forschner A, Eigentler T, Garbe C, et al. Patient acceptance and trust in automated computer-assisted diagnosis of melanoma with dermatofluoroscopy. J Dtsch Dermatol Ges. Jul 2018;16(7):854-859. [CrossRef] [Medline]
- Chang MS, Moore KJ, Hartman RI, Koru-Sengul T. Sociodemographic determinants of teledermatology acceptability. J Am Acad Dermatol. Jun 2022;86(6):1392-1394. [CrossRef] [Medline]
- Ghani M, Adler C, Yeung H. Patient factors associated with interest in teledermatology: cross-sectional survey. JMIR Dermatol. May 10, 2021;4(1):e21555. [FREE Full text] [CrossRef] [Medline]
- Steeb T, Wessely A, Mastnik S, Brinker TJ, French LE, Niesert A, et al. Patient attitudes and their awareness towards skin cancer-related apps: cross-sectional survey. JMIR Mhealth Uhealth. Jul 02, 2019;7(7):e13844. [FREE Full text] [CrossRef] [Medline]
- Hair JF. Multivariate data analysis: a global perspective. 7th edition. Upper Saddle River, NJ: Pearson Education; 2009.
- Michael M, Schaffer SD, Egan PL, Little BB, Pritchard PS. Improving wait times and patient satisfaction in primary care. J Healthc Qual. 2013;35(2):50-59; quiz 59. [CrossRef] [Medline]
- Lake AA, Speed C, Brookes A, Heaven B, Adamson AJ, Moynihan P, et al. Development of a series of patient information leaflets for constipation using a range of cognitive interview techniques: LIFELAX. BMC Health Serv Res. Jan 04, 2007;7:3. [FREE Full text] [CrossRef] [Medline]
- Chrzan K, Orme B. An overview and comparison of design strategies for choice-based conjoint analysis. Sawtooth Software. 2000. URL: https://sawtoothsoftware.com/resources/technical-papers/an-overview-and-comparison-of-design-strategies-for-choice-based-conjoint-analysis [accessed 2023-08-08]
- Orme B. Interpreting the results of conjoint analysis. Sawtooth Software. 2010. URL: https://content.sawtoothsoftware.com/assets/3501e79f-3a9f-4d99-a0f5-1522525e7f3f [accessed 2023-08-08]
- The Research Methods Knowledge Base. Conjointly. URL: https://conjointly.com/kb/ [accessed 2023-02-02]
- McDonald S, Edwards HM, Zhao T. Exploring think-alouds in usability testing: an international survey. IEEE Trans Profess Commun. Mar 2012;55(1):2-19. [CrossRef]
- Eysenbach G. Improving the quality of web surveys: the Checklist for Reporting Results of Internet E-Surveys (CHERRIES). J Med Internet Res. Sep 29, 2004;6(3):e34. [FREE Full text] [CrossRef] [Medline]
- Humanities and Social Sciences: Statement by an Ethics Committee. Deutsche Forschungsgemeinschaft. URL: https://www.dfg.de/en/research_funding/faq/faq_humanities_social_science/index.html [accessed 2023-11-13]
- Jutzi TB, Krieghoff-Henning EI, Holland-Letz T, Utikal JS, Hauschild A, Schadendorf D, et al. Artificial intelligence in skin cancer diagnostics: the patients' perspective. Front Med. 2020;7:233. [FREE Full text] [CrossRef] [Medline]
- Jackson LA, von Eye A, Fitzgerald HE, Zhao Y, Witt EA. Self-concept, self-esteem, gender, race and information technology use. Comput Hum Behav. May 2010;26(3):323-328. [CrossRef]
- Zaller J, Feldman S. A simple theory of the survey response: answering questions versus revealing preferences. Am J Polit Sci. Aug 1992;36(3):579. [CrossRef]
- Study data, code, and R script. Zenodo. URL: https://zenodo.org/records/8227363 [accessed 2023-12-13]
Abbreviations
AI: artificial intelligence
Edited by A Mavragani; submitted 10.02.23; peer-reviewed by P Dabas, M Modi, L Vervier; comments to author 04.08.23; revised version received 17.08.23; accepted 20.11.23; published 12.01.24.
Copyright©Inga Jagemann, Ole Wensing, Manuel Stegemann, Gerrit Hirschfeld. Originally published in JMIR Formative Research (https://formative.jmir.org), 12.01.2024.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Formative Research, is properly cited. The complete bibliographic information, a link to the original publication on https://formative.jmir.org, as well as this copyright and license information must be included.