| Graduate Student: | 張琤華 (Cheng-Hua Chang) |
|---|---|
| Thesis Title: | 人類醫生或AI醫生?—人工智能醫療診斷應用之研究 (Human Doctors or AI Doctors: A Study on the Application of AI in Medical Diagnosis) |
| Advisor: | 陳炫碩 (Shiuann-Shuoh Chen) |
| Oral Defense Committee: | |
| Degree: | Master |
| Department: | College of Management - Department of Business Administration |
| Year of Publication: | 2024 |
| Graduation Academic Year: | 112 (ROC calendar) |
| Language: | English |
| Pages: | 46 |
| Keywords (Chinese): | 人工智慧, 醫療AI, AI解釋性, 有限理性, 風險 |
| Keywords (English): | Artificial Intelligence, Medical AI, AI explainability, bounded rationality, risk |
The current healthcare system faces problems such as workforce shortages and urban-rural disparities. Amid the AI wave, many related applications have already been deployed in clinical settings, yet most healthcare professionals and patients remain resistant to medical AI. This study examines attitudes toward medical AI applications among the general public, healthcare professionals, and AI specialists by manipulating variables such as AI explainability and Simon's bounded rationality under different levels of disease risk. Unlike previous studies, this research clearly distinguishes between trust in AI capabilities and acceptance of AI diagnoses. Finally, the study successfully manipulates the AI explainability variable and confirms the moderating effect of risk, laying a solid foundation for future clinical applications of medical AI.
Ali, S., Abuhmed, T., El-Sappagh, S., Muhammad, K., Alonso-Moral, J. M., Confalonieri, R., Guidotti, R., Del Ser, J., Díaz-Rodríguez, N., & Herrera, F. (2023). Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence. Information Fusion, 99, 101805.
Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 2053951715622512.
Bussone, A., Stumpf, S., & O'Sullivan, D. (2015). The role of explanations on trust and reliance in clinical decision support systems. 2015 International Conference on Healthcare Informatics.
Cadario, R., Longoni, C., & Morewedge, C. K. (2021). Understanding, explaining, and utilizing medical artificial intelligence. Nature Human Behaviour, 5(12), 1636-1642.
Croskerry, P. (2013). From mindless to mindful practice—cognitive bias and clinical decision making. The New England Journal of Medicine, 368(26), 2445-2448.
Damasio, A. R., Tranel, D., & Damasio, H. (1990). Individuals with sociopathic behavior caused by frontal damage fail to respond autonomically to social stimuli. Behavioural Brain Research, 41(2), 81-94.
Dietvorst, B. J., & Bharti, S. (2020). People reject algorithms in uncertain decision domains because they have diminishing sensitivity to forecasting error. Psychological Science, 31(10), 1302-1314.
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: people erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114.
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2018). Overcoming algorithm aversion: People will use imperfect algorithms if they can (even slightly) modify them. Management Science, 64(3), 1155-1170.
Edwards, L., & Veale, M. (2017). Slave to the algorithm? Why a 'right to an explanation' is probably not the remedy you are looking for. Duke Law & Technology Review, 16, 18.
Gershman, S. J., Horvitz, E. J., & Tenenbaum, J. B. (2015). Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science, 349(6245), 273-278.
Grove, W. M., Zald, D. H., Lebow, B. S., Snitz, B. E., & Nelson, C. (2000). Clinical versus mechanical prediction: a meta-analysis. Psychological Assessment, 12(1), 19.
Hill, R. J., Fishbein, M., & Ajzen, I. (1977). Belief, attitude, intention and behavior: an introduction to theory and research. Contemporary Sociology, 6(2), 244.
Hong, J.-W., Wang, Y., & Lanz, P. (2020). Why is artificial intelligence blamed more? Analysis of faulting artificial intelligence for self-driving car accidents in experimental settings. International Journal of Human–Computer Interaction, 36(18), 1768-1774.
Jussupow, E., Spohrer, K., Heinzl, A., & Gawlitza, J. (2021). Augmenting medical diagnosis decisions? An investigation into physicians' decision-making process with artificial intelligence. Information Systems Research, 32(3), 713-735.
Kahn, B. E., & Baron, J. (1995). An exploratory study of choice rules favored for high-stakes decisions. Journal of Consumer Psychology, 4(4), 305-328.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263-291.
Khan, W. U., Shachak, A., & Seto, E. (2022). Understanding decision-making in the adoption of digital health technology: The role of behavioral economics' prospect theory. Journal of Medical Internet Research, 24(2), e32714.
Khullar, D., Casalino, L. P., Qian, Y., Lu, Y., Chang, E., & Aneja, S. (2021). Public vs physician views of liability for artificial intelligence in health care. Journal of the American Medical Informatics Association, 28(7), 1574-1577.
Lee, J., & Moray, N. (1992). Trust, control strategies and allocation of function in human-machine systems. Ergonomics, 35(10), 1243-1270.
Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50-80.
Lewis, J. D., & Weigert, A. (1985). Trust as a social reality. Social Forces, 63(4), 967-985.
Longoni, C., Bonezzi, A., & Morewedge, C. K. (2019). Resistance to medical artificial intelligence. Journal of Consumer Research, 46(4), 629-650.
Machina, M. J., & Siniscalchi, M. (2014). Ambiguity and ambiguity aversion. In Handbook of the Economics of Risk and Uncertainty (Vol. 1, pp. 729-807). Elsevier.
McKnight, D. H., Carter, M., Thatcher, J. B., & Clay, P. F. (2011). Trust in a specific technology: An investigation of its components and measures. ACM Transactions on Management Information Systems (TMIS), 2(2), 1-25.
Miller, D. D., & Brown, E. W. (2018). Artificial intelligence in medical practice: the question to the answer? The American Journal of Medicine, 131(2), 129-133.
Nasr-Esfahani, E., Samavi, S., Karimi, N., Soroushmehr, S. M. R., Jafari, M. H., Ward, K., & Najarian, K. (2016). Melanoma detection by analysis of clinical images using convolutional neural network. 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC).
Oh, S., Kim, J. H., Choi, S.-W., Lee, H. J., Hong, J., & Kwon, S. H. (2019). Physician confidence in artificial intelligence: an online mobile survey. Journal of Medical Internet Research, 21(3), e12422.
Ostrom, E. (1998). A behavioral approach to the rational choice theory of collective action: Presidential address, American Political Science Association, 1997. American Political Science Review, 92(1), 1-22.
Price, W. N. (2018). Big data and black-box medical algorithms. Science Translational Medicine, 10(471), eaao5333.
Richardson, J. P., Smith, C., Curtis, S., Watson, S., Zhu, X., Barry, B., & Sharp, R. R. (2021). Patient apprehensions about the use of artificial intelligence in healthcare. NPJ Digital Medicine, 4(1), 140.
Rossi, J. G., Rojas-Perilla, N., Krois, J., & Schwendicke, F. (2022). Cost-effectiveness of artificial intelligence as a decision-support system applied to the detection and grading of melanoma, dental caries, and diabetic retinopathy. JAMA Network Open, 5(3), e220269.
Russell, S. J. (1997). Rationality and intelligence. Artificial Intelligence, 94(1-2), 57-77.
Simon, H. A. (1945). Administrative Behavior (1997 ed.). New York: The Free Press.
Simon, H. A. (1955). A behavioral model of rational choice. The Quarterly Journal of Economics, 69(1), 99-118.
Simon, H. A. (1986). Rationality in psychology and economics. Journal of Business, 59(4), S209-S224.
Simon, H. A. (1990). Invariants of human behavior. Annual Review of Psychology, 41(1), 1-20.
Sqalli, M. T., & Al-Thani, D. (2019). AI-supported health coaching model for patients with chronic diseases. 2019 16th International Symposium on Wireless Communication Systems (ISWCS).
Stern, M. J., & Coleman, K. J. (2015). The multidimensionality of trust: Applications in collaborative natural resource management. Society & Natural Resources, 28(2), 117-132.
Topol, E. J. (2019). High-performance medicine: the convergence of human and artificial intelligence. Nature Medicine, 25(1), 44-56.
Watson, D. S., Krutzinna, J., Bruce, I. N., Griffiths, C. E., McInnes, I. B., Barnes, M. R., & Floridi, L. (2019). Clinical applications of machine learning algorithms: beyond the black box. BMJ, 364.