
Graduate Student: Kai-Hsuan Yu (游凱亘)
Thesis Title: Study on the Relationship between Performance Expectations and Satisfaction with Artificial Intelligence: Expectancy Disconfirmation or Placebo Effect (人工智能工作績效期待與滿意度關係之研究-期望落差或安慰劑效果)
Advisor: Shiuann-Shuoh Chen (陳炫碩)
Committee Members:
Degree: Master
Department: College of Management, Department of Business Administration
Year of Publication: 2025
Graduation Academic Year: 113
Language: Chinese
Number of Pages: 80
Keywords: Expectancy Disconfirmation, Placebo Effect, Artificial Intelligence, Expectations, Perceived Quality, Satisfaction
Views: 25 | Downloads: 0

    This study investigates the relationship between expectations and satisfaction
    regarding the work performance of Artificial Intelligence (AI), with a particular focus
    on Expectancy Disconfirmation and the Placebo Effect. In today’s society, AI
    technologies are widely applied across various domains, ranging from simple daily
    tasks to complex professional fields. The emergence of generative AI has significantly
    transformed people’s lives. However, a gap remains between societal expectations of
    AI and the satisfaction derived from its performance—especially when AI and
    humans achieve similar performance outcomes, yet are evaluated by different
    standards.
    The main objective of this study is to examine how participants’ expectations,
    perceived quality, and satisfaction differ between AI and human performers under
    different scenarios, and to test the presence of expectancy disconfirmation and the
    placebo effect. An experimental questionnaire was designed in which participants
    were randomly divided into two groups. Each group was told that the performance in
    the scenario was completed either by a human or an AI, and their responses were
    measured across two different tasks.
    The results indicate that in both scenarios, high expectations of AI significantly
    increased perceived quality, suggesting the presence of the placebo effect. Conversely,
    high expectations of human experts did not significantly enhance perceived quality,
    and no placebo effect was observed—supporting the theory of expectancy
    disconfirmation. Additionally, the study found that while expectations had no significant direct effect on satisfaction in any scenario, perceived quality had a
    significant positive effect on satisfaction.
    These findings contribute to a better understanding of the psychological mechanisms
    behind AI application and offer valuable insights for improving the integration and
    acceptance of AI technologies in various fields.
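    The core statistical finding above is a mediation pattern: expectations raise perceived quality, which in turn drives satisfaction, while the direct expectation-satisfaction path is not significant. The thesis tests this with a bootstrap mediation check (its Table 9). A minimal numpy-only sketch of a percentile-bootstrap test of the indirect effect is shown below; the variable names, effect sizes, and synthetic data are illustrative assumptions, not the thesis's actual data or its SEM software procedure.

    ```python
    import numpy as np

    def bootstrap_indirect_effect(expectation, quality, satisfaction,
                                  n_boot=2000, seed=0):
        """Percentile-bootstrap CI for the indirect effect
        expectation -> perceived quality -> satisfaction (a*b in
        simple mediation, both paths estimated by least squares)."""
        rng = np.random.default_rng(seed)
        n = len(expectation)
        estimates = np.empty(n_boot)
        for i in range(n_boot):
            idx = rng.integers(0, n, n)  # resample cases with replacement
            x, m, y = expectation[idx], quality[idx], satisfaction[idx]
            # a-path: perceived quality regressed on expectation
            a = np.polyfit(x, m, 1)[0]
            # b-path: satisfaction on quality, controlling for expectation
            X = np.column_stack([np.ones(n), m, x])
            b = np.linalg.lstsq(X, y, rcond=None)[0][1]
            estimates[i] = a * b
        lo, hi = np.percentile(estimates, [2.5, 97.5])
        return estimates.mean(), (lo, hi)

    # Synthetic illustration of a placebo-like pattern: expectation
    # raises perceived quality, which in turn drives satisfaction.
    rng = np.random.default_rng(42)
    n = 277  # sample size reported in the thesis
    expectation = rng.normal(5, 1, n)
    quality = 0.5 * expectation + rng.normal(0, 1, n)
    satisfaction = 0.6 * quality + rng.normal(0, 1, n)

    effect, (lo, hi) = bootstrap_indirect_effect(expectation, quality,
                                                 satisfaction)
    print(f"indirect effect: {effect:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
    ```

    If the 95% confidence interval excludes zero, the indirect effect is judged significant, mirroring the pattern the thesis reports: mediation through perceived quality rather than a direct effect of expectations on satisfaction.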

    Table of Contents
    Chinese Abstract
    Abstract
    Chapter 1: Introduction
      1-1 Research Background and Motivation
      1-2 Research Objectives
    Chapter 2: Literature Review
      2-1 Expectations
      2-2 Perceived Performance / Perceived Quality
      2-3 Satisfaction
      2-4 Expectancy Disconfirmation
      2-5 Placebo Effect
    Chapter 3: Research Methods and Hypotheses
      3-1 Research Framework and Hypotheses
      3-2 Experimental Design
      3-3 Data Analysis Methods
    Chapter 4: Results and Discussion
      4-1 Questionnaire Collection and Statistics
      4-2 Descriptive Statistical Analysis
      4-3 Reliability and Validity Analysis
      4-4 Structural Equation Modeling
    Chapter 5: Conclusions and Recommendations
      5-1 Conclusions
      5-2 Research Limitations
      5-3 Suggestions for Future Research
    Chapter 6: References
    Chapter 7: Appendices
      7-1 Questionnaire A
      7-2 Questionnaire B

    List of Figures
      Figure 1. Research framework
      Figure 2. Work imitating Hayao Miyazaki's art style
      Figure 3. Overall structural model analysis of Model 1
      Figure 4. Overall structural model analysis of Model 2
      Figure 5. Overall structural model analysis of Model 3
      Figure 6. Overall structural model analysis of Model 4

    List of Tables
      Table 1. Scenario 1: Research variables when the task was attributed to a first-place painter
      Table 2. Scenario 2: Research variables when the task was attributed to an experienced master craftsman
      Table 3. Research variables when the task was attributed to generative AI
      Table 4. Scenario 2: Research variables when the task was attributed to AI image recognition
      Table 5. Analysis of demographic variables (N = 277)
      Table 6. Reliability analysis of research variables (N = 277)
      Table 7. Items removed from the path analysis model
      Table 8. Overall model fit of the four models
      Table 9. Bootstrap test of mediation effects
      Table 10. Research hypotheses and results

    Abbas, A. W., & Hussain, M. (2021). UMT artificial intelligence review (UMT-AIR).
    Abrate, G., Quinton, S., & Pera, R. (2021). The relationship between price paid and hotel review ratings: Expectancy-disconfirmation or placebo effect? Tourism Management, 85, 104314.
    Bohlmann, J. D., Rosa, J. A., Bolton, R. N., & Qualls, W. J. (2006). The effect of group interactions on satisfaction judgments: Satisfaction escalation. Marketing Science, 25(4), 301-321.
    Fornell, C., Johnson, M. D., Anderson, E. W., Cha, J., & Bryant, B. E. (1996). The American customer satisfaction index: Nature, purpose, and findings. Journal of Marketing, 60(4), 7-18.
    Giese, J. L., & Cote, J. A. (2000). Defining consumer satisfaction. Academy of Marketing Science Review, 4, 1-24.
    Habel, J., Alavi, S., Schmitz, C., Schneider, J. V., & Wieseke, J. (2016). When do customers get what they expect? Understanding the ambivalent effects of customers' service expectations on satisfaction. Journal of Service Research, 19(4), 361-379.
    Holm, S., & Ploug, T. (2023). Population preferences for AI system features across eight different decision-making contexts. PLoS ONE, 18(12), e0295277.
    Kocielnik, R., Amershi, S., & Bennett, P. N. (2019). Will you accept an imperfect AI? Exploring designs for adjusting end-user expectations of AI systems. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Paper 411, 1-14.
    Korteling, J. E., van de Boer-Visschedijk, G. C., Blankendaal, R. A. M., Boonekamp, R. C., & Eikelboom, A. R. (2021). Human- versus artificial intelligence. Frontiers in Artificial Intelligence, 4, 622364.
    Laird, J. E., Shultz, T., & Thagard, P. (2019). How does current AI stack up against human intelligence?
    McKinney, V., Yoon, K., & Zahedi, F. (2002). The measurement of web-customer satisfaction: An expectation and disconfirmation approach. Information Systems Research, 13(3), 296-315.
    Meurisch, C., Mihale-Wilson, C. A., Hawlitschek, A., Giger, F., Müller, F., Hinz, O., & Mühlhäuser, M. (2020). Exploring user expectations of proactive AI systems. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 4(4), Article 146, 1-22.
    Park, S., Kim, H. K., Song, Y., Bang, S., Kim, J., Park, J., & Park, J. (2022). Impact of expectation and performance on the user experience of AI systems. ICIC Express Letters, Part B: Applications, 13(1).
    Petty, R. E., & Cacioppo, J. T. (1986). The elaboration likelihood model of persuasion. Advances in Experimental Social Psychology, 19, 123-205.
    Rust, R. T., Inman, J. J., Jia, J., & Zahorik, A. (1999). What you don't know about customer-perceived quality: The role of customer expectation distributions. Marketing Science, 18(1), 77-92.
    Seymour, M., Yuan, L., Dennis, A. R., & Riemer, K. (2020). Facing the artificial: Understanding affinity, trustworthiness, and preference for more realistic digital humans. Proceedings of the 53rd Hawaii International Conference on System Sciences.
    Shen, J., Zhang, C., Jiang, B., Chen, J., Song, J., Liu, Z., He, Z., Wong, S. Y., Fang, P. H., & Ming, W. K. (2019). Artificial intelligence versus clinicians in disease diagnosis: Systematic review. JMIR Medical Informatics, 7(3), e10010.
    Tam, K. Y., & Ho, S. Y. (2005). Web personalization as a persuasion strategy: An elaboration likelihood model perspective. Information Systems Research, 16(3), 271-291.
    Veenhoven, R. (1996). Developments in satisfaction-research. Social Indicators Research, 37(1), 1-46.
    Voudouris, K., Crosby, M., Beyret, B., Hernández-Orallo, J., Shanahan, M., Halina, M., & Cheke, L. G. (2020). Direct human-AI comparison in the animal-AI environment. Frontiers in Psychology, 13, 711821.
    Wu, P. H., Kuo, C. Y., Wu, H. K., Jen, T. H., & Hsu, Y. S. (2018). Learning benefits of secondary school students' inquiry-related curiosity: A cross-grade comparison of the relationships among learning experiences, curiosity, engagement, and inquiry abilities. Science Education, 102, 917-950.
