| Graduate Student: | 游凱亘 Kai-Hsuan Yu |
|---|---|
| Thesis Title: | Study on the Relationship between Performance Expectations and Satisfaction with Artificial Intelligence: Expectancy Disconfirmation or Placebo Effect |
| Advisor: | 陳炫碩 Shiuann-Shuoh Chen |
| Oral Committee: | |
| Degree: | Master |
| Department: | College of Management - Department of Business Administration |
| Year of Publication: | 2025 |
| Academic Year of Graduation: | 113 |
| Language: | Chinese |
| Pages: | 80 |
| Keywords: | Expectancy Disconfirmation, Placebo Effect, Artificial Intelligence, Expectations, Perceived Quality, Satisfaction |
This study investigates the relationship between expectations and satisfaction
regarding the work performance of Artificial Intelligence (AI), with a particular focus
on Expectancy Disconfirmation and the Placebo Effect. In today’s society, AI
technologies are widely applied across various domains, ranging from simple daily
tasks to complex professional fields. The emergence of generative AI has significantly
transformed people’s lives. However, a gap remains between societal expectations of
AI and the satisfaction derived from its performance, especially when AI and
humans achieve similar performance outcomes, yet are evaluated by different
standards.
The main objective of this study is to examine how participants’ expectations,
perceived quality, and satisfaction differ between AI and human performers under
different scenarios, and to test the presence of expectancy disconfirmation and the
placebo effect. An experimental questionnaire was designed in which participants
were randomly divided into two groups. Each group was told that the performance in
the scenario was completed either by a human or an AI, and their responses were
measured across two different tasks.
The results indicate that in both scenarios, high expectations of AI significantly
increased perceived quality, suggesting the presence of the placebo effect. Conversely,
high expectations of human experts did not significantly enhance perceived quality,
and no placebo effect was observed, supporting the theory of expectancy
disconfirmation. Additionally, the study found that while expectations had no significant direct effect on satisfaction in any scenario, perceived quality had a
significant positive effect on satisfaction.
These findings contribute to a better understanding of the psychological mechanisms
behind AI application and offer valuable insights for improving the integration and
acceptance of AI technologies in various fields.