| Graduate Student: | 陳于庭 Yu-Ting Chen |
|---|---|
| Thesis Title: | 應用情境對使用者相信AI程度之影響 (Examining the Influence of Application Context on Users' Trust in AI) |
| Advisor: | 陳炫碩 Shiuann-shuoh Chen |
| Committee Members: | |
| Degree: | Master |
| Department: | College of Management, Department of Business Administration |
| Year of Publication: | 2025 |
| Graduation Academic Year: | 113 (ROC calendar) |
| Language: | Chinese |
| Pages: | 45 |
| Chinese Keywords: | 人工智慧信任 (trust in AI), 應用情境 (application context), AI素養 (AI literacy), 風險程度 (risk level), 推薦系統 (recommendation systems) |
| English Keywords: | AI trust, recommendation systems, application context, AI literacy, perceived risk |
This study investigates users' trust in artificial intelligence (AI) recommendation systems across different application contexts, and further examines whether individual AI literacy and task risk level moderate this relationship. As AI technology increasingly permeates everyday life, trust has become a key factor in whether users accept AI recommendations, and the attributes of the application domain, together with user characteristics, may significantly shape how trust is formed. To this end, this study designed a scenario-manipulation experimental questionnaire: application context was divided into utilitarian and hedonic types, risk level into high and low, and AI literacy was treated as a continuous moderator. A total of 325 valid responses were collected, and the hypotheses were tested using multiple linear regression.
The empirical results show that: (1) application context has a significant main effect, with utilitarian tasks producing significantly higher trust in AI recommendations than hedonic tasks; (2) there is a significant interaction between AI literacy and application context: users with high AI literacy show higher trust in utilitarian contexts but little change in hedonic contexts, indicating that AI literacy acts as a conditional moderator; (3) risk level also moderates the relationship between application context and trust: under low risk, utilitarian contexts significantly increase trust, whereas under high risk the effect weakens and the two contexts converge.
These findings not only extend prior literature on the interaction between application attributes and user characteristics, but also highlight that AI deployment strategies should tailor interface design and trust-building approaches to context type and risk sensitivity, offering important theoretical and practical contributions to AI system development and trust management.
This study investigates how users' trust in artificial intelligence (AI) recommendation systems varies across different application contexts and whether AI literacy and perceived risk level moderate this relationship. As AI technologies increasingly integrate into everyday decision-making, trust plays a critical role in user acceptance. However, how contextual attributes and individual characteristics interact to shape trust remains underexplored. To address this gap, this research adopts a scenario-based experimental design, manipulating application type (utilitarian vs. hedonic) and task risk (high vs. low), with AI literacy measured as a continuous moderator. A total of 325 valid responses were collected and analyzed using multiple linear regression.
The results reveal that: (1) application context significantly affects trust, with utilitarian tasks generating higher trust in AI recommendations compared to hedonic ones; (2) a significant interaction between AI literacy and application type indicates that individuals with higher AI literacy exhibit greater trust in utilitarian contexts, but show little change in hedonic scenarios, suggesting a conditional moderation effect; (3) perceived risk also moderates the effect of application type on trust—under low-risk conditions, utilitarian contexts significantly enhance trust, while under high-risk conditions, the difference diminishes.
This study enriches the understanding of the interplay between task attributes and user traits in shaping trust in AI. The findings highlight the importance of adaptive trust design in AI systems, suggesting that interface strategies should be tailored according to context type and user sensitivity to risk, offering valuable theoretical and practical implications for AI deployment and trust calibration.
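The moderated regression described in the abstract (trust regressed on application context, with AI literacy and risk level entered as moderators via interaction terms) can be sketched in Python. This is a minimal illustration on simulated stand-in data, not the thesis's actual dataset; the column names (`context`, `risk`, `ai_literacy`, `trust`), coding scheme, and effect sizes are all assumptions chosen only to mirror the pattern of results the abstract reports.

```python
# Sketch of a moderation analysis via multiple linear regression.
# All variable names and effect sizes are hypothetical stand-ins.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 325  # sample size reported in the abstract

# Simulated data: context (1 = utilitarian, 0 = hedonic),
# risk (1 = high, 0 = low), ai_literacy (continuous, standardized).
df = pd.DataFrame({
    "context": rng.integers(0, 2, n),
    "risk": rng.integers(0, 2, n),
    "ai_literacy": rng.standard_normal(n),
})
# Generate trust with the pattern the study reports: a positive main effect
# of utilitarian context, amplified by AI literacy, dampened by high risk.
df["trust"] = (
    4.0
    + 0.6 * df["context"]
    + 0.3 * df["context"] * df["ai_literacy"]
    - 0.4 * df["context"] * df["risk"]
    + rng.standard_normal(n) * 0.5
)

# The two two-way interactions (context x AI literacy, context x risk)
# operationalize the moderation hypotheses.
model = smf.ols("trust ~ context * ai_literacy + context * risk", data=df).fit()
print(model.summary().tables[1])
```

In this specification, a significant positive `context` coefficient corresponds to the main effect in finding (1), a positive `context:ai_literacy` coefficient to finding (2), and a negative `context:risk` coefficient to finding (3).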