
Graduate Student: 葉書宇 (Shu-Yu Ye)
Title: Can Anthropomorphic Chatbots Rebuild Trust After Recommendation Failure? — The Mediating Roles of Social Presence and Recovery Expectations
Advisor: 洪秀婉 (Shiu-Wan Hung)
Committee Members:
Degree: Master's
Department: College of Management, Department of Business Administration
Year of Publication: 2025
Graduation Academic Year: 113 (ROC calendar)
Language: Chinese
Pages: 85
Keywords (Chinese): 擬人化聊天機器人, 心理期望整合模型, 社會臨場感, 期望理論, 任務關鍵性
Keywords (English): Anthropomorphic Chatbots, Psychological Expectation Integration Model, Social Presence, Recovery Expectation Theory, Task Criticality
Views: 13; Downloads: 0
Abstract:

    As AI technologies mature, anthropomorphic chatbots are increasingly used in service interactions to enhance user experience and emotional engagement. However, algorithmic errors in recommendation services not only undermine functional trust but also amplify emotional disappointment due to unmet expectations, negatively affecting overall service evaluations and usage intentions. Understanding how to rebuild user trust by uncovering users' psychological processes after recommendation failures is therefore crucial. This study employs Social Presence Theory to distinguish mentalization from self-referencing, integrates Recovery Expectation Theory and trust theory to develop a Psychological Expectation Integration Model, and incorporates task criticality as a moderator to explain user responses in anthropomorphic-chatbot recommendation-failure scenarios. Using a mix of online and paper questionnaires, 454 valid responses were collected and analyzed via structural equation modeling. Results reveal that anthropomorphic design must operate through self-referencing to elicit social presence, which in turn shapes users' recovery expectations; this confirms that when interaction elicits users' emotional projection and social connection, they more readily attribute a human mind to the chatbot. Emotional recovery expectations exert the strongest positive effect on trust reconstruction, indicating that users particularly value empathetic understanding that alleviates negative emotions. Task criticality negatively moderates only the link between functional recovery expectations and trust, suggesting that functional compensation meets rational resistance in high-risk tasks, whereas the affective trust built by emotional compensation is less sensitive to task risk. These findings clarify the mediating roles of social presence and recovery expectations in rebuilding trust after recommendation failures and offer theoretical and practical guidance for anthropomorphic chatbot design and compensation strategies.
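The abstract's analysis combines mediation (anthropomorphism works through self-referencing to produce social presence) with a moderated path (task criticality weakening the functional-expectation-to-trust link). A minimal toy simulation can sketch that structure. Everything below (variable names, effect sizes, and the use of plain least squares in place of the study's full structural equation model) is an illustrative assumption, not the thesis's data or estimates:

```python
import numpy as np

# Hypothetical data mimicking the model's path structure; effect sizes are
# invented for illustration only.
rng = np.random.default_rng(42)
n = 454  # matches the study's valid sample size

anthro = rng.normal(size=n)                     # anthropomorphic design cues
self_ref = 0.6 * anthro + rng.normal(size=n)    # self-referencing (mediator)
presence = 0.5 * self_ref + rng.normal(size=n)  # social presence
func_exp = 0.4 * presence + rng.normal(size=n)  # functional recovery expectation
crit = rng.normal(size=n)                       # task criticality (moderator)
trust = 0.5 * func_exp - 0.3 * func_exp * crit + rng.normal(size=n)

def ols(y, *cols):
    """Least-squares coefficients, with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Mediation: indirect effect of anthropomorphism on social presence is the
# product of the a-path (anthro -> self_ref) and b-path (self_ref -> presence,
# controlling for anthro).
a = ols(self_ref, anthro)[1]
b = ols(presence, self_ref, anthro)[1]
indirect = a * b

# Moderation: a negative interaction coefficient means criticality dampens the
# functional-expectation -> trust link, as the study reports.
beta = ols(trust, func_exp, crit, func_exp * crit)
print("indirect effect:", indirect)
print("interaction coefficient:", beta[3])
```

In the thesis itself these relationships were estimated jointly via structural equation modeling on latent survey constructs; the OLS decomposition above only conveys the logic of the mediation and moderation tests.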

    Table of Contents
    Abstract (Chinese)
    Abstract (English)
    Acknowledgements
    List of Figures
    List of Tables
    Chapter 1: Introduction
        1-1 Research Background and Motivation
        1-2 Research Objectives
        1-3 Research Process
    Chapter 2: Literature Review
        2.1 Anthropomorphism
            2.1.1 Chatbots
            2.1.2 Recommendation Services
        2.2 Psychological Expectation Integration Model
            2.2.1 Social Presence
            2.2.2 Recovery Expectation Theory
        2.4 Task Criticality
        2.5 Trust
    Chapter 3: Research Methodology
        3-1 Research Framework
        3-2 Research Hypotheses
        3-3 Operational Definitions of Constructs and Variables
        3-4 Sample and Questionnaire
            3-4-1 Sample
            3-4-2 Questionnaire
        3-5 Statistical Methods
            3-5-1 Reliability Analysis
            3-5-2 Validity Testing
            3-5-3 Hypothesis Testing
    Chapter 4: Data Analysis and Hypothesis Verification
        4-1 Sample Demographics
        4-2 Descriptive Statistics of Research Constructs
        4-3 Reliability and Validity of the Measurement Model
            4-3-1 Reliability Analysis
            4-3-2 Validity Analysis
        4-4 Structural Model Path Analysis
            4-4-1 Research Model and Model Fit Tests
            4-4-2 Path Analysis
            4-4-3 Mediation Effect Analysis
            4-4-4 Empirical Discussion
    Chapter 5: Conclusions
        5-1 Research Findings
        5-2 Managerial Implications
            5-2-1 Academic
            5-2-2 Practical
            5-2-3 Policy
        5-3 Limitations and Suggestions for Future Research
    References
    Appendix: Questionnaire

