| Graduate Student: | Shao-Gang Lee (李少剛) |
|---|---|
| Thesis Title: | The impact of different video presentation styles and whether the content is generated by ChatGPT on engagement (Chinese title: 生成式AI影片與觀影者投入關係之研究) |
| Advisor: | Ken Chen (陳炫碩) |
| Committee Members: | |
| Degree: | Master |
| Department: | Department of Business Administration, College of Management |
| Publication Year: | 2024 |
| Graduation Academic Year: | 112 |
| Language: | Chinese |
| Pages: | 45 |
| Chinese Keywords: | generative AI, engagement, cognitive engagement, behavioral engagement, emotional engagement, large language model |
| Foreign Keywords: | cognitive engagement, emotional engagement |
In recent years, the rapid development of large language models (LLMs) such as ChatGPT and Claude has demonstrated outstanding performance across many domains. LLMs can understand and generate human language, supporting applications that range from answering everyday questions to drafting articles, making them something close to an all-purpose tutor. This trend raises the question of how learning is affected when learners know that the instructional content was generated by an LLM. In addition, video is a common medium for conveying information, and different video presentation styles, such as adding an instructor or visual cues, affect learning outcomes in different ways.

This study collected 180 questionnaires through an online form and conducted regression analyses to examine how video presentation style and disclosure condition affect engagement. Specifically, there were two research purposes. First, under a given disclosure condition (whether participants were told that the video script was generated by ChatGPT), we compared whether different presentation styles produce differences in cognitive, behavioral, and emotional engagement, in order to identify which style better promotes learner engagement. Second, under a given presentation style, we compared whether the disclosure conditions produce differences in the three types of engagement, in order to assess whether rapid content generation by LLMs can be adopted to reduce the workload of human content writers.

The results show that adding an instructor to the video has no significant effect on cognitive, behavioral, or emotional engagement. Adding visual cues significantly improves cognitive and behavioral engagement, but makes no significant difference in emotional engagement. A further result shows that whether or not participants were told that the video script was generated by ChatGPT has no significant effect on any of the three types of engagement.
In recent years, large language models (LLMs) have developed rapidly and demonstrated outstanding performance in multiple domains. This trend prompts us to consider how learners' engagement is affected when they are informed that the content was generated by an LLM. Additionally, video is a popular medium for conveying information, and different video presentation styles, such as incorporating visual cues, can have different impacts on engagement.
This study investigates the effects of different video presentation styles and disclosure conditions on engagement. Specifically, there are two research purposes. First, we compared engagement across different video presentation styles to determine which styles better enhance engagement. Second, we compared the three types of engagement across disclosure conditions to discover whether LLM-generated content can be adopted without harming engagement.
The results showed that incorporating an instructor in the videos did not significantly affect engagement. Adding visual cues significantly enhanced cognitive and behavioral engagement, but did not significantly affect emotional engagement. A further finding revealed that participants' engagement was not significantly affected regardless of whether they were informed that the lecture scripts were generated by ChatGPT.
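The regression analysis described above can be sketched with dummy-coded experimental conditions. This is a minimal illustration only: the variable names, the simulated data, and the effect sizes (e.g., a 0.5-point visual-cue effect) are assumptions for demonstration, not the study's actual estimates or questionnaire items.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 180  # number of questionnaires collected in the study

# Dummy-coded conditions (1 = condition present, 0 = absent)
instructor = rng.integers(0, 2, n)  # instructor shown in the video
cues = rng.integers(0, 2, n)        # visual cues added to the video
disclosed = rng.integers(0, 2, n)   # told the script was ChatGPT-generated

# Simulated cognitive-engagement score: only visual cues have an effect,
# mirroring the direction of the reported findings (illustrative values)
cog = 3.0 + 0.5 * cues + 0.0 * instructor + 0.0 * disclosed \
      + rng.normal(0.0, 1.0, n)

# Ordinary least squares via the design matrix [1, instructor, cues, disclosed]
X = np.column_stack([np.ones(n), instructor, cues, disclosed])
beta, *_ = np.linalg.lstsq(X, cog, rcond=None)
print(dict(zip(["intercept", "instructor", "cues", "disclosed"],
               beta.round(2))))
```

In the actual study the same kind of model would be fitted separately for each outcome (cognitive, behavioral, and emotional engagement), with significance judged from the coefficient tests rather than the point estimates alone.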