| Graduate Student: | 曹程富 Cheng-Fu Cao |
|---|---|
| Thesis Title: | 發展深度學習為基礎之即時腦波人機介面於元宇宙環境下的應用 (Development and Application of a Real-Time Brain-Computer Interface Based on Deep Learning in the Metaverse Environment) |
| Advisor: | 李柏磊 Po-Lei Lee |
| Oral Committee: | |
| Degree: | Master |
| Department: | College of Electrical Engineering and Computer Science - Department of Electrical Engineering |
| Year of Publication: | 2023 |
| Academic Year of Graduation: | 112 |
| Language: | Chinese |
| Pages: | 64 |
| Chinese Keywords: | 腦機介面、想像運動、虛擬實境、動作觀察、深度學習、持續學習 |
| Keywords: | Brain-computer interface, Motor imagery, Virtual reality, Action observation, Deep learning, Continual learning |
Motor imagery (MI) is a common control paradigm for brain-computer interfaces (BCIs), a field with a mature body of research and many deployed applications. Nevertheless, MI-BCI systems still face several challenges: subjects must undergo lengthy training before they can operate the system, which greatly increases the time cost, and EEG signals are highly variable and non-stationary, differing both over time and across subjects. This study therefore proposes an MI training system combined with virtual reality (VR) covering four MI classes (left hand, right hand, both feet, and rest). The system is built in two stages: offline data collection and real-time feedback. In the offline stage, subjects observe the movements of a virtual avatar in the VR environment to assist MI execution, and the offline data are used to train a deep learning network that serves as the basis for subsequent real-time MI classification. In the real-time feedback stage, subjects use MI to steer the avatar walking through the metaverse in real time. We introduce the concept of continual learning, fine-tuning the model parameters with the feedback data to continuously improve its performance. Five subjects participated in the experiment. The offline model reached an average accuracy of 52.8%, outperforming the models from related work (S3T, EEGNet, DeepConvNet, and ShallowConvNet), and the real-time feedback model improved from an average accuracy of 47.4% to 66.2%, a gain of 18.8 percentage points. We also analyzed the offline and feedback data using ERD/ERS; the results indicate that action observation facilitates MI execution and that the data captured by the model during the real-time feedback stage are reasonable and interpretable. The proposed system is expected to serve as a new direction for MI-BCI training.
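The ERD/ERS analysis mentioned above follows the classic band-power comparison of Pfurtscheller and Lopes da Silva [48]: band power in a task window is expressed as a percentage change relative to a pre-cue reference window, with negative values indicating event-related desynchronization (ERD) and positive values event-related synchronization (ERS). A minimal sketch, assuming a single band-pass-filtered channel (e.g. the mu band, 8-12 Hz) and hypothetical window indices; the function name and slicing scheme are illustrative, not the thesis implementation:

```python
def erd_ers_percent(signal, ref_slice, task_slice):
    """Relative band-power change (ERD/ERS) in percent.

    signal: band-pass-filtered samples from one EEG channel
    ref_slice: sample indices of the reference (pre-cue) window
    task_slice: sample indices of the task (motor-imagery) window
    Returns (A - R) / R * 100, where R and A are the mean power
    in the reference and task windows. Negative -> ERD, positive -> ERS.
    """
    def band_power(samples):
        # mean squared amplitude as a simple band-power estimate
        return sum(x * x for x in samples) / len(samples)

    r = band_power(signal[ref_slice])
    a = band_power(signal[task_slice])
    return (a - r) / r * 100.0
```

For example, a mu-band amplitude that halves during imagery relative to rest yields roughly -75% (power scales with amplitude squared), the power drop over sensorimotor cortex that the offline and feedback analyses look for.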
[1] S. Saha et al., "Progress in brain computer interface: Challenges and opportunities," Frontiers in Systems Neuroscience, vol. 15, p. 578875, 2021.
[2] F. Turi, M. Clerc, and T. Papadopoulo, "Long multi-stage training for a motor-impaired user in a BCI competition," Frontiers in Human Neuroscience, vol. 15, p. 647908, 2021.
[3] S. Saha and M. Baumert, "Intra- and inter-subject variability in EEG-based sensorimotor brain computer interface: a review," Frontiers in Computational Neuroscience, vol. 13, p. 87, 2020.
[4] H. Morioka et al., "Learning a common dictionary for subject-transfer decoding with resting calibration," NeuroImage, vol. 111, pp. 167-178, 2015.
[5] M. Tangermann et al., "Review of the BCI competition IV," Frontiers in Neuroscience, p. 55, 2012.
[6] Y. Song, Q. Zheng, B. Liu, and X. Gao, "EEG Conformer: Convolutional transformer for EEG decoding and visualization," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 31, pp. 710-719, 2022.
[7] H. Adeli and S. Ghosh-Dastidar, Automated EEG-Based Diagnosis of Neurological Disorders: Inventing the Future of Neurology. CRC Press, 2010.
[8] S. L. Oh, Y. Hagiwara, U. Raghavendra, R. Yuvaraj, N. Arunkumar, M. Murugappan, and U. R. Acharya, "A deep learning approach for Parkinson's disease diagnosis from EEG signals," Neural Computing and Applications, vol. 32, pp. 10927-10933, 2020.
[9] S. J. Smith, "EEG in the diagnosis, classification, and management of patients with epilepsy," Journal of Neurology, Neurosurgery & Psychiatry, vol. 76, no. suppl 2, pp. ii2-ii7, 2005.
[10] R. Mane, T. Chouhan, and C. Guan, "BCI for stroke rehabilitation: motor and beyond," Journal of Neural Engineering, vol. 17, no. 4, p. 041001, 2020.
[11] P. Chowdhury, S. K. Shakim, M. R. Karim, and M. K. Rhaman, "Cognitive efficiency in robot control by Emotiv EPOC," in 2014 International Conference on Informatics, Electronics & Vision (ICIEV), 2014: IEEE, pp. 1-6.
[12] O. Hawsawi and S. K. Semwal, "EEG headset supporting mobility impaired gamers with game accessibility," in 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2014: IEEE, pp. 837-841.
[13] T. Xu, Y. Zhou, Z. Wang, and Y. Peng, "Learning emotions EEG-based recognition and brain activity: A survey study on BCI for intelligent tutoring system," Procedia Computer Science, vol. 130, pp. 376-382, 2018.
[14] A. Frisoli, C. Loconsole, D. Leonardis, F. Banno, M. Barsotti, C. Chisari, and M. Bergamasco, "A new gaze-BCI-driven control of an upper limb exoskeleton for rehabilitation in real-world tasks," IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 42, no. 6, pp. 1169-1179, 2012.
[15] G. Rizzolatti, L. Fadiga, V. Gallese, and L. Fogassi, "Premotor cortex and the recognition of motor actions," Cognitive Brain Research, vol. 3, no. 2, pp. 131-141, 1996.
[16] V. Gallese, L. Fadiga, L. Fogassi, and G. Rizzolatti, "Action recognition in the premotor cortex," Brain, vol. 119, no. 2, pp. 593-609, 1996.
[17] G. Di Pellegrino, L. Fadiga, L. Fogassi, V. Gallese, and G. Rizzolatti, "Understanding motor events: a neurophysiological study," Experimental Brain Research, vol. 91, pp. 176-180, 1992.
[18] M. Franceschini, M. Agosti, A. Cantagallo, P. Sale, M. Mancuso, and G. Buccino, "Mirror neurons: action observation treatment as a tool in stroke rehabilitation," European Journal of Physical and Rehabilitation Medicine, vol. 46, no. 4, pp. 517-523, 2010.
[19] C. F. Berrol, "Neuroscience meets dance/movement therapy: Mirror neurons, the therapeutic process and empathy," The Arts in Psychotherapy, vol. 33, no. 4, pp. 302-315, 2006.
[20] L. Q. Uddin, M. Iacoboni, C. Lange, and J. P. Keenan, "The self and social cognition: the role of cortical midline structures and mirror neurons," Trends in Cognitive Sciences, vol. 11, no. 4, pp. 153-157, 2007.
[21] F. Filimon, J. D. Nelson, D. J. Hagler, and M. I. Sereno, "Human cortical representations for reaching: mirror neurons for execution, observation, and imagery," NeuroImage, vol. 37, no. 4, pp. 1315-1328, 2007.
[22] S. Vogt, F. Di Rienzo, C. Collet, A. Collins, and A. Guillot, "Multiple roles of motor imagery during action observation," Frontiers in Human Neuroscience, vol. 7, p. 807, 2013.
[23] J. J. Gonzalez-Rosa et al., "Action observation and motor imagery in performance of complex movements: Evidence from EEG and kinematics analysis," Behavioural Brain Research, vol. 281, pp. 290-300, 2015.
[24] D. L. Eaves, M. Riach, P. S. Holmes, and D. J. Wright, "Motor imagery during action observation: a brief review of evidence, theory and future research opportunities," Frontiers in Neuroscience, vol. 10, p. 514, 2016.
[25] P. Holmes and C. Calmels, "A neuroscientific review of imagery and observation use in sport," Journal of Motor Behavior, vol. 40, no. 5, pp. 433-445, 2008.
[26] D. Wen, B. Liang, Y. Zhou, H. Chen, and T.-P. Jung, "The current research of combining multi-modal brain-computer interfaces with virtual reality," IEEE Journal of Biomedical and Health Informatics, vol. 25, no. 9, pp. 3278-3287, 2020.
[27] Y. Zinchenko et al., "Virtual reality is more efficient in learning human heart anatomy especially for subjects with low baseline knowledge," New Ideas in Psychology, vol. 59, p. 100786, 2020.
[28] J. A. Pineda, D. S. Silverman, A. Vankov, and J. Hestenes, "Learning to control brain rhythms: making a brain-computer interface possible," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 11, no. 2, pp. 181-184, 2003.
[29] A. Vourvopoulos and S. Bermúdez i Badia, "Motor priming in virtual reality can augment motor-imagery training efficacy in restorative brain-computer interaction: a within-subject analysis," Journal of Neuroengineering and Rehabilitation, vol. 13, no. 1, pp. 1-14, 2016.
[30] F. Škola and F. Liarokapis, "Embodied VR environment facilitates motor imagery brain-computer interface training," Computers & Graphics, vol. 75, pp. 59-71, 2018.
[31] J. W. Choi, B. H. Kim, S. Huh, and S. Jo, "Observing actions through immersive virtual reality enhances motor imagery training," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 28, no. 7, pp. 1614-1622, 2020.
[32] G. Pfurtscheller, R. Scherer, R. Leeb, C. Keinrath, C. Neuper, F. Lee, and H. Bischof, "Viewing moving objects in virtual reality can change the dynamics of sensorimotor EEG rhythms," Presence, vol. 16, no. 1, pp. 111-118, 2007.
[33] J. W. Choi, S. Huh, and S. Jo, "Improving performance in motor imagery BCI-based control applications via virtually embodied feedback," Computers in Biology and Medicine, vol. 127, p. 104079, 2020.
[34] R. S. Calabrò et al., "The role of virtual reality in improving motor performance as revealed by EEG: a randomized clinical trial," Journal of Neuroengineering and Rehabilitation, vol. 14, no. 1, pp. 1-16, 2017.
[35] S. Aggarwal and N. Chugh, "Signal processing techniques for motor imagery brain computer interface: A review," Array, vol. 1, p. 100003, 2019.
[36] V. J. Lawhern, A. J. Solon, N. R. Waytowich, S. M. Gordon, C. P. Hung, and B. J. Lance, "EEGNet: a compact convolutional neural network for EEG-based brain-computer interfaces," Journal of Neural Engineering, vol. 15, no. 5, p. 056013, 2018.
[37] R. Zhang, Q. Zong, L. Dou, and X. Zhao, "A novel hybrid deep learning scheme for four-class motor imagery classification," Journal of Neural Engineering, vol. 16, no. 6, p. 066004, 2019.
[38] A. Vaswani et al., "Attention is all you need," Advances in Neural Information Processing Systems, vol. 30, 2017.
[39] D. Kostas, S. Aroca-Ouellette, and F. Rudzicz, "BENDR: using transformers and a contrastive self-supervised learning task to learn from massive amounts of EEG data," Frontiers in Human Neuroscience, vol. 15, p. 653659, 2021.
[40] Y. Song, X. Jia, L. Yang, and L. Xie, "Transformer-based spatial-temporal feature learning for EEG decoding," arXiv preprint arXiv:2106.11170, 2021.
[41] J. Xie et al., "A transformer-based approach combining deep learning network and spatial-temporal information for raw EEG classification," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 30, pp. 2126-2136, 2022.
[42] G. I. Parisi, R. Kemker, J. L. Part, C. Kanan, and S. Wermter, "Continual lifelong learning with neural networks: A review," Neural Networks, vol. 113, pp. 54-71, 2019.
[43] G. M. Van de Ven and A. S. Tolias, "Three scenarios for continual learning," arXiv preprint arXiv:1904.07734, 2019.
[44] A. Buttfield, P. W. Ferrez, and J. R. Millan, "Towards a robust BCI: error potentials and online learning," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 14, no. 2, pp. 164-168, 2006.
[45] M. K. Hazrati and A. Erfanian, "An online EEG-based brain-computer interface for controlling hand grasp using an adaptive probabilistic neural network," Medical Engineering & Physics, vol. 32, no. 7, pp. 730-739, 2010.
[46] C. Brunner, R. Leeb, G. Müller-Putz, A. Schlögl, and G. Pfurtscheller, "BCI Competition 2008–Graz data set A," Institute for Knowledge Discovery (Laboratory of Brain-Computer Interfaces), Graz University of Technology, vol. 16, pp. 1-6, 2008.
[47] R. T. Schirrmeister et al., "Deep learning with convolutional neural networks for EEG decoding and visualization," Human Brain Mapping, vol. 38, no. 11, pp. 5391-5420, 2017.
[48] G. Pfurtscheller and F. L. Da Silva, "Event-related EEG/MEG synchronization and desynchronization: basic principles," Clinical Neurophysiology, vol. 110, no. 11, pp. 1842-1857, 1999.
[49] L. Van der Maaten and G. Hinton, "Visualizing data using t-SNE," Journal of Machine Learning Research, vol. 9, no. 11, 2008.
[50] Z. K. Agnew, R. J. Wise, and R. Leech, "Dissociating object directed and non-object directed action in the human mirror system; implications for theories of motor simulation," PLoS One, vol. 7, no. 4, p. e32517, 2012.
[51] N. Braun, S. Debener, N. Spychala, E. Bongartz, P. Sörös, H. H. Müller, and A. Philipsen, "The senses of agency and ownership: a review," Frontiers in Psychology, vol. 9, p. 535, 2018.
[52] H. Nagai and T. Tanaka, "Action observation of own hand movement enhances event-related desynchronization," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 27, no. 7, pp. 1407-1415, 2019.
[53] M. Song and J. Kim, "A paradigm to enhance motor imagery using rubber hand illusion induced by visuo-tactile stimulus," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 27, no. 3, pp. 477-486, 2019.
[54] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, "Brain-computer interfaces for communication and control," Clinical Neurophysiology, vol. 113, no. 6, pp. 767-791, 2002.
[55] E. Niedermeyer and F. L. da Silva, Electroencephalography: Basic Principles, Clinical Applications, and Related Fields. Lippincott Williams & Wilkins, 2005.