| Graduate Student: | 林鼎國 (Ting-Kuo Lin) |
|---|---|
| Thesis Title: | 基於類神經網路之即時虛擬樂器演奏系統 (Real-Time Virtual Instruments Based on Neural Network System) |
| Advisor: | 施國琛 (Timothy K. Shih) |
| Oral Defense Committee: | |
| Degree: | Master |
| Department: | Department of Computer Science & Information Engineering, College of Information and Electrical Engineering |
| Year of Publication: | 2015 |
| Graduation Academic Year: | 103 |
| Language: | English |
| Number of Pages: | 74 |
| Chinese Keywords: | Gesture Recognition, Neural Networks, Virtual Instruments |
| Foreign Keywords: | Virtual Instruments |
In recent years, research on human-computer interaction (HCI) has appeared ever more frequently in our daily lives. These applications and studies not only make life more convenient and improve users' working efficiency, but also reduce costs. Through different kinds of HCI, users can convey their needs to a computer in a more natural, faster, and more intuitive way. With advances in technology and the release of precision devices, computers can now complete an increasingly diverse range of tasks for users. For example, a smartphone launches its applications at a single touch of the panel, and camera devices such as phone cameras, the Kinect, the Leap Motion, or Google Glass can recognize images and pass commands on to the computer, which then carries out the assigned task. Research on interacting with computers through such novel techniques has gradually matured in recent years; this work breaks away from the traditional ways of interacting and communicating with computers and has made HCI a part of modern life.

This thesis presents a method for implementing virtual instruments with the Leap Motion, paired with MIDI software so that the virtual instruments can produce a wider range of timbres. We first train a neural network on several hand-feature attributes, then integrate the trained network into the system to recognize predefined gestures that trigger designated functions. Even with the neural network added, the system still runs in real time.
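The abstract describes pairing the virtual instrument with MIDI software so recognized gestures can trigger notes. As a minimal sketch of that last step, the snippet below builds raw MIDI channel-voice messages from a gesture label; the gesture-to-note mapping is an illustrative assumption, and delivering the bytes to a synthesizer (e.g. through a MIDI port) is omitted.

```python
# Sketch: turning a recognized gesture into raw MIDI messages.
# The byte layout follows the standard MIDI spec; the GESTURE_TO_NOTE
# mapping is hypothetical, not the thesis's actual design.

def note_on(note, velocity=100, channel=0):
    """MIDI Note On: status byte 0x90 | channel, then note and velocity."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def note_off(note, channel=0):
    """MIDI Note Off: status byte 0x80 | channel."""
    return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0])

# Hypothetical mapping from gesture labels to MIDI note numbers.
GESTURE_TO_NOTE = {"index_tap": 60, "middle_tap": 62, "ring_tap": 64}  # C4, D4, E4

msg = note_on(GESTURE_TO_NOTE["index_tap"])
print(msg.hex())  # '903c64': Note On, channel 0, C4, velocity 100
```

In a real system these bytes would be written to a virtual MIDI port that the MIDI software listens on, so the instrument's timbre is chosen by the synthesizer rather than the gesture recognizer.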
Research in the field of Human-Computer Interaction (HCI) has appeared more and more frequently in our lives. These applications not only make our lives more convenient and efficient but also reduce overhead costs. Users can convey commands to a computer more naturally, quickly, and intuitively through different types of HCI applications. With advances in computing technology and precision equipment, computers can complete a wide variety of tasks for users. For example, simply touching a smartphone's panel launches applications such as an alarm clock, navigation, or the camera; likewise, camera devices such as the Kinect, Leap Motion, or Creative Senz3D recognize images and convey commands to the computer, which then completes the assigned work on receiving them. In recent years, research on innovative HCI technology has matured. These studies break with the traditional way of interacting with a computer and have made HCI a part of everyday life.
In this thesis, the Leap Motion is used to capture users' hand information, and their hand gestures are then recognized to reduce the burden of operating a virtual instrument. We train a neural network to analyze the information captured from the Leap Motion and then convey predefined commands to the computer. Finally, this thesis shows that the system remains real-time and stable.
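The recognition step described above — a neural network mapping hand features to a predefined gesture — can be sketched as a small feed-forward network. Everything here is an illustrative assumption (layer sizes, feature dimensionality, random untrained weights), not the thesis's actual architecture:

```python
# Hypothetical sketch of the gesture-classification step: a 1-hidden-layer
# MLP maps a vector of hand features (e.g. fingertip positions and palm
# orientation from the Leap Motion) to one of several predefined gestures.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GestureNet:
    """Tiny MLP: features -> hidden layer -> gesture scores."""
    def __init__(self, n_features, n_hidden, n_gestures, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_features, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_gestures))
        self.b2 = np.zeros(n_gestures)

    def predict(self, features):
        h = sigmoid(features @ self.W1 + self.b1)   # hidden activations
        scores = h @ self.W2 + self.b2              # one score per gesture
        return int(np.argmax(scores))               # index of recognized gesture

# Example: classify a made-up 10-dimensional hand-feature vector into one
# of 3 predefined gestures, then map it to an illustrative command name.
net = GestureNet(n_features=10, n_hidden=8, n_gestures=3)
gesture = net.predict(np.zeros(10))
commands = ["note_on", "note_off", "change_instrument"]  # illustrative only
print(commands[gesture])
```

In practice the weights would be learned from labeled recordings of the hand features, and because a single forward pass is only a few small matrix products, the classifier adds negligible latency — consistent with the abstract's claim that the system stays real-time.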