
Graduate Student: Shih-Chieh Lin (林士傑)
Thesis Title: The Development of a Vision-Based Meal-Assistant Robot (以視覺為基礎之餵食機器人)
Advisor: Mu-chun Su (蘇木春)
Committee Members:
Degree: Master
Department: College of Electrical Engineering & Computer Science, Department of Computer Science & Information Engineering
Graduation Academic Year: 95 (ROC calendar, i.e. 2006-07)
Language: Chinese
Pages: 70
Chinese Keywords: meal-assistance robot, lip recognition, head-controlled mouse, back-propagation algorithm, neural network
English Keywords: neural network, back-propagation algorithm, head mouse, lip detection, meal-assistance robot
Views: 8; Downloads: 0
Abstract (translated from Chinese):
A meal-assistant robot is a robot that enables its user to eat independently. Eating is something everyone must do at every meal, yet people with physical disabilities must rely on someone else's help each time they eat, which burdens family members and caregivers. We therefore aim to replace this manual assistance with a feeding robot, both easing the caregivers' load and increasing the user's autonomy.
The vision-based automatic feeding robot is a 5-degree-of-freedom (5-DOF) robotic arm built from three AI-1001 servo motors and two KRS-2350HV servo motors. A CCD camera captures information about the environment during the meal; a neural-network system processes this information to control the spoon-equipped 5-DOF arm, planning and correcting the arm's trajectory so that the chosen food is delivered to the user's mouth. The user does not need to sit in a fixed eating position: head movements alone are enough to eat independently.
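The vision front end described above must first isolate a target (the user's lips or a food item) from the camera image; the thesis's outline lists color detection followed by erosion and dilation for this. The sketch below is a minimal illustration of that morphological cleanup step on a toy binary mask — the `binary_erode`/`binary_dilate` helpers and the synthetic mask are assumptions for illustration, not the thesis's actual code.

```python
import numpy as np

def binary_erode(mask, k=3):
    """Erosion: keep a pixel only if its whole k x k neighborhood is set."""
    pad = k // 2
    padded = np.pad(mask, pad, mode="constant")
    out = np.ones_like(mask)
    for dy in range(k):
        for dx in range(k):
            out &= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def binary_dilate(mask, k=3):
    """Dilation: set a pixel if any pixel in its k x k neighborhood is set."""
    pad = k // 2
    padded = np.pad(mask, pad, mode="constant")
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

# A noisy binary mask such as color detection might produce:
# one solid blob (the target) plus isolated speckle noise.
mask = np.zeros((10, 10), dtype=np.uint8)
mask[3:7, 2:8] = 1           # the blob we want to keep
mask[0, 0] = mask[9, 9] = 1  # single-pixel noise

# Erosion followed by dilation ("opening") removes the speckles
# while restoring the blob to roughly its original extent.
opened = binary_dilate(binary_erode(mask))
print(opened[0, 0], opened[9, 9])  # → 0 0 (noise removed)
print(opened[4:6, 3:7].min())      # → 1 (blob interior survives)
```

A connected-component labeling pass (section 3-3-5 of the outline) would then run on the cleaned mask to extract each blob's position and size.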


Abstract (English):
The meal-assistance robot is a robot that helps disabled users eat on their own. Everyone must eat every day, but a disabled person may depend on someone else at every meal, which is a burden on caregivers. We hope the robot can take over the feeding task, so that the user becomes more self-determined and the caregiver's load is reduced.
The vision-based meal-assistance robot is a 5-DOF mechanical arm composed of two KRS-2350HV servo motors and three AI-1001 AI Motors. The robot takes environment information from a CCD camera as input and uses a neural network to control the spoon-equipped arm and plan its path. The user needs neither hands to control the robot nor a fixed mouth position for eating: a head-controlled mouse is enough to select the desired food and eat it independently.
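The control idea above — a neural network trained with back-propagation that maps camera measurements to arm commands — can be sketched minimally as follows. Everything here is an assumption for illustration: the network size, learning rate, and the synthetic input/target pairs (stand-ins for calibration data pairing normalized camera coordinates with servo commands) are not the thesis's actual architecture or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training pairs: normalized camera coordinates as inputs,
# corresponding (normalized) servo commands as targets. Real data would
# come from calibration runs; here the target mapping is synthetic.
X = rng.uniform(-1.0, 1.0, (200, 2))
Y = np.tanh(X @ np.array([[0.8, -0.3], [0.2, 0.9]]))  # unknown mapping to learn

# One hidden layer with tanh activations, trained by plain back-propagation.
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 2)); b2 = np.zeros(2)
lr = 0.1

for epoch in range(2000):
    # Forward pass.
    H = np.tanh(X @ W1 + b1)
    P = np.tanh(H @ W2 + b2)
    err = P - Y
    # Backward pass: propagate the error through both tanh layers.
    dP = err * (1.0 - P ** 2)
    dH = (dP @ W2.T) * (1.0 - H ** 2)
    # Gradient-descent updates on averaged gradients.
    W2 -= lr * H.T @ dP / len(X); b2 -= lr * dP.mean(axis=0)
    W1 -= lr * X.T @ dH / len(X); b1 -= lr * dH.mean(axis=0)

mse = float((err ** 2).mean())
print(f"final MSE: {mse:.4f}")
```

In the actual system the trained network would be queried online, with the path-correction step (section 3-5-3 of the outline) compensating for residual error as the spoon approaches the mouth.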

    Table of Contents:
    Abstract (Chinese) i; Abstract (English) ii; Acknowledgments iii; Table of Contents v; List of Figures viii; List of Tables xi
    1. Introduction 1
      1-1 Motivation 1
      1-2 Objectives 2
    2. Related Work on Meal-Assistance Robots 4
      2-1 The Handy1 feeding robot 4
      2-2 The Kanagawa Institute of Technology feeding robot 6
      2-3 A noodle-feeding robot 7
      2-4 The My Spoon feeding robot 8
    3. The Vision-Based Meal-Assistant Robot 10
      3-1 System hardware 14
        3-1-1 Monitoring system 15
        3-1-2 PWM (Pulse Width Modulation) module 17
        3-1-3 AI-1001 servo motor 18
        3-1-4 KRS-2350HV servo motor 21
        3-1-5 Rotation-sensing module 23
      3-2 System software 24
      3-3 Image recognition and tracking input unit 28
        3-3-1 Color space conversion 28
        3-3-2 Smoothing filter 30
        3-3-3 Color detection 31
        3-3-4 Erosion & dilation 32
        3-3-5 Labeling algorithm 34
        3-3-6 Feature extraction 36
        3-3-7 Motion detection 39
        3-3-8 Occlusion handling 40
      3-4 Human-machine interface 41
        3-4-1 Vision-based head-controlled mouse 41
        3-4-2 Vision-based food tray 44
      3-5 Control system 45
        3-5-1 Neural networks 45
        3-5-2 Path planning 50
        3-5-3 Path correction 51
    4. Experimental Environment and Results 54
      4-1 Hardware environment 54
      4-2 System operation flow 55
      4-3 Experimental results 57
        4-3-1 Mouse operation test 57
        4-3-2 Face and lip localization test 58
        4-3-4 Distance error test 61
        4-3-5 Back-propagation training results 62
        4-3-6 Food-scooping time and success rate 64
    5. Conclusions and Future Work 65
      5-1 Conclusions 65
      5-2 Future work 66
    References 68

    [1] M. Betke, J. Gips and P. Fleming, “The camera mouse: visual tracking of body feature to provide computer access for people with severe disabilities,” IEEE Trans. on Neural Systems and Rehabilitation Engineering, vol. 10, no. 1, pp. 1-10, 2002.
    [2] J. Y. Bouguet, “Pyramidal Implementation of the Lucas Kanade Feature Tracker Description of the algorithm,” Intel Corporation Microprocessor Research Laboratory, 1999.
    [3] T. S. Caetano and D. A. C. Barone, “A probabilistic model for the human skin color”, in Proc. of the 11th IEEE Int. Conf. on Image Analysis and Processing, Palermo, 2001, pp. 279–283.
    [4] D. Chai and K. N. Ngan, “Face segmentation using skin-color map in videophone applications,” IEEE Trans. on Circuits and Systems for Video Technology, vol. 9, no. 4, pp. 551-564, 1999.
    [5] Q. C. Chen, G. H. Deng, X. L. Wang, and H. J. Huang, “An Inner Contour Based Lip Moving Feature Extraction Method For Chinese Speech,” in Proc. of the IEEE Fifth Int. Conf. on Machine Learning and Cybernetics, Dalian, Aug. 12-16, 2006, pp. 3859-3864.
    [6] R. S. Feris, T. E. Campos, and R. M. Cesar, “Detection and tracking of facial features in video sequences,” in Mexican Int. Conf. on Artificial Intelligence, 2000, pp. 129-137.
    [7] C. Garcia, G. Zikos, and G. Tziritas, “Face Detection in Color Images using Wavelet Packet Analysis,” in Proc. of the 6th IEEE Int. Conf. on Multimedia Computing and Systems, Florence, 1999, pp. 703-708.
    [8] A. J. Glenstrup and T. E. Nielsen, “Eye Controlled Media: Present and Future State,” Thesis of Bachelor in Information Psychological Laboratory, University of Copenhagen, Denmark, 1995.
    [9] G. Gomez, P. E. Hotz, “Investigations on the robustness of an evolved learning mechanism for a robot arm,” in Proc. of the 8th Int. Conf. on Intelligent Autonomous Systems, Amsterdam, 2004, pp. 818-827.
    [10] R. L. Hsu, M. Abdel-Mottaleb, and A. K. Jain, “Face Detection in Color Images,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 24, pp. 696-706, 2002.
    [11] T. E. Hutchinson, K. P. White, Jr., W. N. Martin, K. C. Reichert, and L. A. Frey, “Human-Computer Interaction Using Eye-Gaze Input,” IEEE Trans. on Systems, Man and Cybernetics, vol. 19, no. 6, pp. 1527-1534, Dec. 1989.
    [12] S. T. Iqbal, X. S. Zheng, and B. P. Bailey, “Task-evoked pupillary response to mental workload in human-computer interaction,” in Conf. on Human Factors in Computing Systems, 2004, pp. 1477-1480.
    [13] S. Ishii, S. Tanaka, and F. Hiramatsu, “Meal Assistance Robot for Severely Handicapped People,” in Proc. of the IEEE Int. Conf. on Robotics and Automation, 1995.
    [14] S. Kawato and J. Ohya, “Two-step approach for real-time eye tracking with a new filtering technique,” in Int. Conf. on Systems, Man and Cybernetics, 2000, pp. 1366-1371.
    [15] Y. Kuniyoshi, Y. Yorozu, M. Inaba, and H. Inoue, “From Visuo-Motor Self Learning to Early Imitation-A Neural Architecture for Humanoid Learning,” in Proc. of the 2003 IEEE Int. Conf. on Robotics & Automation, Taipei, Taiwan, Sep. 14-19, 2003.
    [16] C. Lin and K. C. Fan, “Human Face Detection Using Geometric Triangle Relationship,” in Proc. of the IEEE 15th Int. Conf on Pattern Recognition, 2000, vol. 2, pp. 941-944.
    [17] T. M. Martinetz, H. J. Ritter, and K. J. Schulten, “Three-dimensional neural net for learning visuomotor coordination of a robot arm”, IEEE Trans. on Neural Networks, vol. 1, no. 1, March 1990.
    [18] C. H. Morimoto and M. Flickner, “Real-Time Multiple Face Detection Using Active Illumination,” in Proc. of the Fourth IEEE Int. Conf. on Automatic Face and Gesture Recognition, March 2000, pp. 1-6.
    [19] L. D. Stefano and A. Bulgarelli, “A Simple and Efficient Connected Components Labeling Algorithm,” in Proc. of the 1999 Int. Conf. on Image Analysis and Processing, pp. 322-327.
    [20] M. C. Su, C. H. Chou, E. Lai, and J. Lee, “A New Approach to Fuzzy Classifier Systems and its Application in Self-Generating Neuro-Fuzzy Systems”, Neurocomputing, vol. 69, pp. 584-614, Jan. 2006.
    [21] M. C. Su, S. Y. Su, and G. D. Chen, “A low cost vision-based human-computer interface for people with severe disabilities,” Biomedical Engineering-Applications, Basis, & Communications, vol. 17, no. 6, pp. 10-18, 2005.
    [22] S. H. Yeh, “Human facial animation based on real image sequence”, M.S. thesis, Dept. of Computer Science and Engineering, National Sun Yat-Sen University, Kaohsiung, Taiwan, 2001.
    [23] Dr Robot, Inc., Available: http://www.drrobot.com/
    [24] iRobot Corporation, Available: http://www.irobot.com/
    [25] KONDO KAGAKU Corporation, LTD., Available: http://www.kondo-robot.com/
    [26] LOGITECH Corporation, Available: http://www.logitech.com.tw/index.asp
    [27] MEGAROBICS LTD. Corporation, Available: http://www.megarobotics.com/index_e.htm
    [28] SECOM, Inc., Available: http://www.secom.co.jp/english/myspoon/index.html
    [29] WINBOND Electronics Corporation, Available: http://www.winbond.com/
    [30] 吳成柯, 程湘君, 載善榮, and 雲立實 (trans.), 數位影像處理 [Digital Image Processing], 儒林圖書有限公司, 2001 (in Chinese).
    [31] 蘇木春 and 張孝德, 機器學習:類神經網路、模糊系統以及基因演算法則 [Machine Learning: Neural Networks, Fuzzy Systems, and Genetic Algorithms], 全華科技圖書股份有限公司, 2003 (in Chinese).
