
Student: 徐子勝 (Tzu-sheng Hsu)
Thesis title: 以立體視覺為基礎之機械手臂應用系統
(A Stereo Vision-Based Robot Arm System and Its Applications)
Advisor: 蘇木春 (Mu-Chun Su)
Oral defense committee:
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Computer Science & Information Engineering
Graduation academic year: 98 (2009)
Language: Chinese
Number of pages: 93
Chinese keywords: 多層感知機 (multilayer perceptron), 雙眼立體視覺 (binocular stereo vision), 基因演算法 (genetic algorithm), 攝影機校正 (camera calibration)
Foreign keywords: Genetic Algorithm, binocular stereo vision, camera calibration, MLP
    This thesis presents an application system built around a robot arm. The system employs a robot arm with 5 degrees of freedom (DOF) whose end-effector is a gripper for grasping target objects in space, and uses two cameras to track the target. To locate the target from images, the author proposes a stereo vision-based positioning method for the robot arm. First, a camera calibration procedure estimates the intrinsic and extrinsic parameters of the two cameras; combined with binocular stereo vision, this yields the centroid position of the object in the world coordinate system. Even after calibration, the estimated distance from the object's centroid to the camera centers retains a small error, so the system uses a multilayer perceptron (MLP) to compensate for it. Next, the error between the end-effector coordinates obtained from forward kinematics and the actual target position is computed and used as the fitness function of a genetic algorithm (GA) to infer the angle of each motor, thereby positioning the robot arm. For verification, targets were placed at random positions within the arm's workspace and the algorithm above was used to grasp each target and move it to a destination.
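For an ideal rectified camera pair, the binocular distance estimation described above reduces to similar-triangle triangulation: depth Z = f·B/d, where f is the focal length in pixels, B the baseline, and d the horizontal disparity. A minimal sketch of this relation (not the thesis's actual implementation; the focal length, baseline, and pixel coordinates below are made-up illustrative values):

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Depth of a point from its horizontal pixel positions in a
    rectified stereo pair, via the pinhole model: Z = f * B / d."""
    d = x_left - x_right  # disparity in pixels
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return focal_px * baseline_m / d

# Example: focal length 700 px, baseline 0.12 m, disparity 400 - 372 = 28 px
z = depth_from_disparity(400, 372, focal_px=700.0, baseline_m=0.12)
print(z)  # 700 * 0.12 / 28 = 3.0 m
```

In practice the thesis must first recover f, B, and the lens distortion through the calibration step, and the residual error of this model is what the MLP is trained to absorb.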


    This thesis presents a vision-based robot arm with 5 degrees of freedom and an end-effector attached to the end of the arm for grasping an object. In order to locate a target from the image, a stereo vision-based robot arm system is implemented. A stereo calibration algorithm is adopted for estimating the two cameras' intrinsic and extrinsic parameters. With the estimated parameters, the stereo-vision system can estimate the 3D position of the object in the world coordinate system. A trained multilayer perceptron is then used to compensate for the location estimation errors incurred by the inaccurate parameters estimated from the calibration procedure. Finally, a genetic algorithm (GA) is adopted to solve the inverse kinematics problem: the positioning error computed through forward kinematics serves as the fitness function for finding the angle of each motor. The performance of the robot arm was tested in several real-life experiments of gripping a target at different positions.
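The GA-based inverse kinematics described above can be illustrated on a simplified case. The sketch below uses a hypothetical 2-link planar arm rather than the thesis's 5-DOF arm, and a deliberately basic GA (truncation selection, averaging crossover, Gaussian mutation); link lengths, population size, and mutation rate are made-up values, but the fitness function is the same idea: the Euclidean error between the forward-kinematics endpoint and the target.

```python
import math
import random

random.seed(0)  # reproducible run

# Hypothetical 2-link planar arm (link lengths in metres)
L1, L2 = 0.20, 0.15

def forward_kinematics(theta1, theta2):
    """End-effector position of the 2-link arm."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def fitness(angles, target):
    """GA fitness: Euclidean error between FK endpoint and target (lower is better)."""
    x, y = forward_kinematics(*angles)
    return math.hypot(x - target[0], y - target[1])

def ga_inverse_kinematics(target, pop=60, gens=200, mut=0.1):
    """Evolve joint angles so the FK endpoint reaches `target`."""
    population = [(random.uniform(-math.pi, math.pi),
                   random.uniform(-math.pi, math.pi)) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda a: fitness(a, target))
        parents = population[:pop // 4]  # truncation selection, elites kept
        children = []
        while len(children) < pop - len(parents):
            p1, p2 = random.sample(parents, 2)
            # averaging crossover plus Gaussian mutation
            child = tuple((a + b) / 2 + random.gauss(0, mut) for a, b in zip(p1, p2))
            children.append(child)
        population = parents + children
    return min(population, key=lambda a: fitness(a, target))

target = (0.25, 0.10)  # reachable: |target| lies within [L1 - L2, L1 + L2]
best = ga_inverse_kinematics(target)
print(fitness(best, target))  # residual positioning error in metres
```

A conventional closed-form inverse kinematics solution exists for such a simple arm; the GA formulation becomes attractive precisely where the thesis applies it, on a higher-DOF arm whose closed-form solution is cumbersome or non-unique.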

    Chapter 1  Introduction
      1-1  Motivation
      1-2  Objectives
      1-3  Thesis organization
    Chapter 2  Related Work
      2-1  Machine-vision applications of robot arms
    Chapter 3  Hardware
      3-1  Robot arm motors and mechanism
        3-1-1  Motors
        3-1-2  Arm structure
      3-2  SSC-32 servo controller
    Chapter 4  System Architecture
      4-1  Stereo-vision distance estimation
        4-1-1  Brightness correction
        4-1-2  K-means clustering
        4-1-3  Stereo matching
        4-1-4  Distance estimation
        4-1-5  Camera parameter acquisition
      4-2  Error compensation with an MLP
      4-3  Object grasping with the robot arm
        4-3-1  Forward kinematics of the arm
        4-3-2  Inverse kinematics
          4-3-2-1  Conventional inverse kinematics
          4-3-2-2  Genetic algorithms for inverse kinematics
      4-4  Speech-recognition module
    Chapter 5  Experiments
      5-1  Stereo-vision distance estimation
      5-2  Robot arm positioning
      5-3  Stereo vision-controlled grasping
      5-4  Voice-controlled grasping
    Chapter 6  Conclusions and Future Work
      6-1  Conclusions
      6-2  Future work
    References
    Appendix 1  Stereo-vision ranging experiment tables
    Appendix 2  Stereo-vision ranging experiment tables
    Appendix 3  Robot arm positioning experiment tables

    [1] P. I. Corke, "Visual control of robot manipulators: a review," in K. Hashimoto, Ed., Visual Servoing, pp. 1-32, World Scientific, 1994.
    [2] J. Stuckler and S. Behnke, "Integrating indoor mobility, object manipulation, and intuitive interaction for domestic service tasks," in 9th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2009), pp. 506-513, Dec. 7-10, 2009.
    [3] C. Y. Lin and Y. P. Chiu, "The DSP based catcher robot system with stereo vision," in Proceedings of the 2008 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, pp. 897-903, July 2-5, 2008.
    [4] A. Nakashima, Y. Sugiyama, and Y. Hayakawa, "Paddle juggling of one ball by robot manipulator with visual servo," in International Conference on Control, Automation, Robotics and Vision, 2006.
    [5] J. K. Oh and C. H. Lee, "Development of a stereo vision system for industrial robots," in International Conference on Control, Automation and Systems (ICCAS '07), pp. 659-663, Oct. 17-20, 2007.
    [6] V. Lippiello, B. Siciliano, and L. Villani, "Position-based visual servoing in industrial multirobot cells using a hybrid camera configuration," IEEE Transactions on Robotics, vol. 23, no. 1, pp. 73-86, Feb. 2007.
    [7] P. Hynes, G. I. Dodds, and A. J. Wilkinson, "Uncalibrated visual-servoing of a dual-arm robot for surgical tasks," in Proceedings of the 2005 IEEE International Symposium on Computational Intelligence in Robotics and Automation, pp. 151-156, June 27-30, 2005.
    [8] 江東毅, "A Robot Arm Calligraphy System with Image Input," Master's thesis, Department of Electrical Engineering, National Taiwan University of Science and Technology, May 2002.
    [9] 王允上, Single-Chip Microcomputer Control of Robots, 全華圖書股份有限公司, Taipei, Feb. 2008.
    [10] Grand Wing Servo (GWS). Available: http://www.gws.com.tw/chinese/product/product.htm (accessed June 28, 2010).
    [11] Lynxmotion, Inc., Electronics Guides. Available: http://www.lynxmotion.com/images/html/build136.htm (accessed June 28, 2010).
    [12] S. T. Barnard and W. B. Thompson, "Disparity analysis of images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 2, pp. 330-340, 1980.
    [13] A. D. Kulkarni, Computer Vision and Fuzzy-Neural Systems, Prentice Hall, 2001; B. Heisele and W. Ritter, "Obstacle detection based on color blob flow," in Proceedings of the Intelligent Vehicles Symposium 1995, pp. 282-286, Detroit, 1995.
    [14] D. Marr and T. Poggio, "Cooperative computation of stereo disparity," Science, vol. 194, pp. 283-287, 1976.
    [15] I. Ashdown, "Octree color quantization," in Radiosity: A Programmer's Perspective, New York: Wiley, 1994.
    [16] J. T. Tou and R. C. Gonzalez, Pattern Recognition Principles, Reading, MA: Addison-Wesley, 1974.
    [17] O. Verevka, "The local K-means algorithm for color image quantization," M.Sc. dissertation, University of Alberta, Edmonton, AB, Canada, 1995.
    [18] Wikimedia Foundation, Inc., "k-means clustering." Available: http://en.wikipedia.org/wiki/K-means_clustering (accessed June 28, 2010).
    [19] 謝易錚, "A Stereo Vision-Based Assistive System for the Blind," Master's thesis, Department of Computer Science and Information Engineering, National Central University, July 2006.
    [20] M. C. Su, Y. Z. Hsieh, D. Y. Huang, et al., "A vision-based travel aid for the blind," in E. A. Zoeller, Ed., Pattern Recognition Theory and Application, pp. 73-89, Nova Science Publishers, New York, 2008.
    [21] P. B. Chou and C. M. Brown, "The theory and practice of Bayesian image labeling," International Journal of Computer Vision, vol. 4, no. 3, pp. 185-210, 1990.
    [22] 吳成柯 et al., Digital Image Processing, 儒林圖書有限公司, Taipei, Oct. 2001.
    [23] 于仕琪 and 劉瑞禎, Learning OpenCV, 清華大學出版社, Beijing, Oct. 2009.
    [24] Z. Zhang, "A flexible new technique for camera calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330-1334, Nov. 2000.
    [25] Z. Zhang, "Flexible camera calibration by viewing a plane from unknown orientations," in International Conference on Computer Vision (ICCV '99), Corfu, Greece, pp. 666-673, Sept. 1999.
    [26] 蘇木春 and 張孝德, Machine Learning: Neural Networks, Fuzzy Systems, and Genetic Algorithms, 2nd rev. ed., 全華圖書股份有限公司, Taipei, 2004.
    [27] L. W. Tsai, Robot Analysis: The Mechanics of Serial and Parallel Manipulators, John Wiley & Sons, 1999.
    [28] 巫憲欣, "Motion Control of a Machine-Vision Robot Arm Based on a System-on-Chip," Master's thesis, Department of Mechanical Engineering, National Taiwan University of Science and Technology, June 2006.
    [29] 鍾明蒼, "A Voice-Controlled Human-Machine Interface for the Physically Disabled," Master's thesis, Department of Electrical Engineering (Control Section), Tamkang University, June 2001.
    [30] 王小川, Speech Signal Processing, 全華科技圖書股份有限公司, Taipei, Feb. 2005.
