| Graduate Student: | 蔣錫沅 Shi-Yuan Chiang |
|---|---|
| Thesis Title: | Development of Dual Guiding Functions for an Automatic Guided Vehicle (無人搬運車之雙導引功能開發) |
| Advisor: | 王文俊 Wen-June Wang |
| Committee Members: | |
| Degree: | Master (碩士) |
| Department: | College of Electrical Engineering and Computer Science - Department of Electrical Engineering |
| Year of Publication: | 2017 |
| Academic Year of Graduation: | 105 |
| Language: | Chinese |
| Pages: | 70 |
| Chinese Keywords: | automatic guided vehicle (AGV), ROS (Robot Operating System), laser navigation, indoor navigation and localization, deep learning |
| English Keywords: | AGV, ROS (Robot Operating System), Indoor-navigation, Laser-guiding, Deep learning |
The main purpose of this thesis is to improve the existing automatic guided vehicle (AGV) system in the factory. The AGVs currently in use simply follow magnetic tape, a very limited guiding method. We therefore enhance the existing guiding system, add machine-vision recognition of charging stations and elevators, and use an established Wi-Fi network interface so that the control center can monitor the vehicle's battery level and travel speed in real time.
This thesis proposes an implementation based on the Robot Operating System (ROS). ROS has a distributed architecture in which all processes are connected over a peer-to-peer network to exchange information. The software system is developed with ROS in a Linux environment and integrated with the embedded platforms NVIDIA Jetson TX1 and 86Duino One to realize hardware-software co-design.
We first focus on strengthening the existing guiding technique: a laser rangefinder is added to the originally tape-guided vehicle to establish an indoor coordinate system, enabling the AGV to perform indoor mapping, localization, and navigation. Indoor navigation provides four new functions: (1) indoor autonomous navigation; (2) dynamic obstacle avoidance; (3) automatically finding and navigating to the nearest magnetic-track point; (4) memorizing tag positions to switch paths. By alternating between magnetic tape and the laser in this way, less tape needs to be installed, the floor need not be kept entirely clear, and the AGV switches automatically between tape-guided and laser-guided modes according to the workflow. In addition, the latest real-time object-detection architecture, YOLO (You Only Look Once), is integrated into the ROS framework, and deep-learning image recognition successfully identifies charging stations and elevators. Together, these additions make the AGV's operation more intelligent and convenient.
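The alternation between tape guiding and laser guiding can be sketched as a one-line mode policy; the predicates `tape_detected` and `goal_on_tape` are illustrative assumptions, not signals named in the thesis:

```python
def guiding_mode(tape_detected: bool, goal_on_tape: bool) -> str:
    """Choose the guiding mode for the next control cycle.

    Illustrative policy only: stay on the magnetic tape while it is
    sensed and the current goal lies on it; otherwise fall back to
    laser-based indoor navigation.
    """
    return "magnetic" if tape_detected and goal_on_tape else "laser"

# The vehicle leaves the tape (e.g. to dodge a dynamic obstacle) and
# the policy hands control to the laser guiding stack.
print(guiding_mode(tape_detected=False, goal_on_tape=True))  # laser
```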
The key purpose of this thesis is to improve the performance of the existing automatic guided vehicle (AGV) system in the factory. At present the AGV simply follows magnetic tape laid on the factory floor. We therefore enhance the current guiding system, recognize charging stations and elevators via machine vision, and connect the system to the network so that the control center can monitor the power and the speed of the AGV in real time.
The proposed method is implemented on the Robot Operating System (ROS). ROS has a distributed architecture and uses a peer-to-peer network to link all processes so that they can exchange data. In a Linux environment, the AGV software system is developed with ROS and combined with the embedded platforms NVIDIA Jetson TX1 and 86Duino One to achieve hardware-software co-design.
First, a laser rangefinder is added to the AGV to enhance the existing guiding technology. An indoor coordinate system is then established so that the AGV's position and orientation can be determined at any time, and the indoor coordinates are used to carry out four main functions: (1) indoor autonomous navigation; (2) dynamic obstacle avoidance; (3) automatically finding and navigating to the nearest magnetic-track point; (4) recording the positions and numbers of RFID tags. Laser guiding and magnetic-tape guiding alternate with each other automatically, so less magnetic tape needs to be installed. Next, the real-time deep-learning object-detection system YOLO (You Only Look Once) is incorporated into ROS to recognize the charging station and the elevator from images alone. As a result of the above, the AGV becomes more intelligent and convenient.
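ROS itself provides this peer-to-peer topic mechanism; as a standalone illustration of the publish/subscribe pattern (not the rospy API, and with made-up topic names), the monitoring exchange between the AGV and the control center can be sketched as:

```python
from collections import defaultdict

class Broker:
    """Minimal topic registry: maps topic names to subscriber callbacks.

    In real ROS the master only resolves names and nodes then exchange
    messages peer-to-peer; here delivery is a direct function call.
    """
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, msg):
        for cb in self._subs[topic]:
            cb(msg)

# Hypothetical monitoring flow: the AGV node publishes its battery
# level and speed; a control-center node subscribes to both topics.
broker = Broker()
status = {}
broker.subscribe("/agv/battery", lambda v: status.update(battery=v))
broker.subscribe("/agv/speed", lambda v: status.update(speed=v))
broker.publish("/agv/battery", 87)   # percent, illustrative value
broker.publish("/agv/speed", 0.6)    # m/s, illustrative value
print(status)  # {'battery': 87, 'speed': 0.6}
```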
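Function (3), finding the nearest magnetic-track point from the laser-derived indoor coordinates, reduces to a nearest-neighbour search over known track points; the helper below is a sketch under that assumption, not code from the thesis:

```python
import math

def nearest_track_point(pose, track_points):
    """Return the magnetic-track point closest to the AGV's (x, y) pose.

    track_points: list of (x, y) coordinates expressed in the same
    indoor frame that the laser-based localization provides.
    """
    return min(track_points,
               key=lambda p: math.hypot(p[0] - pose[0], p[1] - pose[1]))

# Illustrative track layout: the AGV at (1.8, 1.2) should rejoin the
# tape at the closest point, (2.0, 1.0).
points = [(0.0, 0.0), (2.0, 1.0), (5.0, 5.0)]
print(nearest_track_point((1.8, 1.2), points))  # (2.0, 1.0)
```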