| Graduate Student: | 林宜臻 Janice Lin |
|---|---|
| Thesis Title: | 基於深度學習之戶外導航機器人 (Outdoor Navigation Robot Based on Deep Learning) |
| Advisor: | 王文俊 Wen-June Wang |
| Oral Defense Committee: | |
| Degree: | Master |
| Department: | College of Electrical Engineering and Computer Science, Department of Electrical Engineering |
| Year of Publication: | 2018 |
| Academic Year: | 106 (ROC calendar) |
| Language: | Chinese |
| Pages: | 63 |
| Keywords: | deep learning, Google Maps API, robot navigation, fuzzy control |
This thesis implements a robot system that walks and navigates autonomously outdoors. The overall architecture uses the embedded development board Jetson TX1 as the main control core, together with a camera and a smartphone as the basis of control, and combines deep learning, image processing, and motor control techniques to realize a fuzzy-control-based robot system.
In the control flow, deep learning is first used to recognize the camera images, after which the robot can distinguish the road from obstacles. Road recognition is not affected by different lighting conditions or varying road colors. Obstacle recognition covers objects commonly encountered on the road, such as pedestrians and cars, and requires no specific features. In addition, a self-developed smartphone app navigates the robot: the phone's GPS and electronic-compass sensors provide the robot's latitude/longitude position and heading angle as the control inputs, and the Google Maps API then performs global route planning so that the robot knows the direction and route to travel. Finally, this information is used to compute a guidance trajectory, so that the robot can respond to real-time road conditions while following the planned route. Based on the guidance trajectory, fuzzy controllers for going straight and turning, together with a left/right rotation mechanism, are designed to control the motors and complete the overall robot control system.
Through the smartphone app, users can select a destination themselves; the robot then follows the route planned by the app and the guidance-trajectory information to walk on the road automatically and reach the place the user specified.
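The guidance computation above combines the phone's GPS fix and compass heading with the next waypoint of the planned route. A minimal sketch of that geometry (function names and the sign convention are illustrative assumptions, not the thesis's actual implementation) computes the great-circle bearing to the waypoint and the signed heading error a steering controller would consume:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees [0, 360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0

def heading_error_deg(compass_heading, target_bearing):
    """Signed error in [-180, 180); positive means the target lies to the right."""
    return (target_bearing - compass_heading + 180.0) % 360.0 - 180.0
```

For example, a robot heading 350° with a waypoint bearing of 10° gets an error of +20°, i.e., a gentle right turn rather than a 340° left spin.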
An outdoor autonomous driving and navigation robot system is presented in this thesis. The control system is implemented on an embedded development board, the Jetson TX1, along with a camera and a smartphone. Techniques such as deep learning, image processing, and motor control are combined to implement a fuzzy-control-based robot system.
At the beginning of the control flow, deep learning is utilized to analyze the images recorded by the camera, so the robot is able to find the road regions and recognize ordinary objects such as people and cars; no particular features are required. Furthermore, a custom smartphone application uses the GPS and electronic-compass sensors to obtain the robot's position and heading as two inputs for navigation. Combined with the Google Maps API, the smartphone application then provides global route planning to the robot. Finally, a guidance trajectory is computed from the image-recognition results of deep learning and the navigation information from the smartphone application; the robot can therefore respond to traffic conditions immediately and walk along the planned path. Fuzzy controllers for going straight and turning, based on the guidance trajectory, are designed to complete the entire robot control system.
Users can select a destination via the smartphone application; the robot then automatically reaches the requested place according to the route planned by the application and the results of deep learning after image processing.
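The thesis designs its own fuzzy controllers for going straight and turning; the sketch below shows the general shape of such a controller, with membership functions and consequent values that are purely illustrative assumptions (the thesis's actual rule base and universes of discourse are not reproduced here). It maps a heading error in degrees to a normalized turn command:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def ramp_up(x, a, b):
    """0 below a, rising linearly to 1 at b and beyond (a shoulder set)."""
    if x <= a:
        return 0.0
    return 1.0 if x >= b else (x - a) / (b - a)

def fuzzy_turn_rate(e):
    """Heading error e (deg, + = target to the right) -> turn command in [-1, 1]."""
    mu_left = ramp_up(-e, 0.0, 60.0)    # error strongly negative: turn left
    mu_zero = tri(e, -60.0, 0.0, 60.0)  # error near zero: go straight
    mu_right = ramp_up(e, 0.0, 60.0)    # error strongly positive: turn right
    # Weighted average of consequent singletons (-1, 0, +1): a Sugeno-style
    # defuzzification of a three-rule base.
    den = mu_left + mu_zero + mu_right
    return (mu_right - mu_left) / den if den > 0 else 0.0
```

A zero error yields a zero turn command, a 30° error yields a half-strength right turn, and errors beyond 60° saturate at full turn, which is the qualitative behavior a straight/turn fuzzy controller provides.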
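Global route planning here relies on the Google Maps API. A minimal sketch of that step (in Python rather than the Android app's own language, with a placeholder API key and an assumed helper name) builds a Directions API request and extracts the per-step end locations of the first returned route as waypoints for the robot:

```python
from urllib.parse import urlencode

DIRECTIONS_ENDPOINT = "https://maps.googleapis.com/maps/api/directions/json"

def directions_url(origin, destination, api_key, mode="walking"):
    """Build a Google Maps Directions API request URL.
    origin/destination are (lat, lon) tuples; api_key is a Maps API key."""
    params = {
        "origin": f"{origin[0]},{origin[1]}",
        "destination": f"{destination[0]},{destination[1]}",
        "mode": mode,
        "key": api_key,
    }
    return f"{DIRECTIONS_ENDPOINT}?{urlencode(params)}"

def waypoints_from_response(resp):
    """Extract the end location of every step of the first route/leg as
    a list of (lat, lon) waypoints for the robot to follow."""
    steps = resp["routes"][0]["legs"][0]["steps"]
    return [(s["end_location"]["lat"], s["end_location"]["lng"]) for s in steps]
```

Fetching the URL (e.g., with `urllib.request`) returns JSON whose `routes[0].legs[0].steps` list is what `waypoints_from_response` walks; the resulting waypoints feed the guidance-trajectory computation described above.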