| Graduate student: | 陳冠霖 (Kuan-Lin Chen) |
|---|---|
| Thesis title: | 羽球自動收集與檢測之智慧機器人 (An Intelligent Robot for Automatic Shuttlecock Collection and Inspection) |
| Advisor: | 王文俊 (Wang, Wen-June) |
| Oral defense committee: | |
| Degree: | Master |
| Department: | Department of Electrical Engineering, College of Electrical Engineering and Computer Science |
| Year of publication: | 2022 |
| Academic year of graduation: | 111 |
| Language: | Chinese |
| Number of pages: | 70 |
| Chinese keywords: | shuttlecock detection, shuttlecock feather-integrity analysis, six-axis robot arm, coordinate transformation, kinematics, ROS, automated guided vehicle |
| Foreign-language keywords: | shuttlecock |
This thesis aims to design an automatic shuttlecock collection and inspection system. An automated guided vehicle (AGV) uses image recognition to pick up the shuttlecocks on a badminton court and carries them back to a base near a six-axis robot arm, where they are manually placed on the platform beneath the arm. The arm then grips each shuttlecock and presents it to a camera that evaluates its integrity, so that good and bad shuttlecocks can be sorted.
The research items of this thesis are as follows. For shuttlecock collection by the AGV, a webcam mounted on the vehicle is used to accomplish three tasks: (1) detecting and recognizing shuttlecocks with deep learning; (2) computing the relative position between the target and the camera with the pinhole camera model, driving the AGV to the target, and collecting the shuttlecock onto the vehicle with motors controlled through ROS; (3) guiding the AGV back to the base with AprilTag markers. For the shuttlecock images, the output of the depth camera mounted on the end of the robot arm and of the webcam used for inspection is used to accomplish three tasks: (1) detecting the position and angle of the shuttlecock head and center with a deep learning network; (2) computing the relative position between the target and the camera; (3) analyzing the integrity of the shuttlecock with a deep learning network and image processing. For motion control of the robot arm, the following procedures are completed: (1) building a virtual environment; (2) computing the transformation matrices of the robot arm's kinematic model; (3) obtaining the coordinates of the target point at the shuttlecock head and driving the arm to that point with inverse kinematics. With the above, the AGV completes shuttlecock collection and the robot arm completes gripping and sorting of the shuttlecocks.
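The pinhole-model step above, which recovers a target's position relative to the camera from a pixel coordinate and a depth, can be sketched in a few lines. This is a minimal illustration only; the intrinsic parameters below are made-up example values, not the thesis's calibration results.

```python
def pixel_to_camera_xyz(u, v, depth_z, fx, fy, cx, cy):
    """Back-project a pixel (u, v) observed at depth Z into camera-frame
    coordinates using the pinhole model: u = fx * X / Z + cx, and
    v = fy * Y / Z + cy, solved for X and Y."""
    x = (u - cx) * depth_z / fx
    y = (v - cy) * depth_z / fy
    return x, y, depth_z

# Hypothetical intrinsics: focal length 600 px, principal point (320, 240).
# A target 60 px right of the principal point at 1.2 m depth lies
# 60 * 1.2 / 600 = 0.12 m to the camera's right.
fx = fy = 600.0
cx, cy = 320.0, 240.0
print(pixel_to_camera_xyz(380, 240, 1.2, fx, fy, cx, cy))  # → (0.12, 0.0, 1.2)
```

With a monocular webcam there is no measured depth, so in practice the thesis's AGV stage must infer range from known geometry (e.g. the camera height and the ground plane), while the depth-camera stage can read Z directly.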
This study develops its software with the Robot Operating System (ROS) in a Linux environment. Through ROS's distributed architecture and peer-to-peer network, all information is collected, transmitted, and integrated, realizing a co-design of software and hardware. In experiments on an actual badminton court, the AGV collected the shuttlecocks on the court, the robot arm achieved a gripping success rate of 92.1%, and the shuttlecock classification accuracy was 82%, showing that this thesis indeed builds a working system for picking up and sorting shuttlecocks.
This thesis aims to design an automatic shuttlecock collection and inspection system. First, the shuttlecocks on the court are detected and picked up by an AGV (Automated Guided Vehicle) and brought back to the base. After all shuttlecocks are collected, a six-degrees-of-freedom (6-DOF) robot arm picks up each one and checks its integrity.
The research topics of this thesis are as follows. For shuttlecock collection by the AGV, using monocular vision from the webcam mounted on the vehicle, we complete (1) detecting and identifying shuttlecocks, (2) calculating the relative position between the target object and the camera, and (3) guiding the AGV back to the base with AprilTag recognition. For the shuttlecock images, based on the images from a depth camera installed at the end of the robot arm and from the inspection webcam, we complete (1) using a deep learning technique to calculate the position and angle of the shuttlecock's head and body center, (2) calculating the relative position between the shuttlecock and the camera, and (3) measuring the integrity of the shuttlecock. In addition, for motion control of the robot arm, the following procedures are completed: (1) building a virtual environment, (2) calculating the transformation matrices of the robot arm's kinematic model, and (3) obtaining the coordinates of the shuttlecock's head and driving the arm to that target point with inverse kinematics. Altogether, the AGV completes shuttlecock collection, and the robot arm completes shuttlecock pick-up and integrity checking.
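The inverse-kinematics step can be illustrated with a far simpler mechanism than the 6-DOF arm used in the thesis: a planar two-link arm admits a closed-form solution, which shows the essential idea of solving joint angles from a target point. The link lengths and target below are made-up values, and this sketch is not the thesis's solver.

```python
import math

def ik_2link(x, y, l1, l2):
    """Closed-form inverse kinematics for a planar 2-link arm: return
    joint angles (theta1, theta2) placing the tip at (x, y)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)  # elbow-down branch
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

def fk_2link(theta1, theta2, l1, l2):
    """Forward kinematics, used here only to verify the IK answer."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Solve for a reachable target, then plug the angles back in to check.
t1, t2 = ik_2link(1.0, 1.0, 1.0, 1.0)
print(fk_2link(t1, t2, 1.0, 1.0))  # → approximately (1.0, 1.0)
```

A 6-DOF arm generally has no single closed form like this, which is why numerical solvers (such as those provided through MoveIt in a ROS environment) are typically used instead.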
This thesis uses the Robot Operating System (ROS) to develop the software system in a Linux environment. Through the distributed architecture of ROS and its peer-to-peer network, all information is collected, transmitted, and integrated, achieving a co-design of software and hardware. In experiments on an actual badminton court, the AGV collects all the shuttlecocks on the court, and the accuracy rates of robot arm gripping and shuttlecock classification are 92.5% and 83%, respectively. It is concluded that this thesis establishes a system that can pick up the shuttlecocks on the court and identify their integrity.
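The ROS message passing described above follows a publish/subscribe pattern over named topics. The toy sketch below mimics that pattern in plain Python with no ROS installed; the node roles and the topic name are invented for illustration and are not the thesis's actual interfaces.

```python
from collections import defaultdict

class Bus:
    """Minimal in-process publish/subscribe bus, loosely mimicking how
    ROS nodes exchange messages over named topics."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback to run on every message for `topic`."""
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Deliver `message` to every subscriber of `topic`."""
        for callback in self.subscribers[topic]:
            callback(message)

bus = Bus()
received = []

# A "vision node" publishes a detected shuttlecock position; a
# "motion node" subscribes and reacts. The topic name is made up.
bus.subscribe("/shuttlecock/position", lambda msg: received.append(msg))
bus.publish("/shuttlecock/position", {"x": 0.12, "y": 0.0, "z": 1.2})
print(received)  # → [{'x': 0.12, 'y': 0.0, 'z': 1.2}]
```

In real ROS the bus is replaced by the ROS master and network transport, and publishers and subscribers live in separate processes, which is what enables the software/hardware co-design across the AGV and the arm.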