| Graduate Student: | 楊仁皓 Jen-Hao Yang |
|---|---|
| Thesis Title: | 精微產品組裝的智能人機協作系統 Intelligent Human-Robot Collaboration System for Fine Product Assembly |
| Advisor: | 林錦德 Chin-Te Lin |
| Committee Members: | |
| Degree: | Master |
| Department: | College of Engineering - Department of Mechanical Engineering |
| Publication Year: | 2023 |
| Academic Year: | 111 |
| Language: | Chinese |
| Number of Pages: | 98 |
| Keywords: | human-robot collaboration, gesture recognition, human behavior recognition, deep learning |
The manufacturing industry is facing labor shortages as well as the challenge of small order volumes with short delivery times. Conventional production methods can no longer cope with these demands, and robotic arms have become a key enabler. Collaborative robotic arms can work in tandem with humans and adapt to the high-mix, low-volume production model. They are characterized by safety, flexibility, and human-robot collaboration, and can work alongside humans in the same workplace. Nevertheless, the application of collaborative robotic arms still faces challenges, such as the lack of human behavior recognition and the need to integrate various systems.
The aim of this research is to improve the efficiency of humans and robots working in a shared workspace. The proposed intelligent human-robot collaboration system uses a stereo camera for human behavior recognition and a wearable bracelet to detect hand gestures. A deep learning model is then developed to recognize human behaviors from the data collected by the camera and the bracelet, and a robotic arm is controlled to assist the human operator. The results demonstrate the feasibility and effectiveness of the proposed approach in recognizing human behaviors in real time. The system can therefore understand the user's movement intentions more comprehensively and let the robot intervene in the assembly at the right time. This design substantially improves the performance of human behavior recognition in existing human-robot collaboration applications, enabling humans and robots to work together in the same workspace.
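To make the sensing-and-inference pipeline described above more concrete, the following is a minimal PyTorch sketch of one plausible bimodal action classifier: per-frame skeleton coordinates from a stereo camera and gesture features from a wearable bracelet are fused at the feature level and classified by an LSTM. This is not the thesis's actual implementation; the class and constant names (`SkeletonGestureLSTM`, `NUM_JOINTS`, `GESTURE_DIM`, `NUM_ACTIONS`), the dimensions, and the early-fusion design are all illustrative assumptions.

```python
# Minimal sketch of a bimodal human-action classifier (assumes PyTorch).
# All names and dimensions below are illustrative, not taken from the thesis.
import torch
import torch.nn as nn

NUM_JOINTS = 18    # assumed number of skeleton joints from the stereo camera
GESTURE_DIM = 8    # assumed gesture-feature dimension from the bracelet
NUM_ACTIONS = 5    # assumed number of assembly actions to recognize

class SkeletonGestureLSTM(nn.Module):
    """Fuse per-frame skeleton coordinates with wearable gesture features,
    then classify the operator's current action with an LSTM."""
    def __init__(self, hidden=128):
        super().__init__()
        in_dim = NUM_JOINTS * 3 + GESTURE_DIM  # (x, y, z) per joint + gesture vector
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, NUM_ACTIONS)

    def forward(self, skeleton, gesture):
        # skeleton: (batch, time, NUM_JOINTS * 3); gesture: (batch, time, GESTURE_DIM)
        x = torch.cat([skeleton, gesture], dim=-1)  # early (feature-level) fusion
        out, _ = self.lstm(x)                       # per-frame hidden states
        return self.head(out[:, -1])                # action logits from the last step

# Toy usage: one 30-frame window of synchronized sensor data.
model = SkeletonGestureLSTM()
skel = torch.randn(1, 30, NUM_JOINTS * 3)
gest = torch.randn(1, 30, GESTURE_DIM)
logits = model(skel, gest)
print(logits.argmax(dim=-1))  # predicted action index for this window
```

In a real deployment, the two sensor streams would need to be time-synchronized before fusion, and the predicted action index would then be mapped to a robot-arm command (for instance, published as a message to the arm controller) so the robot can step in at the appropriate point of the assembly.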