
Author: Shao-Chieh Tsai (蔡劭傑)
Title: Development of a Robotic Arm Calibration System Using Light Positioning Technology (應用光定位技術於機械手臂校正之系統開發)
Advisor: Chin-Te Lin (林錦德)
Oral defense committee:
Degree: Master
Department: Department of Mechanical Engineering, College of Engineering
Year of publication: 2025
Academic year of graduation: 113 (ROC calendar, 2024–25)
Language: Chinese
Pages: 111
Keywords (Chinese, translated): light positioning, machine learning, robotic arm, re-localization, fast calibration, 3D positioning
Keywords (English): Visible Light Positioning, Machine Learning, Robotic Arm, Re-localization, Fast Calibration, 3D Positioning
Views: 18; Downloads: 0


    In today’s automated manufacturing environments, robotic arms frequently require re-localization and calibration due to workspace changes or production-line adjustments. Traditional vision-based solutions that rely on costly cameras or depth cameras are constrained by the trade-off between a wide field of view and high resolution, making it difficult to meet the demands of large-scale, high-precision calibration.
    This study proposes an innovative robotic arm calibration method that integrates light positioning technology with machine learning to address these issues. The system deploys light sources emitting modulated signals in the workspace and uses optical sensors mounted on the robotic arm’s end-effector to receive them, thereby establishing a correspondence between the optical signals and three-dimensional coordinates. By combining signal processing, feature selection, and machine learning models, the system achieves high-precision position estimation and automated calibration.
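    The signal-to-coordinate mapping described above can be illustrated with a minimal numerical sketch. It assumes a line-of-sight Lambertian channel — a model common in visible-light-positioning work, not a detail given in this abstract — and the LED layout, room size, and lookup scheme below are all hypothetical stand-ins for the thesis hardware:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ceiling-LED layout (metres); the real configuration is not given here.
LEDS = np.array([[0.0, 0.0, 2.5], [1.0, 0.0, 2.5],
                 [0.0, 1.0, 2.5], [1.0, 1.0, 2.5]])
M = 1.0  # Lambertian order (1.0 corresponds to a 60-degree half-power angle)

def rss(p):
    """Received-signal-strength vector at receiver position p (unit LED power),
    assuming a line-of-sight Lambertian channel with LEDs and sensor facing vertically."""
    d = LEDS - p                   # vectors from receiver to each LED
    r = np.linalg.norm(d, axis=1)  # distances to each LED
    cos = d[:, 2] / r              # cosine of the emission/incidence angle
    return (M + 1) / (2 * np.pi * r**2) * cos**M * cos

# Fingerprint dataset: RSS vectors sampled at random positions in a 1 m cube.
P = rng.uniform([0.0, 0.0, 0.5], [1.0, 1.0, 1.5], size=(2000, 3))
X = np.array([rss(p) for p in P])

# Simplest possible "model": 1-nearest-neighbour lookup in RSS space.
# The thesis instead trains regressors (RFR, MLP, Transformer, KAN variants)
# on such signal-to-coordinate pairs, which interpolate between samples.
q = np.array([0.3, 0.7, 1.0])               # true receiver position to recover
i = int(np.argmin(np.linalg.norm(X - rss(q), axis=1)))
print("estimate:", P[i], "error (m):", np.linalg.norm(P[i] - q))
```

    With a dense enough fingerprint set the lookup already lands within a few centimetres; learned regressors improve on this by generalizing between sampled positions.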
    Through extensive experiments, the hardware configuration was optimized and five machine learning models were compared: Random Forest Regression (RFR), Multilayer Perceptron (MLP), Transformer, an efficient implementation of Kolmogorov-Arnold Networks (Efficient-KAN), and Kolmogorov-Arnold Networks 2.0 (KAN 2.0). KAN 2.0 achieved the best performance, with an overall mean absolute error (MAE) of 4.407 mm, keeping the spatial positioning error of the light positioning system at the millimeter level. Furthermore, the experiments verified that the proposed signal processing and feature selection techniques possess good environmental adaptability.
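    The overall error figure quoted above is commonly computed for 3D positioners as the mean Euclidean distance between predicted and ground-truth coordinates. The abstract does not spell out the exact metric definition used, so the helper below is one plausible, hypothetical reading:

```python
import numpy as np

def mean_position_error(y_true, y_pred):
    """Mean Euclidean distance (e.g. in mm) between true and predicted 3D points."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.linalg.norm(y_true - y_pred, axis=1).mean())

# Toy check: the two per-point errors are 5 mm and 3 mm, so the mean is 4 mm.
truth = [[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]]
pred = [[3.0, 4.0, 0.0], [10.0, 3.0, 0.0]]
print(mean_position_error(truth, pred))  # → 4.0
```

    A per-axis reading, `np.abs(y_true - y_pred).mean()`, is another common MAE convention; under either reading the reported result is at the millimetre scale.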
    In conclusion, the proposed method can be effectively applied to robotic arm calibration tasks, demonstrating good feasibility and application potential, and meeting the automated calibration demands of dynamic production environments.

    Statement on AI Tool Usage
    Abstract (Chinese)
    Abstract (English)
    Acknowledgments
    Table of Contents
    List of Figures
    List of Tables
    List of Abbreviations
    List of Symbols
    Chapter 1  Introduction
      1-1 Research Background
      1-2 Research Motivation
      1-3 Literature Review
        1-3-1 Development of Robotic Arm Calibration
        1-3-2 Development and Review of Light Positioning Technology
      1-4 Research Objectives
      1-5 Thesis Organization
    Chapter 2  Theoretical Background
      2-1 Light Positioning Technology
        2-1-1 Received Signal Strength (RSS)
        2-1-2 Angle of Arrival (AoA)
        2-1-3 Other Positioning Techniques
        2-1-4 Considerations in Selecting a Positioning Technique
      2-2 Machine Learning
        2-2-1 Random Forest Regression (RFR)
        2-2-2 Multilayer Perceptron (MLP)
        2-2-3 Transformer
        2-2-4 Efficient-KAN
        2-2-5 KAN 2.0
      2-3 Robotic Arms
        2-3-1 Coordinate Systems
        2-3-2 Pose Representation Methods
    Chapter 3  Research Process and System Architecture
      3-1 Research Process
      3-2 System Architecture Design
      3-3 Evaluation Methods and Performance Metrics
      3-4 Robotic Arm Calibration Method
        3-4-1 Calibration Principle
        3-4-2 Axis-Angle Conversion
        3-4-3 Coordinate Transformation
    Chapter 4  Experimental Design
      4-1 Experimental Equipment
      4-2 Experimental Environment
      4-3 Light Positioning System Design
        4-3-1 Circuit Configuration
        4-3-2 Signal Design and Preprocessing
        4-3-3 Signal Feature Analysis and Selection
      4-4 Experimental Design
        4-4-1 Duty Cycle Experiment
        4-4-2 ADC Measurement Range Experiment
      4-5 Data Collection
        4-5-1 Sampling Space Definition and Sampling Method
        4-5-2 Automated Data Collection
      4-6 Model Training
        4-6-1 Training Environment
        4-6-2 Model Inputs and Outputs
        4-6-3 Training Hyperparameters
    Chapter 5  Results and Discussion
      5-1 Experiments and Results
        5-1-1 CMM
        5-1-2 Robotic Arm
        5-1-3 Overall Discussion
      5-2 Benefit Analysis of Automated Data Collection
    Chapter 6  Conclusions and Future Work
      6-1 Specific Contributions
      6-2 Application Limitations
      6-3 Suggestions and Future Outlook
    References

