| Graduate Student: | 林浩沅 (Hao-Yuan Lin) |
|---|---|
| Thesis Title: | 自動化高動態範圍結合結構光用於三維點雲量測技術 (Automated High Dynamic Range with Structured Light for Three-Dimensional Point Cloud Measurement Technology) |
| Advisor: | 李朱育 (Ju-Yi Lee) |
| Oral Defense Committee: | |
| Degree: | Master |
| Department: | College of Engineering - Graduate Institute of Opto-Mechatronics Engineering |
| Publication Year: | 2021 |
| Graduation Academic Year: | 109 |
| Language: | Chinese |
| Pages: | 88 |
| Chinese Keywords: | 高動態範圍, 自動化, 結構光, 立體視覺, 三維點雲 |
| English Keywords: | high dynamic range, automation, structured light, stereo vision, 3D point cloud |
The purpose of this study is to develop an automated high-dynamic-range 3D point-cloud measurement technique for measuring workpieces and building 3D point-cloud data, and finally to evaluate the results against the German VDI guideline to observe the modeling performance gained by adding the automated high-dynamic-range technique. Because 3D point-cloud reconstruction converts 2D image information into 3D point-cloud information, it places strict demands on image quality and is easily affected by the environment and the light source. By using the proposed automated high-dynamic-range (HDR) technique to resolve overexposure and remove human-operator factors, combined with structured light and stereo vision (SV), this study can reconstruct the 3D point cloud of the measured object.

High-dynamic-range imaging works by synthesizing images captured at different exposure times and applying the result to the captured images. Because HDR requires multiple exposure-time parameters, and the choice of those parameters is especially important, this study adds automation: the required exposure times are computed by a program, completely eliminating manual control. This study resolves measurement failures caused by overexposure, removes human-operator factors, and evaluates the results of the automated HDR technique against the German guideline VDI 2634 to observe the modeling performance after the problem is solved. In the evaluation, three sets of experiments were performed, and each set was measured five times. With a working distance of 400 mm, a focal length of 16 mm, a depth of field of 150 mm, and a field of view of 360 mm × 260 mm, the resolution along all of the X, Y, and Z axes reaches 0.1 mm, the accuracy reaches −0.15 mm to 0.1 mm, and the precision reaches 0.08 mm.
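The multi-exposure fusion and automated exposure selection described above can be sketched roughly as follows. This is a minimal illustration assuming a linear camera response and 8-bit intensities; the saturation threshold, halving rule, and all function names are hypothetical stand-ins for the program described in the thesis, not its actual algorithm:

```python
import numpy as np

SAT = 250  # assumed near-saturation threshold on 8-bit intensities

def choose_exposures(capture, t0=1.0, max_steps=6, sat_frac=0.01):
    """Hypothetical automated exposure selection: keep halving the exposure
    time until the fraction of near-saturated pixels is below sat_frac."""
    times = [t0]
    img = capture(t0)
    while (img >= SAT).mean() > sat_frac and len(times) < max_steps:
        times.append(times[-1] / 2.0)
        img = capture(times[-1])
    return times

def fuse_hdr(images, times):
    """Merge exposures assuming a linear response: for each pixel, average
    intensity/exposure over the exposures where it is not saturated."""
    imgs = np.stack([i.astype(np.float64) for i in images])
    t = np.asarray(times, dtype=np.float64).reshape(-1, 1, 1)
    valid = imgs < SAT                      # mask out clipped pixels
    radiance = np.where(valid, imgs / t, 0.0).sum(axis=0)
    counts = np.maximum(valid.sum(axis=0), 1)
    return radiance / counts                # relative scene radiance
```

Given a capture function that clips bright regions, the selected shorter exposures let the merge recover the true relative radiance in pixels that were overexposed at the base exposure.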
The purpose of this research is to develop an "automated high dynamic range 3D point cloud measurement technology" for measuring workpieces and establishing 3D point cloud data, and finally to evaluate the results against the German VDI standard to observe the modeling effectiveness of the added automated high dynamic range technique. Since 3D point cloud technology converts 2D image information into 3D point cloud information, it places special requirements on image quality and is easily affected by the environment and light sources. This research solves the overexposure problem through the proposed automated high dynamic range (HDR) technology and eliminates human factors; combined with structured light and stereo vision (SV) technology, it can reconstruct the 3D point cloud data of the object to be measured.

The principle of HDR is to synthesize images captured with different exposure times and apply this to the captured images. Since HDR requires multiple exposure time parameters, and the choice of exposure time parameters is particularly important, this research adds an automated method that calculates the required exposure times through a program, completely eliminating manual control factors. This research solves the problem of failed measurements due to overexposure and eliminates human operation factors; the results obtained with the automated high dynamic range technique are evaluated against the German standard VDI 2634 to observe the modeling effect after solving the problem. In the evaluation, a total of three sets of experiments were performed, and each set was measured five times. With a working distance of 400 mm, a focal length of 16 mm, a depth of field of 150 mm, and a field of view of 360 mm × 260 mm, the XYZ-axis resolution reaches 0.1 mm, the accuracy reaches −0.15 mm to 0.1 mm, and the precision reaches 0.08 mm.
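The stereo-vision back-projection step can be sketched as follows, assuming a rectified camera pair and the standard pinhole relation Z = f·B/d. The function and parameter names are illustrative, and the calibration values used below are not the thesis's actual system parameters:

```python
import numpy as np

def disparity_to_points(disparity, f_px, baseline_mm, cx, cy):
    """Back-project a disparity map from a rectified stereo pair into a
    3D point cloud: Z = f*B/d, X = (u-cx)*Z/f, Y = (v-cy)*Z/f."""
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = disparity > 0  # zero disparity means no correspondence found
    z = np.zeros_like(disparity, dtype=np.float64)
    z[valid] = f_px * baseline_mm / disparity[valid]
    x = (u - cx) * z / f_px
    y = (v - cy) * z / f_px
    return np.stack([x[valid], y[valid], z[valid]], axis=1)  # N x 3 points
```

The structured-light patterns serve to make the left-right correspondence (and hence the disparity) unambiguous; once disparity is known, this back-projection yields the metric point cloud.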