| Graduate Student: | 李品翰 Pin-Han Lee |
|---|---|
| Thesis Title: | 自由曲面二維光學量測 Two-Dimensional Optical Measurement for Freeform Surfaces |
| Advisor: | 楊宗勳 Tsung-Hsun Yang |
| Committee Members: | |
| Degree: | Master |
| Department: | Department of Optics and Photonics, College of Science |
| Publication Year: | 2025 |
| Graduation Academic Year: | 113 |
| Language: | Chinese |
| Pages: | 96 |
| Keywords: | camera pose estimation, multi-view magnification analysis, nonlinear least squares, thick-lens model |
This thesis proposes a multi-view camera pose estimation method based on the distribution of image magnification. Combining the magnification at feature intersection points in the image with the geometry of the imaging process, the relative camera pose is estimated by nonlinear least squares. Without relying on a three-dimensional world coordinate system, a rotationally symmetric planar calibration pattern is designed, and the spatial distribution of the magnification curve across the image is used to recover the relative position and orientation between the camera and the calibration plane.
The core of the method uses the object-image distance relation of the thick-lens model to initialize the camera pose, and defines magnification with respect to the image center and the object-space center. A nonlinear error model is then established, and the Levenberg-Marquardt algorithm minimizes the difference between the measured and predicted magnifications; visualizing the loss function further yields a high-precision pose estimate. Compared with conventional PnP methods, the proposed approach improves estimation stability and is more tolerant of aberrations when the imaging model and camera parameters are known.
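As a minimal sketch of the object-image relation described above: for a thick lens, the Gaussian equation holds between the principal planes, so a tilted planar target produces a per-point object distance and hence a magnification that varies across the image. The focal length, distances, and function names below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def magnification(s_o, f):
    """Lateral magnification |m| from the Gaussian relation 1/s_o + 1/s_i = 1/f.

    For a thick lens, s_o and s_i are measured from the front and rear
    principal planes, so the thin-lens formula still applies between them.
    """
    s_i = 1.0 / (1.0 / f - 1.0 / s_o)   # image distance from the rear principal plane
    return s_i / s_o                    # magnitude only; sign conventions omitted

# A planar target tilted by theta makes the object distance, and hence the
# magnification, vary across the target -- the cue the method inverts.
f, z0, theta = 50.0, 500.0, np.deg2rad(10.0)   # illustrative values (mm, rad)
x = np.linspace(-20.0, 20.0, 5)                # lateral target coordinates (mm)
m = magnification(z0 + x * np.tan(theta), f)   # per-point magnification
```

Points on the near side of the tilted plane image with larger magnification than points on the far side, which is what makes the pose recoverable from the magnification distribution alone.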
Experimental results show that the method can recover changes in camera distance and angle across different viewpoints, and that the estimated poses agree with the actual displacement trends, demonstrating good adaptability and suitability for constructing multi-view imaging geometry.
This thesis proposes a multi-view camera pose estimation method based on the distribution of image magnification, integrating thick lens imaging geometry and variation in magnification to estimate camera positions and orientations. Starting from a coarse 3D structure assumption, we construct a set of magnification-based geometric constraints and formulate an optimization process to infer camera pose from image magnification distributions and known projection geometry.
The core of the method uses the thick lens model to describe the object-image distance relationship and the camera pose, analyzing the magnification of each point relative to the reference center to build a constraint model. The Levenberg–Marquardt algorithm then iteratively refines the pose by minimizing magnification errors, updating the predicted magnification curve at each iteration. To improve the robustness of the initial estimate, the PnP method provides an initial pose from known 2D–3D correspondences.
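The refinement step above can be sketched as a small nonlinear least-squares problem: predict per-point magnifications from a pose, form residuals against the measured magnifications, and let Levenberg-Marquardt refine the pose from a coarse initial guess. This is a hedged illustration, not the thesis implementation: the two-parameter pose `(z0, theta)`, the focal length, and the synthetic "measured" data are all assumptions made for a self-contained example (the thesis estimates the full relative pose).

```python
import numpy as np
from scipy.optimize import least_squares

def model_magnification(pose, x, f):
    """Predicted per-point magnification for a tilted planar target.

    pose = (z0, theta): stand-off distance and tilt about the vertical
    axis -- a hypothetical 2-parameter pose for illustration only.
    """
    z0, theta = pose
    s_o = z0 + x * np.tan(theta)        # per-point object distance
    return f / (s_o - f)                # Gaussian magnification m = f / (s_o - f)

f = 50.0                                # effective focal length (mm), assumed
x = np.linspace(-20.0, 20.0, 9)         # lateral target coordinates (mm)
true_pose = (500.0, np.deg2rad(10.0))
m_meas = model_magnification(true_pose, x, f)   # synthetic "measured" data

# Levenberg-Marquardt refinement: minimize the measured-vs-predicted
# magnification residuals, starting from a coarse pose (e.g. from PnP).
fit = least_squares(
    lambda p: model_magnification(p, x, f) - m_meas,
    x0=(450.0, 0.0),
    method="lm",
)
z0_est, theta_est = fit.x
```

With noiseless synthetic data the residuals vanish at the true pose, so the solver recovers `z0` and `theta` to numerical precision; in practice the residuals would come from magnifications measured at the detected pattern intersections.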
Experimental results show that the method can accurately estimate the relative pose between different camera viewpoints and significantly reduce estimation errors. The optimization process is robust and efficient, making it suitable for multi-view structure-from-motion tasks and dynamic scene applications.