| Graduate Student: | 鄭柏暐 Po-Wei Cheng |
|---|---|
| Thesis Title: | 基於消費級深度相機之器械追蹤系統開發 (Development of Instrument Tracking System Based on Consumer-grade Depth Camera) |
| Advisor: | 廖昭仰 Chao-Yaug Liao |
| Committee Members: | |
| Degree: | Master |
| Department: | College of Engineering, Department of Mechanical Engineering |
| Year of Publication: | 2021 |
| Graduation Academic Year: | 109 |
| Language: | Chinese |
| Pages: | 75 |
| Keywords (Chinese): | 深度相機, 手術導航系統, 定位系統, RealSense, 點雲 |
| Keywords (English): | Depth camera, Surgical navigation system, Location system, RealSense, Point cloud |
Surgical navigation systems are now widely used in clinical practice. They help surgeons complete operations more safely and accurately, and are well received by both physicians and patients. Because a surgical navigation system requires high-precision equipment and the integration of many related technologies to maintain its accuracy and stability, a complete system is usually expensive. Replacing its hardware with lower-cost alternatives could therefore reduce the cost substantially.

Compared with a traditional camera, a depth camera can acquire not only a color image of the environment but also the depth of objects in its field of view. Depth cameras are now widely used in machine vision and can be mounted on autonomous vehicles, robotic arms, and so on; many manufacturers have also introduced consumer-grade models, which has considerably lowered both the barrier to entry and the cost of using them. Since depth cameras are typically used to sense the distance to surrounding objects, using one to locate the target in place of the expensive sensors in a navigation system could greatly reduce the system's cost.

The purpose of this research is to build an instrument tracking system based on a consumer-grade depth camera. The system combines color images, infrared images, and depth information, and locates the target through image processing and algorithms. To measure the system's error, the depth camera and the target were fixed on a slide rail, and the target was moved a specific distance along the rail. The experiments include measuring the error of a single fitted sphere center, the distance error of the dynamic reference frame, and the angle error of the dynamic reference frame, each evaluated at several distances. The fitted sphere-center error is within 3 mm, the geometric similarity of the dynamic reference frame reaches more than 98%, and the angle error is 2.74 degrees on average.
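The abstract refers to fitting the center of a tracked sphere from the measured point cloud, but the implementation is not reproduced here. As an illustrative sketch only (the function name `fit_sphere` and the NumPy-based approach are assumptions, not the author's code), sphere-center estimation from surface samples is commonly posed as a linear least-squares problem:

```python
import numpy as np

def fit_sphere(points):
    """Linear least-squares sphere fit.

    For a point (x, y, z) on a sphere with center c and radius r:
        x^2 + y^2 + z^2 = 2*c.x*x + 2*c.y*y + 2*c.z*z + (r^2 - |c|^2)
    which is linear in the unknowns (c.x, c.y, c.z, r^2 - |c|^2).

    points: (N, 3) array of surface samples.
    Returns (center, radius).
    """
    pts = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * pts, np.ones((len(pts), 1))])  # design matrix
    f = (pts ** 2).sum(axis=1)                          # squared norms
    sol, *_ = np.linalg.lstsq(A, f, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius
```

With noiseless samples this recovers the sphere exactly; with real depth-camera data the residual of the fit gives a rough quality measure for rejecting bad marker detections.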