| Graduate Student: | 鄭元豪 Yuan-Hao JHENG |
|---|---|
| Thesis Title: | 基於深度學習之3D醫療護具特徵再構建與變形 (Deep-Learning-Based Feature Reconstruction and Deformation of 3D Medical Protectors) |
| Advisor: | 王文俊 Wen-June Wang |
| Committee Members: | |
| Degree: | 碩士 Master |
| Department: | 資訊電機學院 電機工程學系 Department of Electrical Engineering, College of Electrical Engineering and Computer Science |
| Publication Year: | 2019 |
| Academic Year of Graduation: | 107 |
| Language: | Chinese |
| Pages: | 67 |
| Chinese Keywords: | 再構建與變形、3D醫療護具、深度學習、點雲、自編碼網路 |
| English Keywords: | reconstruction and deformation, 3D medical protector, deep learning, point cloud, AutoEncoder |
This thesis designs a deep-learning network architecture for the reconstruction and deformation of 3D medical protectors, building a corresponding protector for each of three conditions: for the hand, de Quervain's syndrome and carpal tunnel syndrome; for the foot, corrective insoles. At present, 3D medical protectors are drawn manually to match each patient's individual hand or foot size, which costs considerable time and labor. We therefore train an AutoEncoder network that automatically reconstructs a 3D medical protector matching the size of the input data, eliminating the intermediate manual drawing step and achieving accurate, efficient production of 3D medical protectors.
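The AutoEncoder described above can be sketched minimally as a PointNet-style encoder (a shared per-point MLP followed by max pooling, which makes the latent code independent of point ordering) and a fully connected decoder that emits a fixed-size point cloud. The class name, layer sizes, and latent dimension below are illustrative assumptions, not the network actually used in the thesis, and the weights are untrained:

```python
import numpy as np

rng = np.random.default_rng(0)

class PointCloudAutoEncoder:
    """Minimal PointNet-style autoencoder forward pass (untrained weights).

    Encoder: a shared per-point MLP followed by max pooling yields a
    latent code; decoder: a fully connected MLP emits n_points * 3
    coordinates. All sizes here are illustrative, not the thesis's."""

    def __init__(self, n_points=2048, latent_dim=128):
        self.n_points = n_points
        self.W_enc = rng.standard_normal((3, 256)) * 0.1        # shared per-point layer
        self.W_lat = rng.standard_normal((256, latent_dim)) * 0.1
        self.W_dec1 = rng.standard_normal((latent_dim, 256)) * 0.1
        self.W_dec2 = rng.standard_normal((256, n_points * 3)) * 0.1

    def encode(self, points):
        """points: (N, 3) -> latent code (latent_dim,)."""
        h = np.maximum(points @ self.W_enc, 0)                  # per-point MLP + ReLU
        return np.maximum(h @ self.W_lat, 0).max(axis=0)        # max pool over points

    def decode(self, z):
        """latent code -> reconstructed point cloud (n_points, 3)."""
        h = np.maximum(z @ self.W_dec1, 0)
        return (h @ self.W_dec2).reshape(self.n_points, 3)

    def forward(self, points):
        return self.decode(self.encode(points))
```

Because the encoder pools with a symmetric max over points, permuting the input point cloud leaves the latent code unchanged, which is what lets the network consume unordered scan data.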
We take 3D scans of our own hands and feet as training data and manually draw the corresponding 3D medical protectors as the training ground truth. Points are then sampled uniformly over each surface so that both the training data and the ground truth enter the AutoEncoder as point clouds. During encoding and decoding the network learns the principal features of the intermediate latent code, and as the number of training steps increases, the decoder's reconstructions approach the ground truth ever more closely. After training, the learned weights are retained. We then scale and rotate the 3D scans of our own hands and feet to form test data, which is likewise fed to the trained AutoEncoder as point clouds; using the trained weights, the network reconstructs and deforms a 3D medical protector, outputting a point cloud that matches the size of the test data. To assess the quality of the reconstructed output, we evaluate it with two metrics, MMD-CD and JSD. Finally, the point-cloud protector is converted back to a surface representation and printed with a 3D printer.
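The uniform surface-sampling step mentioned above is commonly done by picking triangles with probability proportional to their area and drawing uniform barycentric coordinates inside each chosen triangle. A sketch under that assumption (the function name and point count are hypothetical, not from the thesis):

```python
import numpy as np

def sample_surface(vertices, faces, n_points=2048):
    """Area-weighted uniform sampling of points on a triangle mesh.

    vertices: (V, 3) float array; faces: (F, 3) int array of vertex
    indices; returns (n_points, 3) points on the surface."""
    v0 = vertices[faces[:, 0]]
    v1 = vertices[faces[:, 1]]
    v2 = vertices[faces[:, 2]]
    # Triangle areas via the cross product
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    probs = areas / areas.sum()
    # Pick triangles in proportion to their area
    idx = np.random.choice(len(faces), size=n_points, p=probs)
    # Uniform barycentric coordinates inside each chosen triangle
    r1 = np.sqrt(np.random.rand(n_points, 1))
    r2 = np.random.rand(n_points, 1)
    return (1 - r1) * v0[idx] + r1 * (1 - r2) * v1[idx] + r1 * r2 * v2[idx]
```

The square root on `r1` is what makes the distribution uniform over each triangle's area rather than clustered near one vertex.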
The purpose of this thesis is to design a deep-learning network architecture to reconstruct and deform 3D medical protectors. Three types of protector are targeted: one for de Quervain's syndrome and one for carpal tunnel syndrome for the hands, and corrective insoles for the feet. Traditionally, designers draw each protector manually, which takes a great deal of time. We therefore train an AutoEncoder network that reconstructs a 3D medical protector automatically so that it matches the size of the input data. This reduces the cost in time and labor while achieving accurate and efficient production of 3D medical protectors.
First, we use a 3D scanner to collect data of our own hands and feet as training data; the corresponding protectors are then built manually and serve as the training ground truth. Points are sampled uniformly from the surfaces of the training data and ground truth, which are then fed into the AutoEncoder architecture as point clouds. The network learns the main features of the latent code during the encoding and decoding processes. As the training steps increase, the decoder's reconstructions come closer and closer to the ground truth. When training is complete, the trained weights are saved. We then zoom and rotate the 3D scans of our hands and feet to obtain verification data, which is likewise fed into the trained AutoEncoder. The network reconstructs a 3D medical protector that matches the size of the verification data. To evaluate the experimental results quantitatively, we apply the MMD-CD and JSD verification metrics. Finally, the resulting 3D medical protector is printed with a 3D printer.
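The MMD-CD metric used above builds on the Chamfer distance between point clouds: for each reference cloud, the minimum Chamfer distance to any generated cloud is taken, and these minima are averaged. A brute-force sketch of both quantities (an O(NM) computation, fine for small clouds; the thesis's exact implementation may differ):

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point clouds a: (N, 3), b: (M, 3).

    Sum of the mean squared nearest-neighbor distance in each direction."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1) ** 2
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def mmd_cd(generated, references):
    """MMD-CD: for each reference cloud, the Chamfer distance to its
    nearest generated cloud, averaged over the reference set."""
    return np.mean([min(chamfer_distance(g, r) for g in generated)
                    for r in references])
```

A low MMD-CD means every ground-truth protector shape has some network output close to it; it is zero only when each reference is matched exactly by a generated cloud.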