| Student: | 姜智桓 (Zhi-Huan Jiang) |
|---|---|
| Thesis Title: | Automated Cerebral Cavernous Malformation Segmentation and Quantification Using 3D Multi-scale Convolutional Neural Networks (基於三維多尺度卷積神經網路自動分割與量化腦海綿狀血管瘤) |
| Advisor: | 蔡章仁 (Jang-Zern Tsai) |
| Oral Defense Committee: | |
| Degree: | Master |
| Department: | Department of Electrical Engineering, College of Electrical Engineering and Computer Science |
| Year of Publication: | 2023 |
| Academic Year: | 111 |
| Language: | English |
| Pages: | 87 |
| Chinese Keywords: | 腦海綿狀血管瘤, 磁振造影, 深度學習, 自動分割, 3D卷積神經網路 |
| English Keywords: | Cerebral cavernous malformation, Magnetic resonance imaging, Deep learning, Segmentation, 3D convolutional neural network |
Cerebral cavernous malformations (CCM) are a type of vascular lesion in the brain, composed of benign, abnormal blood vessels that balloon into a cluster at a particular location. On T2-weighted images, the lesion shows a hypointense (dark) rim, and the lesion body may take on a multicystic, popcorn-like appearance due to recurrent hemorrhage. At present, the diagnosis of CCM relies mainly on physicians' visual interpretation and manual delineation; visual reading is easily affected by the surrounding environment and visual fatigue, and manual delineation is time-consuming and laborious, so an objective tool to improve diagnostic accuracy and efficiency is needed. This thesis proposes a deep learning approach for automatic segmentation and quantification of CCM on T2-weighted images. First, a Mask Region-based Convolutional Neural Network (Mask R-CNN) is used to extract the brain parenchyma from the T2-weighted images, removing the skull, scalp, and background noise so that CCM segmentation can be performed more efficiently within the brain region. The images then undergo preprocessing steps including intensity normalization, voxel-size resampling, and data augmentation. Finally, the DeepMedic multi-scale 3D convolutional neural network is used to segment and quantify CCM within the brain region. The data used in this study consist of 192 T2-weighted images from Taipei Veterans General Hospital, randomly divided into a training set (3/5), a validation set (1/5), and a test set (1/5). On the test set, the trained model for automatic CCM segmentation achieved an average Dice coefficient of 0.736, precision of 0.807, and recall of 0.729. These results demonstrate the effectiveness of the proposed deep learning method for automatic CCM segmentation, and the developed system provides an objective tool to improve the accuracy and efficiency of CCM diagnosis.
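The intensity normalization and voxel-size resampling steps mentioned in the abstract can be sketched roughly as follows; the z-score scheme, the nearest-neighbour resampling, and the function names are illustrative assumptions, not the thesis's actual pipeline:

```python
import numpy as np

def zscore_normalize(volume, mask=None):
    """Z-score intensity normalization, optionally restricted to a brain mask."""
    region = volume[mask] if mask is not None else volume
    mu, sigma = region.mean(), region.std()
    return (volume - mu) / (sigma + 1e-8)

def resample_nearest(volume, old_spacing, new_spacing):
    """Nearest-neighbour resampling of a 3D volume to a target voxel spacing."""
    old_spacing = np.asarray(old_spacing, dtype=float)
    new_spacing = np.asarray(new_spacing, dtype=float)
    # New grid size so that physical extent is (approximately) preserved.
    new_shape = np.round(np.array(volume.shape) * old_spacing / new_spacing).astype(int)
    # For each axis, pick the nearest source index for every target position.
    idx = [np.clip(np.round(np.linspace(0, s - 1, n)).astype(int), 0, s - 1)
           for s, n in zip(volume.shape, new_shape)]
    return volume[np.ix_(*idx)]
```

In practice a medical-imaging library (e.g. SimpleITK) with higher-order interpolation would likely be used for the resampling; the sketch only illustrates the idea of mapping every volume onto a common intensity scale and voxel grid before training.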
Cerebral cavernous malformations (CCM) are vascular abnormalities in the brain characterized by benign clusters of abnormal blood vessels. Magnetic resonance imaging (MRI) is the diagnostic tool physicians use to detect CCM and assess its size. In T2-weighted (T2W) images, the lesion may show a hypointense (dark) rim, and the lesion body may appear as a multicystic, popcorn-like structure, possibly due to recurrent hemorrhage. Currently, the diagnosis of CCM relies heavily on visual interpretation and manual delineation by physicians. These methods are subjective, susceptible to environmental factors and visual fatigue, and time-consuming, so there is a need for an objective tool to improve the accuracy and efficiency of diagnosis. To address these challenges, we propose a deep learning-based approach for automated segmentation and quantification of CCM on T2W images. First, a Mask Region-based Convolutional Neural Network (Mask R-CNN) model is employed to extract the brain region from the T2W images, removing the skull, scalp, and background noise to improve segmentation efficiency within the brain region. The images are then subjected to preprocessing steps including intensity normalization, voxel-size resampling, and data augmentation. Finally, the DeepMedic multi-scale 3D convolutional neural network (CNN) is used to perform CCM segmentation and quantification within the extracted brain region. The dataset used in this study consists of 192 T2W images from Taipei Veterans General Hospital, randomly divided into training (3/5), validation (1/5), and testing (1/5) sets. The trained model achieved the following evaluation metrics on the testing set: an average Dice coefficient of 0.736, precision of 0.807, and recall of 0.729. These results demonstrate the effectiveness of the proposed deep learning approach for automated CCM segmentation. The developed system provides an objective tool to improve the accuracy and efficiency of CCM diagnosis.
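The reported evaluation metrics (Dice coefficient, precision, recall) can all be derived from voxel-wise overlap counts between the predicted and ground-truth binary masks. The following is a minimal sketch with an assumed function name, not the thesis's actual evaluation code:

```python
import numpy as np

def dice_precision_recall(pred, truth):
    """Voxel-wise Dice coefficient, precision, and recall for binary masks."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    tp = np.logical_and(pred, truth).sum()    # true positives
    fp = np.logical_and(pred, ~truth).sum()   # false positives
    fn = np.logical_and(~pred, truth).sum()   # false negatives
    dice = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return dice, precision, recall
```

With a perfect prediction all three values are 1.0; the reported averages (Dice 0.736, precision 0.807, recall 0.729) would be such per-case scores averaged over the test set.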