| Graduate Student: | Diannata Rahman Yuliansyah |
|---|---|
| Thesis Title: | Diffuse Optical Imaging using Deep Convolutional Neural Networks |
| Advisor: | Min-Chun Pan |
| Degree: | Master |
| Department: | Department of Mechanical Engineering, College of Engineering |
| Year of Publication: | 2020 |
| Academic Year: | 108 |
| Language: | English |
| Pages: | 48 |
| Keywords: | Diffuse Optical Imaging, deep convolutional neural networks, Tikhonov regularization |
The purpose of this study is to develop a deep learning algorithm as an alternative to the existing Tikhonov regularization method. In this study, a deep convolutional neural network model for diffuse optical imaging has been developed. We prepared a training dataset of 10,000 samples covering differently designated phantom cases, whose parameters were specified from various properties. For each sample, the input data take the form of 16×15×2 floating-point values (16 source/detector locations), namely the log-amplitude and log-phase values. The output data take the form of a 64×64 rectangular grid, one grid each for the absorption and scattering coefficients; these values are interpolated from the original data on 3169 nodes. The test dataset comprises 10 samples chosen from experimental data. The model architecture combines several ideas. We use a transformation from the sensor domain to the image domain, together with the concept of an encoder, which learns a compressed representation of the inputs. After the inputs are compressed and transformed into the image domain, a U-net with skip connections extracts the features and produces the contrast image. The output images are obtained by multiplying the contrast images by the background coefficients.
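The tensor shapes and data flow described above can be sketched as follows. This is a minimal NumPy illustration of the shape pipeline only, not the thesis's actual network: the latent size, weight initialization, and background coefficient values are hypothetical, and random dense layers stand in for the trained encoder and U-net stages.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sensor-domain input: 16 source/detector locations x 15 readings x 2 channels
# (log amplitude, log phase); sizes taken from the abstract.
x = rng.standard_normal((16, 15, 2))

# Encoder: learn a compressed representation of the inputs.
# The latent size 256 is a hypothetical choice for illustration.
latent_dim = 256
W_enc = rng.standard_normal((x.size, latent_dim)) * 0.01
z = np.maximum(W_enc.T @ x.ravel(), 0.0)  # ReLU-encoded latent vector

# Sensor-to-image domain transform: map the latent vector onto a
# 64x64 grid with one channel per optical coefficient. In the thesis
# this image then passes through a U-net with skip connections; a
# single random dense layer stands in for that stage here.
W_dec = rng.standard_normal((latent_dim, 64 * 64 * 2)) * 0.01
contrast = (W_dec.T @ z).reshape(64, 64, 2)

# Output image = contrast image x background coefficients
# (background values below are hypothetical placeholders).
mu_a_bg, mu_s_bg = 0.006, 0.6
output = contrast * np.array([mu_a_bg, mu_s_bg])
print(output.shape)  # (64, 64, 2)
```

The final multiplication broadcasts the two background coefficients over the last axis, so each channel of the 64×64 contrast image is scaled by its own coefficient, matching the absorption/scattering output pair the abstract describes.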
For the training process, we use a custom loss function, namely the sum of the weighted MSEs of the contrast-image and background-coefficient outputs. We use the Adam optimizer with β1 = 0.5, a learning rate of 0.0002, and a batch size of 32. The model has a total of 6,588,608 trainable parameters and was trained for 200 epochs in 21.6 hours. By comparison, the Tikhonov regularization method requires an average of 154 seconds of computation per sample in the training dataset. The training loss drops quickly after only a few iterations, so the deep learning architecture is considered suitable. We chose the weights from the seventh epoch, since these enable the model to predict the unseen experimental data. Further generalization is needed so that the model does not overfit and its performance can be enhanced. From the results, we conclude that the proposed model is a feasible alternative to the Tikhonov regularization method, as it succeeds in localizing the inclusions.
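The custom loss described above can be written compactly. The abstract states only that the loss is a weighted sum of the two MSE terms; the weight values and the function name below are hypothetical placeholders for illustration.

```python
import numpy as np

def weighted_loss(contrast_pred, contrast_true, bg_pred, bg_true,
                  w_contrast=1.0, w_bg=1.0):
    """Sum of weighted MSEs over the two model outputs.

    contrast_*: 64x64x2 contrast images; bg_*: background coefficients.
    The weights w_contrast and w_bg are hypothetical; the thesis states
    only that the two MSE terms are weighted and summed.
    """
    mse_contrast = np.mean((contrast_pred - contrast_true) ** 2)
    mse_bg = np.mean((bg_pred - bg_true) ** 2)
    return w_contrast * mse_contrast + w_bg * mse_bg
```

In the training setup quoted above, this scalar loss would be minimized with the Adam optimizer (β1 = 0.5, learning rate 0.0002) over mini-batches of 32 samples.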