| Graduate Student: | Kuan-Yu Lee (李冠佑) |
|---|---|
| Thesis Title: | A Comparative Study of the First-Order Gradient Variational Image Fusion Model (一階梯度變分影像融合模型的比較研究) |
| Advisor: | Ching-Hsiao Cheng (鄭經斅) |
| Oral Defense Committee: | |
| Degree: | Master |
| Department: | College of Science - Department of Mathematics |
| Year of Publication: | 2025 |
| Graduation Academic Year: | 113 |
| Language: | Chinese |
| Number of Pages: | 34 |
| Chinese Keywords: | 多焦影像融合, 變分影像融合模型, 分裂Bregman迭代法, 引導濾波器, 雙邊濾波器, U形卷積神經網路 |
| English Keywords: | Multi-focus image fusion, variational image fusion model, split Bregman iteration, guided filter, bilateral filter, U-Net |
Multi-focus image fusion is a widely studied technique in image processing. It aims to combine multiple source images with different focal depths into a single output image that retains the sharp regions of each input, so that complete and sharp visual information is obtained. This thesis focuses on the first-order gradient-based variational image fusion model and explores several strategies for selecting gradient features from the input images. Representative gradient structures are extracted under different selection criteria and integrated into a variational framework, which is solved with the split Bregman iteration method. The gradient-selection strategies are compared within this variational model, and the performance of each is evaluated through objective quality metrics and visual inspection. Experimental results show that appropriate gradient selection, combined with the variational fusion approach, yields improved visual clarity and more natural detail preservation in multi-focus image fusion than traditional fusion methods.