Graduate Student: Ying-Lun Hsing (刑映綸)
Thesis Title: Sparse Dictionary Learning on SVD-Refined Data for Single Image Inpainting (基於SVD萃取資料的稀疏字典學習用於單一影像填補)
Advisor: Suh-Yuh Yang (楊肅煜)
Committee Members:
Degree: Master
Department: Department of Mathematics, College of Science
Publication Year: 2025
Graduation Academic Year: 113 (ROC calendar; 2024–2025)
Language: English
Number of Pages: 42
Keywords: single image inpainting, sparse representation, dictionary learning, singular value decomposition, Poisson image editing

    Single image inpainting refers to the process of filling in missing or corrupted regions using only the information available within the same image. The method proposed in this thesis is built on a sparse representation and dictionary learning framework, in which the dictionary plays a critical role in determining overall inpainting performance; however, training an over-complete dictionary is time-consuming. Previous studies have shown that splitting the training task into several sub-dictionaries and merging them can reduce training time, but the process remains computationally expensive. We observe that a major contributor to this cost is the large volume of data in each sub-training set, together with the significant redundancy among those data. Based on this observation, we propose an improved strategy: first apply singular value decomposition to extract the essential features of each sub-training set individually, then merge the reduced sets and train a single complete dictionary. This approach significantly reduces data redundancy while preserving the important structural information. In addition, to ensure smoother transitions at the boundaries of the inpainted regions, we incorporate Poisson image editing in the final stage of inpainting, which enhances edge consistency and improves texture detail. Experimental results on various test cases demonstrate that the proposed method not only reconstructs richer textures than previous approaches but also significantly reduces the required computation time.
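
The SVD-based refinement described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the thesis's exact procedure: the patch dimensions, the energy threshold, and the function name `svd_refine` are all assumptions made for the example.

```python
import numpy as np

def svd_refine(patches, energy=0.95):
    """Replace a large, highly redundant sub-training set with a few
    SVD-derived representatives that retain `energy` of the spectral mass.

    patches : (d, n) array, each column a vectorized image patch.
    Returns : (d, k) array with k typically far smaller than n.
    """
    U, s, _ = np.linalg.svd(patches, full_matrices=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(cumulative, energy)) + 1
    # Scale the leading left singular vectors by their singular values so
    # the representatives reflect how strongly each direction is expressed.
    return U[:, :k] * s[:k]

rng = np.random.default_rng(0)
# A synthetic sub-training set: 500 patches that are noisy mixtures of
# only 3 underlying structures, hence highly redundant.
basis = rng.standard_normal((64, 3))
group = basis @ rng.standard_normal((3, 500)) + 0.01 * rng.standard_normal((64, 500))
refined = svd_refine(group)
print(group.shape, "->", refined.shape)  # far fewer columns to train on
```

The refined sets from all groups would then be concatenated column-wise and passed to a single dictionary-training pass, which corresponds to the merging step the abstract describes.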

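The Poisson image editing step applied at the end of the inpainting stage (Pérez et al. [16]) can be sketched as follows: solve the discrete Poisson equation inside the filled region, taking guidance gradients from the inpainted content and Dirichlet boundary values from the surrounding image. The Gauss–Seidel solver, array shapes, and function names here are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def poisson_blend(target, source, mask, n_iter=2000):
    """Gauss-Seidel solve of the discrete Poisson equation: inside the
    masked region, match the source's gradients (guidance field) while
    agreeing with the target on the region's boundary. Assumes the mask
    does not touch the image border."""
    f = target.astype(float).copy()
    ys, xs = np.nonzero(mask)
    for _ in range(n_iter):
        for y, x in zip(ys, xs):
            # Discrete Laplacian of the source acts as the guidance term.
            lap = (4 * source[y, x] - source[y-1, x] - source[y+1, x]
                   - source[y, x-1] - source[y, x+1])
            f[y, x] = (f[y-1, x] + f[y+1, x] + f[y, x-1] + f[y, x+1] + lap) / 4.0
    return f

H, W = 8, 8
ramp = np.tile(np.arange(W, dtype=float), (H, 1))  # smooth horizontal ramp
damaged = ramp.copy()
mask = np.zeros((H, W), dtype=bool)
mask[2:6, 2:6] = True
damaged[mask] = 0.0  # a crudely filled region with a visible seam
seamless = poisson_blend(damaged, np.zeros((H, W)), mask)
# With zero guidance the solver performs harmonic interpolation of the
# boundary values; since a linear ramp is harmonic, the seam vanishes
# and the ramp is recovered exactly.
```

In the thesis's setting the guidance would come from the sparse-coding reconstruction rather than a zero field, so texture inside the region is kept while its boundary is forced to agree with the known pixels.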
    Contents
    1 Introduction ... 1
    2 Sparse representation and dictionary learning ... 5
    2.1 Sparse representation ... 5
    2.2 Alternating direction method of multipliers ... 6
    2.3 Sparse dictionary learning ... 11
    2.4 Alternating direction method-based dictionary update ... 13
    3 Poisson image editing ... 16
    3.1 Poisson editing theory ... 16
    3.2 Numerical solution ... 17
    4 Single image inpainting ... 20
    4.1 Constructing a training set ... 20
    4.1.1 Pixel encoding ... 21
    4.1.2 Image patch grouping ... 21
    4.1.3 Performing SVD refinement on each group ... 23
    4.2 Constructing a dictionary ... 24
    4.3 Image inpainting ... 24
    4.3.1 Inpainting order ... 25
    4.3.2 Image inpainting method ... 26
    4.4 Poisson image editing ... 27
    5 Numerical experiments ... 28
    6 Summary and conclusion ... 31
    References ... 33
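
The per-patch inpainting step outlined in Chapter 2 and Section 4.3 can be sketched as follows. This is only an illustration: it uses a greedy orthogonal matching pursuit for the sparse coding and a random dictionary, whereas the thesis trains its dictionary and solves the sparse problems with ADMM-based methods; all names and sizes here are assumptions.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: select up to k atoms and
    refit the coefficients by least squares after each selection."""
    residual = y.astype(float).copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

def inpaint_patch(D, patch, known, k=3):
    """Sparse-code a patch from its known pixels only, then use the full
    dictionary atoms to predict the missing pixels."""
    Dk = D[known, :]                      # dictionary rows at known pixels
    norms = np.linalg.norm(Dk, axis=0)
    norms[norms == 0] = 1.0               # guard against degenerate columns
    x = omp(Dk / norms, patch[known], k)  # code w.r.t. normalized atoms
    full = D @ (x / norms)                # reconstruct the complete patch
    out = patch.astype(float).copy()
    out[~known] = full[~known]            # fill only the missing pixels
    return out

rng = np.random.default_rng(1)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)                # unit-norm atoms
true_patch = 2.0 * D[:, 5] - 1.5 * D[:, 20]   # a genuinely 2-sparse patch
known = np.ones(16, dtype=bool)
known[[3, 7, 11, 12]] = False                 # four missing pixels
restored = inpaint_patch(D, true_patch, known)
# Known pixels are left untouched; missing ones are predicted from the
# sparse code, which typically works well when the patch is sparse in D.
```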

    [1] M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester, Image inpainting, Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, 2000, pp. 417–424.
    [2] D. J. Barrientos Rojas, B. J. T. Fernandes, and S. M. M. Fernandes, A review on image inpainting techniques and datasets, Proceedings of the 33rd SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), 2020, pp. 240–247.
    [3] J. Shen and T. F. Chan, Mathematical models for local nontexture inpaintings, SIAM Journal on Applied Mathematics, 62 (2002), pp. 1019–1043.
    [4] T. F. Chan and J. Shen, Nontexture inpainting by curvature-driven diffusions, Journal of Visual Communication and Image Representation, 12 (2001), pp. 436–449.
    [5] A. Telea, An image inpainting technique based on the fast marching method, Journal of Graphics Tools, 9 (2004), pp. 23–34.
    [6] A. G. Patel, D. Prajapati, and P. Patel, Improved robust algorithm for exemplar based image inpainting, International Journal of Computer Applications, 101 (2014), pp. 23–27.
    [7] M. M. Hadhoud, K. Moustafa, and S. Shenoda, Digital images inpainting using modified convolution based method, International Journal of Signal Processing, Image Processing and Pattern Recognition, 1 (2008), pp. 1–10.
    [8] D. Pathak, P. Krähenbühl, J. Donahue, T. Darrell, and A. A. Efros, Context encoders: Feature learning by inpainting, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 2536–2544.
    [9] S. Iizuka, E. Simo-Serra, and H. Ishikawa, Globally and locally consistent image completion, ACM Transactions on Graphics, 36 (2017), Article 107.
    [10] B. Shen, W. Hu, Y. Zhang, and Y. J. Zhang, Image inpainting via sparse representation, Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2009, pp. 697–700.
    [11] R. Rubinstein, M. Zibulevsky, and M. Elad, Double sparsity: Learning sparse dictionaries for sparse signal approximation, IEEE Transactions on Signal Processing, 58 (2010), pp. 1553–1564.
    [12] C.-C. Tsai, A sparse dictionary learning-based method for single image inpainting, Master's thesis, National Central University, Taiwan, 2022.
    [13] A. Criminisi, P. Perez, and K. Toyama, Region filling and object removal by exemplar-based image inpainting, IEEE Transactions on Image Processing, 13 (2004), pp. 1200–1212.
    [14] Q. Fan, H. Liu, Z. Fu, and X. Li, Exemplar-based image inpainting based on pixel inhomogeneity factor, Proceedings of the Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), 2017, pp. 1164–1168.
    [15] C. Li, H. Chen, X. Han, X. Pan, and D. Niu, An improved Criminisi method for image inpainting, Journal of Physics: Conference Series, 2022.
    [16] P. Pérez, M. Gangnet, and A. Blake, Poisson image editing, ACM SIGGRAPH 2003 Papers, 2003, pp. 313–318.
    [17] B. Olshausen and D. Field, Emergence of simple-cell receptive field properties by learning a sparse code for natural images, Nature, 381 (1996), pp. 607–609.
    [18] D. Donoho, For most large underdetermined systems of linear equations the minimal ℓ1-norm solution is also the sparsest solution, Communications on Pure and Applied Mathematics, 59 (2006), pp. 797–829.
    [19] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, Distributed optimization and statistical learning via the alternating direction method of multipliers, Foundations and Trends in Machine Learning, 3 (2010), pp. 1–122.
    [20] R. Rubinstein, A. M. Bruckstein, and M. Elad, Dictionaries for sparse representation modeling, Proceedings of the IEEE, 98 (2010), pp. 1045–1057.
    [21] J. M. Di Martino, G. Facciolo, and E. Meinhardt-Llopis, Poisson image editing, Image Processing On Line, 6 (2016), pp. 300–325.
