
Graduate Student: 劉榮勝 (Rong-Sheng Liu)
Thesis Title: 基於深度學習方法之高精確度瞳孔放大片偵測演算法
(Ultra-Accurate Detection of the Existence of Cosmetic Contact Lens for Iris Images based on Deep Learning)
Advisor: 栗永徽 (Yung-Hui Li)
Oral Defense Committee:
Degree: Master
Department: 資訊電機學院 - Executive Master Program of Computer Science & Information Engineering
Year of Publication: 2020
Academic Year of Graduation: 108 (2019-2020)
Language: Chinese
Number of Pages: 54
Chinese Keywords: 瞳孔放大片、深度學習、虹膜分割、虹膜識別
English Keywords: Cosmetic contact lens, Deep learning, Iris segmentation, Iris recognition
  • Abstract (Chinese, translated): In recent years, cosmetic contact lenses have become everyday items for many people, and even daily necessities for fashion-conscious consumers. To meet growing demand, manufacturers offer ever more choices of color, style, and texture, enriching product variety. Because these lenses alter the texture of the iris, they pose a challenge to iris recognition.

    A deep learning approach requires collecting a large amount of data to train the network model and to extract complex rules from the data. In addition, the images must be preprocessed before training, for example by image segmentation, image format conversion, and image-processing techniques that enlarge the dataset, in order to achieve accurate and robust results. This thesis collects samples of cosmetic contact lenses covering 9 brands and 18 styles sold in Taiwan, taken from 101 participants photographed both with and without the lenses. A total of 30,390 images were used in the experiments, and the models trained with deep learning reach a test accuracy above 99%.


    Abstract (English): In recent years, cosmetic contact lenses (CCLs) have become a daily necessity for many people, especially those who care about beauty and fashion. To meet this demand, manufacturers provide more choices of color, style, and texture, enriching product variety. CCLs also pose a challenge to iris recognition, because they change the appearance of the iris texture.

    However, a deep learning method requires a large amount of data for training the network model and extracting rules from the data. In addition, before training a deep learning model, it is better to preprocess the images for the sake of data augmentation, for example by cropping, scaling, and rotating, to achieve higher accuracy and robustness. This thesis collects CCL samples covering 9 brands and 18 styles sold in Taiwan. We invited 101 participants and collected eye images with and without CCLs. The total number of images used in the experiments is 30,390. In the end, we achieve an accuracy higher than 99% using deep-learning-based models.
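    The augmentation step described above (cropping, scaling, and rotating each eye image to enlarge the training set) can be sketched as follows. This is a minimal illustration, not the thesis's actual pipeline: images are plain 2-D lists and the function names are hypothetical; a real implementation would typically use a library such as OpenCV or torchvision.

    ```python
    def crop(img, top, left, h, w):
        """Return the h-by-w window of img starting at (top, left)."""
        return [row[left:left + w] for row in img[top:top + h]]

    def scale(img, new_h, new_w):
        """Nearest-neighbour resize to new_h x new_w."""
        h, w = len(img), len(img[0])
        return [[img[r * h // new_h][c * w // new_w] for c in range(new_w)]
                for r in range(new_h)]

    def rotate90(img):
        """Rotate the image 90 degrees clockwise."""
        return [list(row) for row in zip(*img[::-1])]

    def augment(img):
        """Produce several variants of one eye image to enlarge the dataset."""
        h, w = len(img), len(img[0])
        return [
            img,                                      # original
            crop(img, h // 4, w // 4, h // 2, w // 2),  # central crop
            scale(img, h * 2, w * 2),                 # upscale
            rotate90(img),                            # rotation
        ]
    ```

    Each source image yields four training samples here; in practice one would combine several crop offsets, scales, and rotation angles to multiply the dataset further.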

    Table of Contents:
    Chinese Abstract; English Abstract; Acknowledgements; Table of Contents; List of Figures; List of Tables
    1. Introduction
       1-1 Preface
       1-2 Research Objectives
       1-3 Thesis Organization
    2. Literature Review
       2-1 Deep Learning Networks for Image Classification
           2-1-1 AlexNet
           2-1-2 GoogLeNet
           2-1-3 VGGNet
           2-1-4 ResNet
           2-1-5 DenseNet
           2-1-6 SqueezeNet
           2-1-7 MobileNet and MobileNetV2
       2-2 Introduction to Iris Recognition
    3. Method
       3-1 Method Architecture
       3-2 Data Preprocessing
       3-3 Deep Learning Networks
    4. Experiments
       4-1 Equipment
       4-2 Data Collection
       4-3 Experimental Results
           4-3-1 Detection Results on Raw Images
           4-3-2 Detection Results on Preprocessed Images
           4-3-3 Detection Results on Unseen Images
       4-4 Discussion
    5. Conclusion and Future Work
       5-1 Conclusion
       5-2 Future Work
    6. References

    [1] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-Based Learning Applied to Document Recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, Nov. 1998, doi: 10.1109/5.726791.
    [2] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet Classification with Deep Convolutional Neural Networks," in Advances in Neural Information Processing Systems (NIPS), 2012.
    [3] C. Szegedy et al., "Going Deeper with Convolutions," in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, 2015, pp. 1-9, doi: 10.1109/CVPR.2015.7298594.
    [4] S. Ioffe and C. Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift," arXiv preprint arXiv:1502.03167, 2015.
    [5] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, "Rethinking the Inception Architecture for Computer Vision," in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, 2016, pp. 2818-2826, doi: 10.1109/CVPR.2016.308.
    [6] C. Szegedy, S. Ioffe, and V. Vanhoucke, "Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning," arXiv preprint arXiv:1602.07261, 2016.
    [7] K. Simonyan and A. Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition," arXiv preprint arXiv:1409.1556, 2015.
    [8] VGGNet. Retrieved June 14, 2020, from https://www.itread01.com/content/1568289844.html
    [9] K. He, X. Zhang, S. Ren, and J. Sun, "Deep Residual Learning for Image Recognition," in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770-778.
    [10] G. Huang, Z. Liu, and K. Q. Weinberger, "Densely Connected Convolutional Networks," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 2261-2269.
    [11] F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer, "SqueezeNet: AlexNet-Level Accuracy with 50x Fewer Parameters and <0.5MB Model Size," arXiv preprint arXiv:1602.07360, 2016.
    [12] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications," arXiv preprint arXiv:1704.04861, 2017.
    [13] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, "MobileNetV2: Inverted Residuals and Linear Bottlenecks," in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, 2018, pp. 4510-4520, doi: 10.1109/CVPR.2018.00474.
    [14] MobileNetV2. Retrieved June 14, 2020, from https://medium.com/@chih.sheng.huang821/%E6%B7%B1%E5%BA%A6%E5%AD%B8%E7%BF%92-mobilenet-depthwise-separable-convolution-f1ed016b3467
    [15] J. Daugman, "How Iris Recognition Works," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 21-30, Jan. 2004.
    [16] J. Daugman, "Probing the Uniqueness and Randomness of IrisCodes: Results from 200 Billion Iris Pair Comparisons," Proceedings of the IEEE, vol. 94, no. 11, pp. 1927-1935, Nov. 2006.
    [17] J. Daugman, "New Methods in Iris Recognition," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 37, no. 5, pp. 1167-1175, 2007.
    [18] H. Hofbauer, F. Alonso-Fernandez, J. Bigun, and A. Uhl, "Experimental Analysis Regarding the Influence of Iris Segmentation on the Recognition Rate," IET Biometrics, vol. 5, no. 3, pp. 200-211, Aug. 2016.
    [19] P.-J. Huang, "A Fast Iris Segmentation Algorithm based on Faster R-CNN," https://ndltd.ncl.edu.tw/cgi-bin/gs32/gsweb.cgi/ccd=VOW3dO/record?r1=1&h1=2
    [20] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137-1149, June 2017.
    [21] Y.-H. Li and P.-J. Huang, "An Accurate and Efficient User Authentication Mechanism on Smart Glasses based on Iris Recognition," Mobile Information Systems, vol. 2017, Article ID 1281020, pp. 1-14, July 2017.
    [22] MobileNet. Retrieved June 14, 2020, from https://zhuanlan.zhihu.com/p/54425450
