| 研究生 (Graduate Student): | 連翊展 I-Chan Lien |
|---|---|
| 論文名稱 (Thesis Title): | AILIS: An Adaptive and Iterative Learning Method for Accurate Iris Segmentation |
| 指導教授 (Advisor): | 栗永徽 |
| 口試委員 (Committee Members): | |
| 學位類別 (Degree): | 碩士 Master |
| 系所名稱 (Department): | 資訊電機學院 - 軟體工程研究所 Graduate Institute of Software Engineering |
| 論文出版年 (Publication Year): | 2016 |
| 畢業學年度 (Academic Year): | 104 |
| 語文別 (Language): | 英文 English |
| 論文頁數 (Pages): | 55 |
| 中文關鍵詞 (Chinese Keywords): | 機器學習、虹膜辨識、虹膜分割 (machine learning, iris recognition, iris segmentation) |
| 外文關鍵詞 (Keywords): | machine learning, iris segmentation, iris recognition |
中文摘要 (translated): In an iris recognition system, segmentation is the most critical stage: the quality of the segmentation determines the final recognition rate. Prior research has produced many segmentation algorithms, such as neural networks and the Hough transform, but no algorithm for evaluating segmentation quality, so there has been no objective indicator of whether a segmentation is correct. We therefore developed a method called KIRD, which assigns a numerical quality score to a segmentation and can correctly assess it without human intervention. Building on KIRD, we developed a segmentation algorithm called AILIS, a highly adaptive algorithm that learns across iterations: in each round, AILIS automatically learns from the previous round's results and refines its machine-learning model, producing better segmentations. Experimental results show that AILIS generates high-quality segmentations for 99.39% of the eye images in the ICE iris database (gray-scale images) and achieves a 94.60% success rate on the UBIRIS database (color images); subsequent large-scale iris recognition experiments further verified the effectiveness and adaptability of AILIS.
Iris segmentation is one of the most important pre-processing stages in an iris recognition system: the quality of the segmentation results dictates the recognition performance. In the past, both learning-based methods (for example, neural networks) and non-learning-based methods (for example, the Hough transform) have been proposed for this task. However, no objective, quantitative figure of merit has existed for assessing iris segmentation quality, that is, for judging whether a segmentation hypothesis is accurate; most existing works evaluated segmentation quality by human inspection. In this work, we propose KIRD, a mechanism to fairly judge the correctness of iris segmentation hypotheses. On the foundation of KIRD, we propose AILIS, an adaptive and iterative learning method for iris segmentation. AILIS learns from past experience and automatically builds machine-learning models for segmenting both gray-scale and color iris images. Experimental results show that, without any prior training, AILIS successfully performs iris segmentation on ICE (gray-scale images) and UBIRIS (color images) with accuracy rates of 99.39% and 94.60%, respectively. Large-scale iris recognition experiments based on AILIS segmentation hypotheses also validate its effectiveness compared with a state-of-the-art algorithm.
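The abstract describes AILIS only at a high level: segment, score each hypothesis with KIRD, keep the hypotheses the score accepts, re-learn the model from them, and repeat. The toy Python sketch below illustrates that iterative self-training loop on synthetic data. Everything here is a hypothetical stand-in, not the thesis's actual algorithm: the "images" are lists of pixel intensities, the "model" is a single intensity threshold, and `kird_score` is a simple no-reference plausibility check (iris region of sensible size) in the spirit of KIRD.

```python
import random
import statistics

random.seed(0)

# Toy stand-in data: each "eye image" is 100 pixel intensities in which the
# darker pixels play the role of the iris region. This is an illustrative
# assumption, not a property of the real ICE/UBIRIS images.
def make_image():
    n_iris = random.randint(30, 50)               # iris covers 30-50% of pixels
    iris = [random.uniform(0.0, 0.4) for _ in range(n_iris)]
    background = [random.uniform(0.6, 1.0) for _ in range(100 - n_iris)]
    return iris + background

def segment(image, threshold):
    """Toy 'model': label a pixel as iris when it is darker than a threshold."""
    return [px < threshold for px in image]

def kird_score(mask):
    """Hypothetical no-reference quality score in the spirit of KIRD:
    a hypothesis is deemed plausible when the iris region has a sensible
    relative size (the real KIRD criterion is more sophisticated)."""
    ratio = sum(mask) / len(mask)
    return 1.0 - abs(ratio - 0.4) / 0.4           # peaks when ~40% is iris

def ailis_loop(images, rounds=5, accept=0.2):
    """Iterate: segment, keep hypotheses the quality score accepts, then
    re-learn the model from those self-labeled results (no prior training)."""
    threshold = 0.1                               # deliberately poor start
    for _ in range(rounds):
        hypotheses = [(img, segment(img, threshold)) for img in images]
        accepted = [(img, m) for img, m in hypotheses if kird_score(m) >= accept]
        if not accepted:
            break
        iris_px = [px for img, m in accepted for px, hit in zip(img, m) if hit]
        back_px = [px for img, m in accepted for px, hit in zip(img, m) if not hit]
        if iris_px and back_px:                   # refine the model
            threshold = (statistics.mean(iris_px) + statistics.mean(back_px)) / 2
    return threshold

images = [make_image() for _ in range(200)]
learned = ailis_loop(images)
quality = [kird_score(segment(img, learned)) for img in images]
print(f"learned threshold: {learned:.2f}, "
      f"high-quality segmentations: {sum(q >= 0.7 for q in quality)}/200")
```

On this synthetic data the threshold starts far too low (most iris pixels missed), and the loop pulls it toward the true iris/background boundary within a few rounds, which mirrors the abstract's claim that each iteration learns from the previous round's results to produce better segmentations.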