| Student: | 盧俊良 Jyun-Liang Lu |
|---|---|
| Thesis title: | 基於光線與臉部表情變化下之人臉辨識 (Face Recognition Under Illumination and Facial Expression Variation) |
| Advisor: | 范國清 Kuo-Chin Fan |
| Degree: | Master |
| Department: | College of Electrical Engineering and Computer Science — Department of Computer Science & Information Engineering |
| Academic year of graduation: | 97 (2008) |
| Language: | Chinese |
| Pages: | 72 |
| Keywords (Chinese): | face recognition, illumination variation, expression variation |
| Keywords (English): | face recognition, illumination, facial expression |
A reliable and efficient face recognition system must overcome many problems, among them variations in illumination, facial expression, and head pose in the face images, all of which degrade the recognition rate. The system should therefore be highly robust against such image variations. Moreover, if the architecture is to be applied to real-time recognition, its computation must be fast and efficient.

With these goals in mind, this thesis proposes a framework that simultaneously overcomes illumination and facial expression variations in images, achieving the best recognition rate with the fewest training images and low computation time. The system first applies the Retinex algorithm to reduce the influence of illumination, then uses the Active Appearance Model (AAM) to extract facial-component features and build a component-based recognition system. A Support Vector Machine (SVM) model for expression classification is then trained on mouth features, and according to the SVM's decision, the weights of the features affected by expression are attenuated, reducing the impact of expression variation on the recognition rate.

Experimental results show that, with only a single training image per person, the proposed framework overcomes illumination and expression variations and improves the system's recognition rate.
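As a rough illustration of the illumination-normalization step (a minimal sketch, not the thesis's exact implementation — the smoothing scale `sigma` and the single-scale variant are assumptions), a single-scale Retinex can be written as the log of the image minus the log of a Gaussian estimate of the illumination:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma=15.0):
    """Single-scale Retinex: log(image) minus log of a Gaussian-smoothed
    illumination estimate. The result approximates the reflectance
    component, which is largely invariant to smooth lighting changes."""
    img = image.astype(np.float64) + 1.0  # offset to avoid log(0)
    illumination = gaussian_filter(img, sigma=sigma)
    return np.log(img) - np.log(illumination)
```

On a uniformly lit region the smoothed estimate equals the image itself, so the output is near zero; shadows and lighting gradients are similarly suppressed while local facial detail is kept.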
Most face recognition methods assume either a constant lighting condition or natural facial expressions, and hence cannot deal with both kinds of variation simultaneously. This constraint has to be alleviated in a reliable and practical face recognition system.
In order to resolve the aforementioned problem, we present a component-based face recognition system that handles both illumination and facial expression variations using only one training sample image per class. In our work, the Retinex algorithm is first adopted to decrease the influence of illumination variation. Then, an active appearance model (AAM) is employed to extract the facial features on which the proposed recognition system is built. Next, a support vector machine (SVM) is utilized to distinguish variations of facial expression from the mouth features. To make the system insensitive to expressions, it decreases the weights of those features affected by facial expressions. Finally, the recognition stage combines the global feature with the local features to generate the recognition result.
Experimental results demonstrate that the proposed component-based face recognition system indeed improves recognition performance on images subject to illumination and facial expression variations.
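The expression-dependent weighting described above can be sketched as follows. This is a hypothetical illustration, not the thesis's actual code: the component names, base weights, and attenuation factor are assumptions, and the per-component distances would come from comparing AAM-extracted features of a probe image against the single stored template per person.

```python
import numpy as np

# Hypothetical component set; in the thesis the components
# are located by AAM landmark fitting.
COMPONENTS = ["global", "eyes", "nose", "mouth"]
BASE_WEIGHTS = {"global": 1.0, "eyes": 1.0, "nose": 1.0, "mouth": 1.0}
EXPRESSION_SENSITIVE = {"mouth"}  # components the SVM flags as expression-affected

def fused_distance(dists, expressive, attenuation=0.2):
    """Combine per-component distances into one matching score,
    down-weighting expression-sensitive components when the SVM
    classifier reports a non-neutral expression."""
    total, norm = 0.0, 0.0
    for name in COMPONENTS:
        w = BASE_WEIGHTS[name]
        if expressive and name in EXPRESSION_SENSITIVE:
            w *= attenuation  # attenuate unreliable components
        total += w * dists[name]
        norm += w
    return total / norm
```

Under this scheme, a smiling probe whose mouth region matches its template poorly is not unduly penalized: the large mouth distance contributes little to the fused score, so the stable components (eyes, nose, global appearance) dominate the decision.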