| Author: | 黃郁如 Yu-Ju Huang |
|---|---|
| Thesis Title: | 以人臉五官特徵為基礎之適應型性別辨識 (An Adaptive Method for Gender Recognition Based on Facial Components) |
| Advisor: | 范國清 Kuo-Chin Fan |
| Degree: | Master |
| Department: | College of Electrical Engineering & Computer Science - Department of Computer Science & Information Engineering |
| Year of Publication: | 2013 |
| Graduation Academic Year: | 101 (ROC calendar) |
| Language: | Chinese |
| Pages: | 66 |
| Chinese Keywords: | 性別辨識、適應型決策融合、主動形狀模型、隨機森林 |
| English Keywords: | gender recognition, adaptive decision fusion, active shape model, random forest |
Gender is an important human attribute. An automatic and effective gender recognition system can be widely applied in fields such as security surveillance, human-computer interfaces, and customer behavior analysis. In customer behavior analysis, for example, digital signage equipped with gender recognition can play different advertisements according to the viewer's gender, attracting more potential customers.
Facial appearance differs markedly between the sexes, so using facial features to determine gender is the most intuitive approach, and in recent years many methods that recognize gender from the face and its components have been proposed. In real environments, however, pedestrians often wear sunglasses or scarves, and such partial occlusion of the face can degrade recognition accuracy. This thesis therefore proposes an adaptive gender recognition method: an occlusion classification stage first detects the occluded facial-component regions, and only the non-occluded regions are then used to recognize gender. This dynamic selection of facial components avoids the misclassifications caused by partial occlusion and improves the overall recognition rate.
The experiments examine three aspects: the gender recognition result of each individual component, the adaptive gender recognition that fuses the results of all components, and a robustness test using partially occluded images. The results show that the proposed method maintains good recognition accuracy on both unoccluded and partially occluded face images.
Gender is an important personal attribute of human beings, so an automatic and effective gender recognition system is desirable in various applications, such as intelligent surveillance, human-computer interaction, and customer behavior analysis. In customer behavior analysis, for example, applying gender recognition technology to digital signage can attract more potential customers by displaying advertisements tailored to the viewer's gender.
Since the human face exhibits clear sexual dimorphism, using facial features is an intuitive approach to gender recognition, and many gender recognition methods based on facial components have been proposed. In real life, however, people may wear sunglasses or scarves, producing partially occluded facial images, and such occlusion degrades gender recognition performance. In this thesis, we propose an adaptive gender recognition method that dynamically selects facial components to cope with this problem. An occlusion classification procedure first detects the occluded facial components; only the non-occluded components are then used in the recognition stage, preventing the misclassifications that occlusion would otherwise cause.
Three experiments were conducted to verify the validity of the proposed method: the single-component classifiers, the fusion of multiple classifiers for adaptive gender recognition, and a system robustness test using partially occluded images. Experimental results demonstrate that the proposed method achieves high accuracy under all of these conditions.
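The selection-then-fusion idea above can be sketched in a few lines. This is a hypothetical illustration only, not the thesis's implementation: the component names, the occlusion test, and the per-component gender classifiers are all placeholders (the thesis trains real classifiers, e.g. HOG features with a random forest), and simple majority voting stands in for its adaptive decision fusion.

```python
# Hedged sketch of adaptive gender recognition with dynamic component
# selection. All names and data structures here are illustrative.

FACIAL_COMPONENTS = ["eyes", "nose", "mouth", "hair"]

def is_occluded(component, face):
    # Placeholder for a per-component occlusion classifier; here we
    # simply read a precomputed flag from the input record.
    return face["occluded"].get(component, False)

def predict_gender(component, face):
    # Placeholder for a trained per-component gender classifier;
    # here we return a precomputed vote ("male" or "female").
    return face["votes"][component]

def adaptive_gender_recognition(face):
    """Fuse only the classifiers whose components are not occluded."""
    votes = [predict_gender(c, face)
             for c in FACIAL_COMPONENTS
             if not is_occluded(c, face)]
    if not votes:
        return None  # every component occluded: no decision possible
    # Majority vote over the selected (non-occluded) components.
    return max(set(votes), key=votes.count)

# Example: sunglasses occlude the eye region, so its vote is ignored
# and the remaining three components decide the result.
face = {
    "occluded": {"eyes": True},
    "votes": {"eyes": "male", "nose": "female",
              "mouth": "female", "hair": "female"},
}
print(adaptive_gender_recognition(face))  # -> female
```

Dropping occluded components entirely, rather than letting their unreliable votes enter the fusion, is what keeps the recognition rate stable under partial occlusion.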