|  |  |
|---|---|
| Student: | 林煌山 (Hung-San Lin) |
| Thesis Title: | 利用膚色及區域極小值作人臉特徵是否遮蔽之偵測判斷 (Detection of Facial Occlusions by a Skin-Color-Based and Local-Minimum-Based Feature Extractor) |
| Advisor: | 范國清 (Kuo-Chin Fan) |
| Committee Members: |  |
| Degree: | Master |
| Department: | College of Electrical Engineering & Computer Science - Department of Computer Science & Information Engineering |
| Graduation Academic Year: | 91 (ROC calendar) |
| Language: | Chinese |
| Pages: | 49 |
| Chinese Keywords: | template matching, face detection, skin color, moving-object detection, face tracking, feature extraction, face recognition |
| English Keywords: | Skin Color, Face Detection, Face Recognition, Feature Extraction, Template Matching, Face Tracking, Motion Detection |
Research on face detection and face recognition frequently runs into two difficulties: complex backgrounds and occluded facial features. In this thesis, motion information is used to resolve the complex-background problem, and shrinking the search window at the same time speeds up detection for the whole system. Unlike typical face-detection work, this thesis not only extracts facial features but also further determines whether those features are occluded, and it proposes two detection strategies.
The first strategy uses the skin-color information adopted by most studies. The distribution of skin color in the YCbCr space is collected offline, built into a look-up table, and processed; based on this table, the non-skin-color regions of the input image are separated out as features, which are then paired under geometric constraints. When pairing fails, the face skin-color distribution ratios obtained from the offline statistics are used to judge whether occlusion has occurred.
If extraction by the first strategy fails while the subsequent occlusion test decides that no occlusion is present, the second strategy is applied. Observation shows that the eyes and mouth are among the darker parts of a face and that their neighborhoods carry more edge information. Based on these two observations, eye and mouth candidates are extracted and paired in the same way; if pairing fails, a final occlusion decision is made from the distribution of these candidates.
Tests on self-recorded video show that the proposed methods are indeed feasible and accurate in detecting whether facial features are occluded.
The problems frequently encountered in human face detection and recognition are complex backgrounds and occlusion of facial features. In this thesis, we adopt the dynamic information of video sequences to resolve the complex-background problem. The detection time can be drastically reduced owing to the reduction in the size of the search window. In addition to extracting face features, we can also determine whether face features are occluded. Two strategies are proposed in this thesis to achieve this goal.
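The search-window reduction from the dynamic information of video sequences can be sketched with simple frame differencing. This is only a minimal illustration of the idea; the function name `motion_bbox` and the difference threshold are assumptions, not details taken from the thesis.

```python
# Hedged sketch: shrink the face-search window to the moving region
# by thresholding the per-pixel difference between consecutive frames.
# The threshold value (25) is an illustrative assumption.

def motion_bbox(prev, curr, thresh=25):
    """prev, curr: 2-D lists of gray values for two consecutive frames.
    Returns (xmin, ymin, xmax, ymax) of pixels whose inter-frame
    difference exceeds thresh, or None when nothing moves."""
    xs, ys = [], []
    for y, (row_p, row_c) in enumerate(zip(prev, curr)):
        for x, (p, c) in enumerate(zip(row_p, row_c)):
            if abs(c - p) > thresh:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None  # static scene: no reduced window available
    return min(xs), min(ys), max(xs), max(ys)
```

Face detection then needs to scan only the returned box instead of the whole frame, which is where the speed-up comes from.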
The first strategy uses skin-color information, analyzing the skin-color distribution in the YCbCr color space offline and storing it in a look-up table. Non-skin-color regions in the input image are then separated out as feature candidates according to this table, and the candidates are paired subject to geometric constraints. If the pairing fails, the face skin-color distribution ratios obtained from the offline statistics are used to determine whether occlusion has occurred.
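The per-pixel skin-color test behind the first strategy can be sketched as below. The RGB-to-YCbCr conversion is the standard ITU-R BT.601 form, but the Cb/Cr bounds are commonly cited literature values, not the thesis's own offline statistics, which the abstract does not give.

```python
# Illustrative sketch of skin-color classification in YCbCr space.
# The Cb/Cr ranges below are assumptions borrowed from the general
# skin-color literature, NOT the thesis's measured distribution.

def rgb_to_ycbcr(r, g, b):
    """Convert 8-bit RGB to YCbCr (ITU-R BT.601, full-range form)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b, cb_range=(77, 127), cr_range=(133, 173)):
    """Classify a pixel as skin if its (Cb, Cr) falls inside the box."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]
```

In the thesis's pipeline, the non-skin pixels inside the face region (eyes, mouth, or an occluding object) are what get grouped into feature candidates for pairing.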
If the first strategy fails to extract the features while its occlusion test reports no occlusion, the second strategy is employed. It is based on two observations: the gray values of the eyes and mouth are lower than those of the other parts of a human face, and their vicinities contain more edge information. Eye and mouth candidates are extracted according to these two observations and then paired. If the pairing fails, the occlusion/non-occlusion decision is made by judging the distribution of these candidates.
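The two observations of the second strategy (dark regions with edge-rich surroundings) can be sketched as a simple per-pixel candidate test. The 3x3 neighborhood, the max-difference edge measure, and the threshold are illustrative assumptions, not the thesis's actual extractor.

```python
# Hedged sketch of the second strategy's candidate test: a pixel is an
# eye/mouth candidate when it is a local intensity minimum of its 3x3
# neighborhood AND the local contrast (a crude edge measure) is high.
# The edge threshold is an illustrative assumption.

def is_candidate(img, x, y, edge_thresh=30):
    """img: 2-D list of gray values. Returns True for dark, edge-rich pixels."""
    v = img[y][x]
    neigh = [img[y + dy][x + dx]
             for dy in (-1, 0, 1) for dx in (-1, 0, 1)
             if not (dy == 0 and dx == 0)]
    local_min = all(v <= n for n in neigh)           # darker than neighbours
    edge_energy = max(abs(v - n) for n in neigh)     # crude edge strength
    return local_min and edge_energy >= edge_thresh
```

Candidates surviving this test would then be paired geometrically (two eyes above one mouth), with the spatial distribution of unpaired candidates driving the final occlusion decision.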
Experiments were conducted on various self-recorded video sequences. The experimental results verify the feasibility and validity of the proposed approach in determining the occlusion/non-occlusion of facial features.