
Graduate Student: 廖英傑 (Ying-Chieh Liao)
Thesis Title: 可以自由移動頭部之視線追蹤演算法 (Eye Gaze Estimation from Iris Images with Free Head Movements)
Advisor: 范國清 (Kuo-Chin Fan)
Committee Members:
Degree: Master
Department: Department of Computer Science & Information Engineering, College of Electrical Engineering and Computer Science
Graduation Academic Year: 94 (ROC calendar, i.e., 2005)
Language: English
Number of Pages: 42
Chinese Keywords: 虹膜輪廓, 注視點估計, 多層感知機, 虹膜定位, 人機介面
English Keywords: iris contour, iris localization, human-computer interface, gaze estimation, multilayer perceptron
Current gaze-tracking systems can be applied in many areas, such as home care, mouse control, and online learning. For general-purpose use, where ease of installation and operation matter, a non-intrusive system is preferable: a camera is mounted in a fixed location and the gaze is tracked from its images. Most non-intrusive systems, however, require the user's head to remain still. This is workable for patients such as those with ALS, but very difficult for ordinary users, since most people cannot keep their heads still for long; as a result, the accuracy of such systems drops considerably.

To remove this head-movement restriction, this thesis proposes an effective solution. The algorithm takes a 320×240 eye image as input and extracts eight predefined features: the positions of the two eye corners, together with the center point, axis ratio, and orientation of the ellipse representing the iris contour. These feature values are fed into a neural network trained during the calibration stage, and the network's output is the estimated gaze point. Unlike other methods, the iris contour is represented by an ellipse rather than a circle, because a circular model discards much information. Moreover, the mapping between the input features and the gaze point is approximated with a neural network instead of a second- or third-order polynomial. Since head movement is also taken into account in feature selection, several meaningful features are used in addition to the eye center. Finally, experimental results demonstrate that the proposed algorithm does relax the head-movement restriction.
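The eight-dimensional feature vector described above (two eye-corner positions plus the ellipse center, axis ratio, and orientation of the iris contour) can be sketched in Python. This is a minimal illustration, not the thesis's implementation: it approximates the ellipse from the second moments of the contour points rather than a least-squares ellipse fit, and the function names are hypothetical.

```python
import numpy as np

def ellipse_features(points):
    """Approximate the iris contour by an ellipse via second moments.

    points: (N, 2) array of contour pixel coordinates.
    Returns the ellipse center (2,), minor/major axis ratio,
    and orientation of the major axis in radians.
    """
    center = points.mean(axis=0)
    cov = np.cov((points - center).T)          # 2x2 covariance of the contour
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    ratio = np.sqrt(eigvals[0] / eigvals[1])   # minor/major axis-length ratio
    major = eigvecs[:, 1]                      # eigenvector of the major axis
    angle = np.arctan2(major[1], major[0])
    return center, ratio, angle

def gaze_feature_vector(inner_corner, outer_corner, contour_points):
    """Assemble the 8-D feature vector from the abstract: two eye-corner
    positions (4 values) plus ellipse center, axis ratio, and orientation."""
    center, ratio, angle = ellipse_features(contour_points)
    return np.hstack([inner_corner, outer_corner, center, [ratio, angle]])
```

For a contour sampled from an axis-aligned ellipse with semi-axes 10 and 5, the recovered axis ratio is close to 0.5, and the assembled vector has the eight entries the thesis's calibration stage would consume.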


A gaze estimation method using the eye position and the iris contour is proposed in this thesis. In traditional methods, users' heads must remain still for a long time, which is exhausting and fatiguing. To create a comfortable environment, our approach allows users' heads to move freely; the eye region is only assumed to stay within the view of the camera. The mapping between the gaze points and the eye region is difficult to formulate as a polynomial. Zoomed, clear eye images are grabbed to increase the accuracy of gaze estimation, so both the center of the eyeball and the shape of the iris contour are considered. First, the eye corners are located manually to estimate the center of the eyeball; from this information, the pose of the head is inferred as a global gaze feature. In addition, the iris center, size, and orientation, called local gaze features, are calculated and combined to train a neural network (NN). Instead of a polynomial function approximation, the NN is trained to learn the mapping between the gaze features and the gaze points. Experiments were conducted, and the results demonstrate the effectiveness of the proposed method in gaze estimation. Finally, conclusions are given and future works are suggested.
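The neural-network mapping from gaze features to gaze points can be illustrated with a small numerical sketch. The thesis does not specify the network size or training details, so the architecture below (one hidden tanh layer trained by full-batch gradient descent on synthetic calibration data) is an assumption chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "calibration" set: 8-D gaze features -> 2-D gaze points.
# A real system would record these while the user fixates known targets.
X = rng.normal(size=(200, 8))
Y = X @ (0.3 * rng.normal(size=(8, 2)))    # stand-in feature-to-gaze mapping

# Minimal MLP: one hidden tanh layer, linear output.
W1 = 0.5 * rng.normal(size=(8, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.normal(size=(16, 2)); b2 = np.zeros(2)

def predict(X):
    H = np.tanh(X @ W1 + b1)               # hidden activations
    return H, H @ W2 + b2                  # estimated gaze points

def mse():
    return float(((predict(X)[1] - Y) ** 2).mean())

mse_before = mse()
lr = 0.05
for _ in range(1000):                      # full-batch gradient descent
    H, P = predict(X)
    err = (P - Y) / len(X)                 # MSE gradient w.r.t. P (up to 2x)
    gW2, gb2 = H.T @ err, err.sum(axis=0)
    dH = (err @ W2.T) * (1.0 - H ** 2)     # backprop through tanh
    gW1, gb1 = X.T @ dH, dH.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
mse_after = mse()
```

After calibration-style training the mean squared error drops well below its initial value, which is the behavior the thesis relies on when replacing a polynomial mapping with a trained NN.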

Chapter 1 Introduction
  1.1 Motivations
  1.2 Related Works
    1.2.1 Eye Anatomy
    1.2.2 Eye Tracking Techniques
    1.2.3 Head Mounted Device
    1.2.4 Electric Skin Potential
    1.2.5 Eye Image Using Artificial Neural Networks (ANN)
    1.2.6 Dual Purkinje Image
    1.2.7 Video-based Iris and Pupil Tracking
    1.2.8 Pupil-Glint Vector Technique
  1.3 System Overview
  1.4 Thesis Organization
Chapter 2 Iris Contour Extraction
  2.1 Iris Contour
  2.2 Iris Localization
    (a) Vertical Boundary Identification
    (b) 8-Connected Component Labeling
    (c) Horizontal Boundaries Identification
  2.3 Elliptical Iris Contour Detection
Chapter 3 Eye Gaze Determination
  3.1 Neural Network
    (a) Face Detection
    (b) Function Approximation
    (c) Incident Detection
  3.2 Calibration
  3.3 Geometrical Features of Eyes
    (a) Still Head Movement Constraint
    (b) Without Still Head Movement Constraint
    (c) Feature Selection
    (d) Gaze Determination
Chapter 4 Experimental Results and Discussions
  4.1 System Configuration
  4.2 Experiments
    4.2.1 Iris Contour Detection
    4.2.2 Gaze Estimation
  4.3 Discussions
Chapter 5 Conclusions and Future Works
  5.1 Conclusions
  5.2 Future Works
References

