
Graduate Student: 蔡裕明 (Yu-Ming Tsai)
Thesis Title: 基於稠密光流分析之行車危險偵測與駕駛輔助系統
An event detection and driver assistance system based on dense optical flow analysis
Advisor: 鄭旭詠 (Hsu-Yung Cheng)
Committee Members:
Degree: Master
Department: College of Information and Electrical Engineering - Department of Computer Science & Information Engineering
Year of Publication: 2016
Graduation Academic Year: 104 (ROC calendar)
Language: Chinese
Pages: 81
Chinese Keywords: 駕駛輔助 (driver assistance), 事件偵測 (event detection), 光流法 (optical flow), 自適應增強 (Adaboost)
Keywords: ADAS, Event Detection, Optical Flow, Adaboost
    The expansion of the mobile-device market in recent years has driven the rapid development of miniaturized computer systems. In-vehicle information systems have benefited from this trend and gained substantial hardware improvements. Against this background, ADAS (Advanced Driver Assistance Systems) have been developed more widely and have become an important research direction in automotive electronics.
    This thesis focuses on collision-prevention systems within driver assistance that issue event warnings based on forward-facing video. Most existing research on this topic attempts to analyze the raw images, precisely detect the vehicles in the frame, and then decide whether to warn according to the distance and state of those vehicles relative to the ego-vehicle. However, when such methods run into difficulties in vehicle detection, the warning system can fail.
    The system proposed in this thesis does not rely on vehicle detection. Instead, it analyzes the dense optical flow of monocular video as features of events ahead of the vehicle. We build feature vectors from histograms of motion vectors over different image regions, and use a cascade classifier composed of a naive Bayes classifier and the Adaboost algorithm to decide whether an event has occurred. In addition, since fixed scenery along the road produces distinct optical flow responses in the image, we also exploit this property by applying the optical flow features to lane detection and classification, improving the driver assistance system's ability to recognize the driving scene.
    In the experiments we show that the proposed system detects highway driving events with good reliability. For lane classification, the optical flow features indeed provide enough information for the system to judge the lane conditions on both sides of the ego-vehicle. Moreover, without any particular optimization, the system runs in real time at more than 30 frames per second on a personal computer.
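The feature-extraction step described in the abstract — histograms of dense optical flow vectors over image regions — could be sketched as follows. This is a minimal illustration only: the grid size, bin count, and per-cell normalization are assumptions for the example, not the parameters used in the thesis; the flow field itself would in practice come from a dense method such as `cv2.calcOpticalFlowFarneback`.

```python
import numpy as np

def flow_region_histograms(flow, grid=(3, 3), bins=8):
    """Summarize a dense optical flow field (H x W x 2, e.g. as returned
    by cv2.calcOpticalFlowFarneback) with one magnitude-weighted
    orientation histogram per grid cell, concatenated into one vector."""
    fx, fy = flow[..., 0], flow[..., 1]
    mag = np.hypot(fx, fy)                    # flow vector length
    ang = np.arctan2(fy, fx) % (2 * np.pi)    # flow direction in [0, 2*pi)
    h, w = mag.shape
    gh, gw = grid
    feats = []
    for r in range(gh):
        for c in range(gw):
            ys = slice(r * h // gh, (r + 1) * h // gh)
            xs = slice(c * w // gw, (c + 1) * w // gw)
            hist, _ = np.histogram(ang[ys, xs], bins=bins,
                                   range=(0, 2 * np.pi),
                                   weights=mag[ys, xs])
            feats.append(hist / (hist.sum() + 1e-9))  # normalize each cell
    return np.concatenate(feats)                      # length = gh*gw*bins
```

Weighting each direction bin by flow magnitude makes strong motions dominate the descriptor, so a region swept by fast-moving scenery looks very different from a quiet region ahead of the ego-vehicle.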


    During the past few years, Advanced Driver Assistance Systems (ADAS) have been widely developed and have become an important research subject in automotive electronics. In this work, we focus on a vision-based collision avoidance and event warning system in ADAS. Many existing works on this topic attempt to analyze source images and detect the vehicles ahead; the system then issues warnings based on the distance between the ego-vehicle and other vehicles. However, vehicles sometimes cannot be detected because of the large variation in vehicle types and appearance. Moreover, the objects that cause danger may not be vehicles at all, so such a system may miss these events.

    This thesis proposes an event warning approach based on dense optical flow analysis of monocular video rather than on vehicle detection. The system constructs histograms of optical flow vectors in different regions as features. Cascade classifiers consisting of a naive Bayes classifier and an Adaboost classifier are then trained to judge the events in the current frame. In addition, we apply the optical flow features to lane detection and classification, improving the system's understanding of the driving scenario.
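A two-stage cascade of the kind described here — a fast naive Bayes stage filtering frames before a stronger Adaboost stage makes the final call — might look like the following sketch using scikit-learn stand-ins. The class name, the gating rule, and the estimator settings are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import AdaBoostClassifier

class CascadeEventClassifier:
    """Two-stage cascade: a cheap Gaussian naive Bayes stage rejects
    clearly uneventful frames; only frames it flags are passed on to
    the stronger AdaBoost stage for the final decision."""

    def __init__(self, n_estimators=50):
        self.stage1 = GaussianNB()
        self.stage2 = AdaBoostClassifier(n_estimators=n_estimators)

    def fit(self, X, y):
        # Both stages are trained on the same labeled feature vectors.
        self.stage1.fit(X, y)
        self.stage2.fit(X, y)
        return self

    def predict(self, X):
        candidates = self.stage1.predict(X).astype(bool)  # stage-1 alarms
        out = np.zeros(len(X), dtype=int)                 # default: no event
        if candidates.any():
            out[candidates] = self.stage2.predict(X[candidates])
        return out
```

The appeal of the cascade is speed: the expensive second stage only runs on the (typically few) frames the first stage considers suspicious, which matters for real-time operation.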

    Experiments show that the proposed system detects caution events in highway scenarios with high reliability. For lane classification, optical flow features indeed help the system classify lane conditions. Without specific optimizations, the system, implemented on a personal computer, runs in real time at 30 frames per second.
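The abstracts note that fixed roadside scenery leaves strong optical flow responses, which the system exploits for lane classification. A deliberately simplified, hypothetical heuristic in that spirit is shown below; the ROI layout, the threshold, and the function name are invented for illustration and are not taken from the thesis.

```python
import numpy as np

def side_lane_open(flow, side, mag_thresh=2.0):
    """Toy heuristic: static roadside structures sweep past the camera
    and leave strong optical flow in the lower side regions, while an
    open, empty neighbor lane leaves comparatively weak flow.
    `flow` is an H x W x 2 dense flow field; `side` is 'left' or 'right'."""
    h, w = flow.shape[:2]
    if side == 'left':
        roi = flow[h // 2:, : w // 3]        # lower-left region
    else:
        roi = flow[h // 2:, 2 * w // 3:]     # lower-right region
    mean_mag = np.hypot(roi[..., 0], roi[..., 1]).mean()
    return mean_mag < mag_thresh             # weak flow -> likely open lane
```

A real system would of course combine such flow statistics with detected lane lines and a trained classifier, as the thesis's Chapter 3 outline suggests.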

    Abstract (Chinese)
    Abstract (English)
    Acknowledgements
    List of Figures
    List of Tables
    Chapter 1: Introduction
      1.1 Motivation
      1.2 Related Work
      1.3 System Flow and Thesis Organization
    Chapter 2: Optical Flow Feature Extraction
      2.1 Image Structure
        2.1.1 Highway Scenario and Preprocessing
        2.1.2 Vanishing Point Estimation
        2.1.3 Lane-Region Lower-Boundary Detection
      2.2 Image Optical Flow
        2.2.1 Optical Flow Estimation Methods
        2.2.2 Optical Flow Magnitude Normalization
        2.2.3 Optical Flow Preprocessing
      2.3 Optical Flow Density
      2.4 Optical Flow Vector Histograms
        2.4.1 Optical Flow ROI (Region of Interest)
        2.4.2 Vector Histogram Statistics
    Chapter 3: Lane Judgment and Event Detection
      3.1 Lane Judgment
        3.1.1 Lane-Line Detection
        3.1.2 Lane Classification
      3.2 Forward Event Detection
        3.2.1 Cascade Classifier
        3.2.2 Naive Bayes Classifier
        3.2.3 Adaboost Algorithm
    Chapter 4: Experimental Results and Discussion
      4.1 Equipment and Sample Labeling
        4.1.1 Equipment
        4.1.2 Sample Labeling
      4.2 Lane Detection and Classification
        4.2.1 Lane Detection
        4.2.2 Lane Classification
      4.3 Event Detection
        4.3.1 Evaluation Metrics
        4.3.2 Bayes Classifier
        4.3.3 Cascade Classifier
      4.4 System Speed
    Chapter 5: Conclusion and Future Work
    References

