
Author: 鄭旭詠 (Hsu-Yung Cheng)
Thesis Title: Video Analysis for Advanced Vehicle Control and Safety Services in Intelligent Transportation Systems
Advisor: 范國清 (Kuo-Chin Fan)
Degree: Doctor
Department: Department of Computer Science & Information Engineering, College of Electrical Engineering and Computer Science
Graduation Academic Year: 95 (ROC calendar, i.e. the 2006/07 academic year)
Language: English
Pages: 89
Keywords: Advanced Vehicle Control and Safety Services; Lane Detection; Intelligent Transportation Systems; Environment Classification
    Advanced Vehicle Control and Safety Systems are an important part of Intelligent Transportation Systems. With the help of video processing techniques, vision-based vehicle control and safety systems can analyze captured video, automatically locate the lane and the vehicles ahead, and warn the driver of dangerous situations such as lane departure or imminent collision. Only when the lane boundaries are located correctly can the system judge whether the vehicle is deviating from the lane center or is too close to the vehicle ahead in the same lane. Lane detection is therefore a key technique in vehicle control and safety systems. This dissertation proposes a hierarchical lane detection method to handle different types of roads. Because different road types often call for different lane detection methods, we first classify road environments into two categories: structured roads and unstructured roads. The classification is performed with Eigenvalue Decomposition Regularized Discriminant Analysis. After classification, each road type is processed with a method designed for it. The proposed approach distinguishes structured from unstructured roads effectively and applies the appropriate processing to each. For structured roads with heavy traffic, we design an algorithm that robustly finds the left and right lane boundaries without being disturbed by other vehicles on the road. Experimental results also show that the method works under various illumination conditions. The main contributions of this dissertation are as follows. First, we propose the concept of hierarchical lane detection, which makes full use of methods designed for specific road conditions to handle different environments. Second, with Eigenvalue Decomposition Regularized Discriminant Analysis, we can estimate the required Gaussian functions and obtain good classification results even with a limited number of training samples. In addition, the lane-mark detection method we design is not affected by lighting or by vehicles occupying the road. Finally, for structured roads with heavier traffic, our method effectively removes the influence of passing vehicles.


    Advanced Vehicle Control and Safety Systems (AVCSS) play an important role in Intelligent Transportation Systems. Lane boundaries have to be determined accurately in order to decide whether the vehicle is deviating from its current lane or is too close to the vehicle in front of it in the same lane. Therefore, lane detection is a crucial part of advanced vehicle control and safety systems. This dissertation proposes a hierarchical lane detection system that can handle various types of road conditions. Because the methods suitable for different road types are often different, we first classify the environments into two groups: structured roads and unstructured roads. The two road types are distinguished using Eigenvalue Decomposition Regularized Discriminant Analysis (EDRDA). After classification, a different lane detection algorithm is applied to each type of road. The proposed system distinguishes road types effectively and handles each with a suitable algorithm. For structured roads, we propose a mechanism that robustly finds the left and right boundary lines of the lane and is not affected by passing traffic. Experimental results also show that the proposed method works well under various lighting conditions. The main contributions of this dissertation include several aspects. First, we propose the concept of hierarchical lane detection, which deals with different environments using appropriate methods. Second, we apply EDRDA and design a voting mechanism for environment classification; with a limited number of training samples, we can estimate the parameters of the Gaussian models using EDRDA and obtain satisfactory classification results. Third, for structured roads that have heavier traffic, we design a mechanism to effectively eliminate the influence of passing vehicles when performing lane detection. We also extract lane-mark colors in a way that is not affected by illumination changes or by the proportion of the road that vehicles occupy.
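The environment classification step rests on regularized Gaussian discriminant analysis. As a rough illustration of the idea only (not the dissertation's exact EDRDA formulation, and using hypothetical two-dimensional features with synthetic data), the sketch below shrinks each class covariance toward the pooled covariance and toward a scaled identity, in the spirit of Friedman's RDA, then classifies a feature vector with a quadratic discriminant score:

```python
import numpy as np

def rda_covariances(X_by_class, lam=0.5, gamma=0.1):
    """Regularized per-class covariances: blend each class scatter with
    the pooled scatter (lam), then shrink toward a scaled identity (gamma).
    Illustrative sketch; the dissertation uses the eigenvalue-decomposition
    variant (EDRDA), which parameterizes the covariances differently."""
    d = X_by_class[0].shape[1]
    means = [X.mean(axis=0) for X in X_by_class]
    scatters = [(X - m).T @ (X - m) for X, m in zip(X_by_class, means)]
    n_k = [len(X) for X in X_by_class]
    S_pooled = sum(scatters)
    N = sum(n_k)
    covs = []
    for S_k, nk in zip(scatters, n_k):
        cov = ((1 - lam) * S_k + lam * S_pooled) / ((1 - lam) * nk + lam * N)
        cov = (1 - gamma) * cov + gamma * (np.trace(cov) / d) * np.eye(d)
        covs.append(cov)
    return means, covs

def classify(x, means, covs):
    """Quadratic discriminant score with equal priors: pick the class
    minimizing Mahalanobis distance plus the log-determinant term."""
    scores = []
    for m, C in zip(means, covs):
        diff = x - m
        scores.append(diff @ np.linalg.inv(C) @ diff
                      + np.log(np.linalg.det(C)))
    return int(np.argmin(scores))

# Hypothetical training features for the two road types (synthetic clusters).
rng = np.random.default_rng(0)
structured = rng.normal([0, 0], 0.5, size=(40, 2))
unstructured = rng.normal([3, 3], 0.5, size=(40, 2))
means, covs = rda_covariances([structured, unstructured])

print(classify(np.array([0.1, -0.2]), means, covs))  # near the first cluster -> 0
print(classify(np.array([2.9, 3.1]), means, covs))   # near the second cluster -> 1
```

The shrinkage parameters `lam` and `gamma` are placeholders here; in practice they would be chosen by cross-validation, and the point of such regularization is exactly the situation the abstract describes: stable covariance estimates from a limited number of training samples.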

    CHAPTER 1 INTRODUCTION 1
    1.1 Introduction to Intelligent Transportation Systems 2
    1.1.1 Advanced Traffic Management Systems (ATMS) 2
    1.1.2 Advanced Traveler Information Systems (ATIS) 2
    1.1.3 Advanced Public Transportation Systems (APTS) 2
    1.1.4 Advanced Vehicle Control and Safety Systems (AVCSS) 3
    1.1.5 Commercial Vehicle Operations (CVO) 3
    1.1.6 Emergency Management System (EMS) 3
    1.1.7 Electronic Toll Collection Systems (ETCS) 4
    1.1.8 Vulnerable Individual Protection Services (VIPS) 4
    1.2 Video Analysis Techniques in ITS 4
    1.3 Introduction to Lane Detection 7
    1.3.1 Challenges of Lane Detection 7
    1.3.2 Literature Review 9
    1.3.3 Common Modules in Lane Detection 11
    1.4 Proposed System 15
    CHAPTER 2 HIERARCHICAL LANE DETECTION 17
    2.1 Hierarchical Lane Detection Framework 18
    2.2 Feature Point Extraction 19
    2.3 Linear Discriminant Analysis (LDA) and Quadratic Discriminant Analysis (QDA) 20
    2.4 Regularized Discriminant Analysis (RDA) and Eigenvalue Decomposition Regularized Discriminant Analysis (EDRDA) 24
    2.4.1 Model 28
    2.4.2 Model 28
    2.4.3 Model 28
    2.4.4 Model 29
    2.4.5 Model 29
    2.4.6 Model 29
    2.4.7 Model 30
    2.4.8 Model 30
    2.4.9 Model 30
    2.5 Applying EDRDA to the Classification Problem of Structured Roads and Unstructured Roads 30
    2.5.1 Environment Classification 32
    2.6 Experimental Results 34
    CHAPTER 3 LANE DETECTION FOR STRUCTURED ROADS 43
    3.1 Proposed System Architecture 43
    3.2 Road and Lane-Mark Color Extraction 47
    3.2.1 Road Color Extraction 47
    3.2.2 Lane-Mark Color Extraction 49
    3.2.3 Experimental Results 51
    3.3 Moving Vehicle Elimination 52
    3.3.1 Initialization 53
    3.3.2 Constructing Tracking List 54
    3.3.3 Object Verification 55
    3.3.4 Object Matching 56
    3.3.5 New Position Estimation 59
    3.4 Lane Recognition and Lane Coherence Verification 62
    3.5 Experimental Results 68
    CHAPTER 4 CONCLUSIONS AND FUTURE WORKS 77
    References 81
    Appendix A 87

