
Graduate Student: 程凱驛 (Kai-yi Cheng)
Thesis Title: 基於視訊場景資料蒐集與訓練之自適應車流估計機制
(An Adaptive Traffic Flow Analysis Scheme Based on Scene-Specific Sample Collection and Training)
Advisor: 蘇柏齊 (Po-chyi Su)
Degree: Master
Department: Department of Computer Science & Information Engineering, College of Electrical Engineering and Computer Science
Graduation Academic Year: 99 (ROC calendar, i.e. 2010)
Language: Chinese
Pages: 69
Keywords (Chinese): 車輛 (vehicle), ISM, SVM, SURF, 自我訓練 (self-training)
Keywords (English): Self-Training, Vehicle, SVM, ISM, SURF
  • This research proposes an analysis tool for footage captured by fixed roadside surveillance cameras, which extracts traffic information from the scene in order to estimate traffic flow. The thesis consists of two main parts. The first part is a model training mechanism: we first subtract the background from each frame and apply morphological operations to obtain candidate vehicle masks; a statistical analysis of the mask areas then reveals the likely sizes of the different vehicle types in the scene, according to which sample images of each vehicle type are collected. Once a sufficient number of training samples has been gathered automatically in each region, we train the models using a Support Vector Machine (SVM) combined with the Implicit Shape Model (ISM) technique; this adaptive procedure greatly reduces the manual effort of model construction. The second part is the recognition mechanism: the trained SVM classifies and filters feature points, and the trained ISM then recognizes the vehicle images in the scene, which helps resolve vehicle occlusions and improves classification accuracy. Experimental results show that the proposed mechanism adapts well to different traffic scenes and recognizes vehicles effectively, achieving the goals of vehicle counting and traffic flow estimation.
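The training pipeline described in the abstract (background subtraction, morphological clean-up of the foreground masks, and statistical analysis of mask areas to find vehicle size classes) can be illustrated with a small sketch. This is only an illustration, not the thesis's implementation: a running-average background stands in for the Gaussian mixture model, a 3x3 morphological opening for the mask clean-up, and a 1-D k-means for the area statistics; all function names are hypothetical.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running-average background model (a simplified stand-in for the
    Gaussian mixture model the thesis uses)."""
    return (1.0 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=30):
    """Mark pixels that differ strongly from the background as foreground."""
    return np.abs(frame.astype(float) - bg) > thresh

def _neighborhood(mask):
    """Yield the nine 3x3-shifted copies of a zero-padded binary mask."""
    m = np.pad(mask, 1)
    h, w = mask.shape
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            yield m[dy:dy + h, dx:dx + w]

def opening(mask):
    """Morphological opening (erosion then dilation) with a 3x3
    structuring element; removes isolated speckle noise from the mask."""
    eroded = np.ones_like(mask)
    for shifted in _neighborhood(mask):
        eroded &= shifted
    dilated = np.zeros_like(mask)
    for shifted in _neighborhood(eroded):
        dilated |= shifted
    return dilated

def vehicle_size_classes(areas, k=2, iters=20):
    """Cluster blob areas with a 1-D k-means so that each cluster centre
    represents one vehicle size class (e.g. cars vs. buses)."""
    areas = np.asarray(areas, dtype=float)
    centers = np.linspace(areas.min(), areas.max(), k)
    labels = np.zeros(len(areas), dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(areas[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = areas[labels == j].mean()
    return centers, labels
```

The cluster centres obtained this way approximate the expected mask area of each vehicle class, which is the kind of per-scene statistic that can guide the automatic collection of per-class training samples.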


    This research presents a framework for analyzing traffic information in surveillance videos captured by static roadside cameras, which helps resolve the vehicle occlusion problem and yields more accurate traffic flow estimation and vehicle classification. The proposed scheme consists of two main parts. The first is a model training mechanism, in which traffic and vehicle information is collected from the characteristics of foreground masks. Statistics of these masks are employed to automatically establish scene-specific models, including an implicit shape model of vehicles and a support vector machine over feature points. Notably, this self-training mechanism greatly reduces the required human effort. The second part adopts the established implicit shape model and support vector machine to recognize vehicles: each feature point is classified into a vehicle type and processed by the corresponding ISM. Experimental results demonstrate that the proposed scheme can handle traffic surveillance scenes with different characteristics.
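The recognition stage lends itself to a similar sketch. The toy code below (hypothetical names, plain NumPy) shows the two ingredients the abstract mentions: ISM-style voting, where each feature point casts votes for candidate vehicle centres according to the offsets stored for its matched visual word, and mean-shift mode seeking [21] to locate maxima in the vote space. The SVM filtering of feature points is omitted for brevity.

```python
import numpy as np

def ism_vote_map(points, offsets_per_point, shape):
    """Accumulate ISM votes: each feature point (x, y) casts one vote per
    learned centre offset (dx, dy) of its matched visual word."""
    votes = np.zeros(shape)
    for (x, y), word_offsets in zip(points, offsets_per_point):
        for dx, dy in word_offsets:
            cx, cy = int(x + dx), int(y + dy)
            if 0 <= cy < shape[0] and 0 <= cx < shape[1]:
                votes[cy, cx] += 1.0
    return votes

def mean_shift_mode(votes, start, radius=5, iters=20):
    """Shift a window toward the local centre of mass of the votes until
    it converges on a density mode, i.e. a detected vehicle centre."""
    cy, cx = start
    H, W = votes.shape
    for _ in range(iters):
        y0, y1 = max(0, cy - radius), min(H, cy + radius + 1)
        x0, x1 = max(0, cx - radius), min(W, cx + radius + 1)
        win = votes[y0:y1, x0:x1]
        total = win.sum()
        if total == 0:  # no votes nearby: nothing to converge to
            break
        ys, xs = np.mgrid[y0:y1, x0:x1]
        ny = int(round(float((ys * win).sum()) / total))
        nx = int(round(float((xs * win).sum()) / total))
        if (ny, nx) == (cy, cx):  # converged
            break
        cy, cx = ny, nx
    return cy, cx
```

In a full ISM pipeline the votes would be weighted by match quality, and each converged mode would be back-projected to the feature points that voted for it, which is what allows overlapping vehicles to be separated.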

    Chapter 1  Introduction
        1.1  Motivation
        1.2  Contributions
        1.3  Thesis Organization
    Chapter 2  Related Work
        2.1  Vehicle Detection
        2.2  Vehicle Counting
        2.3  Self-Training Mechanisms
        2.4  Implicit Shape Model
    Chapter 3  Proposed Method
        3.1  System Overview
        3.2  Object Extraction
            3.2.1  Gaussian Mixture Model
            3.2.2  Foreground Extraction
            3.2.3  Mask Analysis
        3.3  Traffic Information Analysis
        3.4  Training Sample Collection
        3.5  Vehicle Model Training
            3.5.1  Training the Support Vector Machine
            3.5.2  Training the Implicit Shape Model
        3.6  Recognition Process
            3.6.1  Support Vector Machine
            3.6.2  Implicit Shape Model
        3.7  Vehicle Counting
            3.7.1  Single Images
            3.7.2  Image Sequences
    Chapter 4  Experimental Results
        4.1  Scene Statistics
        4.2  Training Sample Collection
        4.3  Vehicle Counting in Single Images
        4.4  Vehicle Counting in Image Sequences
    Chapter 5  Conclusions and Future Work
        5.1  Conclusions
        5.2  Future Work
    References

    [1] Guolin Wang, Deyun Xiao, and Jason Gu, “Review on vehicle detection based on video for traffic surveillance,” Proceedings of the IEEE International Conference on Automation and Logistics, Qingdao, China, pp. 2961-2966, 2008.
    [2] Nikos Paragios and Rachid Deriche, “Geodesic active contours and level sets for the detection and tracking of moving objects,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 3, pp. 266-280, March 2000.
    [3] Lei Xie, Guangxi Zhu, Yuqi Wang, Haixiang Xu, and Zhenming Zhang, “Robust vehicles extraction in a video-based intelligent transportation systems,” IEEE 2005 International Conference on Communications, Circuits and Systems, vol. 2, pp. 887-890, May 2005.
    [4] Jinglei Zhang and Zhengguang Liu, “A vision-based road surveillance system using improved background subtraction and region growing approach,” IEEE Eighth CIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing, vol. 3, pp. 819-822, Aug. 2007.
    [5] Clement C. C. Pang, William W. L. Lam, and Nelson H. C. Yung, “A novel method for resolving vehicle occlusion in a monocular traffic-image sequence,” IEEE Transactions on Intelligent Transportation Systems, vol. 5, no. 3, pp. 129-141, 2004.
    [6] Andrew H. S. Lai and Nelson H. C. Yung, “Vehicle-type identification through automated virtual loop assignment and block-based direction-biased motion estimation,” IEEE Transactions on Intelligent Transportation Systems, vol. 1, no. 2, pp. 86-97, 2000.
    [7] HanKyu Moon, Rama Chellappa, and Azriel Rosenfeld, “Optimal edge-based shape detection,” IEEE Transactions on Image Processing, vol. 11, no. 11, pp. 1209-1226, 2002.
    [8] Luo-Wei Tsai, Jun-Wei Hsieh, and Kao-Chin Fan, “Vehicle detection using normalized color and edge map,” IEEE Transactions on Image Processing, vol. 16, no. 3, pp. 850-864, 2007.
    [9] Javier Diaz Alonso, Eduardo Ros Vidal, Alexander Rotter, and Martin Muhlenberg, “Lane-change decision aid system based on motion-driven vehicle tracking,” IEEE Transactions on Vehicular Technology, vol. 57, no. 5, pp. 2736-2746, 2008.
    [10] R. Cutler and L. Davis, “Robust real-time periodic motion detection, analysis and applications,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 781-796, 2000.
    [11] William W. L. Lam, Clement C. C. Pang, and Nelson H. C. Yung, “A method for vehicle count in the presence of multi-vehicle occlusions in traffic images,” IEEE Transactions on Intelligent Transportation Systems, vol. 8, no. 3, pp. 441-459, 2007.
    [12] Angel Sanchez, Pedro D. Suarez, Aura Conci, and Eldman Nunes, “Video-based distance traffic analysis: application to vehicle tracking and counting,” Computing in Science and Engineering, vol. 13, no. 3, pp. 38-45, 2011.
    [13] Wen-Chung Chang and Chih-Wei Cho, “Online boosting for vehicle detection,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 40, no. 3, pp. 892-902, 2010.
    [14] H. Celik, A. Hanjalic, E. Hendriks, and S. Boughorbel, “Online training of object detectors from unlabeled surveillance video,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW ’08), pp. 1-7, 2008.
    [15] S. Sivaraman and M. M. Trivedi, “A general active-learning framework for on-road vehicle recognition and tracking,” IEEE Transactions on Intelligent Transportation Systems, vol. 11, no. 2, pp. 267-276, 2010.
    [16] B. Leibe, A. Leonardis, and B. Schiele, “Robust object detection with interleaved categorization and segmentation,” International Journal of Computer Vision, vol. 77, no. 1, pp. 259-289, 2008.
    [17] D. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.
    [18] P. KaewTraKulPong and R. Bowden, “An improved adaptive background mixture model for real-time tracking with shadow detection,” Proceedings of the 2nd European Workshop on Advanced Video-Based Surveillance Systems, pp. 149-158, 2001.
    [19] Corinna Cortes and V. Vapnik, “Support-vector networks,” Machine Learning, vol. 20, no. 3, pp. 273-297, 1995.
    [20] Herbert Bay, Andreas Ess, Tinne Tuytelaars, and Luc Van Gool, “SURF: Speeded Up Robust Features,” Computer Vision and Image Understanding (CVIU), vol. 110, no. 3, pp. 346-359, 2008.
    [21] Y. Cheng, “Mean shift, mode seeking, and clustering,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, no. 8, pp. 790-799, 1995.
