| Graduate Student: | Hsung-yi Ma (馬翔毅) |
|---|---|
| Thesis Title: | Object Detection and Tracking for a Moving Surveillance Camera by Using Dynamic Background Compensation (使用動態背景補償以偵測與追蹤移動監控畫面之前景物) |
| Advisor: | Po-Chyi Su (蘇柏齊) |
| Committee Members: | |
| Degree: | Master |
| Department: | College of Electrical Engineering & Computer Science, Department of Computer Science & Information Engineering |
| Graduation Academic Year: | 95 |
| Language: | Chinese |
| Pages: | 43 |
| Chinese Keywords: | object tracking, surveillance system |
| English Keywords: | visual surveillance system, object tracking |
Automated surveillance has become a popular research direction in recent years. Since human operators cannot stay focused on camera feeds at all times, automated tools that assist with monitoring and tracking are necessary. However, current surveillance systems rely mainly on fixed, stationary cameras; their limited field of view leaves many blind spots and makes monitoring difficult. We therefore adopt a Pan-Tilt-Zoom (PTZ) camera and exploit its controllable motion to enlarge the monitored field of view.
This research proposes a dynamic background prediction method to detect and track objects. While the camera is stationary, background subtraction locates moving objects and extracts their feature points. While the camera moves to follow a target, optical flow yields the new positions of the object's feature points together with the background motion vector, from which the current background image is estimated. Dynamic background compensation then prevents prediction errors from propagating and accumulating, and finally a watershed algorithm recovers a more precise object contour, which is used to steer the camera and complete the tracking.
Even when a moving object suddenly turns or stops, the proposed method still tracks it correctly. Experimental results confirm the feasibility of the proposed approach.
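The core of the camera-motion handling above is: estimate the global background motion between frames, warp the stored background by that motion, and then apply background subtraction to the compensated background. The following is a minimal NumPy sketch of that idea only; the thesis estimates background motion from sparse feature-point optical flow, whereas this sketch substitutes a brute-force integer translation search, and all function names and thresholds here are illustrative, not from the thesis.

```python
import numpy as np

def estimate_global_shift(prev, curr, max_shift=4):
    """Brute-force search for the integer (dy, dx) translation that best
    aligns prev to curr -- a stand-in for the optical-flow-based
    background motion vector described in the abstract."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(prev, dy, axis=0), dx, axis=1)
            err = np.mean((shifted.astype(float) - curr.astype(float)) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def detect_foreground(background, frame, shift, thresh=50):
    """Warp the stored background by the estimated camera motion
    (dynamic background compensation), then apply background
    subtraction to obtain a rough foreground mask."""
    dy, dx = shift
    predicted_bg = np.roll(np.roll(background, dy, axis=0), dx, axis=1)
    diff = np.abs(frame.astype(int) - predicted_bg.astype(int))
    return diff > thresh, predicted_bg
```

In a real system the predicted background would also be refreshed with the compensated pixels of each new frame, which is what keeps the prediction error from propagating as the camera keeps moving.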
There are increasing demands to detect unusual events in various environments nowadays. Deploying cameras in public and private areas to form a ubiquitous surveillance system is thought to be very helpful in ensuring people's safety. However, as more and more cameras are installed, it becomes impractical to allocate enough human operators to monitor them all effectively. Advanced surveillance systems that can actively monitor an area or object of interest and automatically identify abnormal situations are therefore a promising solution. Such systems rely on analyzing the visual data recorded by the cameras to determine whether unusual events happen, so object tracking in video frames is an important issue that needs to be investigated thoroughly.
In this research, we adopt Pan-Tilt-Zoom (PTZ) cameras in our surveillance environment and propose a novel detection and tracking algorithm for the dynamic scene videos captured by a PTZ camera. Our system first uses a static-scene tracking algorithm to construct the background, then switches to a dynamic-scene tracking algorithm when the camera starts moving. Optical flow is used to detect the background motion and predict the current background image, and background subtraction is then applied to obtain rough foreground regions. To better predict the next frame, we compensate the predicted background to prevent error propagation. Finally, the watershed algorithm yields a more precise contour of the foreground object, and the camera is controlled accordingly to keep tracking the object. Experimental results show the feasibility of the proposed system.
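The final step, steering the camera to keep the tracked object in view, amounts to mapping the object's centroid offset from the image center to pan/tilt commands. The sketch below illustrates that control step under assumed conventions (the direction names, deadband value, and function name are illustrative; the thesis does not specify its control law):

```python
import numpy as np

def pan_tilt_command(mask, deadband=0.1):
    """Map the tracked object's centroid offset from the image center to a
    coarse pan/tilt command for a PTZ camera.  `mask` is the boolean
    foreground mask of the tracked object."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return ("hold", "hold")                  # nothing to track
    h, w = mask.shape
    # Normalized offsets in [-1, 1]; positive x = object right of center,
    # positive y = object below center (image coordinates).
    off_x = (xs.mean() - w / 2) / (w / 2)
    off_y = (ys.mean() - h / 2) / (h / 2)
    pan = "right" if off_x > deadband else "left" if off_x < -deadband else "hold"
    tilt = "down" if off_y > deadband else "up" if off_y < -deadband else "hold"
    return (pan, tilt)
```

The deadband keeps the camera still while the object stays near the center, which avoids constant small corrections; this also matters when the object suddenly stops, a case the abstract notes the method handles correctly.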