| Author: | 吳亞倫 Ya-lun Wu |
|---|---|
| Thesis title: | 多攝影機協同物件追蹤的智慧型視訊監控 (Multi-camera Cooperative Object Tracking for Intelligent Video Surveillance) |
| Advisor: | 陳慶瀚 Ching-han Chen |
| Committee members: | |
| Degree: | Master |
| Department: | College of Electrical Engineering and Computer Science, Department of Computer Science & Information Engineering |
| Graduation academic year: | 100 (ROC calendar) |
| Language: | Chinese |
| Pages: | 74 |
| Keywords (Chinese): | multi-camera cooperative object tracking, object tracking, intelligent surveillance |
| Keywords (English): | object tracking, multi-camera object tracking, intelligent video surveillance |
In video surveillance and robot vision applications, object detection and tracking play a central role, and building a stable surveillance platform remains an ongoing research goal. Surveillance equipment, however, is often constrained by its hardware: a single camera typically covers too small an area or leaves blind spots. To mitigate this, some systems adopt ultra-wide-angle lenses, digital PTZ cameras, or multi-camera architectures; all of these approaches aim to enlarge the monitored area and widen the field of view toward full coverage.
For multi-camera video surveillance, this thesis proposes a robust and efficient multi-camera cooperative tracking method. Compared with single-camera surveillance, it achieves more complete coverage, enlarging both the monitored area and the viewing angle. We first use progressive background modeling and calibrate the overlap regions between cameras, then extract foreground objects with connected-component analysis and track them adaptively with a PSO (particle swarm optimization) algorithm augmented by an error-correction mechanism. When a tracked object enters an overlap region and is about to leave the current camera's view, the cameras negotiate a hand-off protocol that transfers the tracking right, yielding reliable and robust continuous multi-camera tracking. We also propose a performance-evaluation method for object tracking and use it to assess the multi-camera surveillance system designed in this work.
Most existing multi-camera tracking methods focus on recovering depth information of objects in the overlap region; our method instead aims to extend the surveillance coverage, and it realizes continuous multi-camera object tracking for video surveillance without complicated scene-parameter setup.
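The hand-off idea described above can be sketched in a few lines. This is a minimal illustration, not the thesis's exact protocol: the `Camera` class, the 1-D view/overlap intervals, and the token-transfer rule are all simplifying assumptions introduced here.

```python
# Sketch of a token-based camera hand-off: the camera holding the "tracking
# token" follows the object; once the object enters the calibrated overlap
# region, the token is transferred to a neighboring camera that also sees it.
# View and overlap regions are modeled as 1-D intervals for illustration.

class Camera:
    def __init__(self, name, view, overlap):
        self.name = name
        self.view = view        # (x_min, x_max) covered by this camera
        self.overlap = overlap  # (x_min, x_max) shared with the neighbor

    def sees(self, x):
        return self.view[0] <= x <= self.view[1]

    def in_overlap(self, x):
        return self.overlap[0] <= x <= self.overlap[1]

def handoff(cameras, holder, x):
    """Return the name of the camera that should hold the tracking token
    for an object at position x."""
    cam = cameras[holder]
    if cam.in_overlap(x):
        # Object is inside the shared region: pass the token to a neighbor
        # that also sees it, so tracking continues without interruption.
        for name, other in cameras.items():
            if name != holder and other.sees(x):
                return name
    return holder
```

A real implementation would additionally check the object's motion direction (transferring only when it is about to leave the current view) and confirm the appearance match in the receiving camera before committing the transfer.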
In security surveillance and robot vision applications, object detection and tracking play an important role, and establishing a stable intelligent surveillance platform is the ultimate goal. A single camera, however, suffers from a narrow angle of view and blind spots, so wide-angle lenses, PTZ (pan-tilt-zoom) cameras, or multi-camera systems are used to address these problems.
For multi-camera surveillance, we propose a robust and efficient cooperative tracking mechanism to solve the foregoing problems. We first apply progressive background modeling and calibrate the demarcation (overlap region) between cameras, extract the foreground objects of interest with connected-component analysis, and attach a PSO tracker to each. When a tracked object is about to leave the current camera's view, a cooperative hand-off protocol transfers the tracking token so that tracking continues in the neighboring camera; this cooperation between cameras achieves reliable and robust continuous tracking. Finally, we propose an NGT measure to evaluate tracking performance.
Compared with other multi-camera approaches, most of which aim the cameras at the overlap region in order to obtain depth information as a tracking cue, our method favors a larger surveillance view over a large overlap. With an uncomplicated camera-calibration setup, it realizes intelligent surveillance through multi-camera cooperative object tracking.
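The PSO tracking step can be illustrated with a small sketch. This is a generic particle swarm optimizer over a 2-D search window, assuming the tracker scores each candidate position with a fitness function (e.g. the distance between the target's appearance model and the candidate image patch); the function names, swarm size, and coefficients are illustrative choices, not values from the thesis.

```python
# Minimal PSO sketch: a swarm of candidate object positions converges toward
# the position minimizing `fitness(x, y)` inside the search window `bounds`.
import random

def pso_track(fitness, bounds, n_particles=20, iters=30, w=0.7, c1=1.4, c2=1.4):
    """Minimize fitness(x, y) over bounds = ((xmin, xmax), (ymin, ymax))."""
    (xmin, xmax), (ymin, ymax) = bounds
    pos = [[random.uniform(xmin, xmax), random.uniform(ymin, ymax)]
           for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_f = [fitness(*p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]    # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            # Keep particles inside the search window.
            pos[i][0] = min(max(pos[i][0], xmin), xmax)
            pos[i][1] = min(max(pos[i][1], ymin), ymax)
            f = fitness(*pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

In an actual tracker, `fitness` would compare a candidate patch against the target model (for example, via a color-histogram distance), and the window `bounds` would be centered on the object's previous position.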