
Graduate Student: 張庭豪 (Ting-Hao Zhang)
Thesis Title: Multiple Objects Tracking Using Sample-based Data Association for Mixed Images (採用以取樣為基礎的資料關聯技術於混合影像序列之多物件追蹤)
Advisor: 唐之瑋 (Chih-Wei Tang)
Committee Members:
Degree: Master
Department: Department of Communication Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2016
Academic Year of Graduation: 104
Language: Chinese
Number of Pages: 84
Keywords (Chinese): multiple object tracking, mixed images, data association, co-inference fusion, maximum likelihood, occlusion, joint likelihood (多物件追蹤、混合影像、資料關聯、共推論融合、最大似然機率、遮蔽、聯合似然機率)
Keywords (English): multiple objects tracking, mixed images, data association, co-inference tracking, maximum joint likelihood
Abstract (Chinese): In object tracking, when an object moves into an image region with strong specular reflections, the drastic change in object appearance tends to lower tracking accuracy. Moreover, multiple object tracking requires computing the data association between each measurement and each object, and incorrect measurements degrade tracking accuracy. This thesis therefore proposes a sample-based multiple object tracking algorithm for mixed images. First, it proposes a simplified RANSAC scheme to estimate camera motion, which improves the performance of the compensated motion model and of motion compensated layer separation. The thesis adopts a sample-based joint probabilistic data association filter that incorporates the object states obtained from co-inference tracking to compute the association between objects and measurements, improving the correctness of data association. In addition, the thesis proposes maximizing the joint likelihood, applying maximum likelihood estimation to the joint likelihood computed from appearance and trajectory information, and introduces a measurement confidence indicator that supplies occlusion information to improve the accuracy of the correction stage of co-inference tracking. Finally, the object appearance model is updated based on appearance similarity. Experimental results show that the proposed multiple object tracking algorithm effectively copes with reflections of varying strength and with occlusions, improving the robustness and accuracy of the tracking system.
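The abstract does not spell out the simplified RANSAC scheme. As a rough illustration only, the sketch below estimates global camera motion with RANSAC under the simplifying assumption of a pure 2-D translation between frames; the function name, minimal sample size, and parameters are hypothetical, not the thesis's actual design.

```python
import random

def ransac_translation(matches, n_iter=200, inlier_tol=2.0, seed=0):
    """Estimate global camera motion as a 2-D translation with RANSAC.

    matches: list of ((x1, y1), (x2, y2)) feature correspondences between
    consecutive frames.  Returns the (dx, dy) hypothesis supported by the
    most inliers, together with that inlier count.
    """
    rng = random.Random(seed)
    best_model, best_inliers = (0.0, 0.0), 0
    for _ in range(n_iter):
        # Minimal sample for a translation model is a single correspondence.
        (x1, y1), (x2, y2) = rng.choice(matches)
        dx, dy = x2 - x1, y2 - y1
        # Count correspondences whose displacement agrees with (dx, dy).
        inliers = sum(
            1 for (a, b), (c, d) in matches
            if abs((c - a) - dx) + abs((d - b) - dy) <= inlier_tol
        )
        if inliers > best_inliers:
            best_model, best_inliers = (dx, dy), inliers
    return best_model, best_inliers
```

Outlier correspondences (for example, matches on independently moving objects) support only their own displacement, so the consensus model reflects the dominant camera motion.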


For object tracking, an object that moves into a region with strong specular reflections decreases tracking accuracy because of the significant change in the target's appearance. In addition, data association between measurements and objects is needed for multiple object tracking, because wrong measurements decrease tracking accuracy. Thus, this thesis proposes a sample-based multiple object tracking scheme for mixed images. First, this thesis proposes a simplified RANSAC method to estimate camera motion, which improves the efficiency of the compensated motion model and of motion compensated layer separation. This thesis adopts the sample-based joint probabilistic data association filter, which refers to the co-inference-tracking-based object state to improve the accuracy of data association. In addition, this thesis proposes to maximize the joint likelihood that considers appearance and trajectory information at the correction stage. This thesis also proposes an occlusion confidence indicator that provides occlusion information to improve accuracy in the co-inference-tracking-based correction stage. Finally, this thesis updates the target appearance model according to the similarity of the appearance model. Experimental results show that the proposed scheme effectively improves robustness and accuracy under varying specular reflection and occlusion conditions.
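The exact likelihood models are not given in the abstract. As an illustrative sketch of combining appearance and trajectory cues into a joint likelihood for data association, the code below uses a Gaussian trajectory term around the predicted position and a Bhattacharyya-based appearance term over color histograms, both common choices; all names and parameters are hypothetical.

```python
import math

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalized histograms."""
    return sum(math.sqrt(a * b) for a, b in zip(p, q))

def joint_likelihood(measurement, predicted_pos, target_hist, meas_hist,
                     sigma_pos=10.0, sigma_app=0.2):
    """Joint likelihood of one measurement for one target: a Gaussian
    trajectory term times an appearance term based on the Bhattacharyya
    distance between histograms."""
    dx = measurement[0] - predicted_pos[0]
    dy = measurement[1] - predicted_pos[1]
    l_traj = math.exp(-(dx * dx + dy * dy) / (2.0 * sigma_pos ** 2))
    d_app = math.sqrt(max(0.0, 1.0 - bhattacharyya(target_hist, meas_hist)))
    l_app = math.exp(-(d_app ** 2) / (2.0 * sigma_app ** 2))
    return l_traj * l_app

def associate(measurements, hists, predicted_pos, target_hist):
    """Pick the measurement index maximizing the joint likelihood."""
    scores = [joint_likelihood(m, predicted_pos, target_hist, h)
              for m, h in zip(measurements, hists)]
    return max(range(len(scores)), key=scores.__getitem__), scores
```

In a sample-based filter, such per-measurement likelihoods would be evaluated per particle and normalized into association weights rather than hard-assigned as here.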

Chinese Abstract (摘要)
Abstract
Acknowledgments (誌謝)
Table of Contents (目錄)
List of Figures (圖目錄)
List of Tables (表目錄)
Chapter 1  Introduction
    1.1  Preface
    1.2  Motivation
    1.3  Research Methods
    1.4  Thesis Organization
Chapter 2  Overview of Data Association Techniques for Object Tracking
    2.1  Single Object Tracking
        2.1.1  Bayesian Filter
        2.1.2  Particle Filter (PF)
        2.1.3  Probability Data Association Filter (PDAF)
    2.2  Overview of Data Association for Multiple Object Tracking
        2.2.1  Data Association between Objects and Measurements
        2.2.3  Data Association between Multiple Objects and Multiple Measurements
    2.3  Summary
Chapter 3  Overview of Tracking Techniques for Mixed Image Sequences
    3.1  Feature Point Tracking in Mixed Images
        3.1.1  Layer Separation Using Color Independence
        3.1.2  Layer Separation Using Intrinsic Images
    3.2  Object Tracking in Mixed Sequences
        3.2.1  Visual Tracking Using Blind Source Separation for Mixed Images
        3.2.2  Robust Tracking Using Visual Cue Integration for Mobile Mixed Images
    3.3  Summary
Chapter 4  The Proposed Multiple Object Tracking Scheme for Mixed Image Sequences
    4.1  System Architecture
    4.2  Layer Separation of Mixed Images and Construction of Motion Masks
        4.2.1  Estimation of Camera Motion
        4.2.2  Construction of Motion Masks
    4.3  Maximization of the Joint Likelihood
    4.4  Calculation of Measurement Reliability Based on the Confidence Indicator
    4.5  Target Appearance Model Update
    4.7  Summary
Chapter 5  Experimental Results and Discussion
    5.1  Experimental Parameters and Test Sequence Specifications
    5.2  Experimental Results of the Tracking System
        5.2.1  Experimental Results of the ACF Pedestrian Detector
        5.2.2  Tracking Accuracy with Root Mean Square Error
        5.2.3  Tracking Accuracy with Multiple Object Tracking Accuracy (MOTA)
        5.2.4  Multiple Object Tracking Precision (MOTP)
        5.2.5  Accuracy of the Joint Likelihood
        5.2.6  Accuracy of the Spatial Information of Motion Masks
        5.2.7  Accuracy of the Target Appearance Model Update
        5.2.8  Occlusion Confidence Indicator Based Data Association
        5.2.9  Time Complexity
    5.3  Summary
Chapter 6  Conclusions and Future Work
References
Publications

