
Author: 楊善雯 (Shan-wen Yang)
Title: 亮度一致的全周俯瞰監視與障礙物偵測 (Brightness-consistent Surrounding Top-view Monitor and Obstacle Detection)
Advisor: 曾定章 (Din-Chang Tseng)
Committee:
Degree: Master
Department: College of Electrical Engineering & Computer Science - Department of Computer Science & Information Engineering
Year of publication: 2013
Graduation academic year: 101 (2012-2013)
Language: Chinese
Pages: 98
Keywords: surrounding top-view monitor, brightness consistency, obstacle detection
    Some road traffic accidents occur because, while the vehicle is moving, the driver fails to notice an obstacle and a collision results. Contributing causes include the blind spots created by the vehicle body structure and the angles of the rear-view mirrors, which prevent the driver from fully perceiving the vehicle's surroundings and lead to injuries and vehicle damage. To avoid accidents caused by blind spots, and to improve safety while parking and passing other vehicles at low speed, this thesis proposes a surrounding top-view monitoring and obstacle detection system. The system consists of two major parts: a surrounding top-view monitor that helps the driver observe the conditions around the vehicle, and an active obstacle detector that alerts the driver to nearby obstacles.
    The surrounding top-view monitor mounts four wide-angle cameras around the vehicle to capture images of its surroundings. Offline, the camera intrinsic parameters are computed and lens distortion and vignetting are corrected. A large calibration board is then used to obtain, from feature-point correspondences, the geometric relations among the four top-view images, so that they can be quickly registered into a single surrounding top-view image of the vehicle; finally, all of these parameters are stored in a look-up table. In the online stage, the system interpolates a real-time surrounding top-view image from the look-up table and adjusts the brightness uniformity of the whole image according to the luminance differences in the overlapping regions.
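The overlap-based brightness adjustment described above can be sketched in a few lines. This is a minimal illustration only: the function name `brightness_gains` and the per-camera multiplicative-gain model are assumptions for exposition, not the thesis's exact formulation.

```python
import numpy as np

def brightness_gains(images, overlaps):
    """Estimate one multiplicative gain per camera so that mean
    luminance agrees in each overlapping ground region.

    images   -- list of float grayscale top-view tiles
    overlaps -- list of (i, j, mask_i, mask_j): a camera pair and the
                boolean masks of their shared ground region
    """
    gains = np.ones(len(images))
    # Simple sequential matching: scale camera j toward camera i.
    for i, j, mi, mj in overlaps:
        mean_i = images[i][mi].mean() * gains[i]
        mean_j = images[j][mj].mean() * gains[j]
        if mean_j > 0:
            gains[j] *= mean_i / mean_j
    return gains
```

In a real four-camera rig the chain front → left → rear → right would be matched pairwise in this way, and the resulting gains applied before blending.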
    The top-view obstacle detection system chooses a detection method suited to the characteristics of the environment. When the road texture is complex, optical flow is estimated on the top-view images and the ego-motion vector of the vehicle is derived from it; obstacles are then extracted after filtering and clustering the flow vectors. On plainly textured roads, static color information is used instead: the color distribution of the road surface is computed, road pixels and ground markings are filtered out, and the remaining obstacle regions are extracted and reported to the driver.
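The color-based branch can be illustrated with a small sketch. The per-channel z-score test below (`obstacle_mask` is a hypothetical name) is one simple way to model "the color distribution of the road surface"; the thesis's actual pixel classification is more elaborate.

```python
import numpy as np

def obstacle_mask(topview, road_sample, k=2.5):
    """Flag pixels whose colour deviates from the road distribution.

    topview     -- H x W x 3 float image (bird's-eye view)
    road_sample -- N x 3 array of pixels known to be road (e.g. the
                   area just in front of the vehicle)
    k           -- distance threshold in standard deviations
    """
    mean = road_sample.mean(axis=0)
    std = road_sample.std(axis=0) + 1e-6  # avoid division by zero
    # A pixel is "road-like" only if every channel stays within
    # k standard deviations of the sampled road colour.
    z = np.abs((topview - mean) / std)
    return (z > k).any(axis=-1)  # True = obstacle candidate
```

Ground markings (lane lines, arrows) would still trigger this test; the thesis filters them out in a separate step before labeling obstacles with connected components.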
    With images of 720×480 resolution, the surrounding top-view monitor runs at 24 frames per second on a PC with an Intel® Core™2 Duo 2.93 GHz CPU and 4.00 GB of RAM. On the same hardware, the obstacle detection procedure averages 16 frames per second, with an average obstacle detection accuracy of 88%.


    Many traffic accidents are caused by drivers' incomplete understanding of the vehicle's surroundings. To reduce accidents caused by collisions with surrounding obstacles, we mount four wide-angle cameras at the front, rear, and both sides of the vehicle to capture consecutive images; we then present a real-time surrounding top-view monitor and obstacle detection system for slow-driving and parking assistance.
    In the offline steps of the surrounding top-view monitor system, we first calibrate the camera intrinsic parameters, lens distortion, and vignetting effects of the four wide-angle cameras. Then we calibrate the geometric relationships (extrinsic parameters) of the four cameras using a proposed multi-camera calibration method. Third, we calculate the feathering weights of pixels in the overlapping image areas to produce a seamless surrounding top-view image. Finally, we build look-up tables for the mapping between the captured images and the synthesized surrounding image to speed up the processing. In the online procedure, the proposed system interpolates and generates the synthesized surrounding image directly from those look-up tables.
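The offline/online split around look-up tables can be sketched as follows. `build_lut` and `remap_bilinear` are hypothetical names, the homography is assumed already known from calibration, and the bilinear sampling shown is one common interpolation choice rather than necessarily the thesis's exact one.

```python
import numpy as np

def build_lut(H_inv, out_shape):
    """Offline: for every output (top-view) pixel, precompute the
    fractional source coordinates under the inverse homography."""
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1).astype(float)
    src = H_inv @ pts
    src = src / src[2]                    # perspective divide
    return src[0].reshape(h, w), src[1].reshape(h, w)

def remap_bilinear(img, lut_x, lut_y):
    """Online: sample the captured image at the LUT coordinates
    with bilinear interpolation (no per-frame geometry needed)."""
    h, w = img.shape[:2]
    x0 = np.clip(np.floor(lut_x).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(lut_y).astype(int), 0, h - 2)
    fx, fy = lut_x - x0, lut_y - y0
    top = img[y0, x0] * (1 - fx) + img[y0, x0 + 1] * fx
    bot = img[y0 + 1, x0] * (1 - fx) + img[y0 + 1, x0 + 1] * fx
    return top * (1 - fy) + bot * fy
```

Because the geometry never changes after calibration, all projective arithmetic happens once in `build_lut`; the per-frame cost is only the interpolation, which is what makes 24 fps feasible.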
    In the obstacle detection system, we use different algorithms for driving environments of different texture complexity. If the texture of the road surface is complicated, we can generally detect enough features to estimate optical flow from the captured images; after estimating the ego-motion of the vehicle, we can distinguish non-ground features from ground features. If the texture of the road surface is simple and too few features are found, obstacle detection is instead performed with static color information: we use the color of the road region to separate obstacles from the road. In our experiments, the detection accuracy is about 88%.
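The flow-based branch can be sketched as below. The median-flow ego-motion estimate is a deliberate simplification (valid for a top-view image of a mostly translating vehicle where ground points share one common flow vector); `flag_obstacle_flows` and the threshold are illustrative assumptions, not the thesis's exact procedure.

```python
import numpy as np

def flag_obstacle_flows(flows, thresh=2.0):
    """Separate obstacle features from ground features by flow residual.

    flows  -- N x 2 array of optical-flow vectors (pixels/frame) at
              tracked feature points in the top-view image
    thresh -- residual magnitude (pixels) above which a feature is
              considered a candidate obstacle point
    """
    ego = np.median(flows, axis=0)            # robust ego-motion estimate
    residual = np.linalg.norm(flows - ego, axis=1)
    return residual > thresh, ego
```

Features flagged here would then be clustered (the thesis groups obstacle edge points into connected regions) before the driver is alerted.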

    Abstract (Chinese) i
    Abstract ii
    Acknowledgements iii
    Table of Contents iv
    List of Figures viii
    List of Tables xiii
    Chapter 1  Introduction 1
      1.1 Motivation 1
      1.2 System Overview 2
      1.3 Thesis Organization 3
    Chapter 2  Related Work 5
      2.1 Vehicle Surround-View Monitoring Systems 5
      2.2 Vignetting Correction 11
      2.3 Distortion Correction 14
      2.4 Obstacle Detection 15
        2.4.1 Machine Learning on Static Information 15
        2.4.2 Binocular Stereo Vision 16
        2.4.3 Monocular Motion Information 20
        2.4.4 Hybrid Methods 23
    Chapter 3  Camera Calibration 25
      3.1 Camera Parameter Calibration 25
        3.1.1 Camera Model 25
        3.1.2 Calibration Method 29
        3.1.3 Constraints on the Intrinsic Parameters 30
        3.1.4 Solving the Intrinsic and Extrinsic Parameters 31
        3.1.5 Estimating the Optimal Solution 32
      3.2 Lens Distortion Correction 33
        3.2.1 Distortion Model 34
        3.2.2 Estimating the Distortion Parameters 35
      3.3 Lens Vignetting Correction 37
        3.3.1 Vignetting Model 37
        3.3.2 Estimating the Vignetting Parameters 38
    Chapter 4  Top-view Transformation and Stitching 40
      4.1 Top-view Transformation 40
        4.1.1 Planar Projective Transformation 40
        4.1.2 Solving the Planar Projective Transformation from Feature Correspondences 41
      4.2 Fast Image Registration 43
        4.2.1 Image Correspondences 44
        4.2.2 Camera Correspondences 44
        4.2.3 Fast Image Registration 45
      4.3 Look-up Tables for Interpolation and Color Blending 46
        4.3.1 Interpolation 46
        4.3.2 Color Blending 48
        4.3.3 Table Construction 49
      4.4 Real-time Brightness Adjustment 50
    Chapter 5  Obstacle Detection in Top-view Images 54
      5.1 Feature Point Detection 54
        5.1.1 Corner Detection 54
        5.1.2 Edge Point Detection 56
      5.2 Obstacle Detection Using Motion Vectors 57
        5.2.1 Estimating Optical-Flow Vectors 58
        5.2.2 Filtering Erroneous Flow 59
        5.2.3 Computing the Ego-motion Vector 60
        5.2.4 Detecting Candidate Obstacle Feature Points 60
        5.2.5 Clustering Obstacle Edge Points 61
      5.3 Obstacle Detection Based on Static Color Information 62
        5.3.1 Defining the Pixel Classes 62
        5.3.2 Computing the Road Color Distribution 62
        5.3.3 Filtering the Road Surface and Ground Markings 63
        5.3.4 Labeling Obstacles with Connected Components 65
    Chapter 6  Experiments 66
      6.1 Experimental Environment 66
      6.2 Camera Calibration 67
      6.3 Vignetting Correction 69
      6.4 Surrounding Top-view Monitor System 70
      6.5 Top-view Obstacle Detection 71
        6.5.1 Obstacle Detection Using Motion Vectors 71
        6.5.2 Obstacle Detection Based on Color 73
        6.5.3 Detection Accuracy 74
    Chapter 7  Conclusions and Future Work 76
    References 78

