| Author: | Yi-hsuan Hsieh (謝易軒) |
|---|---|
| Thesis title: | Real obstacle detection for autonomous vehicle (自走車的立體障礙物偵測) |
| Advisor: | Din-chang Tseng (曾定章) |
| Oral examination committee: | |
| Degree: | Master |
| Department: | Department of Computer Science & Information Engineering, College of Electrical Engineering & Computer Science |
| Year of publication: | 2014 |
| Academic year of graduation: | 102 |
| Language: | Chinese |
| Number of pages: | 80 |
| Keywords (Chinese): | 自走車, 障礙物, 光流 |
| Keywords (English): | autonomous vehicle, obstacle, optical flow |
Automated mobile platforms that follow a person or a designated object can be applied in many different environments, for example to carry goods or transport people. Such an autonomous vehicle advances along the trajectory of its leader, but moving obstacles may appear along the way, so a method is needed to detect whether obstacles on the vehicle's path block its progress and to avoid collisions. These collisions include striking an obstacle on the path or coming too close to the object ahead. The main goal of this study is therefore to mount a monocular camera on an autonomous vehicle, detect obstacles ahead and determine whether they are real obstacles, and then adjust the vehicle's speed in real time to avoid collisions.
Our detection system consists of the following steps. First, edge points are extracted from the image as feature points for obstacle detection. Second, the image is divided into cells and the HOG (histogram of oriented gradients) features of each cell are computed; only the orientations with a distinct response in each cell are retained as candidate obstacle-edge features, which reduces the influence of noise. Third, each frame is decomposed into three resolutions and pyramidal optical flow is estimated for the feature points, so that their motion vectors are obtained faster and more accurately. Fourth, the optical-flow vectors on the same plane are adjusted according to their positions so that the subsequent clustering is more reliable. Fifth, the feature points are clustered into regions by flow length and color information; regions that are likely planar objects are removed, overlapping regions are then deleted, and the remaining regions are taken as real three-dimensional obstacles. Finally, the distance between each obstacle and the vehicle determines whether the vehicle's travelling speed should be adjusted.
The real-obstacle detection system was built on an autonomous vehicle based on an electric wheelchair, with an image-capture device mounted on the vehicle. With 320×240 input images, the system runs on a personal computer with an Intel Core™ i3-2370M 2.4 GHz CPU and 8 GB RAM, reaching 20 to 30 frames per second with an accuracy of about 90%.
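The per-cell orientation filtering in the second step can be sketched in numpy as follows. This is a minimal illustration, not the thesis's implementation: the cell size, bin count, and the `keep_ratio` suppression threshold are assumed values.

```python
import numpy as np

def cell_orientation_features(gray, cell=8, bins=9, keep_ratio=0.5):
    """Split a grayscale image into cells, build a gradient-orientation
    histogram (HOG-style, unsigned 0..pi) per cell, and zero out every
    bin weaker than keep_ratio times the cell's strongest bin, keeping
    only clearly dominant edge directions."""
    gy, gx = np.gradient(gray.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)            # unsigned orientation
    bin_idx = np.minimum((ang / np.pi * bins).astype(int), bins - 1)
    rows, cols = gray.shape[0] // cell, gray.shape[1] // cell
    hist = np.zeros((rows, cols, bins))
    for r in range(rows):
        for c in range(cols):
            sl = (slice(r * cell, (r + 1) * cell),
                  slice(c * cell, (c + 1) * cell))
            for b in range(bins):
                hist[r, c, b] = mag[sl][bin_idx[sl] == b].sum()
    peak = hist.max(axis=2, keepdims=True)             # strongest bin per cell
    hist[hist < keep_ratio * peak] = 0.0               # suppress weak, noisy bins
    return hist
```

Suppressing all but the dominant bins is one plausible reading of "keep only the directions with a distinct response"; the thesis may use a different criterion.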
Several kinds of automatic mobile platforms can move by following a person or a specific object in various situations, such as carrying goods or people. These platforms follow the path of their guide; however, moving obstacles may appear on that path. A method is therefore needed to detect whether real obstacles are present and to prevent collisions. Our research focuses on detecting the obstacles in front of an autonomous vehicle with a single camera mounted on the platform and on controlling the velocity of the vehicle in real time.
The obstacle-detection method consists of the following steps. First, feature points are selected from the edge points of the image. Second, the image is split into a number of cells, the HOG features of each cell are computed, and only the orientations with a clearly dominant response in each cell are kept as candidate obstacle-edge features, which reduces interference from noise. Third, the image is represented at three different resolutions and the motion vectors of the feature points are computed more accurately and more efficiently with the pyramidal Lucas-Kanade method. Fourth, to make the subsequent clustering more accurate, the optical-flow vectors on the same plane are adjusted according to their positions. Fifth, the feature points are clustered into regions according to flow length and color information; regions that are likely planar objects and regions that overlap are removed, so the remaining regions can be considered real obstacles in the real world. Finally, the velocity of the autonomous vehicle is controlled according to the distance between the obstacles and the vehicle.
The real-obstacle detection system was built on an autonomous vehicle based on an electric wheelchair. The image-capture device on the vehicle supplies 320×240 images, and the detection system runs on a personal computer with an Intel Core™ i3-2370M 2.4 GHz CPU and 8 GB RAM at 20 to 30 frames per second; the detection rate is about 90%.
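The coarse-to-fine flow estimation of the third step can be sketched as below: a three-level pyramid built by 2×2 averaging, with the Lucas-Kanade normal equations solved at each level and the estimate doubled when moving to the next finer level. The window size, level count, and iteration count are assumptions for illustration; the thesis follows Bouguet's pyramidal Lucas-Kanade formulation.

```python
import numpy as np

def lk_step(I, J, x, y, u, v, win=7):
    """One Lucas-Kanade least-squares update of the flow (u, v) at point
    (x, y): compare the template window in I with the window in J shifted
    by the current estimate, then solve the 2x2 normal equations."""
    r = win // 2
    xi, yi = int(round(x)), int(round(y))
    xj, yj = int(round(x + u)), int(round(y + v))
    H, W = I.shape
    if not (r <= xi < W - r and r <= yi < H - r and
            r <= xj < W - r and r <= yj < H - r):
        return u, v                        # window falls outside the image
    P = I[yi - r:yi + r + 1, xi - r:xi + r + 1]
    Q = J[yj - r:yj + r + 1, xj - r:xj + r + 1]
    Iy, Ix = np.gradient(P)                # spatial gradients of the template
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    ATA = A.T @ A
    if np.linalg.det(ATA) < 1e-9:          # flat window: aperture problem
        return u, v
    du, dv = np.linalg.solve(ATA, -A.T @ (Q - P).ravel())
    return (xj - xi) + float(du), (yj - yi) + float(dv)

def downsample(img):
    """Halve the resolution by 2x2 averaging (a crude pyramid level)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    a = img[:h, :w]
    return (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2]) / 4.0

def pyramidal_lk(I, J, pt, levels=3, win=7, iters=3):
    """Coarse-to-fine flow for one feature point: estimate at the
    coarsest level, then double and refine at each finer level."""
    pyrI, pyrJ = [np.asarray(I, float)], [np.asarray(J, float)]
    for _ in range(levels - 1):
        pyrI.append(downsample(pyrI[-1]))
        pyrJ.append(downsample(pyrJ[-1]))
    u = v = 0.0
    for lv in range(levels - 1, -1, -1):
        x, y = pt[0] / 2 ** lv, pt[1] / 2 ** lv
        for _ in range(iters):
            u, v = lk_step(pyrI[lv], pyrJ[lv], x, y, u, v, win)
        if lv:                             # propagate to the next finer level
            u, v = 2 * u, 2 * v
    return u, v
```

Working from the coarsest level first is what lets a small window capture motions larger than the window itself, which is why the pyramid makes the estimate both faster and more accurate.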
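The final speed adjustment could follow a simple distance-based policy such as the hypothetical sketch below; the thresholds, maximum speed, and linear ramp are invented for illustration and are not taken from the thesis.

```python
def safe_speed(distance_m, v_max=1.2, stop_dist=0.5, slow_dist=2.0):
    """Hypothetical speed policy: stop when the nearest obstacle is closer
    than stop_dist metres, run at v_max beyond slow_dist metres, and scale
    the speed linearly in between."""
    if distance_m <= stop_dist:
        return 0.0
    if distance_m >= slow_dist:
        return v_max
    return v_max * (distance_m - stop_dist) / (slow_dist - stop_dist)
```

A continuous ramp like this avoids the jerky stop-and-go behaviour that a single on/off distance threshold would produce.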