| Student: | Jun-wei Zhang (張鈞為) |
|---|---|
| Title: | Refining Image-based Parking Guiding and Surrounding Top-view Monitoring |
| Advisor: | Din-Chang Tseng (曾定章) |
| Committee: | |
| Degree: | Master |
| Department: | College of Electrical Engineering & Computer Science - Department of Computer Science & Information Engineering |
| Graduation academic year: | 100 (ROC calendar; 2011-2012) |
| Language: | Chinese |
| Pages: | 85 |
| Keywords (Chinese): | camera parameter calibration, image registration, bird's-eye-view transformation, image distortion correction, optical flow, image vignetting compensation |
| Keywords (English): | distortion correction, camera calibration, optical flow |
The main cause of car accidents is that the driver fails to notice obstacles while the vehicle is moving. In particular, the blind zones created by the mirrors' dead angles and by the vehicle body itself are areas many drivers cannot see; collisions within these zones frequently damage the vehicle and injure people. To improve safety during parking and to reduce the time parking takes, we propose a refined image-based parking guidance and surrounding top-view monitoring system and implement part of it on an embedded platform. The system has two parts: image-based parking guidance, which helps the driver steer into a parking space, and surrounding top-view monitoring, which lets the driver observe the conditions around the vehicle.
The image-based parking guidance system estimates optical flow from the rear-view image; after filtering and accumulating the flow vectors, it computes the front-wheel angle and draws the predicted driving trajectory. The surrounding top-view monitoring system mounts wide-angle cameras on the four sides of the vehicle to capture the surroundings. An off-line stage performs camera calibration, distortion correction, vignetting correction, and bird's-eye-view transformation to obtain the relative geometry of the four top-view images. A camera placed above the vehicle then captures features around it, so the four top-view images can be quickly registered into a single surrounding top-view image; finally, color-blending weights are computed and all parameters are packed into a lookup table. In the on-line stage, histogram equalization first adjusts the brightness distribution, and the lookup table is then used for interpolation and vignetting removal to produce the surrounding top-view image.
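The vignetting correction step can be illustrated with the textbook cos⁴ falloff model; this model and the focal-length parameter are assumptions for illustration only — the thesis fits its own vignetting model during the off-line stage. A minimal sketch:

```python
import math

def cos4_vignetting_gain(x, y, cx, cy, f):
    """Correction gain under the cos^4 vignetting falloff model: a pixel
    at radius r from the optical centre (cx, cy) of a lens with focal
    length f (in pixels) is attenuated by cos^4(theta), theta = atan(r/f),
    so we multiply by the reciprocal to compensate."""
    r = math.hypot(x - cx, y - cy)
    theta = math.atan2(r, f)
    return 1.0 / math.cos(theta) ** 4

# the centre pixel needs no correction; image corners need the most
gain_centre = cos4_vignetting_gain(320, 240, 320, 240, 400)
gain_corner = cos4_vignetting_gain(0, 0, 320, 240, 400)
```

Because the gain depends only on pixel position, it can be folded into the same per-pixel lookup table as the geometric warp.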
At an output resolution of 720×480, the rear-view parking guidance runs at 170 frames per second on a PC with an Intel® Core™2 Duo 2.83 GHz CPU and 3 GB RAM, and at 12 frames per second on a Texas Instruments® DaVinci™ DM3730 1 GHz Digital Media Processor development board. The surrounding top-view monitoring reaches 43 frames per second on the same PC.
The main reason for car accidents is that drivers cannot see the area around their vehicles. To avoid accidents while driving and to reduce the time needed for parking, we propose a refined image-based parking guidance system and a surrounding top-view monitoring system for parking assistance.
The image-based parking guidance system uses the rear-view image to estimate, filter, and accumulate optical flows, computes the front-wheel angle of the vehicle from them, and then draws the driving trajectory.
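The filter-and-accumulate idea can be sketched as follows; the outlier threshold and the linear flow-to-angle gain are hypothetical placeholders for the calibrated mapping used in the thesis:

```python
import math

def steering_angle_from_flows(flows, max_magnitude=20.0, gain=0.5):
    """Estimate a front-wheel steering angle (degrees) from rear-view
    optical-flow vectors.  Each flow is a (dx, dy) pixel displacement.
    Vectors longer than max_magnitude are treated as outliers and
    dropped; the survivors are averaged, and the mean horizontal
    component is mapped linearly to a wheel angle."""
    kept = [(dx, dy) for dx, dy in flows
            if math.hypot(dx, dy) <= max_magnitude]
    if not kept:
        return 0.0
    mean_dx = sum(dx for dx, _ in kept) / len(kept)
    return gain * mean_dx

flows = [(2.0, 0.5), (1.8, 0.4), (80.0, 3.0), (2.2, 0.6)]  # one outlier
angle = steering_angle_from_flows(flows)
```

Accumulating the angle over successive frames smooths out per-frame flow noise before the trajectory is drawn.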
The surrounding top-view monitoring system has four wide-angle cameras mounted on the front, the rear, and both sides of the vehicle to capture images. The system consists of off-line and on-line processes. In the off-line process, we calculate the camera intrinsic and extrinsic parameters and then estimate the parameters of the distortion and vignetting models for distortion correction and vignetting compensation. We then estimate the homography matrices of the four cameras from a top-view image. Lastly, we calculate the blending weights of the overlapped regions and build a lookup table for the on-line process.
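The lookup-table idea can be sketched as below; the 3×3 homography is stored row-major, and only the geometric part of the table is shown (the real table also folds in distortion correction, vignetting gains, and blending weights):

```python
def apply_homography(H, x, y):
    """Map a point through a 3x3 homography H given as a row-major
    9-element list, with the usual projective division."""
    w = H[6] * x + H[7] * y + H[8]
    return ((H[0] * x + H[1] * y + H[2]) / w,
            (H[3] * x + H[4] * y + H[5]) / w)

def build_lookup_table(H_inv, width, height):
    """For each top-view pixel, precompute the source-image coordinate
    by pushing the pixel through the inverse homography, so the on-line
    stage only performs table lookups and interpolation."""
    return [[apply_homography(H_inv, x, y) for x in range(width)]
            for y in range(height)]

# with the identity homography the table maps every pixel to itself
H_identity = [1, 0, 0, 0, 1, 0, 0, 0, 1]
lut = build_lookup_table(H_identity, 4, 3)
```

Precomputing the table moves all per-pixel projective arithmetic off-line, which is what makes the on-line stage fast enough for the embedded board.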
In the on-line process, we use histogram equalization to adjust brightness, then interpolate the top-view image via the lookup table, compensate for the vignetting effect, and blend the overlapped regions. In our experiments, the image-based parking guidance system runs at 170 frames per second.
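The histogram-equalization step is a standard technique and can be sketched for a flat list of 8-bit gray values as:

```python
def equalize_histogram(pixels, levels=256):
    """Classic histogram equalization: build the CDF of intensities and
    remap each value so the output histogram is roughly uniform."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    n = len(pixels)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:  # constant image: nothing to stretch
        return list(pixels)
    return [round((cdf[p] - cdf_min) * (levels - 1) / (n - cdf_min))
            for p in pixels]

img = [50, 50, 100, 100, 100, 200]
out = equalize_histogram(img)  # stretched to span the full 0-255 range
```

Equalizing each camera's brightness before blending reduces visible seams where adjacent top-view images overlap.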