
Author: Chun-Hsiang Chang (張鈞翔)
Thesis Title: The Development of Deep Learning-based Raindrop Detection and Inpainting Technologies (基於深度學習之雨滴偵測及還原技術之研發)
Advisor: Mu-Chun Su (蘇木春)
Committee Members:
Degree: Master
Department: Department of Computer Science & Information Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2022
Academic Year of Graduation: 110 (2021–2022)
Language: Chinese
Number of Pages: 64
Chinese Keywords: 生成對抗神經網路、深度學習、車道線偵測、影像復原、影像辨識應用
English Keywords: GAN, Deep Learning, Lane Detection, Image Inpainting, Image Recognition Application
    Taiwan, surrounded by the sea on all sides, covers only 36,197 square
    kilometers, yet its average annual rainfall of 2,515 mm is roughly three
    times the world average. Taiwan lies within the monsoon climate zone and
    on the path of typhoons generated in the Pacific Ocean; on average, three
    to four typhoons strike each year. Such heavy rainfall causes many
    disasters and can also throw vehicle electronics out of calibration.

    With the increasing development of advanced driver assistance systems,
    our daily driving is supported by many technologies. Beyond traditional
    radar-based detection of hazards from surrounding obstacles and vehicles,
    deep learning has in recent years also been applied to lane-line
    detection, keeping the car on the correct path. When the driver is
    momentarily inattentive or otherwise distracted, such a system can warn
    of road conditions in advance or automatically control the vehicle's
    path, greatly reducing the incidence of traffic accidents. Beyond hazard
    prevention, advances in computer vision and deep learning have brought
    more and more image recognition applications to driving; many vehicles
    integrate speed-limit recognition into the head-up display, so the driver
    can concentrate on road conditions without constantly checking for
    speeding, which adds convenience as well as safety.

    However, when driving in rain or through muddy environments, the camera
    lens may be occluded, degrading or even destroying recognition ability
    and, in the worst case, compromising driving safety. This thesis
    therefore simulates raindrop occlusion of the lens on daytime street
    scenes and uses a generative adversarial network to restore the occluded
    images, raising lane-line detection accuracy from 0.916 (occluded) to
    0.933 (restored) and also improving signboard detection accuracy, thereby
    reducing the probability of misjudgment and enhancing driving safety.
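    The simulated raindrop occlusion mentioned above can be illustrated with a
    minimal NumPy sketch. This is a toy model of ours (soft circular blobs
    alpha-blended onto the frame), not the thesis's actual generation method;
    the function name and parameters are hypothetical.

```python
import numpy as np

def add_synthetic_raindrops(image, n_drops=30, max_radius=12, seed=0):
    """Composite soft circular blobs onto an H x W x 3 image to mimic
    raindrops adhering to a camera lens (toy stand-in for a
    raindrop-generation step)."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    out = image.astype(np.float64)
    yy, xx = np.mgrid[0:h, 0:w]
    for _ in range(n_drops):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        r = rng.integers(3, max_radius)
        # Soft alpha mask: 1 at the drop centre, fading to 0 at radius r.
        dist = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
        alpha = np.clip(1.0 - dist / r, 0.0, 1.0)[..., None]
        # A drop scatters light: blend covered pixels toward a brightened
        # version of the frame's mean colour.
        drop_color = out.mean(axis=(0, 1), keepdims=True) * 1.2
        out = (1 - alpha) * out + alpha * drop_color
    return np.clip(out, 0, 255).astype(image.dtype)

# Example: occlude a uniform grey frame with ten synthetic drops.
frame = np.full((64, 64, 3), 100, dtype=np.uint8)
occluded = add_synthetic_raindrops(frame, n_drops=10, max_radius=10, seed=1)
```

    Pairs of clean and occluded frames produced this way are the kind of
    paired data on which a raindrop-removal network can be trained and
    evaluated.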


    Taiwan is surrounded by sea on all sides. Even though its area is only
    36,197 square kilometers, its average annual rainfall is 2,515 mm, roughly
    three times the world average. Taiwan lies within the monsoon climate zone
    and on the path of typhoons generated in the Pacific Ocean; on average,
    three to four typhoons strike each year, and such a large amount of rain
    causes many disasters. It can also cause misalignment of vehicle
    electronics.
    With the increasing development of advanced driver assistance systems,
    our daily driving is supported by many technologies. Beyond traditional
    radar-based detection of hazards from surrounding obstacles and vehicles,
    deep learning has in recent years also been used to detect lane lines. If
    the driver is occasionally inattentive or distracted, such a system can
    warn the driver of road conditions in advance or automatically control the
    vehicle's path. As a result, the incidence of traffic accidents is greatly
    reduced.
    Besides hazard-prevention systems, more and more image recognition
    applications are being used in driving because of recent advances in
    computer vision and deep learning. Many cars have integrated a speed-limit
    recognition system into the head-up display. With the help of this system,
    the driver does not need to be distracted by checking whether they are
    speeding and can concentrate on road conditions. While providing
    convenience, the head-up display system also increases driving safety.
    If the driver drives in a rainy or muddy environment, the recognition
    ability of advanced driver assistance systems may be reduced or lost
    because the camera lens is occluded; worse, driving safety may be
    affected. This thesis therefore addresses the problem of raindrops
    occluding the lens in daytime street scenes. We improve the accuracy of
    lane-line detection from 0.916 to 0.933 by restoring images occluded by
    artificial raindrops using generative adversarial networks. As a result,
    our method reduces the probability of misjudgment by the advanced driver
    assistance system and improves driving safety.
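    The reported gain can be put in perspective with a small calculation (our
    framing, not the thesis's): assuming a 1.0 accuracy ceiling, restoration
    recovers about a fifth of the accuracy lost to raindrop occlusion.

```python
occluded_acc = 0.916  # lane-line detection accuracy on occluded frames
restored_acc = 0.933  # accuracy after GAN-based restoration

absolute_gain = restored_acc - occluded_acc
# Fraction of the occlusion-induced error eliminated, assuming a 1.0 ceiling:
error_reduction = absolute_gain / (1.0 - occluded_acc)
print(f"{absolute_gain:.3f}")    # 0.017
print(f"{error_reduction:.1%}")  # 20.2%
```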

    Table of Contents

    Abstract (Chinese)
    Abstract (English)
    Acknowledgements
    Table of Contents
    1. Introduction
       1.1 Research Motivation
       1.2 Research Objectives
       1.3 Thesis Organization
    2. Background and Literature Review
       2.1 Background
           2.1.1 Style-Transfer Models
           2.1.2 Semantic Segmentation Models
           2.1.3 Lane-Line Detection Models
           2.1.4 Object Detection Models
       2.2 Literature Review
           2.2.1 Solutions for Adverse Weather
           2.2.2 Raindrop Detection
    3. Methodology
       3.1 Raindrop Generation Method
       3.2 Raindrop Detection
           3.2.1 Attention Map
           3.2.2 CycleGAN
       3.3 Inpainting of Raindrop-Occluded Regions
       3.4 Lane-Line Detection
       3.5 Signboard Detection
    4. Experimental Design and Results
       4.1 Datasets
       4.2 Raindrop-Mask Generation Experiments
           4.2.1 Evaluation Metrics
           4.2.2 Results of Raindrop-Mask Generation
       4.3 Lane-Line Detection Experiments
       4.4 Signboard Detection Experiments
       4.5 Related Work on Image Inpainting
       4.6 Tests on Real Rainy-Road Images
       4.7 Experimental Contributions
    5. Summary
       5.1 Conclusions
       5.2 Future Work
    References

