| Graduate Student: | Agung Prasetio (潘提歐) |
|---|---|
| Thesis Title: | Synthesis of Realistic Safe and Unsafe Smartwatch-based Driving Behavior using WGAN-GP |
| Advisors: | 梁德容, 張欽圳 |
| Committee Members: | |
| Degree: | Master |
| Department: | College of Information and Electrical Engineering, Department of Computer Science & Information Engineering |
| Year of Publication: | 2021 |
| Graduation Academic Year: | 109 |
| Language: | English |
| Pages: | 53 |
| Keywords: | Wasserstein Generative Adversarial Network, time series synthesis, driving behavior |
Collecting driving behavior data in a real environment is dangerous and risky. Many precautions must be taken to prevent accidents during data collection, and collecting unsafe behavior such as weaving is even more difficult to do on real roads. Using a simulation environment is safer and more convenient, but a gap remains between simulation and reality, so a model built on simulation data cannot be applied directly to the real environment. In this research, we investigate the ability of the Wasserstein Generative Adversarial Network with Gradient Penalty (WGAN-GP) to produce realistic time-series smartwatch-based driving behavior data that can be used to build a model for the real environment; the aim is to synthesize, from simulated driving data, driving behavior that approximates real driving data. WGAN-GP has been used successfully to generate high-quality images that are indistinguishable from real ones. In this work, we compare a range of generator architectures to find the one that synthesizes the best driving behavior. Three evaluation metrics are then used to quantitatively assess how similar the synthetic data is to real data for real-world applications such as driver authentication. Finally, we demonstrate and quantitatively measure how successfully WGAN-GP generates realistic normal-driving data, although its generation of realistic weaving behavior data still needs improvement.
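The WGAN-GP objective named in the abstract replaces the standard GAN loss with a Wasserstein critic loss plus a gradient penalty that pushes the critic's input gradient toward unit norm on points interpolated between real and fake samples. The sketch below is an illustration only, not the thesis's code: it uses a hypothetical toy linear critic (so the input gradient is analytic and no autograd framework is needed) rather than the deep generator/critic architectures the thesis compares.

```python
import numpy as np

rng = np.random.default_rng(0)

def critic(x, w):
    """Toy linear critic f(x) = x . w; its input-gradient is simply w."""
    return x @ w

def wgan_gp_critic_loss(real, fake, w, lam=10.0):
    """WGAN-GP critic loss: E[f(fake)] - E[f(real)] + lam * (||grad|| - 1)^2."""
    wloss = critic(fake, w).mean() - critic(real, w).mean()
    # Gradient penalty is evaluated on random interpolates between real
    # and fake samples (one mixing coefficient per sample).
    eps = rng.uniform(size=(real.shape[0], 1))
    interp = eps * real + (1 - eps) * fake
    # For a linear critic, grad_x f(interp) = w at every interpolate,
    # so the penalty reduces to lam * (||w||_2 - 1)^2.
    grad_norm = np.linalg.norm(w)
    gp = lam * (grad_norm - 1.0) ** 2
    return wloss + gp
```

In a real implementation the critic is a neural network and the interpolate gradient must be computed with automatic differentiation; the structure of the loss, however, is exactly as above.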