| Graduate Student: | 黃靖雅 (Jing-Ya Huang) |
|---|---|
| Thesis Title: | Intra Mode Prediction for H.266/FVC Video Coding based on CNNs (基於摺積神經網路於 H.266/FVC 視訊編碼畫面內模式預測) |
| Advisor: | 張寶基 (Pao-Chi Chang) |
| Committee Members: | |
| Degree: | Master |
| Department: | College of Electrical Engineering and Computer Science - Department of Communication Engineering |
| Year of Publication: | 2018 |
| Academic Year of Graduation: | 106 |
| Language: | Chinese |
| Number of Pages: | 80 |
| Keywords: | Future Video Coding (FVC), Prediction Unit (PU), Intra Coding, Mode Prediction, Deep Learning, Convolutional Neural Network (CNN) |
With the rapid development of network and multimedia technologies, high-resolution video has become increasingly important in daily life. Much 4K-resolution video content is already on the market, and high-resolution video is expected to become mainstream in the future. However, the latest video compression standard, H.265/HEVC, is gradually becoming insufficient. Therefore, ISO/IEC MPEG and ITU-T VCEG jointly formed the Joint Video Exploration Team (JVET) to develop the next-generation video compression standard H.266/FVC (Future Video Coding); discussions began in 2015, with formal publication as an international video compression standard expected in 2020.

Compared with H.265/HEVC, H.266/FVC expands the number of intra prediction modes for a prediction unit from 35 to 67, in order to adapt to arbitrary edge directions in a wider variety of content. Although H.266/FVC provides better coding performance, the larger number of candidate modes greatly increases the complexity of mode selection. For intra coding, developing a mode prediction decision that balances picture quality against coding complexity is therefore a very important issue.

This thesis draws on artificial intelligence (AI), which has become very popular in recent years, and proposes a CNN-based mode prediction scheme for H.266/FVC intra coding. The work consists of two parts: the first part discusses the training of the prediction models and the selection of training data; the second part integrates the trained models into the H.266/FVC reference software to perform encoding. The proposed method reduces BDBR by 0.1% on average.
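As an illustration of the kind of classifier involved (not the thesis's actual network architecture, which is not given here), the following sketch runs a tiny convolutional forward pass in pure NumPy, mapping a hypothetical 8×8 luma block to a probability distribution over the 67 intra modes. All weights are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w, b):
    """Valid cross-correlation of a 2-D block x with filters w: (k, k, n)."""
    k, n = w.shape[0], w.shape[2]
    H, W = x.shape
    out = np.zeros((H - k + 1, W - k + 1, n))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.tensordot(x[i:i + k, j:j + k], w,
                                     axes=([0, 1], [0, 1])) + b
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict_mode(block, params):
    """Return a probability over the 67 H.266/FVC intra modes."""
    w1, b1, w2, b2 = params
    h = np.maximum(conv2d(block, w1, b1), 0).reshape(-1)  # conv + ReLU
    return softmax(h @ w2 + b2)                           # fully connected

# Hypothetical 8x8 luma PU and randomly initialized (untrained) weights.
block = rng.standard_normal((8, 8))
w1 = rng.standard_normal((3, 3, 4)) * 0.1   # 4 filters of size 3x3
b1 = np.zeros(4)
w2 = rng.standard_normal((6 * 6 * 4, 67)) * 0.1
b2 = np.zeros(67)
probs = predict_mode(block, (w1, b1, w2, b2))
```

In an encoder integration, such a distribution would typically be used to prune the rate-distortion search to the few most probable modes rather than testing all 67.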
With the rapid development of Internet and multimedia technology, high-resolution video has become increasingly important in daily life. However, the latest video compression standard, H.265/HEVC, has gradually become insufficient. Therefore, ISO/IEC MPEG and ITU-T VCEG jointly formed the Joint Video Exploration Team (JVET) to develop the next-generation video compression standard H.266/FVC (Future Video Coding).

Compared with the previous-generation standard H.265/HEVC, the number of intra prediction modes in H.266/FVC is increased from 35 to 67 to adapt to various local characteristics. Although H.266/FVC provides better coding performance, it also dramatically increases the complexity of intra mode prediction. Therefore, how to develop an intra mode prediction decision that balances quality against coding complexity is an important issue.

This thesis applies artificial intelligence (AI), which has become popular in recent years, and proposes an intra mode prediction decision for H.266/FVC intra coding based on convolutional neural networks (CNNs). First, we train the intra mode prediction models and select the training data. We then integrate the trained prediction models into the reference software JEM7.0 to perform encoding. The proposed method achieves a 0.1% BDBR decrease on average, while the increase in coding time is negligible compared to JEM7.0.
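BDBR (Bjøntegaard delta bit rate) summarizes the average bitrate difference between two rate-distortion curves at equal PSNR: each curve is fitted with a cubic polynomial of log-bitrate as a function of PSNR, the fits are integrated over the overlapping quality range, and the average log-rate difference is converted to a percentage. A minimal sketch of this standard computation follows; the four-point RD data are hypothetical:

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjontegaard delta bit rate: average bitrate difference (%)
    of the test codec versus the anchor at equal PSNR."""
    # Fit cubic polynomials of log(bitrate) as a function of PSNR.
    p_a = np.polyfit(psnr_anchor, np.log(rate_anchor), 3)
    p_t = np.polyfit(psnr_test, np.log(rate_test), 3)
    # Integrate both fits over the overlapping PSNR interval.
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a = np.polyval(np.polyint(p_a), hi) - np.polyval(np.polyint(p_a), lo)
    int_t = np.polyval(np.polyint(p_t), hi) - np.polyval(np.polyint(p_t), lo)
    avg_log_diff = (int_t - int_a) / (hi - lo)
    return (np.exp(avg_log_diff) - 1.0) * 100.0  # negative = bitrate saving

# Hypothetical four-point RD curves (kbps, dB); a uniform 10% bitrate
# saving at equal quality yields a BD-rate of about -10%.
rates = [1000.0, 2000.0, 4000.0, 8000.0]
psnrs = [32.0, 35.0, 38.0, 41.0]
saved = [r * 0.9 for r in rates]
print(round(bd_rate(rates, psnrs, saved, psnrs), 2))  # -10.0
```

Under this convention, the thesis's reported 0.1% average BDBR reduction corresponds to a bd_rate value of about -0.1 against the JEM7.0 anchor.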