| Graduate Student: | 周哲宇 Che-Yu Chou |
|---|---|
| Thesis Title: | 整合錯誤更正碼技術之自動化編碼簿學習 (Automated Codebook Learning with Error Correcting Output Code Technique) |
| Advisor: | 陳弘軒 Hung-Hsuan Chen |
| Oral Defense Committee: | |
| Degree: | Master |
| Department: | College of Electrical Engineering and Computer Science - Department of Computer Science & Information Engineering |
| Year of Publication: | 2024 |
| Academic Year of Graduation: | 112 |
| Language: | Chinese |
| Number of Pages: | 68 |
| Chinese Keywords: | 對比學習, 自監督式學習, 錯誤更正碼, 對抗攻擊 |
| English Keywords: | Contrastive Learning, Self-Supervised Learning, Error Correcting Output Codes, Adversarial Attacks |
Abstract:

Error Correcting Output Codes (ECOC) is a technique for solving multi-class classification problems. Its core concept is to design a codebook that maps each class to a unique codeword; the codewords then serve as the labels the model learns. In ECOC-based models, the design of the codebook is therefore crucial. In past research, codebooks were typically designed by hand, derived from known coding techniques, or generated randomly. However, these methods not only require the codebook to be produced before model training, but the resulting codebook is also not guaranteed to suit an arbitrary dataset. Building on a contrastive-learning framework, this thesis proposes three ECOC models with automated codebook learning. These models need no codebook before training: the model learns the codebook automatically from the characteristics of the dataset, which resolves the codebook problems noted above. We compare the three ECOC models against two baseline models on four open datasets and evaluate their strengths, weaknesses, and limitations. In addition, we examine experimentally whether ECOC models with automated codebook learning can resist adversarial attacks, and we discuss directions for future improvement.
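To make the ECOC mechanism described above concrete, the following is a minimal, self-contained sketch of the decoding step, not the thesis's actual models: a hypothetical codebook maps each class to a unique binary codeword, and a prediction is decoded by thresholding the model's per-bit outputs and choosing the class with the nearest codeword in Hamming distance. All names (`codebook`, `ecoc_decode`, the bit scores) are illustrative assumptions.

```python
import numpy as np

# Hypothetical codebook: one unique binary codeword per class.
# In the thesis, such a codebook is learned automatically; here it is
# fixed by hand purely to illustrate the decoding mechanism.
codebook = np.array([
    [0, 0, 1, 1, 0],   # class 0
    [1, 0, 0, 1, 1],   # class 1
    [0, 1, 0, 0, 1],   # class 2
])

def ecoc_decode(bit_scores, codebook):
    """Decode per-bit model scores to the class with the nearest codeword."""
    bits = (np.asarray(bit_scores) > 0.5).astype(int)   # threshold each output bit
    hamming = np.abs(codebook - bits).sum(axis=1)       # Hamming distance to each codeword
    return int(np.argmin(hamming))                      # nearest codeword wins

# Thresholded bits are [1, 0, 1, 1, 1]: one bit away from class 1's
# codeword [1, 0, 0, 1, 1], so the error is corrected and class 1 is returned.
print(ecoc_decode([0.9, 0.1, 0.6, 0.8, 0.7], codebook))  # -> 1
```

The error-correcting property comes from the Hamming separation between codewords: as long as fewer bits are flipped than half the minimum pairwise distance, nearest-codeword decoding still recovers the correct class, which is also the intuition behind ECOC's robustness to adversarial perturbations.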