| Graduate Student: | 游景翔 Ching-Siang You |
|---|---|
| Thesis Title: | 時間序列預測中變換器架構之位置編碼設計 Design of Transformer Architecture based on different Position Encoding in Time series forecasting |
| Advisor: | 曾富祥 Fu-Shiang Tseng |
| Oral Defense Committee: | |
| Degree: | Master |
| Department: | 管理學院 College of Management — 工業管理研究所 Graduate Institute of Industrial Management |
| Year of Publication: | 2023 |
| Academic Year of Graduation: | 111 |
| Language: | Chinese |
| Number of Pages: | 134 |
| Chinese Keywords: | 資料探勘、深度學習、時間序列、變換器模型 |
| Keywords: | Data mining, Deep learning, Time series, Transformer model |
Time series analysis and forecasting are essential areas of data mining. Time series data are values collected at regular intervals, such as years, months, weeks, or days. By analyzing a time series, we can model how the data change over time and forecast future values. Time series forecasting has been a research focus in recent years, driving work in machine learning and artificial intelligence. With growing data availability and computing power, many deep-learning-based models have emerged, and the diversity of forecasting problems across domains has produced many different model designs. Trend forecasting remains an important topic whose results underpin applications in many fields, such as the control and optimization of production planning.

The Transformer is a neural network architecture originally proposed for natural language processing. It uses a mechanism called attention, or self-attention, to detect how the elements of a sequence influence and depend on one another. In this study, we apply the Transformer to time series forecasting and examine whether its parallel computation can overcome the limitations of the long short-term memory (LSTM) model when learning long sequences.

In addition, we use different positional encoding mechanisms to supply the position of each observation within the sequence, and we investigate how the choice of positional encoding affects the Transformer's time series forecasts. In our experiments (Chapter 4), we evaluate each model on five real-world time series datasets covering different temporal trends.
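As a point of reference for the positional encoding schemes the thesis compares, the original sinusoidal encoding of Vaswani et al. (2017) can be sketched as follows. This is an illustrative reconstruction, not code from the thesis; `max_len` and `d_model` are arbitrary example values.

```python
import numpy as np

def sinusoidal_positional_encoding(max_len, d_model):
    """Build the (max_len, d_model) sinusoidal encoding of Vaswani et al. (2017):
    PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
    PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))
    Each position in the sequence gets a unique, deterministic vector.
    """
    positions = np.arange(max_len)[:, None]                                    # (max_len, 1)
    div_terms = np.exp(-np.log(10000.0) * np.arange(0, d_model, 2) / d_model)  # (d_model/2,)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(positions * div_terms)  # even dimensions use sine
    pe[:, 1::2] = np.cos(positions * div_terms)  # odd dimensions use cosine
    return pe

# Example: encode a series of 50 time steps into 16-dimensional vectors,
# which would be added element-wise to the input embeddings.
pe = sinusoidal_positional_encoding(max_len=50, d_model=16)
```

Because the encoding is a fixed function of the index, it injects order information without learned parameters; the thesis's contribution is to ask whether alternative encodings better express the positional relationships of time series observations.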