| Field | Value |
|---|---|
| Author | 曾益 (Yi-Tseng) |
| Thesis Title | 應用深度學習於低軌衛星相控陣列之天線場型修復技術與晶片實現 (Deep Learning-Based Antenna Pattern Recovery and Chip Implementation for Phased Array Systems in LEO Satellites) |
| Advisor | 薛木添 (Muh-Tian Shiue) |
| Committee Members | |
| Degree | Master |
| Department | Department of Electrical Engineering, College of Electrical Engineering and Computer Science |
| Year of Publication | 2025 |
| Academic Year | 114 |
| Language | Chinese |
| Pages | 94 |
| Keywords | LEO satellite, beamforming, antenna array, element failure, sidelobe level, deep neural networks |
This study focuses on phased array antenna systems in Low Earth Orbit (LEO) satellites, addressing the beam quality degradation caused by antenna element failures. In the harsh space environment, antenna elements operating over long periods are prone to failures caused by radiation, thermal cycling, or manufacturing defects, leading to deviations in radiation patterns and reduced communication performance. To tackle this challenge, we propose a deep learning–based antenna weight recovery method that reallocates the weights of functional elements based on failure locations and the original configuration, thereby enabling fault-tolerant beamforming and restoring radiation patterns close to the original state.
An Autoencoder neural network architecture is employed and trained with a large dataset of simulated antenna failures, enabling the model to learn recovery strategies under different fault combinations. Simulation results show that, for single, double, and triple element failures, the proposed method effectively suppresses sidelobe levels (SLL) and preserves the half-power beamwidth (HPBW), achieving significant improvements in average recovery rates compared with heuristic algorithms. Moreover, all possible failure position combinations are considered, and consistent recovery performance is achieved for both central and edge failures, demonstrating strong generalization capability.
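The data flow of such a recovery network can be sketched as follows. This is a minimal illustration only: the array size, bottleneck width, input encoding (real-valued nominal weights concatenated with a 0/1 failure mask), and the random placeholder matrices are all assumptions, since the thesis's exact architecture and trained parameters are not given here.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 16        # number of array elements (illustrative)
HIDDEN = 8    # autoencoder bottleneck width (assumed)

# In the trained system these matrices would be learned from a large set of
# simulated failure scenarios; random values here show only the data flow.
W_enc = rng.normal(scale=0.1, size=(2 * N, HIDDEN))
W_dec = rng.normal(scale=0.1, size=(HIDDEN, N))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def recover_weights(original_w, failure_mask):
    """Autoencoder-style forward pass: compress (weights, mask) to a latent
    code, then decode new amplitudes for the surviving elements."""
    x = np.concatenate([original_w, failure_mask])
    h = sigmoid(x @ W_enc)        # encoder: compress to latent code
    y = sigmoid(h @ W_dec)        # decoder: candidate amplitudes in (0, 1)
    return y * failure_mask       # failed elements remain switched off

w0 = np.ones(N)                   # nominal uniform excitation
mask = np.ones(N)
mask[[3, 9]] = 0.0                # a double-failure scenario
w_new = recover_weights(w0, mask)
```

Masking the output guarantees the network can never assign power to a dead element, regardless of what the decoder produces, which keeps every inference physically realizable.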
The algorithm is further implemented in hardware as a circuit architecture with dedicated computing units, weight memory, and approximate activation function circuits. Using a cell-based flow in a 40-nm CMOS process, the design achieves a 238 MHz operating frequency with real-time inference capability. These results confirm the feasibility of combining deep learning with hardware acceleration for satellite communications and demonstrate the potential of self-healing phased array antenna systems.
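One widely used family of "approximate activation function circuits" of the kind the abstract mentions is the piecewise-linear sigmoid (PLAN), whose segment slopes are all powers of two so the datapath needs only shifts and adds. The sketch below shows that scheme as a plausible example; whether the thesis uses these exact breakpoints is an assumption.

```python
import math

def plan_sigmoid(x):
    """Piecewise-linear approximation of the sigmoid (PLAN-style).

    Slopes 1/4, 1/8, and 1/32 are powers of two, so in hardware each
    segment reduces to a shift plus a constant add; negative inputs
    reuse the positive half via the identity s(-x) = 1 - s(x).
    """
    a = abs(x)
    if a >= 5.0:
        y = 1.0
    elif a >= 2.375:
        y = 0.03125 * a + 0.84375
    elif a >= 1.0:
        y = 0.125 * a + 0.625
    else:
        y = 0.25 * a + 0.5
    return y if x >= 0 else 1.0 - y

# Worst-case deviation from the exact sigmoid over [-8, 8]
err = max(abs(plan_sigmoid(t / 100.0) - 1.0 / (1.0 + math.exp(-t / 100.0)))
          for t in range(-800, 801))
```

The maximum error of this approximation stays below about 0.02, which is typically small enough that a network trained with the exact sigmoid still infers correctly on the approximated hardware.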