

Graduate Student: 曾嘉貞 (Jia-Zhen Zeng)
Thesis Title: 動態權重引導條件式拉丁超立方採樣法
Dynamic-weighted Guided Conditional Latin Hypercube Sampling
Advisor: 林錦德 (Chin-Te Lin)
Committee Members:
Degree: Master
Department: College of Engineering, Department of Mechanical Engineering
Year of Publication: 2025
Academic Year of Graduation: 113
Language: Chinese
Number of Pages: 76
Keywords: Experimental Design, Latin Hypercube Sampling, Conditional Sampling, Bayesian Optimization, Gaussian Process, Black-box Optimization
    In smart manufacturing environments, many industrial process parameters are tuned by experience, leading to low efficiency and poor stability. When the objective function has no analytical form and experiments are costly, effectively exploring the parameter space with limited samples and converging quickly to a near-optimal solution becomes a key practical challenge.
    To address this, this study proposes Dynamic-weighted Guided Conditional Latin Hypercube Sampling (DW-G-cLHS), a search method that combines guidance with randomness. The method couples a Gaussian process model with multiple candidate-selection indicators, including expected improvement, predicted value, and uncertainty, and dynamically adjusts their weight ratios according to the magnitude of historical improvement, effectively steering the sample distribution toward potentially optimal regions.
    To validate the proposed method, three representative black-box test functions are used: a step function with plateau structure, a mixed bimodal function with local perturbations and two peaks, and the highly nonlinear, multimodal Ackley function. The compared methods include the classical EI, LogEI, UCB, and TuRBO-1 optimization techniques. Through repeated randomly initialized trials, the methods are evaluated on their ability to approach the minimum, their convergence speed, and their stability.
    The results show that the proposed method approaches the optimum with fewer samples in most scenarios and exhibits higher search stability and a stronger ability to avoid local minima. Its flexible guidance and structural control mechanisms make it particularly suitable for multivariable parameter tuning and high-cost design optimization, with good application potential and practical feasibility.
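The conditional Latin hypercube design that the proposed method extends is built on classical Latin hypercube sampling. As a generic illustration of standard LHS (not the thesis implementation), the stratify-then-shuffle construction can be sketched in plain Python:

```python
import random

def latin_hypercube(n_samples, n_dims, seed=None):
    """Draw one jittered point from each of n_samples equal-probability
    strata per dimension, then shuffle the strata order independently
    in each dimension to decouple the pairings."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_dims):
        # One point inside each stratum [i/n, (i+1)/n) of [0, 1).
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)  # random pairing of strata across dimensions
        columns.append(col)
    # Transpose: one row (tuple) per sample point in [0, 1)^n_dims.
    return [tuple(col[i] for col in columns) for i in range(n_samples)]

pts = latin_hypercube(10, 2, seed=0)
```

Every dimension ends up with exactly one point per stratum, which is what gives LHS its one-dimensional uniformity regardless of sample size.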


    In smart manufacturing, many industrial processes rely on trial-and-error parameter tuning, leading to low efficiency and instability. When the objective function is expensive to evaluate and lacks an analytical form, finding near-optimal solutions from only a limited number of samples becomes a major challenge.
    This study proposes Dynamic-weighted Guided Conditional Latin Hypercube Sampling (DW-G-cLHS), a method that combines guidance and randomness in sampling. Using a Gaussian process regression model and integrating multiple selection indicators, such as Expected Improvement (EI), predicted values, and uncertainty, the method dynamically adjusts their weights based on past improvements to focus sampling on promising regions.
    We test the method on three benchmark black-box functions: a step function, a perturbed bimodal function, and the multimodal Ackley function. Performance is compared against well-known optimization methods, including EI, LogEI, UCB, and TuRBO-1.
    The results show that DW-G-cLHS reaches better solutions with fewer samples and avoids local optima more effectively. Its flexible guidance and structural control make it well suited to high-cost design problems and multivariable optimization tasks.
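The dynamic-weighting idea can be illustrated with a deliberately simplified sketch. The function names and the weight-update rule below are illustrative assumptions, not the formulas used in the thesis; the surrogate is a plain callable standing in for the Gaussian process posterior mean and standard deviation:

```python
import math

def expected_improvement(mu, sigma, best):
    """Closed-form EI for minimization under a normal predictive distribution."""
    if sigma <= 0.0:
        return max(best - mu, 0.0)
    z = (best - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (best - mu) * cdf + sigma * pdf

def score_candidates(candidates, surrogate, best, w_ei, w_mu, w_sigma):
    """Weighted sum of EI, predicted value (low is good), and uncertainty."""
    scores = []
    for x in candidates:
        mu, sigma = surrogate(x)
        scores.append(w_ei * expected_improvement(mu, sigma, best)
                      + w_mu * (best - mu)   # favor low predicted values
                      + w_sigma * sigma)     # favor unexplored regions
    return scores

def update_weights(w_ei, w_mu, w_sigma, improved, rate=0.1):
    """Shift weight toward exploitation after an improvement, toward
    exploration after a stall; renormalize so the weights sum to 1."""
    if improved:
        w_mu += rate
    else:
        w_sigma += rate
    total = w_ei + w_mu + w_sigma
    return w_ei / total, w_mu / total, w_sigma / total
```

A full loop would refit the Gaussian process after each evaluation, score a fresh batch of LHS candidates, pick the top-scoring one, and call `update_weights` with whether the new sample improved on the incumbent best.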

    Declaration i
    Abstract (Chinese) ii
    Abstract (English) iii
    Acknowledgments iv
    Table of Contents v
    List of Figures vii
    List of Tables ix
    List of Symbols x
    Chapter 1 Introduction 1
      1.1 Background and Motivation 1
      1.2 Objectives and Contributions 2
      1.3 Literature Review 3
        1.3.1 Latin Hypercube Designs and Their Variants 3
        1.3.2 Bayesian Optimization and Model-guided Infill Strategies 3
        1.3.3 Structure-oriented and Conditional Sample-selection Strategies 4
    Chapter 2 Related Techniques 5
      2.1 Uniform Design Sampling Methods 5
        2.1.1 Latin Hypercube Sampling 5
        2.1.2 Sobol Sequences 6
        2.1.3 Summary and Graphical Comparison 7
      2.2 Candidate-point Generation and Screening Strategies 7
        2.2.1 Candidate Points 7
        2.2.2 Conditioned Latin Hypercube Sampling 8
      2.3 Surrogate Models and Sample-guidance Strategies 9
        2.3.1 Gaussian Process Regression 9
        2.3.2 Bayesian Optimization 10
        2.3.3 Expected Improvement 11
        2.3.4 Logarithmic Expected Improvement 12
        2.3.5 Upper Confidence Bound 12
      2.4 Adaptive Trust-region Optimization 13
        2.4.1 Trust Region Bayesian Optimization 13
    Chapter 3 Research Workflow and System Architecture 15
      3.1 Research Workflow 15
      3.2 Architecture 17
        3.2.1 Initial Sample Setup 17
        3.2.2 Candidate-point Indicator Computation 18
        3.2.3 Infill-point Selection 23
        3.2.4 Dynamic Search-radius Adjustment Mechanism 25
        3.2.5 Termination Criteria 27
    Chapter 4 Experimental Design 29
      4.1 Test Functions 29
        4.1.1 Step Function 29
        4.1.2 Mixed Bimodal Function 30
        4.1.3 Ackley Function 32
      4.2 Test Methods and Evaluation Metrics 34
    Chapter 5 Results and Discussion 36
      5.1 Experimental Results 36
        5.1.1 Step Function Results 36
        5.1.2 Mixed Bimodal Function Results 39
        5.1.3 Ackley Function Results 42
          5.1.3.1 5-dimensional Tests 43
          5.1.3.2 20-dimensional Tests 53
      5.2 General Discussion 57
    Chapter 6 Conclusions and Future Work 60
      6.1 Contributions 60
      6.2 Limitations 60
      6.3 Future Work 61
    References 63

