| Graduate Student: | 鄭秉豪 Ping-hao Cheng |
|---|---|
| Thesis Title: | A Comparison and Study of Supervised Learning Algorithms for Missing Value Imputation (監督式學習演算法於填補遺漏值之比較與研究) |
| Advisor: | 蔡志豐 Chih-fong Tsai |
| Oral Defense Committee: | |
| Degree: | 碩士 Master |
| Department: | College of Management - Department of Information Management (資訊管理學系) |
| Year of Publication: | 2015 |
| Graduating Academic Year: | 103 |
| Language: | Chinese |
| Number of Pages: | 163 |
| Chinese Keywords: | data mining (資料探勘), missing values (遺漏值), data imputation (資料補值), supervised learning (監督式學習) |
| Access Counts: | Views: 16, Downloads: 0 |
With the steady advance of information technology, the benefits people gain from collecting and applying data are among the most tangible in daily life. Recording and storing data is no longer limited to preserving and passing on experience: through the construction of information systems and the refinement and optimization of methods, people can classify, manage, apply, and draw inferences from data more efficiently, and data mining techniques have matured and evolved against this background. Data mining applies a variety of statistical analyses and modeling approaches to large volumes of data, seeking to extract features and associations of hidden value for further application. During this extraction process, however, certain properties of the data themselves affect the results to some degree; missing data is one example.
In data mining, missing values are one cause of incomplete data. They may arise from human factors, such as data-entry errors, concealment, or differences in respondents' backgrounds, or from the machines themselves, such as failed writes, hardware faults, or corruption that loses the data of a particular period. Missing values therefore often degrade the performance of data mining.
Many strategies have been proposed for handling missing values, and among them, using supervised learning algorithms to predict replacement values is particularly prominent. However, there has been no integrated assessment of, or recommendation about, how the various algorithms perform when applied to imputation. This study therefore uses several well-known supervised learning algorithms to predict and impute missing data, evaluates the imputed results with multiple accuracy measures, and analyzes each imputation method's behavior in different scenarios, consolidating the findings into recommendations that help later researchers (or anyone who needs imputation) handle missing values with the most effective and efficient method.
With the progress of information technology, people benefit from efficient data collection and its related applications. In addition, since the number and size of online databases grow rapidly, retrieving useful information from these large databases effectively and efficiently is becoming more important. This has become the research issue of data mining.
Data mining is the process of applying a variety of statistical analyses or machine learning techniques to large amounts of data in order to extract hidden, valuable features and their relevance to various applications. It helps people learn novel knowledge from past experience so that they can make decisions or forecast trends. However, the extraction process raises several problems that should be considered, such as missing values.
Missing values can be briefly defined as (attribute) values that are absent from a chosen dataset. For example, when registering on websites, users have to fill in some fields, such as "Name", "Birthday", etc. For various reasons, such as data input errors or information concealment, some data values may be lost in this process, leaving the data incomplete or erroneous. This, in turn, can reduce the efficiency and accuracy of data mining results. People therefore use various methods to impute missing values, and supervised learning algorithms are one common approach to the missing value imputation problem.
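The idea of treating imputation as supervised prediction can be sketched as follows: the attribute containing the holes becomes the prediction target, complete records serve as training data, and the learner fills each hole. This is a minimal, stdlib-only Python illustration using k-NN (one of the algorithms the thesis compares); the toy dataset and the choice of k here are invented for the example, not taken from the thesis:

```python
import math

def knn_impute(rows, target_col, k=3):
    """Impute missing entries (None) in target_col by averaging the
    target values of the k nearest complete rows, where distance is
    Euclidean over the remaining numeric columns."""
    complete = [r for r in rows if r[target_col] is not None]
    feats = [i for i in range(len(rows[0])) if i != target_col]

    def dist(a, b):
        return math.sqrt(sum((a[i] - b[i]) ** 2 for i in feats))

    imputed = []
    for r in rows:
        if r[target_col] is None:
            # Train on the complete rows, predict from the k neighbours.
            nearest = sorted(complete, key=lambda c: dist(r, c))[:k]
            value = sum(c[target_col] for c in nearest) / k
            r = r[:target_col] + [value] + r[target_col + 1:]
        imputed.append(r)
    return imputed

# Toy numeric dataset: the third attribute has one missing entry.
data = [
    [1.0, 2.0, 10.0],
    [1.1, 2.1, 11.0],
    [5.0, 6.0, 50.0],
    [1.05, 2.05, None],  # to be imputed from its neighbours
]
filled = knn_impute(data, target_col=2, k=2)
print(filled[3][2])  # → 10.5, the mean of the two nearest rows' values
```

The same pattern applies to the other learners: only the predictor trained on the complete rows changes.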
In this thesis, I conduct experiments to compare the efficiency and accuracy of five well-known supervised learning algorithms, namely Bayes, SVM, MLP, CART, and k-NN, over categorical, numerical, and mixed types of datasets. This allows us to identify which imputation method performs better for which data type and at which missing rates. The experimental results show that the CART method is the best choice for missing value imputation: it not only requires relatively little imputation time, but also allows the classifier to achieve higher classification accuracy.
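A simplified version of such an evaluation deletes values from a complete attribute at a chosen missing rate, imputes the holes, and scores the result against the withheld truth. The pure-Python sketch below uses column-mean imputation as a baseline scorer and RMSE as the accuracy measure; the data, missing rates, and seed are illustrative, not the thesis's actual experimental setup:

```python
import random

def evaluate_mean_imputation(column, missing_rate, seed=0):
    """Delete a fraction of a complete numeric column, impute the holes
    with the mean of the surviving values, and report the RMSE of the
    imputed entries against the withheld ground truth."""
    rng = random.Random(seed)
    n_missing = max(1, int(len(column) * missing_rate))
    missing_idx = rng.sample(range(len(column)), n_missing)
    observed = [v for i, v in enumerate(column) if i not in missing_idx]
    mean = sum(observed) / len(observed)  # the imputed value for every hole
    sq_err = sum((column[i] - mean) ** 2 for i in missing_idx)
    return (sq_err / n_missing) ** 0.5

column = [float(x) for x in range(1, 21)]  # a complete numeric attribute
for rate in (0.1, 0.3, 0.5):
    rmse = evaluate_mean_imputation(column, rate)
    print(f"missing rate {rate:.0%}: RMSE = {rmse:.2f}")
```

Swapping the mean for a learner trained on the complete rows, and repeating over data types and missing rates, yields the kind of comparison grid the experiments describe.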