
Author: 吳若平 (Jo-Ping Wu)
Thesis title: 結合主成分分析之貝氏分類模型
Naive Bayes classifier with Principal Components Analysis for continuous attributes
Advisor: 曾富祥 (Fu-Shiang Tseng)
Committee members:
Degree: Master
Department: College of Management, Graduate Institute of Industrial Management
Year of publication: 2015
Graduation academic year: 103 (ROC calendar)
Language: English
Number of pages: 37
Chinese keywords: classification methods, Bayesian classification, principal components analysis
Foreign-language keywords: Classification, Naïve Bayesian Classifier, Principal Components Analysis
    With the arrival of the big-data era, data volumes are growing rapidly, and the speed of classification has become an important aspect of data mining. The Naïve Bayes classifier is a simple and practical classification method derived from Bayes' theorem: it predicts the class of an instance from prior and posterior probabilities under the assumption that the attributes are mutually independent. It is a supervised learning method, and its greatest advantage is that it produces classification results quickly through simple computation. The independence assumption exists precisely to make the computation fast, but real-world data are mostly dependent and cannot satisfy it. The Naïve Bayes classifier therefore has two main drawbacks: real data rarely satisfy the independence assumption, and the classifier applies only to categorical variables.
    Our proposed method preserves the simplicity and speed of the Naïve Bayes classifier while removing the problem that, in real examples, the attributes cannot satisfy the independence assumption. We use a Principal Components Analysis transformation to make the attributes mutually linearly independent, discretize the continuous data into categorical variables, and then apply the Naïve Bayes classifier, thereby improving the accuracy of the prediction model.
    We test and compare the model on data sets from the UCI repository and construct a new model that outperforms other classifiers (such as the original Naïve Bayes classifier, decision trees, and logistic regression). We analyze and test each method, measure accuracy and confidence intervals, and observe how the data perform under the different prediction models. Finally, we further investigate how different discretization methods, and dimensionality reduction based on the PCA results, affect the overall accuracy of the model.
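The posterior computation described above can be sketched in a few lines; this is a minimal from-scratch illustration of the textbook Naïve Bayes rule P(c|x) ∝ P(c)·∏P(x_i|c) on hypothetical toy data, not the thesis's implementation:

```python
from collections import Counter, defaultdict

def train_nb(rows, labels):
    """Estimate priors P(c) and per-attribute conditionals P(x_i | c) by counting."""
    priors = Counter(labels)
    cond = defaultdict(Counter)            # (attribute index, class) -> value counts
    for row, c in zip(rows, labels):
        for i, v in enumerate(row):
            cond[(i, c)][v] += 1
    return priors, cond, len(labels)

def predict_nb(row, priors, cond, n):
    """Pick the class maximizing P(c) * prod_i P(x_i | c)."""
    best, best_p = None, -1.0
    for c, count in priors.items():
        p = count / n                      # prior P(c)
        for i, v in enumerate(row):
            p *= cond[(i, c)][v] / count   # conditional independence assumption
        if p > best_p:
            best, best_p = c, p
    return best

# Hypothetical categorical data: attributes (outlook, windy) -> class
rows = [("sunny", "no"), ("sunny", "yes"), ("rain", "yes"), ("rain", "no")]
labels = ["yes", "no", "no", "yes"]
priors, cond, n = train_nb(rows, labels)
print(predict_nb(("sunny", "no"), priors, cond, n))  # -> "yes"
```

Because the conditionals are plain frequency counts, training is a single pass over the data, which is exactly the speed advantage the paragraph above describes.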


    Due to the progress of science and technology, data are growing rapidly, and the speed of classification has become an important part of data mining. The Naïve Bayes classifier is a simple and practical classification method based on applying Bayes' theorem with strong independence assumptions between the features. However, this assumption does not hold in many real situations.
    We propose a classification method, PC-Naïve, based on the Naïve Bayes classifier. We keep the simplicity and speed of the Naïve Bayes classifier while relaxing its vital independence assumption. We use Principal Components Analysis to transform the original data so that the attributes become mutually linearly independent, then discretize the transformed data and calculate the prior and conditional probabilities. Finally, we obtain the posterior probabilities and classify the data.
    We use examples to present the classification procedure and compare the accuracy of four models: the PC-Naïve model, the traditional Naïve Bayes model, a Decision Tree model, and a Stepwise Logistic Regression model. At the end, we discuss the accuracy obtained under different dimensions and discretization methods.
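The three-step pipeline described in the abstract (PCA transform, discretize, Naïve Bayes) can be sketched with off-the-shelf components. This is an illustrative assembly using scikit-learn, not the thesis's actual implementation: the dataset, component count, and bin settings are stand-in choices.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import CategoricalNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import KBinsDiscretizer, StandardScaler

# A UCI-style data set with continuous attributes (stand-in for Glass / Pima Indians).
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# PC-Naive sketch: decorrelate the attributes with PCA, discretize the
# (now linearly independent) components, then run Naive Bayes on the bins.
model = make_pipeline(
    StandardScaler(),                                              # put attributes on one scale
    PCA(n_components=5),                                           # mutually uncorrelated components
    KBinsDiscretizer(n_bins=4, encode="ordinal", strategy="uniform"),  # continuous -> categorical
    CategoricalNB(),                                               # Naive Bayes on categorical bins
)
model.fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.3f}")
```

Swapping the `strategy` parameter of `KBinsDiscretizer` (e.g. `"quantile"` vs `"uniform"`) and the `n_components` of `PCA` corresponds to the discretization and dimensionality comparisons the thesis reports in Chapter 4.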

    摘要
    Abstract
    Table of Contents
    List of Figures
    List of Tables
    Chapter 1 Introduction
        1-1 Background and Motivation
        1-2 Research Objectives and Frameworks
    Chapter 2 Literature Review
        2-1 Classification
        2-2 Naïve Bayesian Classifier
        2-3 Principal Components Analysis
    Chapter 3 Methodology
    Chapter 4 Numerical Example
        4-1 The "Glass (1987)" data problem
        4-2 The "Pima Indians Diabetes (1990)" data problem
        4-3 Accuracy of different dimensions
        4-4 Different discretization methods with data
        4-5 Different settings of attribute numbers in Glass data
    Chapter 5 Conclusion and Future Research
    Reference

    1. Cortizo, J. C., I. Giraldez, and M. C. Gaya, "Wrapping the Naïve Bayes Classifier to Relax the Effect of Dependences," Lecture Notes in Computer Science, Vol. 4881, 2007, pp. 229-239.
    2. Cortizo, J. C., and J. I. Giraldez, "Multi criteria wrapper improvements to naive Bayes learning," Intelligent Data Engineering and Automated Learning, 2006, pp. 419-427.
    3. Domingos, P., and M. J. Pazzani, "On the optimality of the simple Bayesian classifier under zero-one loss," Machine Learning, 29(2-3), 1997, pp. 103-130.
    4. Domingos, P., and M. J. Pazzani, "Beyond independence: Conditions for the optimality of the simple Bayesian classifier," International Conference on Machine Learning, 1996, pp. 105-112.
    5. Farid, D. M., L. Zhang, C. M. Rahman, M. A. Hossain, and R. Strachan, "Hybrid decision tree and naïve Bayes classifiers for multi-class," Expert Systems with Applications, 2014, pp. 1937-1946.
    6. Friedman, N., D. Geiger, and M. Goldszmidt, "Bayesian network classifiers," Machine Learning, 29(2-3), 1997, pp. 131-163.
    7. Ghorbanian, P., A. Ghaffari, A. Jalali, and C. Nataraj, "Heart Arrhythmia Detection Using Continuous Wavelet Transform and Principal Component Analysis with Neural Network Classifier," Computing in Cardiology, 2010, pp. 669-672.
    8. Kohavi, R., "Scaling up the accuracy of Naive-Bayes classifiers: a decision-tree hybrid," Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, 1996, pp. 202-207.
    9. Kononenko, I., "Semi-naive Bayesian classifier," EWSL-91: Proceedings of the European Working Session on Learning on Machine Learning, 1991, pp. 206-219.
    10. Liu, J. L., Y. T. Hsu, and C. L. Hung, "Development of Evolutionary Data Mining Algorithms and their Applications to Cardiac Disease Diagnosis," IEEE Congress on Evolutionary Computation (CEC), 2012, pp. 1-8.
    11. Pazzani, M., "Constructive induction of cartesian product attributes," ISIS: Information, Statistics and Induction in Science, 1996.
    12. Pazzani, M. J., "Searching for dependencies in Bayesian classifiers," Springer-Verlag New York, 1996, pp. 239-248.
    13. Zhang, H., C. X. Ling, and Z. Zhao, "The learnability of naive Bayes," Lecture Notes in Computer Science, Vol. 1822, 2000, pp. 432-441.
    14. Zou, F., C. Li, X. Hu, and C. Zhou, "Combination of Principal Component Analysis and Bayesian Network and its Application on Syndrome Classification for Chronic Gastritis in Traditional Chinese Medicine," ICNC '07: Proceedings of the Third International Conference on Natural Computation, 2007, pp. 588-592.
