| Graduate Student: | 洪子軒 (Tzu-hsuan Hung) |
|---|---|
| Thesis Title: | Using Decision Tree to Summarize Associative Classification Rules (從關聯規則集中建立分類決策樹) |
| Advisor: | 陳彥良 (Yen-liang Chen) |
| Committee Members: | |
| Degree: | Master |
| Department: | Department of Information Management, College of Management |
| Graduation Academic Year: | 95 (ROC calendar, i.e., 2006-07) |
| Language: | English |
| Pages: | 49 |
| Keywords (Chinese): | 資料探勘、規則歸納法、以規則為基礎的分類法 (data mining, rule induction, rule-based classification) |
| Keywords (English): | rule summarization, rule-based classification, data mining |
Association rule mining is one of the best-known mining methods in data mining. Its core task is to count, within a set of transactions, how frequently different items are purchased together, and then to derive rules from these co-purchase relations. Association rules have also long been applied to classification problems (associative classification). However, once the classification rules are generated, their lack of organization hinders reading and comprehension. To address this, this thesis proposes an approach, with concrete algorithms, for summarizing a set of association rules and building a decision tree from them, aiming to combine the strengths of both in one classification model. The resulting model links the advantages of associative classification and of decision trees: compared with the former, it is a more comprehensible, organized, compact, and easy-to-use classification model; compared with the latter, its classification accuracy is higher than that of a decision tree built in the traditional C4.5 way.
Association rule mining is one of the most popular areas in data mining. Its goal is to discover items that co-occur frequently within a set of transactions and to derive rules from these co-occurrence relations. Association rules have been applied to classification problems for years (associative classification). However, once the rules have been generated, their lack of organization causes a readability problem: it is difficult for users to analyze them and to understand the domain. To address this weakness, this work presents two algorithms that use a decision tree to summarize associative classification rules. As a classification model, the result combines the advantages of both associative classification and decision trees. On one hand, it is more readable, compact, well organized, and easier to use than associative classification; on the other hand, it is more accurate than a traditional TDIDT (Top-Down Induction of Decision Trees) classification algorithm.
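As a rough illustration of the general idea of summarizing class-association rules into a decision tree, the toy sketch below greedily splits on the attribute used by the most rule antecedents. The rules, attribute names, and the greedy heuristic are all invented for illustration; this is not the thesis's two algorithms.

```python
from collections import Counter

# Toy class-association rules (hypothetical data): each rule maps an
# antecedent {attribute: value} to a predicted class label.
RULES = [
    ({"outlook": "sunny", "humidity": "high"}, "no"),
    ({"outlook": "sunny", "humidity": "normal"}, "yes"),
    ({"outlook": "overcast"}, "yes"),
    ({"outlook": "rainy", "windy": "true"}, "no"),
    ({"outlook": "rainy", "windy": "false"}, "yes"),
]

def build_tree(rules):
    """Summarize rules into a nested-dict tree with a greedy heuristic:
    split on the attribute appearing in the most rule antecedents."""
    if not rules:
        return None
    exhausted = [c for ante, c in rules if not ante]
    if exhausted:
        # Some rule's antecedent is fully consumed: stop and emit a
        # majority-class leaf among those exhausted rules.
        return Counter(exhausted).most_common(1)[0][0]
    attr = Counter(a for ante, _ in rules for a in ante).most_common(1)[0][0]
    branches = {}
    for value in {ante[attr] for ante, _ in rules if attr in ante}:
        # Keep the rules matching this branch, with the split attribute removed.
        subset = [({k: v for k, v in ante.items() if k != attr}, c)
                  for ante, c in rules if ante.get(attr) == value]
        branches[value] = build_tree(subset)
    return {attr: branches}

tree = build_tree(RULES)
```

A record can then be classified by walking the nested dicts from the root attribute down to a leaf label, which is the sense in which the tree is a compact, organized summary of the flat rule set.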