
Graduate Student: Tseng-Jen Tseng (曾增仁)
Thesis Title: A Method to Extract Fewer Features for Question Classification (發展少量特徵擷取方法之問題分類技術)
Advisor: Shihchieh Chou (周世傑)
Oral Examination Committee:
Degree: Master
Department: College of Management, Department of Information Management
Graduation Academic Year: 96 (ROC calendar)
Language of Thesis: English
Number of Pages: 20
Chinese Keywords: 文件分類 (text classification), 問題分類 (question classification), 問答系統 (question answering system), 特徵擷取 (feature extraction), 機器學習 (machine learning)
Foreign-Language Keywords: text classification, question classification, question answering system, machine learning, feature extraction
    Today, when users retrieve information through a question answering (QA) system, they typically expect an exact answer to their query, rather than the list of related documents returned by a traditional retrieval system. Within the architecture of a QA system, a question must first be classified before it can be answered, so that its meaning is understood; question classification is also the module most prone to error in the QA processing pipeline. From a machine-learning perspective, question classification and text classification are two similar procedures, so feature extraction is a crucial task in question classification. Traditional feature extraction relies on hundreds, thousands, or even more features, and researchers face many problems when handling feature sets of that size. This study therefore develops a new feature extraction method that attempts to supply machine-learning classifiers with only a small number of features. In our experiments, we use statistical significance tests to judge the effect of each feature on classifier performance. We find that the features we extract perform as well as the commonly used bag-of-words features, and, on a small training dataset, as well as bag-of-ngrams features.


    Today, users often prefer to receive a direct answer to their question from a question answering (QA) system, as opposed to the document lists returned by an information retrieval (IR) system. In the architecture of a QA system, question classification is needed to extract the meaning of a question before it can be answered, and it is the module that causes the most errors in the QA pipeline. Under a machine-learning approach, question classification is very similar to text classification, so one of its key issues is extracting effective features. Traditional feature extraction depends on thousands of features or more, and researchers face problems in handling such high-dimensional feature vectors. In view of this, this study aims to define a small number of features for machine-learning classifiers. In our experiments, we test the efficacy of each feature with statistical significance tests. We find that our features perform as well as bag-of-words features and, on a small training dataset, as well as bag-of-ngrams features.
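    The abstract contrasts a high-dimensional bag-of-words representation with a deliberately small feature set. The sketch below only illustrates that dimensionality difference; the concrete small features shown (wh-word, token count, capitalization cue) are hypothetical stand-ins chosen for the example, not the features the thesis defines.

    ```python
    # Illustrative contrast: one binary dimension per vocabulary word (sparse,
    # grows with the vocabulary) versus a handful of dense handcrafted features.

    def bag_of_words(question, vocabulary):
        """One 0/1 dimension per vocabulary word -> large, sparse vector."""
        tokens = set(question.lower().split())
        return [1 if w in tokens else 0 for w in vocabulary]

    WH_WORDS = ["what", "who", "where", "when", "why", "how", "which"]

    def small_feature_vector(question):
        """A few dense features instead of thousands of word dimensions.
        These particular features are assumptions for the sketch only."""
        tokens = question.split()
        lowered = [t.lower().strip("?") for t in tokens]
        wh = next((w for w in lowered if w in WH_WORDS), None)
        return {
            "wh_word": wh,                 # coarse question-type cue
            "length": len(tokens),         # questions are short texts
            # crude named-entity cue: any capitalized non-initial token
            "has_capitalized": any(t[0].isupper() for t in tokens[1:]),
        }

    q = "Who invented the telephone ?"
    vocab = ["who", "what", "invented", "telephone", "city"]
    print(bag_of_words(q, vocab))        # [1, 0, 1, 1, 0]
    print(small_feature_vector(q))
    ```

    Either representation can feed a standard machine-learning classifier; the point of the thesis is that the second, much smaller vector can match the first in classification accuracy.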

    Index
    Figure Index
    Table Index
    1. Introduction
    2. Question Classification
       2.1 Question Taxonomy
       2.2 Machine Learning Approach
       2.3 Handcrafted Rules
       2.4 Using Internet
    3. Feature Extraction
       3.1 Category Frequency
       3.2 Category Frequency for Question Classification
    4. Experiment
       4.1 Data
       4.2 Evaluation
       4.3 Experimental Results
       4.4 Discussion
    5. Conclusion and Future Works
    Reference
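    Section 3.1 of the outline names category frequency as the basis of the extraction method. The record does not reproduce the formula, so the sketch below assumes one plausible reading only: a term's category frequency is the number of categories whose training questions contain it, and terms concentrated in few categories are kept as discriminative features. The sample data and threshold are invented for illustration.

    ```python
    # Hypothetical category-frequency score for feature selection.
    # Assumption (not taken from the thesis): CF(t) = number of distinct
    # categories whose training questions contain term t.

    def category_frequency(term, labeled_questions):
        """Count the distinct categories in which `term` occurs."""
        cats = set()
        for category, question in labeled_questions:
            if term in question.lower().split():
                cats.add(category)
        return len(cats)

    data = [
        ("LOCATION", "where is the Eiffel Tower"),
        ("HUMAN", "who is the president of France"),
        ("NUMERIC", "how many rings does Saturn have"),
    ]
    # "where" occurs in LOCATION questions only -> a strong cue for that class;
    # "the" spreads across categories and carries little class information.
    print(category_frequency("where", data))   # 1
    print(category_frequency("the", data))     # 2
    ```

    A selector built on this score would keep terms whose occurrences concentrate in one or two categories and discard the rest, which is one way a small feature set could be obtained.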

