
Graduate student: 王霈玄 (Pei-Syuan Wang)
Thesis title: 基於圖神經網路自監督對比式學習實現數學式檢索
Formula Retrieval based on Self-Supervised Graph Contrastive Learning
Advisor: 陳弘軒 (Hung-Hsuan Chen)
Oral defense committee:
Degree: Master
Department: College of Electrical Engineering & Computer Science - Department of Computer Science & Information Engineering
Year of publication: 2023
Graduating academic year: 111 (2022-2023)
Language: Chinese
Pages: 72
Chinese keywords: 數學式檢索、圖神經網路、對比學習 (formula retrieval, graph neural networks, contrastive learning)
English keywords: Math Information Retrieval, GNN, Contrastive Learning
  • A mathematical formula can be written with different symbols or different orderings while carrying the same meaning, so formula retrieval poses challenges distinct from ordinary text retrieval. The goal of this thesis is to retrieve, from a large collection of mathematical formulas, the formulas most similar to a target formula. We adopt a self-supervised graph neural network contrastive learning approach, perform the formula retrieval task on the NTCIR-12 dataset, and evaluate with nDCG and bpref. To improve performance, we use Tangent-CFT embeddings as pre-trained features for the graph models. When formula context is not considered, the graph models using these pre-trained features achieve the best results on the NTCIR-12 dataset.
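Once each formula has been encoded as a graph-level embedding, the retrieval step described above reduces to nearest-neighbor ranking by vector similarity. A minimal sketch, assuming embeddings have already been produced by the trained model (the corpus layout, toy vectors, and function names are illustrative, not the thesis's actual code):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu > 0 and nv > 0 else 0.0

def retrieve(query_vec, corpus, top_k=5):
    """Rank corpus formulas by cosine similarity to the query embedding.
    corpus: list of (formula_id, embedding) pairs."""
    scored = [(fid, cosine(query_vec, vec)) for fid, vec in corpus]
    return sorted(scored, key=lambda p: p[1], reverse=True)[:top_k]

# Toy corpus of three formula embeddings; the query is closest to "f1".
corpus = [("f1", [0.9, 0.1]), ("f2", [0.1, 0.9]), ("f3", [0.7, 0.7])]
print(retrieve([1.0, 0.0], corpus, top_k=2))
```

In practice an approximate nearest-neighbor index would replace the exhaustive scan over the NTCIR-12 collection.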


    One mathematical formula can be expressed using different symbols or symbol orderings. Therefore, retrieving mathematical expressions poses challenges distinct from general text retrieval. This thesis aims to retrieve, from a large collection of mathematical formulas, the formulas most similar to a target formula. We adopt a graph neural network approach with self-supervised contrastive learning to tackle this task, using the pre-trained embeddings learned by Tangent-CFT as features for the nodes and edges of each graph. We evaluate performance on the NTCIR-12 dataset with nDCG and bpref as the evaluation metrics. When formula context is not considered, the graph neural networks using these pre-trained embeddings perform best on the NTCIR-12 dataset.
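Both evaluation measures named above are standard in retrieval evaluation. A minimal sketch of each, assuming graded relevance labels for nDCG and binary judgments for bpref; computing the ideal DCG from the retrieved list itself is a simplification for brevity, not necessarily the thesis's exact protocol:

```python
import math

def ndcg(ranked_rels, k=None):
    """Normalized discounted cumulative gain over one ranked list.
    ranked_rels: graded relevance of each retrieved item, in rank order.
    The ideal DCG is computed from the retrieved list itself (simplification)."""
    if k is not None:
        ranked_rels = ranked_rels[:k]
    dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ranked_rels))
    idcg = sum(rel / math.log2(i + 2)
               for i, rel in enumerate(sorted(ranked_rels, reverse=True)))
    return dcg / idcg if idcg > 0 else 0.0

def bpref(ranked_judgments):
    """bpref (Buckley & Voorhees, 2004) over one ranked list.
    ranked_judgments: 1 = judged relevant, 0 = judged non-relevant,
    in rank order; unjudged items are assumed to have been removed."""
    R = sum(ranked_judgments)           # number of judged relevant items
    N = len(ranked_judgments) - R       # number of judged non-relevant items
    if R == 0:
        return 0.0
    score, nonrel_above = 0.0, 0
    for judged_relevant in ranked_judgments:
        if judged_relevant:
            denom = min(R, N)
            score += 1.0 if denom == 0 else 1.0 - min(nonrel_above, denom) / denom
        else:
            nonrel_above += 1
    return score / R
```

bpref only counts judged documents, which makes it more stable than precision-based measures when, as in NTCIR-12, most of the collection is unjudged.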

    Table of Contents

    Abstract (Chinese)
    Abstract (English)
    Acknowledgements
    Table of Contents
    List of Figures
    List of Tables
    1 Introduction
      1.1 Motivation
      1.2 Research Goal
      1.3 Contributions
      1.4 Thesis Organization
    2 Related Work
      2.1 Text-Based Retrieval
      2.2 Tree-Structure-Based Retrieval
      2.3 Tangent-CFT
    3 Method and Models
      3.1 Workflow
      3.2 Formula Representation Graphs
      3.3 GCL Models
        3.3.1 InfoGraph
        3.3.2 GraphCL
        3.3.3 BGRL
      3.4 Differences among the Three Models
      3.5 Tangent-CFT Embeddings as Pre-trained Features
    4 Experimental Results
      4.1 Dataset
      4.2 Evaluation Methods
        4.2.1 nDCG
        4.2.2 bpref
      4.3 Experimental Setup
        4.3.1 Hyperparameter Selection
        4.3.2 Detailed Model Architecture
      4.4 Comparison of Self-Supervised Graph Contrastive Learning Methods
        4.4.1 Label Encodings as Graph Features
        4.4.2 Results of Pre-training with Tangent-CFT
        4.4.3 Combining the SLT and OPT Structures
        4.4.4 Case Study: Rankings under Different Models and Structures
      4.5 Differences under the Improved Evaluation Method
      4.6 Ablation Study
    5 Conclusion
      5.1 Conclusions
      5.2 Future Work
    References
    Code
    Graph Augmentation Effects
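Section 3.3 lists the three graph contrastive learning models compared in the thesis: InfoGraph, GraphCL, and BGRL. As a flavor of the contrastive objective behind GraphCL, here is a minimal pure-Python NT-Xent sketch over already-pooled graph-level embeddings; the encoder, the graph augmentations, and batching are omitted, and all names and toy values are illustrative:

```python
import math

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss over a batch of paired augmented-view embeddings.
    z1[i] and z2[i] are graph-level embeddings of two augmented views
    of the same graph i; every other embedding acts as a negative."""
    z = z1 + z2                      # 2N embeddings in one list
    n = len(z1)

    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.sqrt(sum(a * a for a in u)) *
                      math.sqrt(sum(b * b for b in v)))

    loss = 0.0
    for i in range(2 * n):
        pos = (i + n) % (2 * n)      # index of the paired (positive) view
        num = math.exp(cos(z[i], z[pos]) / tau)
        den = sum(math.exp(cos(z[i], z[k]) / tau)
                  for k in range(2 * n) if k != i)
        loss += -math.log(num / den)
    return loss / (2 * n)
```

When the two views of each graph agree (positives aligned, negatives apart), the loss is low; shuffling the pairing raises it, which is the signal the self-supervised pre-training optimizes.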

    [1] NTCIR-12 MathIR Task Overview, NTCIR, 2016.
    [2] B. Mansouri, S. Rohatgi, D. W. Oard, J. Wu, C. L. Giles, and R. Zanibbi, "Tangent-CFT: An embedding model for mathematical formulas," in ACM SIGIR International Conference on Theory of Information Retrieval, 2019.
    [3] P. Sojka and M. Líška, "The art of mathematics retrieval," Sep. 2011, pp. 57-60. doi: 10.1145/2034691.2034703.
    [4] A. Thanda, A. Agarwal, K. Singla, A. Prakash, and A. Gupta, "A document retrieval system for math queries," in NTCIR Conference on Evaluation of Information Access Technologies, 2016.
    [5] L. Gao, Z. Jiang, Y. Yin, K. Yuan, Z. Yan, and Z. Tang, "Preliminary exploration of formula embedding for mathematical information retrieval: Can mathematical formulae be embedded like a natural language?" 2017. arXiv: 1707.05154 [cs.IR].
    [6] Y. Hijikata, H. Hashimoto, and S. Nishida, "An investigation of index formats for the search of MathML objects," in 2007 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology - Workshops, 2007, pp. 244-248. doi: 10.1109/WI-IATW.2007.121.
    [7] W. Zhong and H. Fang, "OPMES: A similarity search engine for mathematical content," in Advances in Information Retrieval, N. Ferro, F. Crestani, M.-F. Moens, et al., Eds., Cham: Springer International Publishing, 2016.
    [8] K. Yokoi and A. Aizawa, "An approach to similarity search for mathematical expressions using MathML," Towards a Digital Mathematics Library, Grand Bend, Ontario, Canada, July 8-9, 2009, pp. 27-35, 2009.
    [9] G. Y. Kristianto, G. Topic, and A. Aizawa, "MCAT math retrieval system for NTCIR-12 MathIR task," in NTCIR Conference on Evaluation of Information Access Technologies, 2016.
    [10] W. Zhong and R. Zanibbi, "Structural similarity search for formulas using leaf-root paths in operator subtrees," in Advances in Information Retrieval, L. Azzopardi, B. Stein, N. Fuhr, P. Mayr, C. Hauff, and D. Hiemstra, Eds., Cham: Springer International Publishing, 2019, pp. 116-129.
    [11] P. Bojanowski, E. Grave, A. Joulin, and T. Mikolov, "Enriching word vectors with subword information," Transactions of the Association for Computational Linguistics, vol. 5, pp. 135-146, 2017.
    [12] K. Davila and R. Zanibbi, "Layout and semantics: Combining representations for mathematical formula search," in Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2017, pp. 1165-1168.
    [13] F.-Y. Sun, J. Hoffman, V. Verma, and J. Tang, "InfoGraph: Unsupervised and semi-supervised graph-level representation learning via mutual information maximization," in International Conference on Learning Representations, 2019.
    [14] Y. You, T. Chen, Y. Sui, T. Chen, Z. Wang, and Y. Shen, "Graph contrastive learning with augmentations," in Advances in Neural Information Processing Systems, H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, Eds., vol. 33, Curran Associates, Inc., 2020, pp. 5812-5823. [Online]. Available: https://proceedings.neurips.cc/paper/2020/file/3fe230348e9a12c13120749e3f9fa4cd-Paper.pdf
    [15] S. Thakoor, C. Tallec, M. G. Azar, et al., "Large-scale representation learning on graphs via bootstrapping," 2021. arXiv: 2102.06514 [cs.LG].
    [16] C. Buckley and E. M. Voorhees, "Retrieval evaluation with incomplete information," in Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 2004, pp. 25-32.
    [17] F. Hutter, H. H. Hoos, and K. Leyton-Brown, "Sequential model-based optimization for general algorithm configuration," in Learning and Intelligent Optimization: 5th International Conference, LION 5, Rome, Italy, January 17-21, 2011, Selected Papers, Springer, 2011, pp. 507-523.
    [18] K. Xu, W. Hu, J. Leskovec, and S. Jegelka, "How powerful are graph neural networks?" arXiv preprint arXiv:1810.00826, 2018.
