Survey on Pre-trained Models Fusing Knowledge Graphs
YANG Jie; LIU Na; XU Zhenshun; ZHENG Guofeng; LI Chen; DAO Lu
【Purpose】 Pre-trained models still face the challenge that the knowledge required for complex tasks is of low quality and excessive in volume, whereas fusing knowledge graphs into pre-trained models can enhance their performance. This survey further investigates how knowledge graphs can be effectively fused into pre-trained models, so as to enrich the types of knowledge enhancement covered by existing surveys. 【Methods】 Recent literature on pre-trained models fusing knowledge graphs is analyzed and summarized. First, the reasons for, advantages of, and difficulties in introducing knowledge graphs into pre-trained models are briefly introduced. Second, two categories of methods, implicit combination and explicit combination, are discussed in detail, and the characteristics, advantages, and disadvantages of representative models are compared and summarized. Finally, the challenges facing pre-trained models that fuse knowledge graphs and future research trends are discussed. 【Conclusions】 The core issue for pre-trained models fusing knowledge graphs is how to effectively integrate information from knowledge bases into the pre-trained model. Future work can explore more effective and efficient knowledge fusion methods to improve model performance and generalization ability.
deep learning; pre-trained model; knowledge graph; knowledge enhancement