TITLE:
Summary of Research Methods on Pre-Training Models of Natural Language Processing
AUTHORS:
Yu Xiao, Zhezhi Jin
KEYWORDS:
Natural Language Processing, Pre-Training Model, Language Model, Self-Training Model
JOURNAL NAME:
Open Access Library Journal, Vol.8 No.7, July 1, 2021
ABSTRACT: In recent years, deep learning technology has been widely applied and developed. Pre-training models are increasingly used in natural language processing tasks: whether the task is sentence extraction or sentiment analysis of text, the pre-training model plays a very important role. Unsupervised pre-training on a large-scale corpus has proven to be an effective way to improve models. This article summarizes existing pre-training models, reviews the improvements and processing methods of relatively new pre-training models, and finally discusses the challenges and prospects facing current pre-training models.