Multi-Task Learning for Semantic Relatedness and Textual Entailment

Journal of Software Engineering and Applications, Vol. 12, 2019, pp. 199-214. DOI: 10.4236/jsea.2019.126012

ABSTRACT

Recently, several deep learning models have been successfully proposed and applied to solve different Natural Language Processing (NLP) tasks. However, these models solve each problem with single-task supervised learning and do not consider the correlation between tasks. Based on this observation, in this paper we implemented a multi-task learning model to jointly learn two related NLP tasks simultaneously and conducted experiments to evaluate whether learning these tasks jointly improves system performance compared with learning them individually. In addition, we compare our model with state-of-the-art learning models, including multi-task learning, transfer learning, unsupervised learning, and feature-based traditional machine learning models. This paper aims to 1) show the advantage of multi-task learning over single-task learning when training related NLP tasks, 2) illustrate the influence of various encoding structures on the proposed single- and multi-task learning models, and 3) compare the performance of multi-task learning against other learning models in the literature on the textual entailment and semantic relatedness tasks.
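To make the joint-learning setup concrete, the following is a minimal sketch (not the authors' exact architecture) of hard parameter sharing for these two tasks: a single shared sentence encoder feeds two task-specific heads, one regressing a relatedness score and one classifying entailment, and the two losses are summed for each update. All module names, dimensions, and the BiLSTM/max-pooling encoder are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SharedEncoderMTL(nn.Module):
    """Illustrative hard-parameter-sharing model: one encoder, two task heads."""
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=300):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        # Shared BiLSTM sentence encoder used by both tasks
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        feat_dim = 4 * 2 * hidden_dim  # [u, v, |u - v|, u * v] of pooled states
        # Head 1: semantic relatedness (regression to a single score)
        self.relatedness_head = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.Tanh(), nn.Linear(hidden_dim, 1))
        # Head 2: textual entailment (entailment / neutral / contradiction)
        self.entailment_head = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.Tanh(), nn.Linear(hidden_dim, 3))

    def encode(self, ids):
        out, _ = self.encoder(self.embedding(ids))
        return out.max(dim=1).values  # max pooling over time steps

    def forward(self, premise_ids, hypothesis_ids):
        u, v = self.encode(premise_ids), self.encode(hypothesis_ids)
        pair = torch.cat([u, v, torch.abs(u - v), u * v], dim=-1)
        return self.relatedness_head(pair).squeeze(-1), self.entailment_head(pair)

# Joint training step: sum the two task losses so gradients from both tasks
# update the shared encoder (toy data; real training uses SICK-style batches).
model = SharedEncoderMTL(vocab_size=20000)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
mse, xent = nn.MSELoss(), nn.CrossEntropyLoss()

premise = torch.randint(0, 20000, (8, 12))
hypothesis = torch.randint(0, 20000, (8, 12))
rel_gold = torch.rand(8) * 4 + 1            # relatedness scores in [1, 5]
ent_gold = torch.randint(0, 3, (8,))        # 3-way entailment labels

rel_pred, ent_logits = model(premise, hypothesis)
loss = mse(rel_pred, rel_gold) + xent(ent_logits, ent_gold)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

How the two losses are combined (equal weighting above versus learned or tuned task weights) is a design choice; the sketch simply shows that both tasks backpropagate through the same encoder parameters.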

Share and Cite:

Zhang, L. and Moldovan, D. (2019) Multi-Task Learning for Semantic Relatedness and Textual Entailment. Journal of Software Engineering and Applications, 12, 199-214. doi: 10.4236/jsea.2019.126012.

Copyright © 2019 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.