A Joint Many-Task Model: Growing a Neural Network for Multiple NLP Tasks | Awesome LLM Papers

A Joint Many-Task Model: Growing a Neural Network for Multiple NLP Tasks

Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, Richard Socher. arXiv 2016 – 52 citations

[Paper]   Search on Google Scholar   Search on Semantic Scholar
Model Architecture · Training Techniques

Transfer and multi-task learning have traditionally focused on either a single source-target pair or very few, similar tasks. Ideally, the linguistic levels of morphology, syntax and semantics would benefit each other by being trained in a single model. We introduce a joint many-task model together with a strategy for successively growing its depth to solve increasingly complex tasks. Higher layers include shortcut connections to lower-level task predictions to reflect linguistic hierarchies. We use a simple regularization term that allows all model weights to be optimized to improve one task’s loss without causing catastrophic interference with the other tasks. Our single end-to-end model obtains state-of-the-art or competitive results on five different tasks spanning tagging, parsing, semantic relatedness, and textual entailment.
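To make the growing-depth idea concrete, here is a minimal PyTorch sketch (an illustration, not the authors' released code) of the first two layers of such a stack: a POS-tagging BiLSTM whose label predictions are embedded and fed, together with the word embeddings, as shortcut connections into a chunking BiLSTM, plus a `successive_regularization` helper that penalizes drift of shared weights from their values after the previous task's update. Layer sizes, label counts, and the helper's name are assumptions for the sketch, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class JMTSketch(nn.Module):
    """Sketch of a jointly trained many-task stack: each task gets its own
    BiLSTM layer, and higher layers receive shortcut connections (the word
    embeddings plus the lower task's label predictions)."""

    def __init__(self, vocab_size, emb_dim=100, hidden=100,
                 n_pos=45, n_chunk=23):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Layer 1: POS tagging over word embeddings.
        self.pos_lstm = nn.LSTM(emb_dim, hidden, bidirectional=True,
                                batch_first=True)
        self.pos_out = nn.Linear(2 * hidden, n_pos)
        # Layer 2: chunking; its input includes shortcuts to the word
        # embeddings and a softmax-weighted POS label embedding.
        self.pos_label_emb = nn.Linear(n_pos, emb_dim, bias=False)
        self.chunk_lstm = nn.LSTM(2 * hidden + 2 * emb_dim, hidden,
                                  bidirectional=True, batch_first=True)
        self.chunk_out = nn.Linear(2 * hidden, n_chunk)

    def forward(self, tokens):
        x = self.embed(tokens)                      # (B, T, emb_dim)
        h_pos, _ = self.pos_lstm(x)                 # (B, T, 2*hidden)
        pos_logits = self.pos_out(h_pos)
        # Shortcut connection: lower-level predictions feed the next layer.
        pos_label = self.pos_label_emb(torch.softmax(pos_logits, dim=-1))
        h_chunk, _ = self.chunk_lstm(torch.cat([h_pos, x, pos_label], dim=-1))
        chunk_logits = self.chunk_out(h_chunk)
        return pos_logits, chunk_logits


def successive_regularization(model, prev_params, delta=1e-2):
    """Penalize drift of shared parameters away from their values after the
    previous task's training epoch, limiting catastrophic interference."""
    penalty = 0.0
    for name, p in model.named_parameters():
        if name in prev_params:
            penalty = penalty + ((p - prev_params[name]) ** 2).sum()
    return delta * penalty
```

In a training loop one would add `successive_regularization(model, prev_params)` to the current task's loss, where `prev_params` is a snapshot of the shared parameters taken after the previous task's epoch; this mirrors the paper's strategy of letting all weights move for the current task while keeping them close to what earlier tasks learned.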

Similar Work