Learning To Compose Words Into Sentences With Reinforcement Learning

Dani Yogatama, Phil Blunsom, Chris Dyer, Edward Grefenstette, Wang Ling. arXiv 2016 – 119 citations

[Paper] · Search on Google Scholar · Search on Semantic Scholar
Compositional Generalization · Interdisciplinary Approaches · Neural Machine Translation · Reinforcement Learning · Variational Autoencoders

We use reinforcement learning to learn tree-structured neural networks for computing representations of natural language sentences. In contrast with prior work on tree-structured models, in which the trees are either provided as input or predicted using supervision from explicit treebank annotations, the tree structures in this work are optimized to improve performance on a downstream task. Experiments demonstrate the benefit of learning task-specific composition orders, outperforming both sequential encoders and recursive encoders based on treebank annotations. We analyze the induced trees and show that while they discover some linguistically intuitive structures (e.g., noun phrases, simple verb phrases), they are different from conventional English syntactic structures.
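
To make the mechanism concrete, below is a minimal sketch of the core idea: a shift-reduce composer whose SHIFT/REDUCE transitions are sampled from a learned policy and trained with REINFORCE, using the negative downstream loss as the reward. This is an illustrative PyTorch sketch, not the authors' implementation; the `TreeComposer` class, its dimensions, and the toy data are hypothetical, and the paper's actual model uses a richer (Tree LSTM style) composition function rather than the single linear layer here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TreeComposer(nn.Module):
    """Hypothetical shift-reduce sentence composer with a learned policy.

    SHIFT moves the next word embedding onto a stack; REDUCE pops the top
    two stack elements and composes them into one vector. The sequence of
    actions implicitly defines a binary tree over the words.
    """

    def __init__(self, dim: int, vocab_size: int, num_classes: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.compose = nn.Linear(2 * dim, dim)       # combines two children
        self.policy = nn.Linear(2 * dim, 2)          # scores SHIFT vs. REDUCE
        self.classify = nn.Linear(dim, num_classes)  # downstream task head

    def forward(self, token_ids: torch.Tensor):
        buffer = list(self.embed(token_ids))  # words not yet shifted
        stack, log_probs = [], []
        while buffer or len(stack) > 1:
            can_shift, can_reduce = bool(buffer), len(stack) >= 2
            if can_shift and can_reduce:
                # Sample an action from a policy over the stack top;
                # keep the log-prob for the REINFORCE update below.
                state = torch.cat([stack[-2], stack[-1]])
                dist = torch.distributions.Categorical(logits=self.policy(state))
                action = dist.sample()
                log_probs.append(dist.log_prob(action))
            else:
                # Only one action is legal, so no sampling is needed.
                action = torch.tensor(0 if can_shift else 1)
            if action.item() == 0:
                stack.append(buffer.pop(0))  # SHIFT
            else:
                right, left = stack.pop(), stack.pop()
                stack.append(torch.tanh(self.compose(torch.cat([left, right]))))  # REDUCE
        return self.classify(stack[0]), log_probs

# One REINFORCE step on a toy example: the reward is the negative
# downstream loss, so action sequences (trees) that help the classifier
# are reinforced, while the composition and classifier parameters are
# trained by ordinary backpropagation through the task loss.
model = TreeComposer(dim=64, vocab_size=1000, num_classes=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
tokens = torch.randint(0, 1000, (7,))  # a 7-word toy "sentence"
label = torch.tensor([1])

opt.zero_grad()
logits, log_probs = model(tokens)
task_loss = F.cross_entropy(logits.unsqueeze(0), label)
reward = -task_loss.detach()
rl_loss = -reward * torch.stack(log_probs).sum() if log_probs else 0.0
(task_loss + rl_loss).backward()
opt.step()
```

Because the sampled tree structure is discrete, gradients cannot flow through the SHIFT/REDUCE choices themselves; REINFORCE sidesteps this by weighting the policy's log-probabilities with the task reward, which is what lets the trees be optimized for the downstream objective rather than fixed by a treebank.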

Similar Work