
Cached Long Short-term Memory Neural Networks For Document-level Sentiment Classification

Jiacheng Xu, Danlu Chen, Xipeng Qiu, Xuanjing Huang. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP 2016) – 184 citations

Tags: Affective Computing, Datasets, EMNLP, Interdisciplinary Approaches, Memory & Context, Model Architecture, Neural Machine Translation, Variational Autoencoders

Recently, neural networks have achieved great success on sentiment classification due to their ability to alleviate feature engineering. However, one remaining challenge is modeling long texts for document-level sentiment classification under a recurrent architecture, because the memory unit struggles to retain information over long spans. To address this problem, we present a Cached Long Short-Term Memory neural network (CLSTM) to capture the overall semantic information in long texts. CLSTM introduces a cache mechanism that divides memory into several groups with different forgetting rates, enabling the network to better retain sentiment information within a recurrent unit. The proposed CLSTM outperforms state-of-the-art models on three publicly available document-level sentiment analysis datasets.
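The grouped-forgetting idea can be pictured with a short sketch. The following is a minimal, hypothetical PyTorch cell, not the authors' released implementation: it assumes K equal memory groups, a coupled (CIFG-style) input/forget update, and each group's forgetting rate squashed into its own band [(k-1)/K, k/K], so low-index groups change slowly (acting as a long-term cache) while high-index groups overwrite quickly. The class name `CachedLSTMCell` and all hyperparameters below are illustrative assumptions.

```python
# Illustrative sketch of a cache-style LSTM cell (hypothetical, not the
# paper's released code). Memory is split into K groups whose forgetting
# rates live in disjoint bands, so some groups retain information longer.
import torch
import torch.nn as nn


class CachedLSTMCell(nn.Module):
    def __init__(self, input_size: int, hidden_size: int, num_groups: int = 4):
        super().__init__()
        assert hidden_size % num_groups == 0, "hidden size must split evenly"
        self.hidden_size = hidden_size
        self.num_groups = num_groups
        # One projection yields the candidate memory, forgetting rate, and output gate.
        self.proj = nn.Linear(input_size + hidden_size, 3 * hidden_size)

    def forward(self, x, state):
        h, c = state  # each of shape (batch, hidden_size)
        z = self.proj(torch.cat([x, h], dim=-1))
        cand, rate, out = z.chunk(3, dim=-1)

        # Squash each group's raw forgetting rate into its own band
        # [(k-1)/K, k/K]: group 0 forgets least, group K-1 forgets most.
        k = self.num_groups
        rate = torch.sigmoid(rate).view(-1, k, self.hidden_size // k)
        lo = torch.arange(k, dtype=rate.dtype, device=rate.device).view(1, k, 1) / k
        rate = (lo + rate / k).view(-1, self.hidden_size)

        # Coupled update: a high forgetting rate overwrites old memory with the
        # candidate; a low rate preserves it (the "cache" behaviour).
        c = (1.0 - rate) * c + rate * torch.tanh(cand)
        h = torch.sigmoid(out) * torch.tanh(c)
        return h, c


# Usage: one recurrent step over a batch of 8 word embeddings.
cell = CachedLSTMCell(input_size=100, hidden_size=128, num_groups=4)
x = torch.randn(8, 100)
h0 = torch.zeros(8, 128)
c0 = torch.zeros(8, 128)
h1, c1 = cell(x, (h0, c0))
print(h1.shape, c1.shape)  # torch.Size([8, 128]) torch.Size([8, 128])
```

Banding the rates, rather than learning them freely, is what guarantees that at least some memory groups evolve slowly enough to carry document-level sentiment across a long text.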

Similar Work