
Learning Disentangled Semantic Spaces Of Explanations Via Invertible Neural Networks

Yingji Zhang, Danilo S. Carvalho, André Freitas. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023) – 44 citations

Compositional Generalization · EMNLP · Interdisciplinary Approaches · Interpretability · Model Architecture · Neural Machine Translation · Variational Autoencoders · Visual Question Answering

Disentangled latent spaces usually have better semantic separability and geometrical properties, which lead to better interpretability and more controllable data generation. While this has been well investigated in Computer Vision, for tasks such as image disentanglement, sentence disentanglement in the NLP domain remains comparatively under-investigated. Most previous work has concentrated on disentangling task-specific generative factors, such as sentiment, within the context of style transfer. In this work, we focus on a more general form of sentence disentanglement, targeting the localised modification and control of more general sentence semantic features. To achieve this, we contribute a novel notion of sentence semantic disentanglement and introduce a flow-based invertible neural network (INN) mechanism integrated with a transformer-based language autoencoder (AE) to deliver latent spaces with better separability properties. Experimental results demonstrate that the model can shape the distributed latent space into a more semantically disentangled sentence space, leading to improved interpretability and controlled generation compared to recent state-of-the-art language VAE models.
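The abstract only describes the architecture at a high level: an invertible flow sits on top of the sentence autoencoder's latent vector, so representations can be mapped into a more separable space and mapped back for generation. The sketch below is a minimal, hypothetical illustration of that idea using standard affine coupling layers (RealNVP-style) over a fixed-size AE latent; all class names, dimensions, and hyperparameters are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One affine coupling layer: half of the vector conditions a scale/shift
    applied to the other half, so the map is invertible with a cheap Jacobian."""
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, z):
        z1, z2 = z[:, :self.half], z[:, self.half:]
        log_s, t = self.net(z1).chunk(2, dim=-1)
        log_s = torch.tanh(log_s)  # bound the scales for numerical stability
        y2 = z2 * torch.exp(log_s) + t
        return torch.cat([z1, y2], dim=-1), log_s.sum(dim=-1)  # output, log|det J|

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        log_s, t = self.net(y1).chunk(2, dim=-1)
        log_s = torch.tanh(log_s)
        z2 = (y2 - t) * torch.exp(-log_s)
        return torch.cat([y1, z2], dim=-1)

class FlowOverAELatent(nn.Module):
    """Stack of coupling layers mapping a sentence-AE latent vector into a
    space intended to be more semantically separable; the inverse map sends
    (possibly edited) vectors back so the AE decoder can generate text."""
    def __init__(self, dim, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList([AffineCoupling(dim) for _ in range(n_layers)])

    def forward(self, z):
        log_det = torch.zeros(z.size(0), device=z.device)
        for layer in self.layers:
            z, ld = layer(z)
            log_det = log_det + ld
            z = z.flip(-1)  # reverse dims so every coordinate gets transformed across layers
        return z, log_det

    def inverse(self, y):
        for layer in reversed(self.layers):
            y = y.flip(-1)
            y = layer.inverse(y)
        return y

# Toy usage: pretend `z_ae` came from a transformer AE's sentence bottleneck.
flow = FlowOverAELatent(dim=64)
z_ae = torch.randn(8, 64)
y, log_det = flow(z_ae)        # representation in the (hoped-for) disentangled space
z_back = flow.inverse(y)       # round-trip back to the AE latent for generation
print((z_back - z_ae).abs().max())  # near-zero reconstruction error
```

Because the flow is exactly invertible, any localised edit made in the transformed space can be mapped back to the AE latent and decoded, which is the property the paper exploits for controlled generation; how the flow is trained to align dimensions with semantic features is described in the paper itself.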

Similar Work