Knowledge-prompted Estimator: A Novel Approach To Explainable Machine Translation Assessment | Awesome LLM Papers

Knowledge-prompted Estimator: A Novel Approach To Explainable Machine Translation Assessment

Hao Yang, Min Zhang, Shimin Tao, Minghan Wang, Daimeng Wei, Yanfei Jiang. Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2023 – 107 citations

Compositional Generalization Content Enrichment Interdisciplinary Approaches Interpretability KDD Multimodal Semantic Representation Neural Machine Translation Prompting Variational Autoencoders Visual Question Answering

Cross-lingual Machine Translation (MT) quality estimation plays a crucial role in evaluating translation performance. GEMBA, the first MT quality assessment metric based on Large Language Models (LLMs), uses one-step prompting to achieve state-of-the-art (SOTA) results in system-level MT quality estimation; however, it offers no segment-level analysis. In contrast, Chain-of-Thought (CoT) prompting outperforms one-step prompting by offering improved reasoning and explainability. In this paper, we introduce the Knowledge-Prompted Estimator (KPE), a CoT prompting method that combines three one-step prompting techniques: perplexity, token-level similarity, and sentence-level similarity. KPE attains better segment-level estimation performance than previous deep learning models and one-step prompting approaches. Furthermore, supplementary experiments on word-level visualized alignment show that KPE significantly improves token alignment over earlier models and provides better interpretability for MT quality estimation. Code will be released upon publication.
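The abstract does not give the exact prompt format, but the idea of combining perplexity with token- and sentence-level similarity in a CoT prompt can be sketched as follows. This is a minimal illustration, not the paper's implementation: `build_kpe_prompt` and its wording are hypothetical, and the similarity scores are assumed to come from an external model (e.g. embedding cosine similarity). Only the perplexity formula, exp of the negative mean token log-probability, is standard.

```python
import math

def perplexity(token_logprobs):
    """Standard perplexity from per-token log-probabilities: exp(-mean log p)."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def build_kpe_prompt(source, translation, ppl, tok_sim, sent_sim):
    """Hypothetical CoT prompt that surfaces the three knowledge signals
    (fluency via perplexity, adequacy via token/sentence similarity) as
    explicit reasoning steps before asking for a segment-level score."""
    return (
        f"Source: {source}\n"
        f"Translation: {translation}\n"
        "Reason step by step using the following evidence:\n"
        f"1. Fluency (perplexity of the translation): {ppl:.2f}\n"
        f"2. Adequacy (token-level similarity to the source): {tok_sim:.2f}\n"
        f"3. Adequacy (sentence-level similarity to the source): {sent_sim:.2f}\n"
        "Then output a quality score from 0 to 100 with a short justification."
    )

# Example with made-up log-probs and similarity scores.
prompt = build_kpe_prompt(
    "Der Hund schläft.", "The dog is sleeping.",
    ppl=perplexity([-0.2, -0.1, -0.3, -0.15]),
    tok_sim=0.91, sent_sim=0.94,
)
print(prompt)
```

Feeding such a prompt to an LLM would yield both a score and a rationale grounded in the three signals, which is the kind of segment-level explainability the abstract claims over one-step prompting.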

Similar Work