
Empirical Study Of Transformers For Source Code

Nadezhda Chirkova, Sergey Troshin. ESEC/FSE '21: 29th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 2021 – 44 citations

[Paper]   Search on Google Scholar   Search on Semantic Scholar
Compositional Generalization, Content Enrichment, Image Text Integration, Interactive Environments, Interdisciplinary Approaches, Llm For Code, Model Architecture, Multimodal Semantic Representation, Neural Machine Translation, Productivity Enhancement, Question Answering, Tools

Initially developed for natural language processing (NLP), Transformers are now widely used for source code processing, due to the format similarity between source code and text. In contrast to natural language, source code is strictly structured, i.e., it follows the syntax of the programming language. Several recent works have developed Transformer modifications for capturing syntactic information in source code. A drawback of these works is that they do not compare against one another and consider different tasks. In this work, we conduct a thorough empirical study of the capabilities of Transformers to utilize syntactic information across tasks. We consider three tasks (code completion, function naming, and bug fixing) and re-implement the different syntax-capturing modifications in a unified framework. We show that Transformers are able to make meaningful predictions based purely on syntactic information, and we highlight best practices for incorporating syntactic information to improve model performance.
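
For intuition, below is a minimal sketch of the general idea the paper studies: exposing syntactic structure to a Transformer by linearizing an abstract syntax tree (AST) into a sequence of node-type tokens and encoding it with a standard Transformer encoder. This is not the authors' exact setup; the traversal order, node-type vocabulary, and hyperparameters here are illustrative assumptions.

```python
import ast

import torch
import torch.nn as nn

# Toy input: a small Python function whose syntax tree we will encode.
SOURCE = "def add(a, b):\n    return a + b\n"

# Traverse the AST (ast.walk yields nodes in breadth-first order) and
# keep one token per node type, discarding identifiers and literals,
# so the model sees purely syntactic information.
tree = ast.parse(SOURCE)
node_types = [type(node).__name__ for node in ast.walk(tree)]

# Build a vocabulary over the node types seen in this snippet (in a real
# setting, the vocabulary would be fixed over the whole training corpus).
vocab = {t: i for i, t in enumerate(sorted(set(node_types)))}
token_ids = torch.tensor([[vocab[t] for t in node_types]])  # (batch=1, seq)

# A small, standard Transformer encoder over the node-type sequence.
embed = nn.Embedding(len(vocab), 64)
encoder_layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)

with torch.no_grad():
    hidden = encoder(embed(token_ids))  # (1, seq_len, 64)

print(node_types)   # e.g. ['Module', 'FunctionDef', 'arguments', ...]
print(hidden.shape)
```

The resulting per-node representations could then feed a task head (e.g., next-token prediction for code completion or sequence generation for function naming), which is where the modifications compared in the paper differ.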

Similar Work