Improved Automatic Summarization Of Subroutines Via Attention To File Context

Sakib Haque, Alexander LeClair, Lingfei Wu, Collin McMillan. Proceedings of the 17th International Conference on Mining Software Repositories (MSR), 2020 – 94 citations

Tags: Has Code, Model Architecture

Software documentation largely consists of short, natural language summaries of the subroutines in the software. These summaries help programmers quickly understand what a subroutine does without having to read the source code themselves. The task of writing these descriptions is called “source code summarization” and has been a target of research for several years. Recently, AI-based approaches have superseded older, heuristic-based approaches. Yet, to date, these AI-based approaches assume that all the content needed to predict summaries is inside the subroutine itself. This assumption limits performance because many subroutines cannot be understood without surrounding context. In this paper, we present an approach that models the file context of subroutines (i.e., other subroutines in the same file) and uses an attention mechanism to find words and concepts to use in summaries. We show in an experiment that our approach extends and improves several recent baselines.
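
The sketch below illustrates the general idea of attending to file context, not the authors' actual implementation: the target subroutine's tokens are encoded with a GRU, each sibling subroutine in the same file is compressed into a single vector, and the summary decoder attends over both sources at every step. All class names, layer sizes, and the simple dot-product attention are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FileContextSummarizer(nn.Module):
    """Hypothetical encoder-decoder that attends to file context."""

    def __init__(self, vocab_size, emb_dim=256, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.code_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)   # target subroutine
        self.file_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)   # sibling subroutines
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)    # summary decoder
        self.out = nn.Linear(hid_dim * 3, vocab_size)

    def forward(self, code_toks, file_toks, summary_toks):
        # code_toks:    (B, Tc)     tokens of the target subroutine
        # file_toks:    (B, S, Ts)  tokens of the other subroutines in the same file
        # summary_toks: (B, Td)     summary prefix (teacher forcing)
        B, S, Ts = file_toks.shape

        code_h, _ = self.code_enc(self.embed(code_toks))             # (B, Tc, H)

        # Encode each sibling subroutine into one vector (its final GRU state).
        _, file_h = self.file_enc(self.embed(file_toks.view(B * S, Ts)))
        file_h = file_h.view(B, S, -1)                               # (B, S, H)

        dec_h, _ = self.decoder(self.embed(summary_toks))            # (B, Td, H)

        # Dot-product attention from each decoder step over (a) the target
        # subroutine's token encodings and (b) the file-context vectors.
        code_ctx = self._attend(dec_h, code_h)                       # (B, Td, H)
        file_ctx = self._attend(dec_h, file_h)                       # (B, Td, H)

        return self.out(torch.cat([dec_h, code_ctx, file_ctx], dim=-1))

    @staticmethod
    def _attend(queries, keys):
        scores = torch.bmm(queries, keys.transpose(1, 2))            # (B, Td, T)
        return torch.bmm(F.softmax(scores, dim=-1), keys)            # (B, Td, H)

# Example usage with random token ids (shapes only; vocabulary is hypothetical).
model = FileContextSummarizer(vocab_size=10000)
logits = model(torch.randint(0, 10000, (2, 50)),
               torch.randint(0, 10000, (2, 8, 50)),
               torch.randint(0, 10000, (2, 12)))                     # (2, 12, 10000)
```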

Similar Work