Challenge on Optimization of Context Collection for Code Completion
By: Dmitry Ustalov, Egor Bogomolov, Alexander Bezzubov, and more
Potential Business Impact:
Helps AI complete code better by reading more of the project.
The rapid advancement of AI-based workflows and methods for software engineering emphasizes the need for systematic evaluation and analysis of their ability to leverage information from entire projects, particularly in large codebases. In this challenge on optimization of context collection for code completion, organized by JetBrains in collaboration with Mistral AI as part of the ASE 2025 conference, participants developed efficient mechanisms for collecting context from source code repositories to improve fill-in-the-middle code completions for Python and Kotlin. We constructed a large dataset of real-world code in these two programming languages from permissively licensed open-source projects. Submissions were evaluated on their ability to maximize completion quality for multiple state-of-the-art neural models, measured with the chrF metric. During the public phase of the competition, nineteen teams submitted solutions to the Python track and eight to the Kotlin track. In the private phase, six teams competed, five of which submitted papers to the workshop.
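To make the evaluation setup concrete, the sketch below shows one way a context-collection baseline and chrF-based scoring could look. It is only an illustration of the general idea, not the challenge's actual harness: the helper collect_context, the "neighbouring files" heuristic, the character budget, and the toy data are all hypothetical assumptions, while the chrF computation uses the real sacrebleu library.

```python
"""Minimal sketch: collect repository context and score fill-in-the-middle
completions with chrF. Assumes sacrebleu is installed (pip install sacrebleu).
The collect_context heuristic and all names below are illustrative only."""

from pathlib import Path

from sacrebleu.metrics import CHRF


def collect_context(target_file: Path, budget_chars: int = 4000) -> str:
    """Hypothetical baseline: concatenate sibling .py files of the target
    file until a character budget is exhausted."""
    parts: list[str] = []
    used = 0
    for path in sorted(target_file.parent.glob("*.py")):
        if path == target_file:
            continue
        text = path.read_text(encoding="utf-8", errors="ignore")
        if used + len(text) > budget_chars:
            break
        parts.append(f"# file: {path.name}\n{text}")
        used += len(text)
    return "\n\n".join(parts)


def score_completions(predictions: list[str], ground_truths: list[str]) -> float:
    """Corpus-level chrF between predicted middles and ground-truth middles."""
    chrf = CHRF()
    return chrf.corpus_score(predictions, [ground_truths]).score


if __name__ == "__main__":
    # Toy example: one exact prediction and one near miss.
    preds = ["return a + b", "return a - c"]
    golds = ["return a + b", "return a - b"]
    print(f"chrF = {score_completions(preds, golds):.2f}")
```

In a submission along these lines, the collected context would be prepended to the model's fill-in-the-middle prompt (prefix and suffix around the masked span), and the generated middle would be compared against the ground truth with chrF as above; the competition's actual prompt format and scoring pipeline may differ.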
Similar Papers
Beyond More Context: How Granularity and Order Drive Code Completion Quality
Software Engineering
Helps computers write better code by finding good examples.
An Empirical Study of Developer-Provided Context for AI Coding Assistants in Open-Source Projects
Software Engineering
Helps AI understand project rules for better coding.
Towards an Understanding of Context Utilization in Code Intelligence
Software Engineering
Helps computers understand code better with more info.