C2LLM Technical Report: A New Frontier in Code Retrieval via Adaptive Cross-Attention Pooling
By: Jin Qin, Zihan Liao, Ziyin Zhang, and more
Potential Business Impact:
Improves AI-powered code search and retrieval, making it easier to find relevant code from natural-language queries.
We present C2LLM - Contrastive Code Large Language Models, a family of code embedding models in 0.5B and 7B sizes. Building on Qwen-2.5-Coder backbones, C2LLM adopts a Pooling by Multihead Attention (PMA) module to generate sequence embeddings from token embeddings, which 1) utilizes the LLM's causal representations acquired during pretraining, 2) aggregates information from all tokens in the sequence, breaking the information bottleneck of EOS-based sequence embeddings, and 3) supports flexible adaptation of the embedding dimension, serving as an alternative to MRL. Trained on three million publicly available examples, the C2LLM models set new records on MTEB-Code among models of similar size, with C2LLM-7B ranking 1st on the overall leaderboard.
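To make the pooling idea concrete, below is a minimal PyTorch sketch of a PMA-style pooling head: a learnable seed query attends over all token hidden states from the backbone and a projection sets the output embedding dimension. All names, dimensions, and hyperparameters here are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
from typing import Optional


class PMAPooling(nn.Module):
    """Sketch of Pooling by Multihead Attention (PMA) for sequence embeddings.

    A learnable seed vector queries all token embeddings via multihead
    attention, so the sequence embedding can aggregate information from
    every token rather than relying on the final EOS token alone.
    """

    def __init__(self, hidden_dim: int, embed_dim: int, num_heads: int = 8):
        super().__init__()
        # One learnable seed vector acts as the attention query.
        self.seed = nn.Parameter(torch.randn(1, 1, hidden_dim))
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        # A linear projection decouples the output embedding size from the
        # backbone's hidden size (an alternative to MRL-style truncation).
        self.proj = nn.Linear(hidden_dim, embed_dim)

    def forward(
        self,
        token_states: torch.Tensor,                 # (batch, seq_len, hidden_dim)
        padding_mask: Optional[torch.Tensor] = None # (batch, seq_len), True = pad
    ) -> torch.Tensor:
        batch = token_states.size(0)
        query = self.seed.expand(batch, -1, -1)
        pooled, _ = self.attn(query, token_states, token_states,
                              key_padding_mask=padding_mask)
        return self.proj(pooled.squeeze(1))         # (batch, embed_dim)


if __name__ == "__main__":
    # Dummy hidden states standing in for a Qwen-2.5-Coder-style backbone output.
    hidden = torch.randn(2, 16, 896)
    pooler = PMAPooling(hidden_dim=896, embed_dim=512)
    print(pooler(hidden).shape)  # torch.Size([2, 512])
```

Because the pooled vector passes through a projection rather than being read off a single token position, the embedding dimension can be chosen independently of the backbone, which is the flexibility the abstract attributes to the PMA module.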
Similar Papers
From Code Foundation Models to Agents and Applications: A Practical Guide to Code Intelligence
Software Engineering
Helps computers write programs from natural-language descriptions.