
Do not copy and paste! Rewriting strategies for code retrieval

arXiv – CS AI | Andrea Gurioli, Federico Pennino, Maurizio Gabbrielli
🤖AI Summary

Researchers evaluated multiple code retrieval strategies using LLM-based rewriting, finding that full natural language transcription with query-corpus augmentation achieves the largest gains but corpus-only approaches often degrade performance. They introduced Delta H (token entropy) as a cheap, rewriter-agnostic metric to predict when LLM rewriting justifies its computational cost.

Analysis

This research addresses a fundamental challenge in code retrieval systems: embedding encoders that overfit to surface-level syntax rather than semantic meaning. By systematically evaluating three rewriting strategies across multiple benchmarks and encoder families, the authors provide empirical guidance on when expensive LLM-based augmentation actually improves retrieval quality. The key finding—that full natural language rewriting with joint query-corpus augmentation yields the largest gains while corpus-only approaches fail in 62% of configurations—challenges the intuitive assumption that offline augmentation provides universal benefits.

This distinction matters because online augmentation requires per-query LLM calls, creating a cost-performance tradeoff that varies with encoder strength and query characteristics. The introduction of Delta H as a predictive metric solves a practical problem: developers can now estimate retrieval improvements without running expensive experiments. The research reveals that LLM rewriting functions best as a remediation layer for lightweight encoders rather than a universal enhancement, establishing clear boundaries for its application.

The consistent correlation of Delta H across different rewriter families (Spearman's rho up to 0.593) suggests the metric captures genuine semantic phenomena rather than model-specific artifacts. This nuanced analysis reframes code retrieval optimization as a conditional strategy where resource allocation depends on encoder capacity and query composition, not a universally applicable technique. The findings should influence how development teams architect retrieval systems, particularly in codebases where query types vary significantly between code-heavy and documentation-heavy searches.
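To make the Delta H idea concrete, here is a minimal sketch of an entropy-shift metric, assuming Delta H is the difference in Shannon token entropy between the rewritten and original query (the summary does not give the paper's exact formulation, so the tokenization and the `delta_h` helper are illustrative assumptions):

```python
import math
from collections import Counter


def token_entropy(tokens):
    """Shannon entropy (in bits) of the empirical token distribution."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


def delta_h(original_query, rewritten_query):
    """Entropy shift from the original query to its LLM rewrite.

    A positive value indicates the rewrite spread probability mass
    over a more diverse token vocabulary (hypothetical definition;
    whitespace tokenization used for simplicity).
    """
    return token_entropy(rewritten_query.split()) - token_entropy(original_query.split())


# Hypothetical example: a code-heavy query vs. a natural-language rewrite
code_query = "def double(x): return x*2"
nl_rewrite = "a function that doubles a numeric input and returns the result"
print(delta_h(code_query, nl_rewrite))
```

The appeal of such a metric, as the analysis notes, is that it is rewriter-agnostic and cheap: it needs only token counts, not an embedding model or a retrieval run.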

Key Takeaways
  • Full natural language rewriting with query-corpus augmentation delivers the largest retrieval gains (+0.51 NDCG@10), but requires per-query LLM calls
  • Corpus-only rewriting degrades retrieval in 62% of configurations, revealing offline augmentation as unreliable without corresponding query rewriting
  • Delta H (token entropy) predicts retrieval success across different LLM families with statistically significant correlation (p < 0.001)
  • LLM rewriting optimally serves lightweight encoders on code-dominant queries; strong encoders and NL-heavy queries show diminishing returns
  • The research establishes a cost-benefit decision framework for when LLM rewriting justifies computational overhead versus accepting baseline retrieval performance
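For readers unfamiliar with the NDCG@10 figures quoted above, a small sketch of the metric (standard definition; the relevance grades in the example are made up, not drawn from the paper):

```python
import math


def dcg_at_k(relevances, k=10):
    """Discounted cumulative gain over the top-k ranked results."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))


def ndcg_at_k(relevances, k=10):
    """NDCG@k: DCG of the actual ranking, normalised by the ideal ranking's DCG."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0


# Hypothetical example: the only relevant document sits at rank 3
print(ndcg_at_k([0, 0, 1], k=10))  # below 1.0 because the ideal ranking puts it first
```

Because NDCG@10 is bounded in [0, 1], a reported gain of +0.51 is substantial, which underscores why the per-query LLM cost can be worth paying for weak encoders.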