AIBullish · arXiv — CS AI · 9h ago · 6/10
Retrieval-Feedback-Driven Distillation and Preference Alignment for Efficient LLM-based Query Expansion
Researchers developed a framework to make large language model-based query expansion more efficient by distilling knowledge from powerful teacher models into compact student models. The approach uses retrieval feedback and preference alignment to maintain 97% of the original performance while dramatically reducing inference costs.
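The retrieval-feedback-plus-preference-alignment idea can be sketched in miniature: score candidate query expansions by a retrieval-effectiveness proxy, then pair them into (chosen, rejected) preference data of the kind a DPO-style alignment step would consume. This is a hypothetical illustration, not the paper's implementation; `retrieval_score` (here a toy term-overlap proxy) and `build_preference_pairs` are assumed names.

```python
# Hypothetical sketch of building preference pairs from retrieval feedback.
# In the real framework, the score would come from an actual retriever's
# downstream effectiveness, not this toy term-overlap proxy.
from itertools import combinations

def retrieval_score(expansion: str, relevant_terms: set) -> float:
    """Toy retrieval-feedback signal: fraction of known-relevant terms
    that the candidate expansion covers."""
    terms = set(expansion.lower().split())
    return len(terms & relevant_terms) / max(len(relevant_terms), 1)

def build_preference_pairs(expansions: list, relevant_terms: set) -> list:
    """Turn scored expansions into (chosen, rejected) pairs, the format
    a DPO-style preference-alignment step expects."""
    scored = [(e, retrieval_score(e, relevant_terms)) for e in expansions]
    pairs = []
    for (a, sa), (b, sb) in combinations(scored, 2):
        if sa == sb:
            continue  # no preference signal when scores tie
        chosen, rejected = (a, b) if sa > sb else (b, a)
        pairs.append({"chosen": chosen, "rejected": rejected})
    return pairs

candidates = [
    "llm distillation teacher student compression",
    "cats and dogs",
    "query expansion retrieval feedback",
]
relevant = {"query", "expansion", "retrieval", "feedback", "distillation"}
pairs = build_preference_pairs(candidates, relevant)
print(len(pairs))  # -> 3 ordered preference pairs from 3 candidates
```

A compact student model fine-tuned on such pairs would then be preferred toward expansions that actually improve retrieval, which is how the framework can keep most of the teacher's quality at a fraction of the inference cost.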