50 articles tagged with #retrieval-augmented-generation. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Neutral · arXiv · CS AI · Apr 6 · 7/10
🧠 Researchers published a comprehensive technical survey on Large Language Model augmentation strategies, examining methods from in-context learning to advanced Retrieval-Augmented Generation techniques. The study provides a unified framework for understanding how structured context at inference time can overcome LLMs' limitations of static knowledge and finite context windows.
AI · Bearish · arXiv · CS AI · Mar 27 · 7/10
🧠 Researchers have identified a new attack vector called Epistemic Bias Injection (EBI) that manipulates AI language models by injecting factually correct but biased content into retrieval-augmented generation databases. The attack steers model outputs toward specific viewpoints while evading traditional detection methods, though a new defense mechanism called BiasDef shows promise in mitigating these threats.
AI · Bullish · arXiv · CS AI · Mar 27 · 7/10
🧠 Researchers introduce WriteBack-RAG, a framework that treats knowledge bases in retrieval-augmented generation systems as trainable components rather than static databases. The method distills relevant information from documents into compact knowledge units, improving RAG performance across multiple benchmarks by an average of +2.14%.
AI · Bullish · arXiv · CS AI · Mar 17 · 7/10
🧠 Researchers introduce APEX-Searcher, a new framework that enhances large language models' search capabilities through a two-stage approach combining reinforcement learning for strategic planning and supervised fine-tuning for execution. The system addresses limitations in multi-hop question answering by decoupling retrieval into planning and execution phases, showing significant improvements across multiple benchmarks.
AI · Bullish · arXiv · CS AI · Mar 17 · 7/10
🧠 Researchers propose p²RAG, a new privacy-preserving Retrieval-Augmented Generation system that supports arbitrary top-k retrieval while being 3-300x faster than existing solutions. The system uses an interactive bisection method instead of sorting and employs secret sharing across two servers to protect user prompts and database content.
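The secure protocol itself is beyond this summary, but the core bisection trick can be sketched in plain Python (function name and details are illustrative, not from the paper): binary-search a score threshold until exactly k retrieval scores lie above it, avoiding a full sort.

```python
def top_k_by_bisection(scores, k, tol=1e-9):
    """Find the indices of the k highest scores without sorting,
    by bisecting on a score threshold."""
    lo, hi = min(scores) - tol, max(scores)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sum(s > mid for s in scores) >= k:
            lo = mid   # at least k scores above: threshold can rise
        else:
            hi = mid   # fewer than k above: threshold must fall
    return [i for i, s in enumerate(scores) if s > lo][:k]
```

The appeal for private retrieval is that each step needs only threshold comparisons and a count, which are comparatively cheap to evaluate under secret sharing.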
AI · Bullish · arXiv · CS AI · Mar 9 · 7/10
🧠 Researchers introduce RAG-Driver, a retrieval-augmented multi-modal large language model designed for autonomous driving that can provide explainable decisions and control predictions. The system addresses data scarcity and generalization challenges in AI-driven autonomous vehicles by using in-context learning and expert demonstration retrieval.
AI · Neutral · arXiv · CS AI · Mar 9 · 7/10
🧠 Researchers evaluated 34 large language models on radiology questions, finding that agentic retrieval-augmented reasoning systems improve consensus and reliability across different AI models. The study shows these systems reduce decision variability between models and increase robust correctness, though 72% of incorrect outputs still carried moderate to high clinical severity.
AI · Bullish · arXiv · CS AI · Mar 5 · 6/10
🧠 Researchers demonstrate that coreference resolution significantly improves Retrieval-Augmented Generation (RAG) systems by reducing ambiguity in document retrieval and enhancing question-answering performance. The study finds that smaller language models benefit more from disambiguation, and that mean pooling strategies capture context best after coreference resolution.
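The mean-pooling step mentioned here is standard practice; a minimal numpy sketch (names are illustrative) averages a chunk's token embeddings into one vector, which is why rewriting a pronoun to its antecedent before embedding changes, and can sharpen, the pooled representation.

```python
import numpy as np

def mean_pool(token_embeddings, attention_mask):
    """Average the real (non-padding) token vectors into one chunk embedding."""
    mask = attention_mask.astype(float)[:, None]        # (seq_len, 1)
    return (token_embeddings * mask).sum(axis=0) / mask.sum()

# After coreference resolution, "She shared it with Pierre" becomes
# "Marie Curie shared the prize with Pierre", so the tokens entering
# this average carry the disambiguated entities instead of pronouns.
```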
AI · Bullish · arXiv · CS AI · Mar 5 · 6/10
🧠 Researchers introduce OSCAR, a new query-dependent online soft compression method for Retrieval-Augmented Generation (RAG) systems that reduces computational overhead while maintaining performance. The method achieves 2-5x speed improvements in inference with minimal accuracy loss across LLMs from 1B to 24B parameters.
AI · Neutral · arXiv · CS AI · Mar 5 · 7/10
🧠 Researchers propose RAG-X, a diagnostic framework for evaluating retrieval-augmented generation systems in medical AI applications. The study reveals an 'Accuracy Fallacy': a 14% gap between perceived system success and actual evidence-based grounding in medical question-answering systems.
AI · Bullish · arXiv · CS AI · Mar 4 · 7/10
🧠 Researchers developed a type-aware retrieval-augmented generation (RAG) method that translates natural language requirements into solver-executable optimization code for industrial applications. The method uses a typed knowledge base and dependency closure to ensure code executability, successfully validated on battery production optimization and job scheduling tasks where conventional RAG approaches failed.
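The paper's typed knowledge base is not detailed in this summary, but the "dependency closure" idea is a plain transitive closure: before emitting solver code, pull in every knowledge unit that a retrieved unit depends on, so no symbol in the generated program is undefined. A hypothetical sketch:

```python
def dependency_closure(seeds, deps):
    """Return the seed units plus everything they transitively depend on,
    so generated code never references an undefined definition."""
    closure, stack = set(), list(seeds)
    while stack:
        unit = stack.pop()
        if unit not in closure:
            closure.add(unit)
            stack.extend(deps.get(unit, ()))  # enqueue direct dependencies
    return closure
```

Retrieving the closure rather than isolated snippets is what keeps the emitted optimization code executable.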
AI · Neutral · arXiv · CS AI · 6d ago · 6/10
🧠 Researchers propose G-Defense, a graph-enhanced framework that uses large language models and retrieval-augmented generation to detect fake news while providing explainable, fine-grained reasoning. The system decomposes news claims into sub-claims, retrieves competing evidence, and generates transparent explanations without requiring verified fact-checking databases.
AI · Neutral · arXiv · CS AI · 6d ago · 6/10
🧠 Researchers have developed Luwen, an open-source Chinese legal language model built on Baichuan that uses continual pre-training, supervised fine-tuning, and retrieval-augmented generation to excel at legal tasks. The model outperforms baselines on five legal benchmarks, including judgment prediction, judicial examination, and legal reasoning, demonstrating effective domain adaptation for specialized legal applications.
AI · Neutral · arXiv · CS AI · 6d ago · 6/10
🧠 Researchers introduce chain-of-illocution (CoI) prompting to improve source faithfulness in retrieval-augmented language models, achieving up to 63% gains in source adherence for programming education tasks. The study reveals that standard RAG systems exhibit low fidelity to source materials, with non-RAG models performing worse, while a user study confirms improved faithfulness does not compromise user satisfaction.
AI · Bullish · arXiv · CS AI · Apr 7 · 6/10
🧠 Researchers propose ScalDPP, a new retrieval mechanism for RAG systems that uses Determinantal Point Processes to optimize both density and diversity in context selection. The approach addresses a limitation of current RAG pipelines, which ignore interactions between retrieved information chunks and so produce redundant contexts that reduce effectiveness.
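ScalDPP's scalable algorithm is not given in this summary; the sketch below is the textbook greedy MAP heuristic for a DPP with kernel L, where the determinant rewards chunks that are individually relevant (large diagonal) but mutually dissimilar (small off-diagonal). The brute-force determinant and all names are illustrative only:

```python
import numpy as np

def greedy_dpp(L, k):
    """Greedily grow a set S maximizing det(L[S, S]) of a DPP kernel L,
    trading off chunk relevance against redundancy."""
    selected = []
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in range(len(L)):
            if i in selected:
                continue
            idx = selected + [i]
            gain = np.linalg.det(L[np.ix_(idx, idx)])  # volume of candidate set
            if gain > best_gain:
                best, best_gain = i, gain
        selected.append(best)
    return selected
```

Near-duplicate chunks make the submatrix nearly singular, so its determinant collapses and the duplicate is never selected, which is exactly the redundancy problem the paper targets.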
AI · Bullish · arXiv · CS AI · Mar 27 · 6/10
🧠 CodeRefine is a new AI framework that automatically converts research paper methodologies into functional code using Large Language Models. The system creates knowledge graphs from papers and uses retrieval-augmented generation to produce more accurate code implementations than traditional zero-shot prompting methods.
AI · Bullish · arXiv · CS AI · Mar 27 · 6/10
🧠 Researchers have developed UniAI-GraphRAG, an enhanced framework that improves upon existing GraphRAG systems for complex reasoning and multi-hop queries. The framework introduces three key innovations: ontology-guided extraction, multi-dimensional clustering, and dual-channel fusion, showing superior performance over mainstream solutions like LightRAG on benchmark tests.
AI · Bullish · arXiv · CS AI · Mar 26 · 6/10
🧠 Researchers propose MixDemo, a new GraphRAG framework that uses a Mixture-of-Experts mechanism to select high-quality demonstrations for improving large language model performance in domain-specific question answering. The framework includes a query-specific graph encoder to reduce noise in retrieved subgraphs and significantly outperforms existing methods across multiple textual graph benchmarks.
AI · Bearish · arXiv · CS AI · Mar 26 · 6/10
🧠 Research reveals that Retrieval-Augmented Generation (RAG) systems exhibit fairness issues, with queries from certain demographic groups systematically receiving higher accuracy than others. The study identifies three key factors affecting fairness: group exposure in retrieved documents, utility of group-specific documents, and attribution bias in how generators use different groups' documents.
AI · Bullish · arXiv · CS AI · Mar 26 · 6/10
🧠 Researchers propose a new four-phase architecture to reduce AI hallucinations using domain-specific retrieval and verification systems. The framework achieved win rates up to 83.7% across multiple benchmarks, demonstrating significant improvements in factual accuracy for large language models.
AI · Bullish · arXiv · CS AI · Mar 17 · 6/10
🧠 Researchers have developed QA-Dragon, a new Query-Aware Dynamic RAG System that significantly improves knowledge-intensive Visual Question Answering by combining text and image retrieval strategies. The system achieved substantial performance improvements of 5-6% across different tasks in the Meta CRAG-MM Challenge at KDD Cup 2025.
AI · Bullish · arXiv · CS AI · Mar 17 · 6/10
🧠 Researchers propose a new framework for large language models that separates planning from factual retrieval to improve reliability in fact-seeking question answering. The modular approach uses a lightweight student planner trained via teacher-student learning to generate structured reasoning steps, showing improved accuracy and speed on challenging benchmarks.
AI · Bullish · arXiv · CS AI · Mar 16 · 6/10
🧠 Researchers improved agentic Retrieval-Augmented Generation (RAG) systems by introducing contextualization and de-duplication modules to address inefficiencies in complex question-answering. The enhanced Search-R1 pipeline achieved 5.6% better accuracy and 10.5% fewer retrieval turns using GPT-4.1-mini.
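The Search-R1 modules themselves are not shown in this summary, but a de-duplication pass over retrieved passages can be as simple as a token-overlap filter; the sketch below (hypothetical, not the paper's code) drops any passage whose Jaccard similarity to an already-kept passage is too high.

```python
def deduplicate(passages, threshold=0.8):
    """Keep a passage only if its token-level Jaccard similarity to every
    previously kept passage is at or below the threshold."""
    kept, kept_tokens = [], []
    for passage in passages:
        tokens = set(passage.lower().split())
        if all(len(tokens & seen) / (len(tokens | seen) or 1) <= threshold
               for seen in kept_tokens):
            kept.append(passage)
            kept_tokens.append(tokens)
    return kept
```

Fewer redundant passages means fewer wasted retrieval turns, which is the inefficiency the module targets.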
AI · Bullish · arXiv · CS AI · Mar 11 · 6/10
🧠 Researchers propose TaSR-RAG, a new framework that improves Retrieval-Augmented Generation systems by using taxonomy-guided structured reasoning for better evidence selection. The system decomposes complex questions into triple sub-queries and performs step-wise evidence matching, achieving up to 14% performance improvements over existing RAG baselines on multi-hop question answering benchmarks.
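The decompose-then-match loop described here can be sketched abstractly; every callable below is a hypothetical stand-in for one of the paper's components, not its actual interface.

```python
def answer_multi_hop(question, decompose, retrieve, generate):
    """Decompose a question into triple-style sub-queries, gather evidence
    for each step in order, then generate from the accumulated evidence."""
    evidence = []
    for sub_query in decompose(question):   # e.g. ("?film", "directed_by", "Nolan")
        # Each retrieval step sees the evidence gathered so far,
        # enabling step-wise matching across hops.
        evidence.extend(retrieve(sub_query, evidence))
    return generate(question, evidence)
```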
AI · Bullish · arXiv · CS AI · Mar 9 · 6/10
🧠 Researchers introduce EpisTwin, a neuro-symbolic AI framework that creates Personal Knowledge Graphs from fragmented user data across applications. The system combines Graph Retrieval-Augmented Generation with visual refinement to enable complex reasoning over personal semantic data, addressing current limitations in personal AI systems.