61 articles tagged with #retrieval-augmented-generation. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 7
🧠 Researchers released the Asta Interaction Dataset, containing over 200,000 user queries from AI-powered scientific research tools and revealing how scientists interact with LLM-based research assistants. The study shows users treat these systems as collaborative research partners, submitting longer queries and using outputs as persistent artifacts for non-linear exploration.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 8
🧠 Researchers introduce G-reasoner, a unified framework combining graph and language foundation models to enable better reasoning over structured knowledge. The system uses a 34M-parameter graph foundation model with QuadGraph abstraction to outperform existing retrieval-augmented generation methods across six benchmarks.
AI · Bullish · arXiv – CS AI · Feb 27 · 6/10 · 7
🧠 Researchers introduce RELOOP, a new retrieval-augmented generation framework that improves multi-step question answering across text, tables, and knowledge graphs. The system uses hierarchical sequences and structure-aware iteration to achieve better accuracy while reducing computational costs compared to existing RAG methods.
AI · Bullish · OpenAI News · Aug 21 · 5/10 · 6
🧠 Blue J is transforming tax research by leveraging GPT-4.1 and Retrieval-Augmented Generation to provide AI-powered tools that deliver fast, accurate, and fully cited tax answers. The company serves tax professionals across the US, Canada, and the UK, combining domain expertise with advanced AI technology for regulated industry applications.
AI · Neutral · arXiv – CS AI · Apr 6 · 5/10
🧠 Researchers introduce ARAM (Adaptive Retrieval-Augmented Masked Diffusion), a training-free framework that improves AI language generation by dynamically adjusting guidance based on retrieved context quality. The system addresses noise and conflicts in retrieval-augmented generation for diffusion-based language models, showing improved performance on knowledge-intensive QA benchmarks.
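The core idea of quality-conditioned guidance can be illustrated with a minimal sketch. This is not ARAM's actual algorithm; the keyword-overlap scorer and linear weight mapping below are stand-in assumptions for whatever quality estimate and guidance schedule the paper uses:

```python
# Illustrative sketch (not ARAM's method): scale how strongly retrieved
# context steers generation by an estimated context-quality score, so
# noisy or off-topic retrievals exert less influence.

def quality_score(query_terms: set, passage: str) -> float:
    """Crude proxy for retrieval quality: fraction of query terms covered."""
    passage_terms = set(passage.lower().split())
    return len(query_terms & passage_terms) / max(len(query_terms), 1)

def adaptive_weight(query: str, passage: str, w_max: float = 1.0) -> float:
    """Map a quality score in [0, 1] to a guidance weight in [0, w_max]."""
    q_terms = set(query.lower().split())
    return w_max * quality_score(q_terms, passage)

# A well-matched passage earns a high guidance weight; an unrelated one
# earns (near-)zero, leaving generation mostly unguided.
w_good = adaptive_weight("combustion flame speed", "flame speed depends on fuel")
w_bad = adaptive_weight("combustion flame speed", "unrelated text here")
```

In a real diffusion-LM setting, the weight would modulate how much the retrieved context shifts the denoising distribution at each step, rather than a single scalar per passage.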
AI · Neutral · arXiv – CS AI · Mar 12 · 4/10
🧠 Researchers present TAMUSA-Chat, a framework for building domain-adapted large language model conversational systems for academic institutions. The system combines supervised fine-tuning and retrieval-augmented generation with transparent deployment strategies and publicly available code.
AI · Bullish · arXiv – CS AI · Mar 11 · 5/10
🧠 Researchers developed ELERAG, an enhanced Retrieval-Augmented Generation architecture that integrates Entity Linking with Wikidata to improve factual accuracy in educational AI systems. The system shows significant performance improvements in domain-specific contexts compared to standard RAG approaches, particularly for Italian educational question-answering applications.
AI · Neutral · arXiv – CS AI · Mar 6 · 4/10
🧠 Researchers developed the first comprehensive framework for creating domain-specialized Large Language Models for combustion science, using 3.5 billion tokens from scientific literature and code. The study found that standard RAG approaches hit a performance ceiling at 60% accuracy, highlighting the need for more advanced knowledge injection methods including knowledge graphs and continued pretraining.
AI · Neutral · arXiv – CS AI · Mar 3 · 5/10 · 6
🧠 Researchers propose WKGFC, a new AI system that uses knowledge graphs and multi-agent retrieval to improve fact-checking accuracy. The system addresses the limitations of current methods that rely on textual similarity by formulating evidence retrieval as an automated Markov Decision Process, with LLM agents retrieving and verifying evidence from multiple sources.
AI · Neutral · Google Research Blog · May 14 · 4/10 · 5
🧠 This article explores retrieval-augmented generation (RAG) in AI systems, focusing on how sufficient context improves data mining and modeling capabilities. The analysis appears to be a technical deep-dive into RAG methodologies and their practical applications.
AI · Bullish · NVIDIA AI Blog · Jan 31 · 5/10 · 4
🧠 This article explains Retrieval-Augmented Generation (RAG), a technique that enhances AI models by combining their general knowledge with specific external information sources. The article uses a courtroom analogy to illustrate how RAG works, comparing it to judges who consult specialized expertise for complex cases requiring domain-specific knowledge.
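The retrieve-then-augment pattern the article describes can be sketched in plain Python. This is a minimal illustration only: the toy corpus and keyword-overlap retriever below stand in for the embedding-based vector search a production RAG system would use.

```python
# Minimal retrieval-augmented generation sketch: rank passages against
# the query, then prepend the best matches to the prompt so the model
# answers from external knowledge rather than parametric memory alone.

CORPUS = [
    "RAG pairs a language model with an external document store.",
    "Judges consult domain experts for cases needing special knowledge.",
    "Vector databases index documents by embedding similarity.",
]

def retrieve(query: str, corpus: list, k: int = 2) -> list:
    """Rank passages by naive keyword overlap with the query (toy retriever)."""
    q_terms = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda doc: len(q_terms & set(doc.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, passages: list) -> str:
    """Augment the prompt with retrieved context before generation."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

query = "How does RAG use external knowledge?"
prompt = build_prompt(query, retrieve(query, CORPUS))
print(prompt)
```

The augmented `prompt` would then be sent to the generator model; swapping the keyword scorer for embedding similarity over a vector index changes the retriever, not the overall pattern.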