13 articles tagged with #retrieval. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI Bullish · arXiv – CS AI · Apr 7 · 7/10
🧠 MemMachine is an open-source memory system for AI agents that preserves conversational ground truth and achieves a better accuracy-efficiency tradeoff than existing solutions. The system integrates short-term, long-term episodic, and profile memory while using 80% fewer input tokens than comparable systems like Mem0 (a sketch of this layered design follows the tags below).
🧠 GPT-4 🧠 GPT-5
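A minimal sketch of what a layered memory like this might look like. The class and method names are hypothetical, not MemMachine's actual API, and the keyword overlap scoring stands in for a real embedding retriever.

```python
from collections import deque

class LayeredMemory:
    """Hypothetical three-tier agent memory: a short-term buffer,
    a long-term episodic log, and a durable profile store."""

    def __init__(self, short_term_size: int = 8):
        self.short_term = deque(maxlen=short_term_size)  # recent turns, verbatim
        self.episodic = []   # full conversational ground truth, append-only
        self.profile = {}    # durable user facts, e.g. {"language": "Python"}

    def write(self, turn: str) -> None:
        # Turns are stored verbatim, so the ground truth is never
        # paraphrased away by a summarization step.
        self.short_term.append(turn)
        self.episodic.append(turn)

    def read(self, query: str, k: int = 3) -> str:
        # Naive keyword overlap over episodic memory; returning only the
        # top-k snippets plus the profile is how a design like this keeps
        # prompt token counts low.
        terms = set(query.lower().split())
        ranked = sorted(self.episodic,
                        key=lambda t: -len(terms & set(t.lower().split())))
        context = list(self.short_term) + ranked[:k]
        return f"profile={self.profile}\n" + "\n".join(dict.fromkeys(context))
```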
AI Neutral · arXiv – CS AI · Mar 4 · 6/10
🧠 Researchers analyzed memory systems in LLM agents and found that retrieval methods are more critical than write strategies for performance. Simple raw-chunk storage matched expensive alternatives, suggesting current memory pipelines may discard useful context that retrieval systems cannot compensate for.
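The finding suggests the write path can stay trivial while the read path does the work. A sketch under that assumption; `embed()` is a toy stand-in for any sentence-embedding model, not the paper's setup.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in for a real embedding model: a hashed bag-of-words,
    # so the sketch runs without dependencies.
    v = np.zeros(256)
    for tok in text.lower().split():
        v[hash(tok) % 256] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

chunks: list[str] = []
vectors: list[np.ndarray] = []

def write(chunk: str) -> None:
    # "Write strategy": just append the raw chunk, no summarization
    # or restructuring that could discard useful context.
    chunks.append(chunk)
    vectors.append(embed(chunk))

def read(query: str, k: int = 3) -> list[str]:
    # All the intelligence lives on the read path.
    sims = np.stack(vectors) @ embed(query)
    return [chunks[i] for i in np.argsort(-sims)[:k]]

write("The cache is invalidated on every deploy.")
write("Retrieval uses cosine similarity over raw chunks.")
print(read("how does retrieval work?"))
```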
AI Bullish · arXiv – CS AI · Feb 27 · 7/10
🧠 Researchers introduce RAGdb, an architecture that consolidates Retrieval-Augmented Generation into a single SQLite container, eliminating the need for cloud infrastructure and GPUs. The system reports 100% entity retrieval accuracy while reducing disk footprint by 99.5% compared to traditional Docker-based RAG stacks, enabling portable AI applications for edge computing and privacy-sensitive environments.
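A sketch of the single-file idea using Python's standard sqlite3 module: chunks and their embeddings live in one table, and search is a brute-force cosine scan. The schema is an assumption for illustration, not RAGdb's actual layout.

```python
import sqlite3
import numpy as np

# One portable file holds everything: raw text plus embeddings as BLOBs.
db = sqlite3.connect("rag.sqlite")
db.execute("CREATE TABLE IF NOT EXISTS chunks "
           "(id INTEGER PRIMARY KEY, text TEXT, emb BLOB)")

def add(text: str, emb: np.ndarray) -> None:
    db.execute("INSERT INTO chunks (text, emb) VALUES (?, ?)",
               (text, emb.astype(np.float32).tobytes()))
    db.commit()

def search(query_emb: np.ndarray, k: int = 5) -> list[str]:
    # Brute-force cosine scan: fine at the document scale a single
    # portable file targets, and it needs no GPU or vector service.
    q = query_emb.astype(np.float32)
    q /= np.linalg.norm(q) + 1e-9
    scored = []
    for text, blob in db.execute("SELECT text, emb FROM chunks"):
        v = np.frombuffer(blob, dtype=np.float32)
        scored.append((float(v @ q) / (np.linalg.norm(v) + 1e-9), text))
    return [t for _, t in sorted(scored, reverse=True)[:k]]
```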
AI Bullish · arXiv – CS AI · Apr 7 · 6/10
🧠 Researchers introduced GroundedKG-RAG, a new retrieval-augmented generation system that creates knowledge graphs directly grounded in source documents to improve long-document question answering. The system reduces resource consumption and hallucinations while maintaining accuracy comparable to state-of-the-art models at lower cost.
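A sketch of what "grounded" triples can look like: every edge keeps a pointer to the exact source span it was extracted from, so answers can cite evidence. The extraction step (an LLM or IE model) is elided, and all names here are illustrative rather than the paper's API.

```python
from dataclasses import dataclass

@dataclass
class Triple:
    head: str
    relation: str
    tail: str
    doc_id: str
    span: tuple[int, int]   # character offsets into the source document

graph: list[Triple] = [
    Triple("aspirin", "treats", "headache", "drug_guide.txt", (104, 131)),
    Triple("aspirin", "interacts_with", "warfarin", "drug_guide.txt", (512, 560)),
]

def subgraph_for(question: str) -> list[Triple]:
    # Keep only edges whose entities appear in the question; a real
    # system would use entity linking plus multi-hop expansion.
    q = question.lower()
    return [t for t in graph if t.head.lower() in q or t.tail.lower() in q]

for t in subgraph_for("What does aspirin interact with?"):
    print(f"{t.head} -{t.relation}-> {t.tail} "
          f"(evidence: {t.doc_id} chars {t.span[0]}-{t.span[1]})")
```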
AI Bullish · arXiv – CS AI · Mar 26 · 6/10
🧠 Researchers introduce MDKeyChunker, a three-stage pipeline that improves RAG (Retrieval-Augmented Generation) systems by using structure-aware chunking of Markdown documents, single-call LLM enrichment, and semantic key-based restructuring. The system achieves superior retrieval performance with Recall@5 = 1.000 using BM25 over structural chunks, significantly improving upon traditional fixed-size chunking (structure-aware chunking is sketched after this card's tag).
🟢 OpenAI
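A sketch of the structural-chunking idea from the first stage, paired with BM25 scoring via the rank-bm25 package; this illustrates heading-based splitting only, not the paper's LLM enrichment or key-based restructuring stages.

```python
import re
from rank_bm25 import BM25Okapi  # pip install rank-bm25

def structural_chunks(markdown: str) -> list[str]:
    # Split at headings so each chunk is a coherent section rather
    # than an arbitrary fixed-size window.
    parts = re.split(r"(?m)^(?=#{1,6} )", markdown)
    return [p.strip() for p in parts if p.strip()]

doc = """# Setup
Install with pip.

## Configuration
Set RETRIEVAL_K in config.yaml.

# Usage
Call search() with a query string.
"""
chunks = structural_chunks(doc)
bm25 = BM25Okapi([c.lower().split() for c in chunks])
scores = bm25.get_scores("how do I configure retrieval".lower().split())
print(chunks[int(scores.argmax())])  # the "Configuration" section wins
```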
AI Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠 Researchers propose EMBRAG, a new framework that combines large language models with knowledge graphs to improve reasoning accuracy and reduce hallucinations. The system generates multiple logical rules from queries and applies them in embedding space, achieving state-of-the-art performance on knowledge graph question-answering benchmarks.
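The summary doesn't spell out the rule-application mechanism, so this sketch assumes a classic TransE-style formulation, where applying a relation is vector translation; EMBRAG's actual scheme may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64
# Toy embeddings; in practice these come from a trained KG embedding model.
entities = {n: rng.normal(size=dim) for n in ["paris", "france", "berlin", "germany"]}
relations = {"capital_of": rng.normal(size=dim)}

def apply_rule(head: str, relation: str) -> str:
    # A rule like "X capital_of ?" is applied as vector translation:
    # the answer is the entity nearest to head + relation.
    target = entities[head] + relations[relation]
    return min(entities,
               key=lambda e: np.linalg.norm(entities[e] - target)
               if e != head else np.inf)

print(apply_rule("paris", "capital_of"))  # with trained vectors: "france"
```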
AI Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠 Researchers developed a framework to make large language model-based query expansion more efficient by distilling knowledge from powerful teacher models into compact student models. The approach uses retrieval feedback and preference alignment to maintain 97% of the original performance while dramatically reducing inference costs.
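A sketch of the distillation objective alone: the student is trained to match the teacher's softened token distribution over generated expansions. Shapes and the temperature are placeholders, and the retrieval-feedback and preference-alignment stages are omitted.

```python
import torch
import torch.nn.functional as F

vocab, seq = 32000, 16
teacher_logits = torch.randn(seq, vocab)                 # large teacher, frozen
student_logits = torch.randn(seq, vocab, requires_grad=True)  # compact student

T = 2.0  # softening temperature, a common knowledge-distillation choice
loss = F.kl_div(
    F.log_softmax(student_logits / T, dim=-1),  # student log-probs
    F.softmax(teacher_logits / T, dim=-1),      # teacher soft targets
    reduction="batchmean",
) * T * T  # the usual T^2 factor keeps gradient scale comparable
loss.backward()
print(float(loss))
```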
AI Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠 Researchers introduce CLAG, a clustering-based memory framework that helps small language model agents organize and retrieve information more effectively. The system addresses memory dilution issues by creating semantic clusters with automated profiles, showing improved performance across multiple QA datasets.
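A sketch of cluster-then-retrieve memory using k-means: each cluster's centroid acts as a crude "profile", queries are routed to one cluster, and only its members are searched, which is one way to counter memory dilution. Illustrative only, not CLAG's pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
memories = rng.normal(size=(200, 32))  # embedded memory entries
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(memories)

def retrieve(query_vec: np.ndarray, k: int = 3) -> np.ndarray:
    cluster = km.predict(query_vec[None])[0]      # route via cluster profiles
    members = np.where(km.labels_ == cluster)[0]  # search only that cluster,
    sims = memories[members] @ query_vec          # not the whole memory
    return members[np.argsort(-sims)[:k]]

print(retrieve(rng.normal(size=32)))  # indices of the top-3 memories
```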
AI Bullish · arXiv – CS AI · Mar 17 · 6/10
🧠 GlobalRAG is a new reinforcement learning framework that significantly improves multi-hop question answering by decomposing questions into subgoals and coordinating retrieval with reasoning. The system achieves an average improvement of 14.2% on performance metrics while using only 42% of the training data required by baseline models.
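The control flow reads roughly like the loop below; `plan`, `search`, and `answer` stand in for GlobalRAG's learned, RL-trained components and are purely hypothetical here.

```python
def plan(question: str) -> list[str]:
    # The planner splits a multi-hop question into ordered subgoals.
    return ["Who directed the film?", "Where was that director born?"]

def search(subgoal: str, evidence: list[str]) -> str:
    return f"<passage retrieved for: {subgoal}>"

def answer(question: str, evidence: list[str]) -> str:
    return f"<answer composed from {len(evidence)} passages>"

def global_rag(question: str) -> str:
    evidence: list[str] = []
    for subgoal in plan(question):
        # Retrieval is coordinated with reasoning: each subgoal can
        # condition on evidence gathered for earlier subgoals.
        evidence.append(search(subgoal, evidence))
    return answer(question, evidence)

print(global_rag("Where was the director of the film born?"))
```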
AI Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠 Researchers developed TARSE, a new AI system for clinical decision-making that retrieves relevant medical skills and experiences from curated libraries to improve reasoning accuracy. The system performs test-time adaptation to align language models with clinically valid logic, showing improvements over existing medical AI baselines in question-answering benchmarks.
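A sketch of the retrieve-then-reason pattern: pull the most relevant entries from a curated skill library and prepend them to the case prompt. The library contents and scoring are illustrative, and TARSE's test-time adaptation step is only indicated in comments.

```python
# Curated skill library: name -> clinically valid reasoning rule.
skill_library = {
    "chest pain triage": "Rule out ACS before musculoskeletal causes.",
    "drug interaction check": "Cross-reference anticoagulants with NSAIDs.",
    "pediatric dosing": "Dose by weight, verify against maximum adult dose.",
}

def retrieve_skills(case: str, k: int = 2) -> list[str]:
    # Keyword overlap as a stand-in for the paper's retriever.
    terms = set(case.lower().split())
    ranked = sorted(skill_library, key=lambda s: -len(terms & set(s.split())))
    return [skill_library[s] for s in ranked[:k]]

case = "elderly patient on anticoagulants reporting chest pain"
# Test-time adaptation would further align the model with these rules;
# here we only build the skill-conditioned prompt.
prompt = ("Relevant skills:\n- " + "\n- ".join(retrieve_skills(case))
          + f"\n\nCase: {case}")
print(prompt)
```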
AI Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠 Researchers introduce LLaVE, a new multimodal embedding model that uses hardness-weighted contrastive learning to better distinguish between positive and negative pairs in image-text tasks. The model achieves state-of-the-art performance on the MMEB benchmark, with LLaVE-2B outperforming previous 7B models and demonstrating strong zero-shot transfer capabilities to video retrieval tasks.
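One plausible formulation of hardness-weighted contrastive learning, where negatives are reweighted by their own similarity so hard negatives dominate the loss; LLaVE's exact weighting scheme may differ.

```python
import torch
import torch.nn.functional as F

def hardness_weighted_info_nce(img: torch.Tensor, txt: torch.Tensor,
                               tau: float = 0.07, beta: float = 0.5) -> torch.Tensor:
    img = F.normalize(img, dim=-1)
    txt = F.normalize(txt, dim=-1)
    sim = img @ txt.T / tau                       # (B, B) similarity logits
    B = sim.size(0)
    mask = torch.eye(B, dtype=torch.bool, device=sim.device)
    neg_sim = sim.masked_fill(mask, float("-inf"))
    # Weight each negative by its own similarity: the harder (more
    # similar) a negative, the more it contributes to the loss.
    w = torch.softmax(beta * neg_sim, dim=1).detach()
    neg = (w * neg_sim.exp()).sum(dim=1)
    pos = sim.diag().exp()
    return -torch.log(pos / (pos + neg)).mean()

loss = hardness_weighted_info_nce(torch.randn(8, 128), torch.randn(8, 128))
print(float(loss))
```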
AI Bullish · arXiv – CS AI · Mar 2 · 6/10
🧠 Researchers found that simple keyword search within agentic AI frameworks can achieve over 90% of the performance of traditional RAG systems without requiring vector databases. This approach offers a more cost-effective and simpler alternative for AI applications requiring frequent knowledge base updates.
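A generic sketch of the keyword alternative using scikit-learn's TF-IDF: the index rebuilds in milliseconds with no vector database or embedding model, which is the appeal for frequently updated knowledge bases. This illustrates the approach class, not the paper's code.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

docs = [
    "Reset your password from the account settings page.",
    "Invoices are emailed on the first business day of each month.",
    "API keys can be rotated under the developer console.",
]
vec = TfidfVectorizer()
index = vec.fit_transform(docs)  # re-run this cheaply on every KB update

def keyword_search(query: str, k: int = 2) -> list[str]:
    scores = linear_kernel(vec.transform([query]), index).ravel()
    return [docs[i] for i in scores.argsort()[::-1][:k]]

print(keyword_search("rotate an API key"))
```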
AI Bullish · Hugging Face Blog · Mar 22 · 6/10
🧠 The article discusses binary and scalar embedding quantization techniques that can significantly reduce computational costs and increase speed for retrieval systems. These methods compress high-dimensional vector embeddings while maintaining retrieval performance, making AI search and recommendation systems more efficient and cost-effective.
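A sketch of both schemes in NumPy: binary quantization keeps one sign bit per dimension (32x smaller than float32, compared via Hamming distance), while scalar quantization maps each float to int8 (4x smaller).

```python
import numpy as np

rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 256)).astype(np.float32)

# Binary: keep only the sign of each dimension, packed 8 dims per byte.
binary = np.packbits(emb > 0, axis=1)  # (1000, 32) uint8 -> 32x smaller

def hamming(q: np.ndarray, db: np.ndarray) -> np.ndarray:
    # Distance = number of differing bits: XOR, then count set bits.
    return np.unpackbits(q ^ db, axis=1).sum(axis=1)

# Scalar: linearly map floats into int8 using calibration min/max.
lo, hi = emb.min(), emb.max()
int8 = np.clip(np.round((emb - lo) / (hi - lo) * 255 - 128),
               -128, 127).astype(np.int8)  # 4x smaller than float32

q = binary[:1]  # query against the binary index
print(hamming(q, binary).argsort()[:5])  # top-5 (index 0 is the query itself)
```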