59 articles tagged with #knowledge-graphs. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Neutral · arXiv (CS AI) · 1d ago · 7/10
🧠 Researchers identified a critical failure mode in LLM-based agents called policy-invisible violations, where agents execute actions that appear compliant but breach organizational policies due to missing contextual information. They introduced PhantomPolicy, a benchmark with 600 test cases, and Sentinel, an enforcement framework using counterfactual graph simulation that achieved 93% accuracy in detecting violations compared to 68.8% for baseline approaches.
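The core idea of a policy-invisible violation can be sketched in a few lines: an action passes the check the agent can see, but fails once the contextual facts it was missing are injected into a counterfactual re-check. This is an illustration of the concept only, not the paper's Sentinel implementation; all names and rules below are invented.

```python
# Illustrative sketch (not the paper's Sentinel system): a policy-invisible
# violation passes the surface policy check but fails a counterfactual
# re-check that includes contextual facts the agent never saw.

SURFACE_POLICIES = {("refund", "customer")}          # actions allowed on their face
CONTEXT_FACTS = {("customer", "account_frozen")}     # context missing from the agent's view

def surface_check(action, target):
    # What the agent itself can verify before acting.
    return (action, target) in SURFACE_POLICIES

def counterfactual_check(action, target, facts):
    # Re-run the decision with the hidden contextual facts injected.
    if (target, "account_frozen") in facts and action == "refund":
        return False
    return surface_check(action, target)

def is_policy_invisible_violation(action, target, facts):
    # Violation is "invisible" when the surface check passes but the
    # context-aware counterfactual check does not.
    return surface_check(action, target) and not counterfactual_check(action, target, facts)

print(is_policy_invisible_violation("refund", "customer", CONTEXT_FACTS))  # True
```

With no hidden facts the two checks agree and no violation is flagged; the gap only appears when context diverges from the agent's view.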
AI · Bullish · arXiv (CS AI) · 1d ago · 7/10
🧠 Researchers introduce reasoning graphs, a persistent knowledge structure that improves language model reasoning accuracy by storing and reusing chains of thought tied to evidence items. The system achieves 47% error reduction on multi-hop questions and maintains deterministic outputs without model retraining, using only context engineering.
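A minimal sketch of the storage-and-reuse idea: reasoning steps are kept as graph edges tied to evidence items, and multi-hop answers are recovered deterministically by path search instead of being re-derived by the model. The class and example facts are illustrative assumptions, not the paper's design.

```python
# Minimal reasoning-graph sketch: each recorded inference step is an edge
# (premise -> conclusion) tagged with its evidence item; multi-hop chains
# are then reused via deterministic graph search.

class ReasoningGraph:
    def __init__(self):
        self.edges = {}  # (premise, conclusion) -> evidence id

    def record(self, premise, conclusion, evidence):
        self.edges[(premise, conclusion)] = evidence

    def chain(self, start, goal):
        # Depth-first search for a multi-hop chain from start to goal.
        stack = [(start, [start])]
        while stack:
            node, path = stack.pop()
            if node == goal:
                return path
            for (src, dst) in self.edges:
                if src == node and dst not in path:
                    stack.append((dst, path + [dst]))
        return None  # no stored chain connects the two claims

g = ReasoningGraph()
g.record("Q3 revenue fell", "marketing spend cut", "doc:12")
g.record("marketing spend cut", "fewer leads", "doc:7")
print(g.chain("Q3 revenue fell", "fewer leads"))
# ['Q3 revenue fell', 'marketing spend cut', 'fewer leads']
```

Because the chain is looked up rather than regenerated, the same question always yields the same path, which is what makes the outputs deterministic without retraining.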
AI · Bullish · arXiv (CS AI) · 2d ago · 7/10
🧠 Researchers introduce AtlasKV, a parametric knowledge integration method that enables large language models to leverage billion-scale knowledge graphs while consuming less than 20GB of VRAM. Unlike traditional retrieval-augmented generation (RAG) approaches, AtlasKV integrates knowledge directly into LLM parameters without requiring external retrievers or extended context windows, reducing inference latency and computational overhead.
AI · Neutral · arXiv (CS AI) · 2d ago · 7/10
🧠 Researchers introduce PaperScope, a comprehensive benchmark for evaluating multi-modal AI systems on complex scientific research tasks across multiple documents. The benchmark reveals that even advanced systems like OpenAI Deep Research and Tongyi Deep Research struggle with long-context retrieval and cross-document reasoning, exposing significant gaps in current AI capabilities for scientific workflows.
🏢 OpenAI
AI · Bullish · arXiv (CS AI) · 3d ago · 7/10
🧠 Researchers introduce LOM-action, an enterprise AI system that grounds LLM-based decisions in business ontologies and event-driven simulations rather than unrestricted knowledge spaces. The approach achieves 93.82% accuracy with 98.74% F1 scores on decision chains, vastly outperforming larger models like DeepSeek-V3.2, while maintaining complete audit trails for enterprise compliance.
AI · Neutral · arXiv (CS AI) · Mar 26 · 7/10
🧠 Researchers developed a graph-based evaluation framework that transforms clinical guidelines into dynamic benchmarks for testing domain-specific language models. The system addresses key evaluation challenges by providing contamination resistance, comprehensive coverage, and maintainable assessment tools that reveal systematic capability gaps in current AI models.
AI · Bullish · arXiv (CS AI) · Mar 26 · 7/10
🧠 Researchers have developed AI-Supervisor, a multi-agent framework that maintains a persistent Research World Model to autonomously conduct end-to-end AI research supervision. Unlike traditional linear pipelines, the system uses specialized agents with structured gap discovery, self-correcting loops, and consensus mechanisms to continuously evolve research understanding.
AI · Bullish · arXiv (CS AI) · Mar 11 · 7/10
🧠 Researchers introduce MMGraphRAG, a new AI framework that addresses hallucination issues in large language models by integrating visual scene graphs with text knowledge graphs through cross-modal fusion. The system uses SpecLink for entity linking and demonstrates superior performance in multimodal information processing across multiple benchmarks.
AI · Bullish · arXiv (CS AI) · Mar 5 · 6/10
🧠 Researchers propose PlugMem, a task-agnostic plugin memory module for LLM agents that structures episodic memories into knowledge-centric graphs for efficient retrieval. The system consistently outperforms existing memory designs across multiple benchmarks while maintaining transferability between different tasks.
AI · Bullish · arXiv (CS AI) · Mar 5 · 7/10
🧠 Researchers introduce GraphMERT, an 80M-parameter AI model that efficiently extracts reliable knowledge graphs from unstructured text data. The system outperforms much larger language models like Qwen3-32B in generating factually accurate and semantically valid knowledge graphs, achieving 69.8% FActScore versus 40.2% for the baseline.
AI · Bullish · arXiv (CS AI) · Mar 5 · 6/10
🧠 Researchers propose a dual-helix governance framework to address AI agent reliability issues in WebGIS development, implementing a three-track architecture that achieved a 51% reduction in code complexity. The framework uses knowledge graphs and self-learning cycles to overcome LLM limitations such as context constraints and instruction failures.
AI · Bullish · arXiv (CS AI) · Mar 5 · 7/10
🧠 Researchers developed a new AI training method using knowledge graphs as reward models to improve compositional reasoning in specialized domains. The approach enables smaller 14B parameter models to outperform much larger frontier systems like GPT-5.2 and Gemini 3 Pro on complex multi-hop reasoning tasks in medicine.
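The reward-model idea can be illustrated simply: score a generated reasoning chain by how many of its hops actually exist as edges in a trusted knowledge graph. This is a hedged sketch of the concept, with a toy medical graph; it is not the paper's training setup or reward function.

```python
# Hedged sketch of a knowledge graph acting as a reward model: the reward
# for a generated reasoning chain is the fraction of its proposed hops
# that appear as edges in the graph.

KG_EDGES = {
    ("aspirin", "inhibits", "COX-1"),
    ("COX-1", "produces", "thromboxane"),
    ("thromboxane", "promotes", "clotting"),
}

def kg_reward(chain):
    """chain: list of (head, relation, tail) hops proposed by the model."""
    if not chain:
        return 0.0
    supported = sum(1 for hop in chain if hop in KG_EDGES)
    return supported / len(chain)

good = [("aspirin", "inhibits", "COX-1"), ("COX-1", "produces", "thromboxane")]
bad = [("aspirin", "cures", "clotting")]
print(kg_reward(good), kg_reward(bad))  # 1.0 0.0
```

A reinforcement-learning loop could then use this scalar to prefer chains whose every hop is graph-supported, which is one plausible route to the compositional gains the summary describes.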
🧠 Gemini
AI · Bullish · arXiv (CS AI) · Mar 4 · 7/10
🧠 Researchers introduce MIRAGE, a novel AI framework that uses knowledge graphs and electronic health records to predict Alzheimer's disease when MRI scans are unavailable. The system improves AD classification rates by 13% compared to single-modality approaches by creating synthetic representations without expensive 3D brain scan reconstruction.
AI · Bullish · arXiv (CS AI) · Mar 4 · 7/10
🧠 Researchers present Odin, the first production-deployed graph intelligence engine that autonomously discovers patterns in knowledge graphs without predefined queries. The system uses a novel COMPASS scoring metric combining structural, semantic, temporal, and community-aware signals, and has been successfully deployed in regulated healthcare and insurance environments.
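A multi-signal score like COMPASS can be pictured as a weighted combination of per-signal values. The weights and signal definitions below are assumptions for illustration only; Odin's actual formula is not published in this summary.

```python
# Illustrative weighted combination in the spirit of a multi-signal pattern
# score (structural, semantic, temporal, community-aware); the weights here
# are invented, not Odin's published COMPASS metric.

def compass_score(signals, weights=None):
    """signals: dict with keys structural/semantic/temporal/community, each in [0, 1]."""
    weights = weights or {"structural": 0.3, "semantic": 0.3,
                          "temporal": 0.2, "community": 0.2}
    # Weighted sum keeps the score in [0, 1] as long as weights sum to 1.
    return sum(weights[k] * signals[k] for k in weights)

pattern = {"structural": 0.9, "semantic": 0.7, "temporal": 0.5, "community": 0.8}
print(round(compass_score(pattern), 2))  # 0.74
```

Ranking candidate subgraph patterns by such a score is what lets an engine surface interesting structures without any predefined query.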
AI · Neutral · arXiv (CS AI) · 1d ago · 6/10
🧠 Researchers propose a graph-based soft prompting framework that enables LLMs to reason over incomplete knowledge graphs by processing subgraph structures rather than explicit node paths. The framework achieves state-of-the-art results on multi-hop question-answering benchmarks while reducing computational costs through a two-stage inference procedure.
AI · Bullish · arXiv (CS AI) · 1d ago · 6/10
🧠 Researchers introduce RALP, a novel method that uses chain-of-thought prompts with large language models to improve knowledge graph predictions, outperforming traditional embedding models by over 5% on standard benchmarks while better handling unseen entities, relations, and numerical data.
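A chain-of-thought prompt for knowledge-graph link prediction typically serializes a query entity's neighborhood as context and asks the model to reason to the missing tail. The prompt wording and helper below are invented for illustration; they are not RALP's actual prompt template.

```python
# Sketch of a chain-of-thought prompt builder for KG link prediction:
# known triples about the head entity become textual context, and the
# model is asked to reason step by step to the missing tail entity.

def build_cot_prompt(head, relation, neighbors):
    """neighbors: list of (head, relation, tail) triples to serialize as context."""
    context = "\n".join(f"- {h} {r} {t}" for h, r, t in neighbors)
    return (
        f"Known facts about {head}:\n{context}\n\n"
        f"Question: what is the most likely tail entity for ({head}, {relation}, ?)\n"
        "Think step by step, citing the facts above, then answer on the last line."
    )

prompt = build_cot_prompt(
    "Marie Curie", "award_received",
    [("Marie Curie", "field", "physics"), ("Marie Curie", "field", "chemistry")],
)
print("Think step by step" in prompt)  # True
```

Because the context is plain text, the same prompt works for entities and relations never seen during embedding training, which is one reason prompt-based methods handle unseen items better than fixed embedding tables.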
AI · Neutral · arXiv (CS AI) · 1d ago · 6/10
🧠 Researchers propose Opinion-Aware Retrieval-Augmented Generation (RAG) to address a critical bias in current LLM systems that treat subjective content as noise rather than valuable information. By formalizing the distinction between factual queries (epistemic uncertainty) and opinion queries (aleatoric uncertainty), the team develops an architecture that preserves diverse perspectives in knowledge retrieval, demonstrating 26.8% improved sentiment diversity and 42.7% better entity matching on real-world e-commerce data.
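The factual/opinion split implies a routing step: factual queries want the single best-supported answer, while opinion queries want diversity preserved. The toy router below illustrates that branching; the keyword heuristic stands in for whatever classifier the paper actually uses, and all names are assumptions.

```python
# Toy query router: factual queries (epistemic uncertainty, one right
# answer) return the single most confident document, while opinion queries
# (aleatoric uncertainty) keep the full spread of sentiments.

OPINION_CUES = {"best", "worth", "like", "recommend", "opinion", "favorite"}

def classify_query(query):
    words = set(query.lower().split())
    return "opinion" if words & OPINION_CUES else "factual"

def retrieve(query, corpus):
    if classify_query(query) == "opinion":
        # Preserve diverse perspectives instead of collapsing to one answer.
        return sorted(corpus, key=lambda d: d["sentiment"])
    return [max(corpus, key=lambda d: d["confidence"])]

corpus = [
    {"text": "Battery life is great", "sentiment": 1, "confidence": 0.6},
    {"text": "Battery life is poor", "sentiment": -1, "confidence": 0.5},
]
print(classify_query("is this laptop worth buying"))  # opinion
print(len(retrieve("what is the battery capacity", corpus)))  # 1
```

Treating the two query types identically is exactly the bias the summary describes: the factual branch would silently discard one side of every subjective disagreement.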
AI · Bullish · arXiv (CS AI) · 1d ago · 6/10
🧠 Researchers introduce KG-Reasoner, an end-to-end framework that uses reinforcement learning to train large language models to perform multi-hop reasoning over knowledge graphs without decomposing tasks into isolated pipeline steps. The approach demonstrates competitive or superior performance across eight reasoning benchmarks by enabling LLMs to dynamically explore reasoning paths and backtrack when necessary.
AI · Neutral · arXiv (CS AI) · 2d ago · 6/10
🧠 Researchers demonstrate a zero-shot knowledge graph construction pipeline using local open-source LLMs on consumer hardware, achieving 0.70 F1 on document relations and 0.55 exact match on multi-hop reasoning through ensemble methods. The study reveals that strong model consensus often signals collective hallucination rather than accuracy, challenging traditional ensemble assumptions while maintaining low computational costs and carbon footprint.
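The ensemble step can be sketched as majority voting over the triples each local model extracts, keeping only those that reach a quorum. The helper and example data are illustrative assumptions; note the summary's caveat that unanimous agreement can itself be a shared hallucination, so quorum filtering is a noise reducer, not a truth oracle.

```python
# Sketch of majority voting over triples proposed by several local models:
# a triple survives only if at least `quorum` models independently emit it.
from collections import Counter

def vote_triples(model_outputs, quorum=2):
    """model_outputs: list of triple sets, one per model."""
    counts = Counter(t for output in model_outputs for t in set(output))
    return {t for t, c in counts.items() if c >= quorum}

outputs = [
    {("Ada", "born_in", "London"), ("Ada", "field", "math")},
    {("Ada", "born_in", "London")},
    {("Ada", "born_in", "London"), ("Ada", "field", "poetry")},
]
print(vote_triples(outputs))  # {('Ada', 'born_in', 'London')}
```

Raising the quorum trades recall for precision, and (per the study's finding) even a 3-of-3 vote should still be spot-checked, since models sharing training data can agree on the same wrong fact.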
AI · Neutral · arXiv (CS AI) · 2d ago · 6/10
🧠 Researchers introduce CodaRAG, a framework that enhances Retrieval-Augmented Generation by treating evidence retrieval as active associative discovery rather than passive lookup. The system achieves 7-10% gains in retrieval recall and 3-11% improvements in generation accuracy by consolidating fragmented knowledge, navigating multi-dimensional pathways, and eliminating noise.
AI · Neutral · arXiv (CS AI) · 2d ago · 6/10
🧠 Researchers introduce ConflictQA, a benchmark revealing that large language models struggle with conflicting information across different knowledge sources (text vs. knowledge graphs) in retrieval-augmented generation systems. The study proposes XoT, an explanation-based framework to improve faithful reasoning when LLMs encounter contradictory evidence.
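The kind of conflict ConflictQA probes is easy to make concrete: a retrieved passage and the knowledge graph assert different tails for the same (head, relation) pair. The detector below is a minimal illustration of spotting such disagreements, not anything from the benchmark or the XoT framework.

```python
# Minimal conflict detector between two evidence sources: flag every
# (head, relation) pair where the text-derived fact and the knowledge
# graph disagree on the tail entity.

def find_conflicts(text_facts, kg_facts):
    kg_index = {(h, r): t for h, r, t in kg_facts}
    conflicts = []
    for h, r, t in text_facts:
        if (h, r) in kg_index and kg_index[(h, r)] != t:
            conflicts.append(((h, r), t, kg_index[(h, r)]))
    return conflicts

text_facts = [("Pluto", "class", "planet")]
kg_facts = [("Pluto", "class", "dwarf planet")]
print(find_conflicts(text_facts, kg_facts))
# [(('Pluto', 'class'), 'planet', 'dwarf planet')]
```

Surfacing the conflict explicitly, rather than letting the model silently pick one source, is the precondition for the explanation-based reconciliation the summary describes.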
AI · Bullish · arXiv (CS AI) · 2d ago · 6/10
🧠 Researchers introduce M³KG-RAG, a novel multimodal retrieval-augmented generation system that enhances large language models by integrating multi-hop knowledge graphs with audio-visual data. The approach improves reasoning depth and answer accuracy by filtering irrelevant information through a new grounding and pruning mechanism called GRASP.
AI · Neutral · arXiv (CS AI) · 2d ago · 6/10
🧠 Gypscie is a new cross-platform AI artifact management system that manages machine learning models across diverse infrastructure through a knowledge graph and a rule-based query language. The system streamlines the entire AI model lifecycle, from data preparation through deployment and monitoring, while enabling explainability through provenance tracking.
AI · Neutral · arXiv (CS AI) · 3d ago · 6/10
🧠 A new study comparing large language models against graph-based parsers for relation extraction demonstrates that smaller, specialized architectures significantly outperform LLMs when processing complex linguistic graphs with multiple relations. This finding challenges the prevailing assumption that larger language models are universally superior for natural language processing tasks.
AI · Neutral · arXiv (CS AI) · 3d ago · 6/10
🧠 Researchers propose GNN-as-Judge, a framework combining Large Language Models with Graph Neural Networks to improve learning on text-attributed graphs in low-resource settings. The approach uses collaborative pseudo-labeling and weakly-supervised fine-tuning to generate reliable labels while reducing noise, demonstrating significant performance gains when labeled data is scarce.
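The noise-reduction step in collaborative pseudo-labeling can be sketched as an agreement filter: a node keeps its pseudo-label only when the LLM's guess and the GNN's guess coincide. Both "models" below are stand-in dicts, and the filter is an illustration of the idea rather than the paper's training procedure.

```python
# Toy agreement filter for collaborative pseudo-labeling: a pseudo-label
# survives only when the LLM-side and GNN-side predictions agree, which
# filters out noisy labels before weakly-supervised fine-tuning.

llm_labels = {"n1": "sports", "n2": "politics", "n3": "sports"}
gnn_labels = {"n1": "sports", "n2": "tech", "n3": "sports"}

def agreed_pseudo_labels(a, b):
    # Keep only nodes both labelers assign the same class.
    return {n: a[n] for n in a if n in b and a[n] == b[n]}

print(agreed_pseudo_labels(llm_labels, gnn_labels))
# {'n1': 'sports', 'n3': 'sports'}
```

Node n2, where the two labelers disagree, is dropped rather than trained on, trading label coverage for reliability when labeled data is scarce.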