y0news

#graph-theory News & Analysis

9 articles tagged with #graph-theory. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Neutral · arXiv – CS AI · Apr 7 · 7/10
🧠

When Do Hallucinations Arise? A Graph Perspective on the Evolution of Path Reuse and Path Compression

A new arXiv preprint identifies two key mechanisms behind reasoning hallucinations in large language models: Path Reuse and Path Compression. The study models next-token prediction as search over a graph, showing how memorized knowledge can override contextual constraints and how frequently traversed reasoning paths become shortcuts that lead to unsupported conclusions.
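The Path Reuse failure mode can be illustrated with a toy sketch (a hypothetical model, not the paper's): if next-token prediction is viewed as greedy search over a graph weighted by traversal counts, a heavily memorized edge wins even when the context permits only a different successor.

```python
from collections import defaultdict

class ToyPathModel:
    """Next-token prediction as greedy search over edge-traversal counts."""
    def __init__(self):
        self.counts = defaultdict(int)  # (src, dst) -> times traversed

    def observe(self, path):
        for a, b in zip(path, path[1:]):
            self.counts[(a, b)] += 1

    def next_node(self, node, allowed_by_context):
        # Greedy selection by reuse count. `allowed_by_context` is
        # deliberately ignored: a dominant memorized edge is followed even
        # when context permits only other successors -- the "Path Reuse"
        # failure mode the summary describes.
        out = [(dst, c) for (src, dst), c in self.counts.items() if src == node]
        return max(out, key=lambda x: x[1])[0] if out else None

model = ToyPathModel()
for _ in range(10):
    model.observe(["Paris", "France"])   # heavily memorized association
model.observe(["Paris", "Texas"])        # the in-context fact
print(model.next_node("Paris", allowed_by_context={"Texas"}))  # -> France
```

The memorized edge (count 10) beats the context-supported one (count 1), producing an unsupported conclusion.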

AI · Neutral · arXiv – CS AI · Mar 3 · 7/10
🧠

DAG-Math: Graph-of-Thought Guided Mathematical Reasoning in LLMs

Researchers introduce DAG-Math, a new framework for evaluating mathematical reasoning in Large Language Models that models Chain-of-Thought as rule-based processes over directed acyclic graphs. The framework includes a 'logical closeness' metric that reveals significant differences in reasoning quality between LLM families, even when final answer accuracy appears comparable.
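The 'logical closeness' idea can be sketched with a toy metric (the scoring rule below is a hypothetical simplification, not the paper's definition): treat each reasoning step as a node in a DAG and count the fraction of steps whose premises were actually derived earlier in the transcript.

```python
def closeness(steps):
    """Toy closeness score for a Chain-of-Thought transcript.

    steps: list of (conclusion, premises) pairs in transcript order.
    A step is rule-grounded if every premise is either given or was
    derived by an earlier step; the score is the grounded fraction.
    """
    derived, valid = set(), 0
    for conclusion, premises in steps:
        if all(p in derived or p == "given" for p in premises):
            valid += 1
        derived.add(conclusion)
    return valid / len(steps)

proof = [
    ("x = 2",    ["given"]),
    ("x^2 = 4",  ["x = 2"]),
    ("y = 9",    ["z = 3"]),   # premise never derived: an unsupported jump
]
print(round(closeness(proof), 3))  # -> 0.667
```

Two transcripts can end at the same final answer yet differ sharply on such a score, which is the kind of gap between LLM families the summary describes.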

AI · Neutral · arXiv – CS AI · Mar 3 · 6/10
🧠

Graph-theoretic Agreement Framework for Multi-agent LLM Systems

Researchers propose a graph-theoretic framework for securing multi-agent LLM systems by analyzing consensus in signed, directed interaction networks. The study addresses vulnerabilities in distributed AI architectures where hidden system prompts can act as 'topological Trojan horses' that destabilize cooperative consensus among AI agents.
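A classical building block for this kind of analysis is structural balance in signed networks, which governs when bipartite consensus is possible (Altafini-style results). A minimal check, assuming undirected ±1 edges rather than the paper's directed setting: two-color the nodes so that positive edges keep the sign and negative edges flip it; any contradiction means the network is unbalanced.

```python
from collections import deque

def structurally_balanced(n, signed_edges):
    """BFS two-coloring test for structural balance of a signed graph.

    signed_edges: (u, v, s) triples with s = +1 (cooperative) or
    -1 (antagonistic). Balanced iff nodes split into two camps with all
    positive edges inside camps and all negative edges between them.
    """
    adj = [[] for _ in range(n)]
    for u, v, s in signed_edges:
        adj[u].append((v, s))
        adj[v].append((u, s))
    side = [0] * n                       # 0 = unvisited, else +1 / -1
    for start in range(n):
        if side[start]:
            continue
        side[start] = 1
        q = deque([start])
        while q:
            u = q.popleft()
            for v, s in adj[u]:
                want = side[u] * s
                if side[v] == 0:
                    side[v] = want
                    q.append(v)
                elif side[v] != want:
                    return False         # frustrated (unbalanced) cycle
    return True

print(structurally_balanced(3, [(0, 1, -1), (1, 2, -1), (0, 2, 1)]))  # -> True
print(structurally_balanced(3, [(0, 1, 1), (1, 2, 1), (0, 2, -1)]))   # -> False
```

An adversarial prompt that flips a single edge sign can push a balanced network into the frustrated case, which is one intuition for the 'topological Trojan horse' phrasing.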

AI · Neutral · arXiv – CS AI · Mar 5 · 4/10
🧠

Graph Hopfield Networks: Energy-Based Node Classification with Associative Memory

Researchers introduce Graph Hopfield Networks, a new neural network architecture that combines associative memory with graph-based learning for node classification tasks. The method shows improvements of up to 5 percentage points on robustness tests and 2 percentage points on citation networks, outperforming standard baselines across multiple graph types.
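The associative-memory flavor can be sketched with classical Hopfield/Ising dynamics on a graph (a generic energy-descent baseline, not the paper's architecture): clamp the labeled nodes and let each unlabeled node align with the sign of its neighborhood field, which descends the energy E = -Σ s_u·s_v over edges.

```python
def hopfield_label(adj, seeds, sweeps=5):
    """Binary node classification by energy descent.

    adj: dict node -> neighbor list; seeds: dict node -> +1/-1 labels.
    Deterministic sweeps: each unlabeled node takes the sign of the sum
    of its neighbors' states (0 = still undecided keeps its value).
    """
    s = {v: seeds.get(v, 0) for v in adj}
    for _ in range(sweeps):
        for v in sorted(adj):
            if v in seeds:
                continue                     # clamp the stored labels
            field = sum(s[u] for u in adj[v])
            if field:
                s[v] = 1 if field > 0 else -1
    return s

adj = {
    0: [1, 2], 1: [0, 2], 2: [0, 1, 3],     # cluster A
    3: [2, 4, 5], 4: [3, 5], 5: [3, 4],     # cluster B
}
print(hopfield_label(adj, {0: 1, 3: -1}))
# -> {0: 1, 1: 1, 2: 1, 3: -1, 4: -1, 5: -1}
```

Each cluster settles to its seed's label, the basic associative-memory behavior that a Graph Hopfield Network generalizes with learned energies.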

AI · Neutral · Google Research Blog · May 13 · 4/10
🧠

Differential privacy on trust graphs

This article covers differential privacy techniques applied to trust graphs. Filed under algorithms and theory, it offers a technical treatment of privacy-preserving methods in graph-based trust systems.
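As background, the standard building block such work relies on is the Laplace mechanism; here is a minimal sketch for an edge-count query on a trust graph (generic differential privacy, not the article's specific construction).

```python
import random

def laplace_noise(scale, rng):
    # Laplace(0, scale) sampled as the difference of two iid exponentials
    return scale * (rng.expovariate(1.0) - rng.expovariate(1.0))

def private_edge_count(edges, epsilon, rng):
    # Edge-level DP: adding or removing one trust edge changes the count
    # by exactly 1, so the query's sensitivity is 1 and Laplace noise with
    # scale 1/epsilon gives epsilon-differential privacy.
    return len(edges) + laplace_noise(1.0 / epsilon, rng)

trust_edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
print(private_edge_count(trust_edges, epsilon=0.5, rng=random.Random(7)))
```

Smaller epsilon means more noise; the interesting question in the trust-graph setting is how privacy guarantees compose along chains of trust, which is where such papers go beyond this baseline.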

AI · Neutral · arXiv – CS AI · Mar 3 · 4/10
🧠

Heterophily-Agnostic Hypergraph Neural Networks with Riemannian Local Exchanger

Researchers propose HealHGNN, a novel Hypergraph Neural Network that addresses limitations in traditional networks when dealing with heterophilic hypergraphs. The system uses Riemannian geometry and adaptive local heat exchangers to enable better long-range dependency modeling with linear complexity.
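For context, a plain two-stage hypergraph message-passing step (node → hyperedge → node averaging) is the kind of baseline such work extends; this sketch is generic and does not implement HealHGNN's Riemannian exchanger.

```python
def hypergraph_step(features, hyperedges):
    """One round of node -> hyperedge -> node mean aggregation.

    features: list of per-node feature vectors (lists of floats).
    hyperedges: list of node-index lists; each hyperedge may join
    any number of nodes, unlike an ordinary graph edge.
    """
    dim = len(features[0])
    # Stage 1: each hyperedge averages the features of its members.
    edge_msgs = [[sum(features[v][d] for v in e) / len(e) for d in range(dim)]
                 for e in hyperedges]
    # Stage 2: each node averages the messages of its incident hyperedges.
    out = []
    for v in range(len(features)):
        incident = [m for e, m in zip(hyperedges, edge_msgs) if v in e]
        if not incident:
            out.append(features[v][:])   # isolated node keeps its features
        else:
            out.append([sum(m[d] for m in incident) / len(incident)
                        for d in range(dim)])
    return out

feats = [[1.0], [3.0], [5.0], [7.0]]
print(hypergraph_step(feats, [[0, 1, 2], [2, 3]]))
# -> [[3.0], [3.0], [4.5], [6.0]]
```

Uniform averaging like this is exactly what struggles on heterophilic hypergraphs, where members of a hyperedge need not be similar, motivating the adaptive scheme the summary describes.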

AI · Neutral · arXiv – CS AI · Mar 3 · 4/10
🧠

Learning Shortest Paths with Generative Flow Networks

Researchers present a novel framework that uses Generative Flow Networks (GFlowNets) to solve shortest-path problems in graphs. The authors prove that minimizing total flow forces GFlowNets to traverse only shortest paths, and demonstrate competitive performance on pathfinding tasks, including solving Rubik's Cubes with smaller search budgets than existing approaches.
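The flow argument can be illustrated on a toy graph (an enumeration-based sketch, not a GFlowNet): routing one unit of flow along a single path places unit flow on each of its edges, so the total edge flow equals the path length and minimizing it selects a shortest path.

```python
def all_simple_paths(adj, src, dst, path=None):
    """Yield every simple path from src to dst in a directed graph."""
    path = path or [src]
    if src == dst:
        yield path
        return
    for nxt in adj[src]:
        if nxt not in path:
            yield from all_simple_paths(adj, nxt, dst, path + [nxt])

def min_total_flow_path(adj, src, dst):
    # Total flow of a unit-flow path = number of edges it uses,
    # so the flow-minimizing path is a shortest path.
    return min(all_simple_paths(adj, src, dst), key=lambda p: len(p) - 1)

adj = {"s": ["a", "b"], "a": ["t"], "b": ["c"], "c": ["t"], "t": []}
print(min_total_flow_path(adj, "s", "t"))  # -> ['s', 'a', 't']
```

A GFlowNet replaces this brute-force enumeration with a learned sampler over trajectories, which is what makes the approach scale to state spaces like the Rubik's Cube.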

AI · Neutral · arXiv – CS AI · Mar 3 · 4/10
🧠

From Variance to Invariance: Qualitative Content Analysis for Narrative Graph Annotation

Researchers developed a new framework for annotating economic narratives in news using directed acyclic graphs to represent causal relationships between events. The study focused on inflation narratives and introduced quality measures to reduce annotation errors, finding that lenient metrics overestimate reliability while locally-constrained representations improve consistency.

AI · Neutral · arXiv – CS AI · Mar 2 · 4/10
🧠

Embracing Discrete Search: A Reasonable Approach to Causal Structure Learning

Researchers introduce FLOP, a new causal discovery algorithm for linear models that significantly reduces computation time through fast parent selection and Cholesky-based score updates. The algorithm achieves near-perfect accuracy in standard benchmarks and makes discrete search approaches viable for causal structure learning.
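The Cholesky-based scoring idea can be sketched as follows (a generic least-squares score for a candidate parent set, not the FLOP implementation): to score a child variable against candidate parents in a linear-Gaussian model, solve the normal equations XᵀXβ = Xᵀy via a Cholesky factorization and score by the residual sum of squares.

```python
def cholesky(A):
    """Lower-triangular L with L L^T = A, for a small SPD matrix."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = ((A[i][i] - s) ** 0.5 if i == j
                       else (A[i][j] - s) / L[j][j])
    return L

def solve_chol(L, b):
    """Solve (L L^T) x = b by forward then backward substitution."""
    n = len(L)
    y = [0.0] * n
    for i in range(n):                       # forward solve L y = b
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):             # back solve L^T x = y
        x[i] = (y[i] - sum(L[k][i] * x[k]
                           for k in range(i + 1, n))) / L[i][i]
    return x

def rss(X, y):
    """Residual sum of squares of the OLS fit y ~ X (columns = parents)."""
    n, p = len(X), len(X[0])
    XtX = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(p)]
           for i in range(p)]
    Xty = [sum(X[r][i] * y[r] for r in range(n)) for i in range(p)]
    beta = solve_chol(cholesky(XtX), Xty)
    return sum((y[r] - sum(X[r][i] * beta[i] for i in range(p))) ** 2
               for r in range(n))

# y = 2*x with no noise: the true parent explains y exactly (RSS ~ 0)
X = [[1.0], [2.0], [3.0]]
y = [2.0, 4.0, 6.0]
print(round(rss(X, y), 6))  # -> 0.0
```

Discrete search over parent sets evaluates many such scores; the speedups the summary mentions come from updating the factorization incrementally as parents are added or removed rather than refactorizing from scratch.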
