11 articles tagged with #symbolic-reasoning. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bullish · arXiv – CS AI · Mar 4 · 7/10
🧠 Researchers introduce NeuroProlog, a neurosymbolic framework that improves mathematical reasoning in Large Language Models by converting math problems into executable Prolog programs. The multi-task 'Cocktail' training approach shows significant accuracy improvements of 3-5% across different model sizes, with larger models demonstrating better error correction capabilities.
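A minimal sketch of the pipeline this summary describes. The word problem and the "generated" Prolog program are illustrative stand-ins for LLM output (not from the paper), and the clause's arithmetic is mirrored in plain Python since no Prolog engine is assumed:

```python
# Hypothetical NeuroProlog-style pipeline: an LLM would emit a Prolog
# program for a word problem; here that program is hard-coded as a string
# and its arithmetic is mirrored in Python for execution.
problem = "Alice has 3 apples and buys 4 more. How many apples does she have?"

# Program an LLM might generate (illustrative only):
prolog_program = """
apples(alice, Initial, Bought, Total) :- Total is Initial + Bought.
?- apples(alice, 3, 4, Total).
"""

def solve(initial: int, bought: int) -> int:
    """Mirror of the generated clause's arithmetic (Total is Initial + Bought)."""
    return initial + bought

print(solve(3, 4))  # → 7
```

The point of the detour through an executable program is that the final answer comes from deterministic evaluation, not from the model's token predictions.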
AI · Neutral · arXiv – CS AI · 6d ago · 6/10
🧠 Researchers present ProofSketcher, a hybrid system combining large language models with lightweight proof verification to address mathematical reasoning errors in AI-generated proofs. The approach bridges the gap between LLM efficiency and the formal rigor of interactive theorem provers like Lean and Coq, enabling more reliable automated reasoning without requiring full formalization.
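A toy illustration of what "lightweight verification" can mean, far short of full Lean/Coq formalization: each step of an arithmetic derivation is re-checked by evaluating both sides. The checker and the proof steps are made up for illustration:

```python
# Hypothetical lightweight proof checker: accept a derivation step only if
# both sides evaluate to the same number. Input is restricted to arithmetic
# characters before eval() as a basic safety measure.
def check_step(lhs: str, rhs: str) -> bool:
    allowed = set("0123456789+-*/(). ")
    assert set(lhs) <= allowed and set(rhs) <= allowed
    return eval(lhs) == eval(rhs)

# An LLM-generated derivation, as (before, after) step pairs (illustrative):
proof = [("2*(3+4)", "2*7"), ("2*7", "14")]
print(all(check_step(l, r) for l, r in proof))  # → True
```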
AI · Neutral · arXiv – CS AI · 6d ago · 6/10
🧠 Researchers introduce a declarative runtime protocol that externalizes agent state to measure how much of an LLM-based agent's competence actually derives from the language model versus explicit structural components. Testing on Collaborative Battleship, they find that explicit world-model planning drives most performance gains, while sparse LLM-based revision at 4.3% of turns yields minimal and sometimes negative returns.
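A sketch of what externalized state plus explicit planning can look like in a Battleship-like setting. The grid size, belief representation, and update rule are all illustrative assumptions, not the paper's protocol:

```python
# Hypothetical externalized agent state: the world model is a plain
# probability grid over cells, and the planner is explicit code, with no
# language model in the loop.
GRID = 5

def make_belief():
    """Uniform belief over all cells."""
    return {(r, c): 1.0 / (GRID * GRID) for r in range(GRID) for c in range(GRID)}

def plan_shot(belief):
    """Explicit world-model planning: fire at the highest-probability cell."""
    return max(belief, key=belief.get)

def update(belief, cell, hit):
    """Zero out the targeted cell; boost its neighbours after a hit (illustrative)."""
    belief = dict(belief)
    belief[cell] = 0.0
    if hit:
        for (r, c) in belief:
            if abs(r - cell[0]) + abs(c - cell[1]) == 1:
                belief[(r, c)] *= 4.0
    total = sum(belief.values()) or 1.0
    return {k: v / total for k, v in belief.items()}

belief = make_belief()
shot = plan_shot(belief)
belief = update(belief, shot, hit=True)
print(plan_shot(belief))  # next shot is a neighbour of the first hit
```

The paper's finding, in these terms: most of the score comes from code like `plan_shot`/`update`, and letting an LLM occasionally rewrite the belief helps little or hurts.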
AI · Neutral · arXiv – CS AI · 6d ago · 6/10
🧠 Researchers introduce OneLife, a framework for learning symbolic world models from minimal unguided exploration in complex, stochastic environments. The approach uses conditionally-activated programmatic laws within a probabilistic framework and demonstrates superior performance on 16 of 23 test scenarios, advancing autonomous construction of world models for unknown environments.
AI · Neutral · arXiv – CS AI · Mar 26 · 6/10
🧠 Researchers propose DUPLEX, a dual-system architecture that restricts LLMs to information extraction rather than end-to-end planning, using symbolic planners for logical synthesis. The system demonstrates superior performance across 12 planning domains by leveraging LLMs for semantic grounding while avoiding their hallucination tendencies in complex reasoning tasks.
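A minimal sketch of the extract-then-plan split. The stubbed extractor stands in for the LLM, and the toy logistics domain and all names are made up; the planning itself is ordinary BFS:

```python
from collections import deque

# Hypothetical DUPLEX-style split: a (stubbed) LLM pass turns text into
# symbolic facts, and a plain BFS planner does all the logical synthesis.
def extract_facts(text: str) -> dict:
    """Stand-in for the LLM: pull start, goal, and road links out of text."""
    return {"start": "depot", "goal": "harbor",
            "roads": {"depot": ["market"], "market": ["harbor"], "harbor": []}}

def plan(facts: dict) -> list:
    """Symbolic planner: shortest route from start to goal via BFS."""
    frontier = deque([[facts["start"]]])
    seen = {facts["start"]}
    while frontier:
        path = frontier.popleft()
        if path[-1] == facts["goal"]:
            return path
        for nxt in facts["roads"][path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return []

facts = extract_facts("Drive the truck from the depot to the harbor.")
print(plan(facts))  # → ['depot', 'market', 'harbor']
```

The division of labour is the point: the model only grounds language into facts, so any plan the BFS returns is correct by construction with respect to those facts.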
AI · Neutral · arXiv – CS AI · Mar 26 · 6/10
🧠 Researchers investigated whether Vision-Language Models (VLMs) can reason robustly under distribution shifts and found that fine-tuned VLMs achieve high accuracy in-distribution but fail to generalize. They propose VLC, a neuro-symbolic method combining VLM-based concept recognition with circuit-based symbolic reasoning that demonstrates consistent performance under covariate shifts.
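A sketch of the neuro-symbolic split in miniature: perception yields concept probabilities, and a fixed logical circuit (AND as product, OR as noisy-OR) does the reasoning. The concepts, the rule, and the probabilities are invented for illustration, not taken from VLC:

```python
# Hypothetical probabilistic circuit over VLM concept scores.
def noisy_or(ps):
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

def circuit(concepts: dict) -> float:
    """Illustrative rule: vehicle = (has_wheels AND has_engine) OR is_boat."""
    land = concepts["has_wheels"] * concepts["has_engine"]
    return noisy_or([land, concepts["is_boat"]])

# Scores a VLM-style recogniser might emit for one image (made up):
probs = {"has_wheels": 0.9, "has_engine": 0.8, "is_boat": 0.05}
print(round(circuit(probs), 3))  # → 0.734
```

Because the circuit is fixed symbolic structure, a covariate shift can only degrade the concept scores, not the reasoning that combines them.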
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10
🧠 Researchers propose the Lattice Representation Hypothesis, a new framework showing how large language models encode symbolic reasoning through geometric structures. The theory unifies continuous neural representations with formal logic by demonstrating that LLM embeddings naturally form concept lattices that enable symbolic operations through geometric intersections and unions.
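For intuition on what a concept lattice's two operations are, here is the discrete version with concepts as feature sets; the hypothesis maps these same meet/join operations onto geometric structure in embedding space. The concepts and features are illustrative, not the paper's construction:

```python
# Concept lattice in miniature: meet = feature intersection,
# join = feature union (illustrative feature sets).
dog    = frozenset({"animal", "mammal", "pet"})
salmon = frozenset({"animal", "fish"})

meet = dog & salmon   # what both concepts share
join = dog | salmon   # the combined concept's features

print(sorted(meet))  # → ['animal']
print(sorted(join))  # → ['animal', 'fish', 'mammal', 'pet']
```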
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10
🧠 Researchers introduce a novel multi-agent AI architecture that integrates Theory of Mind, internal beliefs, and symbolic solvers to improve collaborative decision-making in LLM-based systems. The study evaluates this architecture across different language models in resource allocation scenarios, revealing complex interactions between LLM capabilities and cognitive mechanisms.
AI · Neutral · arXiv – CS AI · Mar 5 · 4/10
🧠 Researchers introduce NEURONA, a neuro-symbolic framework that combines symbolic reasoning with fMRI brain data to decode neural activity patterns. The system demonstrates improved accuracy in understanding how the brain processes visual concepts by incorporating structural priors and compositional reasoning.
AI · Neutral · arXiv – CS AI · Mar 3 · 4/10
🧠 Researchers introduce Discrete World Models via Regularization (DWMR), a new method for learning Boolean representations of environments without requiring reconstruction or contrastive learning. The approach uses specialized regularizers to maximize entropy and independence while enforcing locality constraints, showing superior performance on benchmarks with combinatorial structure.
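One plausible reading of "maximize entropy and independence" on a batch of Bernoulli code probabilities, sketched below; the exact loss formulas are assumptions, not the paper's:

```python
import numpy as np

# Hypothetical DWMR-style regularisers on code probabilities (batch x bits).
def entropy_term(p):
    """Mean binary entropy per bit; maximising it pushes bit usage toward 0.5."""
    eps = 1e-8
    return float(np.mean(-p * np.log(p + eps) - (1 - p) * np.log(1 - p + eps)))

def independence_penalty(p):
    """Mean squared off-diagonal covariance between bits; zero when uncorrelated."""
    cov = np.cov(p, rowvar=False)
    off = cov - np.diag(np.diag(cov))
    return float(np.mean(off ** 2))

# Two bits that each use both values and vary independently (illustrative):
codes = np.array([[0.9, 0.1], [0.1, 0.9], [0.9, 0.9], [0.1, 0.1]])
loss = -entropy_term(codes) + independence_penalty(codes)
print(round(loss, 4))
```

Minimising such a loss rewards codes whose bits are both informative and decorrelated, without ever reconstructing the observation.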
AI · Neutral · arXiv – CS AI · Mar 2 · 4/10
🧠 Researchers introduce CSyMR-Bench, a new benchmark for evaluating AI systems' ability to perform complex music information retrieval tasks from symbolic notation. The benchmark includes 126 multiple-choice questions requiring compositional reasoning, and demonstrates that tool-augmented AI approaches outperform language model-only methods by 5-7%.
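A sketch of what tool augmentation means in this setting: instead of asking a language model to name a musical interval, the agent calls a small deterministic tool. The pitch encoding, interval table, and question are made up, not from the benchmark:

```python
# Hypothetical music-theory tool for symbolic notation QA.
SEMITONES = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}
INTERVALS = {3: "minor third", 4: "major third", 7: "perfect fifth"}

def interval_tool(low: str, high: str) -> str:
    """Name the interval between two natural pitches in the same octave."""
    gap = (SEMITONES[high] - SEMITONES[low]) % 12
    return INTERVALS.get(gap, f"{gap} semitones")

# A multiple-choice question an agent might route to the tool (illustrative):
question = "What interval is formed by C and E?"
choices = ["minor third", "major third", "perfect fifth"]
print(interval_tool("C", "E"))  # → major third
```

The reported 5-7% gain is consistent with this pattern: the model picks which tool to call, and the tool supplies the exact symbolic computation.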