11,679 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.
AI Bullish · arXiv – CS AI · Mar 46/104
🧠 Researchers present a new framework for evaluating logical reasoning AI agents using an "assessor agent" that can issue tasks, enforce execution limits, and record structured failure types. Their auto-formalization agent achieved 86.70% accuracy on logical reasoning tasks, outperforming traditional chain-of-thought approaches by nearly 13 percentage points.
AI Bullish · arXiv – CS AI · Mar 46/102
🧠 Researchers propose NAR-CP, a new method to improve Large Language Models' performance in high-frequency decision-making tasks like UAV pursuit. The approach uses normalized action rewards and consistency policy optimization to address limitations in current LLM-based agents that struggle with rapid, precise numerical state updates.
AI Bullish · arXiv – CS AI · Mar 46/103
🧠 Researchers have developed TikZilla, a new AI model that generates high-quality scientific figures from text descriptions using TikZ code. The model uses a dataset four times larger than previous versions and combines supervised learning with reinforcement learning to match GPT-5's performance at a much smaller model size.
AI Bullish · arXiv – CS AI · Mar 47/104
🧠 Researchers introduce Retrieval-Augmented Robotics (RAR), a new paradigm enabling robots to actively retrieve and use external visual documentation to execute complex tasks. The system uses a Retrieve-Reason-Act loop where robots search unstructured visual manuals, align 2D diagrams with 3D objects, and synthesize executable plans for assembly tasks.
AI Bullish · arXiv – CS AI · Mar 47/102
🧠 Researchers developed RxnNano, a compact 0.5B-parameter AI model for chemical reaction prediction that outperforms much larger 7B+ parameter models by 23.5% through novel training techniques focused on chemical understanding rather than scale. The framework uses hierarchical curriculum learning and chemical consistency objectives to improve drug discovery and synthesis planning applications.
AI Bullish · arXiv – CS AI · Mar 46/104
🧠 Researchers have developed EvoSkill, an automated framework that enables AI agents to discover and refine domain-specific skills through iterative failure analysis. The system demonstrated significant performance improvements on specialized tasks, with accuracy gains of 7.3% on financial data analysis and 12.1% on search-augmented QA, while showing transferable capabilities across different domains.
AI Bullish · arXiv – CS AI · Mar 46/103
🧠 Researchers propose MA-CoNav, a multi-agent collaborative framework for robot navigation that uses a Master-Slave architecture to distribute cognitive tasks among specialized agents. The system outperforms existing Vision-Language Navigation methods by decoupling perception, planning, execution, and memory functions across different AI agents with hierarchical collaboration.
AI Neutral · arXiv – CS AI · Mar 47/104
🧠 Researchers introduce GraphSSR, a new framework that improves zero-shot graph learning by combining Large Language Models with adaptive subgraph denoising. The system addresses structural noise issues in existing methods through a dynamic 'Sample-Select-Reason' pipeline and reinforcement learning training.
AI Bullish · arXiv – CS AI · Mar 46/103
🧠 Researchers propose a new preconditioning method for flow matching and score-based diffusion models that improves training optimization by reshaping the geometry of intermediate distributions. The technique addresses optimization bias caused by ill-conditioned covariance matrices, preventing training from stagnating at suboptimal weights and enabling better model performance.
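As a minimal illustration of the optimization problem this work targets (not its method): gradient descent on a quadratic with an ill-conditioned covariance crawls along the flat direction, while preconditioning the gradient equalizes curvature. The matrix `C`, learning rate, and step count below are arbitrary choices for the demo.

```python
import numpy as np

# Quadratic loss 0.5 * w^T C w with ill-conditioned covariance C.
C = np.diag([100.0, 0.01])
P = np.diag(1.0 / np.sqrt(np.diag(C)))   # preconditioner C^{-1/2}

def descend(precondition, steps=200, lr=0.018):
    w = np.array([1.0, 1.0])
    for _ in range(steps):
        g = C @ w
        if precondition:
            g = P @ P @ g                # equivalent to C^{-1} g here (C diagonal)
        w = w - lr * g
    return np.linalg.norm(w)

print(f"plain GD residual:          {descend(False):.4f}")   # stalls near 1
print(f"preconditioned GD residual: {descend(True):.4f}")    # converges
```

The plain run shrinks the high-curvature coordinate almost instantly but barely moves the low-curvature one, which is the stagnation the summary describes.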
AI Bullish · arXiv – CS AI · Mar 47/103
🧠 Researchers developed Social-JEPA, showing that separate AI agents learning from different viewpoints of the same environment develop internal representations that are mathematically aligned through approximate linear isometry. This enables models trained on one agent to work on another without retraining, suggesting a path toward interoperable decentralized AI vision systems.
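The "approximate linear isometry" claim can be illustrated with orthogonal Procrustes alignment, a standard technique (not necessarily the paper's): given two agents' embeddings of the same scenes, solve for the orthogonal map that best aligns them. The synthetic latent data and noise level are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared latent structure: both "agents" see the same 200 scenes, but
# each applies its own (unknown) rotation plus small observation noise.
latent = rng.normal(size=(200, 16))
R_a = np.linalg.qr(rng.normal(size=(16, 16)))[0]
R_b = np.linalg.qr(rng.normal(size=(16, 16)))[0]
emb_a = latent @ R_a + 0.01 * rng.normal(size=(200, 16))
emb_b = latent @ R_b + 0.01 * rng.normal(size=(200, 16))

# Orthogonal Procrustes: the best isometry mapping A's embeddings
# onto B's is U V^T from the SVD of A^T B.
u, _, vt = np.linalg.svd(emb_a.T @ emb_b)
R = u @ vt

err = np.linalg.norm(emb_a @ R - emb_b) / np.linalg.norm(emb_b)
print(f"relative alignment error: {err:.4f}")  # small: near-perfect transfer
```

A small residual error here is what makes "use agent A's model on agent B without retraining" plausible.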
AI Bullish · arXiv – CS AI · Mar 46/102
🧠 Researchers introduce BehaveSim, a new method to measure algorithmic similarity by analyzing problem-solving behavior rather than code syntax. The approach enhances AI-driven algorithm design frameworks and enables systematic analysis of AI-generated algorithms through behavioral clustering.
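A toy sketch of behavior-based similarity (the general idea, not BehaveSim itself): fingerprint each algorithm by what it does on shared probe inputs, here (output, swap count), so two syntactically different bubble sorts score as identical while selection sort does not. The probe set and fingerprint features are arbitrary choices for the demo.

```python
def bubble_sort(xs):
    # Classic nested-loop bubble sort; swap count equals the inversion count.
    out, swaps = list(xs), 0
    for i in range(len(out) - 1):
        for j in range(len(out) - 1 - i):
            if out[j] > out[j + 1]:
                out[j], out[j + 1] = out[j + 1], out[j]
                swaps += 1
    return out, swaps

def bubble_sort_rewritten(xs):
    # Same algorithm, different surface syntax (while loops).
    out, swaps, n, i = list(xs), 0, len(xs), 0
    while i < n - 1:
        j = 0
        while j < n - 1 - i:
            if out[j] > out[j + 1]:
                out[j], out[j + 1] = out[j + 1], out[j]
                swaps += 1
            j += 1
        i += 1
    return out, swaps

def selection_sort(xs):
    # Different algorithm: at most n-1 swaps regardless of input.
    out, swaps = list(xs), 0
    for i in range(len(out)):
        m = min(range(i, len(out)), key=out.__getitem__)
        if m != i:
            out[i], out[m] = out[m], out[i]
            swaps += 1
    return out, swaps

PROBES = [[3, 2, 1], [5, 4, 3, 2, 1], [2, 1, 4, 3], [1, 2, 3]]

def behavior_similarity(fn_a, fn_b):
    # Fraction of probes on which the two algorithms behave identically.
    fa, fb = [fn_a(p) for p in PROBES], [fn_b(p) for p in PROBES]
    return sum(a == b for a, b in zip(fa, fb)) / len(PROBES)

print(behavior_similarity(bubble_sort, bubble_sort_rewritten))  # 1.0
print(behavior_similarity(bubble_sort, selection_sort))         # 0.5
```

Syntax-based similarity would score the two bubble sorts as different; behavioral fingerprints group them together, which is the clustering signal the summary describes.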
AI Bullish · arXiv – CS AI · Mar 46/106
🧠 SuperLocalMemory is a new privacy-preserving memory system for multi-agent AI that defends against memory poisoning attacks through local-first architecture and Bayesian trust scoring. The open-source system eliminates cloud dependencies while providing personalized retrieval through adaptive learning-to-rank, demonstrating strong performance metrics including 10.6ms search latency and 72% trust degradation for sleeper attacks.
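The system's exact trust model isn't detailed in the summary, but one standard Bayesian formulation is a Beta-Bernoulli score per memory source, which naturally degrades when a "sleeper" source turns malicious:

```python
class TrustScore:
    """Beta-Bernoulli trust: posterior mean alpha / (alpha + beta)."""

    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha, self.beta = alpha, beta

    def update(self, consistent: bool):
        # Each memory write is checked; consistent writes raise trust,
        # inconsistent (suspected poisoned) writes lower it.
        if consistent:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def trust(self):
        return self.alpha / (self.alpha + self.beta)

sleeper = TrustScore()
for _ in range(20):          # behaves well to build up trust
    sleeper.update(True)
before = sleeper.trust
for _ in range(30):          # then injects poisoned records
    sleeper.update(False)
after = sleeper.trust
print(f"trust before attack: {before:.2f}, after: {after:.2f}")
```

The key property: the sleeper's good-behavior phase cannot lock in high trust forever, because every inconsistent write keeps shifting the posterior down.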
AI Neutral · arXiv – CS AI · Mar 46/103
🧠 Researchers released the ERI benchmark, a comprehensive dataset spanning 9 engineering fields and 55 subdomains to evaluate large language models' engineering capabilities. The benchmark tested 7 LLMs across 57,750 records, revealing a clear three-tier performance structure with frontier models like GPT-5 and Claude Sonnet 4 significantly outperforming mid-tier and smaller models.
AI Neutral · arXiv – CS AI · Mar 47/105
🧠 Researchers introduce Federated Inference (FI), a new collaborative paradigm where independently trained AI models can work together at inference time without sharing data or model parameters. The study identifies key requirements including privacy preservation and performance gains, while highlighting system-level challenges that differ from traditional federated learning approaches.
AI Bullish · arXiv – CS AI · Mar 46/103
🧠 Researchers developed LLM-MLFFN, a new framework combining large language models with multi-level feature fusion to classify autonomous vehicle driving behaviors. The system achieves over 94% accuracy on the Waymo dataset by integrating numerical driving data with semantic features extracted through LLMs.
AI Neutral · arXiv – CS AI · Mar 47/102
🧠 Researchers audited the MedCalc-Bench benchmark for evaluating AI models on clinical calculator tasks, finding over 20 errors in the dataset and showing that simple 'open-book' prompting achieves 81-85% accuracy versus the previous best of 74%. The study suggests the benchmark measures formula memorization rather than clinical reasoning, challenging how AI medical capabilities are evaluated.
AI Neutral · arXiv – CS AI · Mar 47/104
🧠 Researchers introduced NeuroCognition, a new benchmark for evaluating LLMs based on neuropsychological tests, revealing that while models show unified capability across tasks, they struggle with foundational cognitive abilities. The study found LLMs perform well on text but degrade with images and complexity, suggesting current models lack core adaptive cognition compared to human intelligence.
AI Bullish · arXiv – CS AI · Mar 46/103
🧠 Researchers introduce CHaRS (Concept Heterogeneity-aware Representation Steering), a new method for controlling large language model behavior that uses optimal transport theory to create context-dependent steering rather than global directions. The approach models representations as Gaussian mixture models and derives input-dependent steering maps, showing improved behavioral control over existing methods.
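To illustrate context-dependent versus global steering (a simplified stand-in, without the paper's optimal-transport machinery): model hidden states with a two-component Gaussian mixture and blend per-component steering directions by each input's posterior responsibilities. All means, directions, and strengths below are made up for the demo.

```python
import numpy as np

# Two "concept modes" in representation space (a 2-component Gaussian
# mixture), each paired with its own steering direction.
means = np.array([[0.0, 0.0], [6.0, 6.0]])
directions = np.array([[1.0, 0.0], [0.0, 1.0]])

def responsibilities(h, var=1.0):
    # Posterior over mixture components for hidden state h
    # (equal priors, isotropic Gaussians).
    d2 = ((h - means) ** 2).sum(axis=1)
    w = np.exp(-0.5 * d2 / var)
    return w / w.sum()

def steer(h, strength=2.0):
    # Context-dependent steering: blend the per-component directions by
    # this state's responsibilities, instead of adding one global vector.
    return h + strength * (responsibilities(h) @ directions)

h_near_0 = np.array([0.2, -0.1])
h_near_1 = np.array([5.8, 6.3])
print(steer(h_near_0))  # pushed along [1, 0]: ~[2.2, -0.1]
print(steer(h_near_1))  # pushed along [0, 1]: ~[5.8, 8.3]
```

A single global direction would push both states the same way; here each state gets the direction appropriate to its mixture component.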
AI Neutral · arXiv – CS AI · Mar 46/102
🧠 Researchers have released LiveAgentBench, a comprehensive benchmark featuring 104 real-world scenarios to evaluate AI agent performance across practical applications. The benchmark uses a novel Social Perception-Driven Data Generation method to ensure tasks reflect actual user requirements and includes 374 total tasks for testing various AI models and frameworks.
AI Neutral · arXiv – CS AI · Mar 46/104
🧠 Researchers analyzed memory systems in LLM agents and found that retrieval methods are more critical than write strategies for performance. Simple raw chunk storage matched expensive alternatives, suggesting current memory pipelines may discard useful context that retrieval systems cannot compensate for.
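The "raw chunks plus retrieval" baseline the study found competitive can be sketched in a few lines, with naive word-overlap ranking standing in for a real retriever; the memory contents below are invented for the demo.

```python
class RawChunkMemory:
    """Write strategy: store raw text chunks verbatim (no summarization).
    Retrieval: rank chunks by word overlap with the query."""

    def __init__(self):
        self.chunks = []

    def write(self, text: str):
        # Nothing is compressed or rewritten, so no context is discarded.
        self.chunks.append(text)

    def retrieve(self, query: str, k: int = 2):
        q = set(query.lower().split())
        ranked = sorted(
            self.chunks,
            key=lambda c: len(q & set(c.lower().split())),
            reverse=True,
        )
        return ranked[:k]

mem = RawChunkMemory()
mem.write("user prefers metric units for all measurements")
mem.write("meeting with the design team moved to friday")
mem.write("user's favorite language is Rust")

print(mem.retrieve("what units does the user prefer", k=1))
```

The study's point maps onto this sketch directly: a fancier `write` (summarize, deduplicate) can only lose information that `retrieve` might later need, so retrieval quality is where the leverage is.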
AI Bullish · arXiv – CS AI · Mar 46/103
🧠 Researchers propose MEBM-Speech, a neural decoder that detects speech activity from brain signals using magnetoencephalography (MEG). The system achieved an F1 score of 89.3% on benchmark tests and could advance brain-computer interfaces for cognitive neuroscience and clinical applications.
AI Bullish · arXiv – CS AI · Mar 47/102
🧠 Researchers propose SUN (Shared Use of Next-token Prediction), a novel approach for multi-LLM serving that enables cross-model sharing of decode execution by decomposing transformers into separate prefill and decode modules. The system achieves up to 2.0x throughput improvement per GPU while maintaining accuracy comparable to full fine-tuning, with a quantized version (QSUN) providing an additional 45% speedup.
AI Bullish · arXiv – CS AI · Mar 47/105
🧠 Researchers introduce NeuroProlog, a neurosymbolic framework that improves mathematical reasoning in Large Language Models by converting math problems into executable Prolog programs. The multi-task 'Cocktail' training approach shows significant accuracy improvements of 3-5% across different model sizes, with larger models demonstrating better error correction capabilities.
AI Bullish · arXiv – CS AI · Mar 47/104
🧠 Researchers introduce PRISM, a new AI inference algorithm that uses Process Reward Models to guide deep reasoning systems. The method significantly improves performance on mathematical and scientific benchmarks by treating candidate solutions as particles in an energy landscape and using score-guided refinement to concentrate on higher-quality reasoning paths.
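A one-dimensional toy of the particle view (illustrative only, not PRISM's algorithm): keep a population of candidates, weight them with a stand-in reward model, resample, and locally perturb, so the population concentrates on high-reward regions of the landscape.

```python
import math
import random

random.seed(0)

def reward(x, target=3.0):
    # Stand-in for a process reward model scoring a candidate solution;
    # the true optimum sits at x = 3.
    return -(x - target) ** 2

def resample_refine(particles, temp=1.0, step=0.3):
    # Score-guided resampling: weight particles by exp(reward / temp),
    # draw a new population with replacement, then locally perturb
    # the survivors (the "refinement" step).
    weights = [math.exp(reward(p) / temp) for p in particles]
    chosen = random.choices(particles, weights=weights, k=len(particles))
    return [p + random.gauss(0, step) for p in chosen]

particles = [random.uniform(-10, 10) for _ in range(64)]
for _ in range(30):
    particles = resample_refine(particles)

mean = sum(particles) / len(particles)
print(f"population mean after refinement: {mean:.2f}")  # near 3.0
```

In the real setting the "particles" are partial reasoning traces and the reward is a learned process reward model, but the concentrate-then-refine dynamic is the same.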
AI Bullish · arXiv – CS AI · Mar 46/104
🧠 Researchers introduce AgentAssay, the first framework for regression testing AI agent workflows, achieving 78-100% cost reduction while maintaining statistical guarantees. The system uses behavioral fingerprinting and stochastic testing methods to detect regressions in autonomous AI agents across multiple models including GPT-5.2, Claude Sonnet 4.6, and others.