11,678 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.
AIBullish · arXiv – CS AI · Mar 46/102
🧠Researchers developed SAE-based Transferability Score (STS), a new metric using sparse autoencoders to predict how well fine-tuned large language models will perform across different domains without requiring actual training. The method achieves correlation coefficients above 0.7 with actual performance changes and provides interpretable insights into model adaptation.
AIBullish · arXiv – CS AI · Mar 46/104
🧠Researchers have developed a framework that allows neural network verification tools to accept natural language specifications instead of low-level technical constraints. The system automatically translates human-readable requirements into formal verification queries, significantly expanding the practical applicability of neural network verification across diverse domains.
AIBullish · arXiv – CS AI · Mar 47/103
🧠Researchers propose a framework for sustainable AI self-evolution through triadic roles (Proposer, Solver, Verifier) that ensures learnable information gain across iterations. The study identifies three key system designs to prevent the common plateau effect in self-play AI systems: asymmetric co-evolution, capacity growth, and proactive information seeking.
AINeutral · arXiv – CS AI · Mar 46/105
🧠Researchers propose a framework for developing trustworthy AI agents that function as epistemic entities, capable of pursuing knowledge goals and shaping information environments. The paper argues that as AI models increasingly replace traditional search methods and provide specialized advice, their calibration to human epistemic norms becomes critical to prevent cognitive deskilling and epistemic drift.
AINeutral · arXiv – CS AI · Mar 47/103
🧠Research compares Transformers, State Space Models (SSMs), and hybrid architectures for in-context retrieval tasks, finding hybrid models excel at information-dense retrieval while Transformers remain superior for position-based tasks. SSM-based models develop unique locality-aware embeddings that create interpretable positional structures, explaining their specific strengths and limitations.
AIBullish · arXiv – CS AI · Mar 46/103
🧠Researchers developed COOL-MC, a tool that combines reinforcement learning with model checking to verify and explain AI policies for platelet inventory management in blood banks. The system achieved a 2.9% stockout probability while providing transparent decision-making explanations for safety-critical healthcare applications.
AIBearish · arXiv – CS AI · Mar 47/104
🧠Researchers introduced SANDBOXESCAPEBENCH, a new benchmark that measures large language models' ability to break out of Docker container sandboxes commonly used for AI safety. The study found that LLMs can successfully identify and exploit vulnerabilities in sandbox environments, highlighting significant security risks as AI agents become more autonomous.
AINeutral · arXiv – CS AI · Mar 47/102
🧠Research identifies a critical bottleneck in Vision-Language-Action (VLA) models for edge AI, where up to 75% of latency comes from memory-bound action generation phases. The study analyzes performance on Nvidia edge hardware and projects requirements for scaling to 100B parameter models in robotics applications.
AIBullish · arXiv – CS AI · Mar 46/103
🧠Researchers introduce PRISM, an EEG foundation model showing that diverse pretraining data yields better clinical performance than narrow-source datasets. Geographically diverse EEG data outperforms larger but homogeneous datasets on medical diagnosis tasks, including a 12.3% accuracy gain in distinguishing epilepsy from similar conditions.
AIBullish · arXiv – CS AI · Mar 46/103
🧠Researchers propose MEBM-Speech, a neural decoder that detects speech activity from brain signals using magnetoencephalography (MEG). The system achieved 89.3% F1 score on benchmark tests and could advance brain-computer interfaces for cognitive neuroscience and clinical applications.
AIBullish · arXiv – CS AI · Mar 47/103
🧠Researchers introduce Density-Guided Response Optimization (DGRO), a new AI alignment method that learns community preferences from implicit acceptance signals rather than explicit feedback. The technique uses geometric patterns in how communities naturally engage with content to train language models without requiring costly annotation or preference labeling.
AIBullish · arXiv – CS AI · Mar 47/103
🧠Researchers introduce Param∆, a novel method for transferring post-training capabilities to updated language models without additional training cost. By computing the weight difference between a base model and its post-trained counterpart and applying it to the updated base, the technique recovers 95% of traditional post-training performance, offering significant cost savings for AI model development.
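The weight arithmetic behind Param∆ can be illustrated in a few lines. This is a minimal sketch with toy flat weight vectors standing in for full checkpoints; the function name is illustrative, not from the paper.

```python
import numpy as np

# Minimal sketch of delta-based capability transfer (illustrative names,
# not the paper's code): treat each checkpoint as a flat weight vector.
def transfer_post_training(base_old, post_old, base_new):
    """Apply the old post-training weight delta to a newly released base model."""
    delta = post_old - base_old      # what post-training changed
    return base_new + delta          # updated model, no extra training

base_old = np.array([1.0, 2.0, 4.0])
post_old = np.array([1.5, 2.5, 3.0])   # old base after post-training
base_new = np.array([2.0, 3.0, 5.0])   # refreshed base release

merged = transfer_post_training(base_old, post_old, base_new)
print(merged)  # [2.5 3.5 4. ]
```

The point of the trick is that the delta, not the absolute weights, encodes the post-training behavior, so it can be reapplied when the base model is refreshed.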
AIBullish · arXiv – CS AI · Mar 46/104
🧠Researchers have developed EvoSkill, an automated framework that enables AI agents to discover and refine domain-specific skills through iterative failure analysis. The system demonstrated significant performance improvements on specialized tasks, with accuracy gains of 7.3% on financial data analysis and 12.1% on search-augmented QA, while showing transferable capabilities across different domains.
AIBullish · arXiv – CS AI · Mar 47/104
🧠Researchers introduce Retrieval-Augmented Robotics (RAR), a new paradigm enabling robots to actively retrieve and use external visual documentation to execute complex tasks. The system uses a Retrieve-Reason-Act loop where robots search unstructured visual manuals, align 2D diagrams with 3D objects, and synthesize executable plans for assembly tasks.
AIBullish · arXiv – CS AI · Mar 46/102
🧠Researchers propose NAR-CP, a new method to improve Large Language Models' performance in high-frequency decision-making tasks like UAV pursuit. The approach uses normalized action rewards and consistency policy optimization to address limitations in current LLM-based agents that struggle with rapid, precise numerical state updates.
AIBullish · arXiv – CS AI · Mar 47/103
🧠Researchers developed Social-JEPA, showing that separate AI agents learning from different viewpoints of the same environment develop internal representations that are mathematically aligned through approximate linear isometry. This enables models trained on one agent to work on another without retraining, suggesting a path toward interoperable decentralized AI vision systems.
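The "approximate linear isometry" claim can be illustrated with a classic orthogonal-Procrustes fit. The synthetic setup below is my own construction, not the paper's code: if two agents' embeddings differ only by an orthogonal map, an SVD recovers the map that aligns them.

```python
import numpy as np

# Synthetic illustration (not the paper's code): two agents embed the same
# 200 scenes; agent 2's space is a rotated copy of agent 1's.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 16))                    # agent 1's representations
R_true, _ = np.linalg.qr(rng.standard_normal((16, 16)))
B = A @ R_true                                        # agent 2: rotated space

# Orthogonal Procrustes: R = argmin ||A R - B||_F  subject to  R^T R = I
U, _, Vt = np.linalg.svd(A.T @ B)
R = U @ Vt

aligned = np.allclose(A @ R, B)
print(aligned)  # True: a single linear isometry maps agent 1 onto agent 2
```

In this idealized case the fit is exact; the paper's claim is that independently trained agents end up only *approximately* related this way, which is what makes cross-agent model reuse possible.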
AIBullish · arXiv – CS AI · Mar 47/102
🧠Researchers developed RxnNano, a compact 0.5B-parameter AI model for chemical reaction prediction that outperforms much larger 7B+ parameter models by 23.5% through novel training techniques focused on chemical understanding rather than scale. The framework uses hierarchical curriculum learning and chemical consistency objectives to improve drug discovery and synthesis planning applications.
AIBullish · arXiv – CS AI · Mar 46/102
🧠Researchers introduce BehaveSim, a new method to measure algorithmic similarity by analyzing problem-solving behavior rather than code syntax. The approach enhances AI-driven algorithm design frameworks and enables systematic analysis of AI-generated algorithms through behavioral clustering.
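The behavior-over-syntax idea can be sketched by fingerprinting algorithms through their outputs on shared probe inputs. This toy example, with hypothetical names, is my illustration of the general idea, not BehaveSim's actual metric.

```python
# Toy illustration (hypothetical names, not the paper's metric): compare
# algorithms by what they do on probe inputs, not by how their code reads.
def insertion_sort(xs):
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

def selection_sort(xs):
    xs = list(xs)
    for i in range(len(xs)):
        j = min(range(i, len(xs)), key=xs.__getitem__)
        xs[i], xs[j] = xs[j], xs[i]
    return xs

def fingerprint(algo, probes):
    """Behavioral fingerprint: the algorithm's outputs on shared probes."""
    return tuple(tuple(algo(p)) for p in probes)

probes = [[3, 1, 2], [5, 5, 0], []]
# Syntactically different code, identical behavior -> identical fingerprints.
same = fingerprint(insertion_sort, probes) == fingerprint(selection_sort, probes)
print(same)  # True
```

A syntax-based diff would score these two functions as dissimilar; a behavioral view clusters them together, which is the property the paper exploits for analyzing AI-generated algorithms.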
AIBullish · arXiv – CS AI · Mar 46/104
🧠Researchers introduce AgentAssay, the first framework for regression testing AI agent workflows, achieving 78-100% cost reduction while maintaining statistical guarantees. The system uses behavioral fingerprinting and stochastic testing methods to detect regressions in autonomous AI agents across multiple models including GPT-5.2, Claude Sonnet 4.6, and others.
AIBullish · arXiv – CS AI · Mar 47/102
🧠Researchers propose SUN (Shared Use of Next-token Prediction), a novel approach for multi-LLM serving that enables cross-model sharing of decode execution by decomposing transformers into separate prefill and decode modules. The system achieves up to 2.0x throughput improvement per GPU while maintaining accuracy comparable to full fine-tuning, with a quantized version (QSUN) providing an additional 45% speedup.
AIBullish · arXiv – CS AI · Mar 46/104
🧠Researchers present a new framework for evaluating logical reasoning AI agents using an "assessor agent" that can issue tasks, enforce execution limits, and record structured failure types. Their auto-formalization agent achieved 86.70% accuracy on logical reasoning tasks, outperforming traditional chain-of-thought approaches by nearly 13 percentage points.
AINeutral · arXiv – CS AI · Mar 47/104
🧠Researchers have introduced SorryDB, a dynamic benchmark for evaluating AI systems' ability to prove mathematical theorems using the Lean proof assistant. The benchmark draws from 78 real-world formalization projects and addresses limitations of static benchmarks by providing continuously updated tasks that better reflect community needs.
AINeutral · arXiv – CS AI · Mar 46/102
🧠Researchers have released LiveAgentBench, a comprehensive benchmark featuring 104 real-world scenarios to evaluate AI agent performance across practical applications. The benchmark uses a novel Social Perception-Driven Data Generation method to ensure tasks reflect actual user requirements and includes 374 total tasks for testing various AI models and frameworks.
AIBullish · arXiv – CS AI · Mar 46/106
🧠SuperLocalMemory is a new privacy-preserving memory system for multi-agent AI that defends against memory poisoning attacks through local-first architecture and Bayesian trust scoring. The open-source system eliminates cloud dependencies while providing personalized retrieval through adaptive learning-to-rank, demonstrating strong performance metrics including 10.6ms search latency and 72% trust degradation for sleeper attacks.
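A Bayesian trust score of the kind described can be sketched with a Beta-Bernoulli posterior. This is an assumed formulation for illustration only, not SuperLocalMemory's actual scoring rule.

```python
# Assumed Beta-Bernoulli formulation for illustration -- not
# SuperLocalMemory's actual trust-scoring rule.
class TrustScore:
    def __init__(self):
        self.good = 1.0   # Beta(1, 1) uniform prior over honesty
        self.bad = 1.0

    def observe(self, honest: bool):
        """Update the posterior after verifying one memory entry."""
        if honest:
            self.good += 1
        else:
            self.bad += 1

    @property
    def score(self) -> float:
        return self.good / (self.good + self.bad)  # posterior mean

source = TrustScore()
for outcome in [True] * 6 + [False] * 4:   # 6 clean memories, 4 poisoned
    source.observe(outcome)
print(round(source.score, 3))  # 0.583 -> trust degrades as poisoning is caught
```

The appeal of a posterior-mean score is that it degrades smoothly as poisoned entries are detected while staying robust to a single noisy observation.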
AINeutral · arXiv – CS AI · Mar 46/103
🧠Researchers released the ERI benchmark, a comprehensive dataset spanning 9 engineering fields and 55 subdomains to evaluate large language models' engineering capabilities. The benchmark tested 7 LLMs across 57,750 records, revealing a clear three-tier performance structure with frontier models like GPT-5 and Claude Sonnet 4 significantly outperforming mid-tier and smaller models.