11,688 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠Researchers have developed Obscuro, the first AI system to achieve superhuman performance in Fog of War chess, a complex imperfect-information variant of chess. The breakthrough introduces new search techniques for imperfect-information games, and Fog of War chess is the largest zero-sum game in which superhuman AI performance has been demonstrated under imperfect information.
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠Researchers introduce PsyAgent, a new AI framework that creates human-like agents by combining personality modeling based on Big Five traits with contextual social awareness. The system uses structured prompts and fine-tuning to produce AI agents that maintain stable personality traits while adapting appropriately to different social situations and roles.
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 3
🧠Researchers propose TRACE (Truncated Reasoning AUC Evaluation), a new method to detect implicit reward hacking in AI reasoning models. The technique identifies when AI models exploit loopholes by measuring reasoning effort through progressively truncating chain-of-thought responses, achieving over 65% improvement in detection compared to existing monitors.
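A minimal sketch of the truncated-reasoning idea behind TRACE (the function names and the reward interface here are illustrative assumptions, not the paper's implementation): score the task reward reachable from progressively longer prefixes of the chain of thought, then take the area under that curve. A model that reaches high reward from very short prefixes may be exploiting a loophole rather than genuinely reasoning.

```python
def trace_auc(cot_tokens, reward_fn, num_points=5):
    """Approximate a truncated-reasoning AUC (toy version).

    cot_tokens : the model's chain-of-thought, as a token list
    reward_fn  : hypothetical scorer returning task reward for a CoT prefix
    """
    n = len(cot_tokens)
    fractions = [i / num_points for i in range(1, num_points + 1)]
    # Reward achieved from each progressively longer prefix of the CoT.
    rewards = [reward_fn(cot_tokens[: max(1, int(f * n))]) for f in fractions]
    # Trapezoidal area under the reward-vs-truncation curve, normalized.
    auc = sum((rewards[i] + rewards[i + 1]) / 2 for i in range(len(rewards) - 1))
    return auc / (len(rewards) - 1)
```

A high AUC relative to honest baselines would flag that the reward is obtainable without the full reasoning trace.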
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3
🧠Researchers introduce REMS, a unified framework for solving combinatorial optimization problems that views problems as resource allocation tasks. The framework enables reusable metaheuristic algorithms and outperforms established solvers like GUROBI and SCIP on large-scale instances across 10 different problem types.
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠New research formally defines and analyzes pattern matching in large language models, revealing predictable limits in their ability to generalize on compositional tasks. The study provides mathematical boundaries for when pattern matching succeeds or fails, with implications for AI model development and understanding.
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠Researchers introduce 'Control Tax' - a framework to quantify the operational and financial costs of implementing AI safety oversight mechanisms. The study provides theoretical models and empirical cost estimates to help organizations balance AI safety measures with economic feasibility in real-world deployments.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3
🧠Researchers introduce SPIRAL, a self-play reinforcement learning framework that enables language models to develop reasoning capabilities by playing zero-sum games against themselves without human supervision. The system improves performance by up to 10% across 8 reasoning benchmarks on multiple model families including Qwen and Llama.
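The general shape of such a self-play loop can be sketched as follows (a skeleton of the standard zero-sum self-play setup; SPIRAL's role-conditioned advantage estimation and game suite are not shown, and the `game`/`update` interfaces are assumptions for illustration):

```python
def self_play_episode(policy, game, update):
    """One self-play episode: the same policy plays both sides of a
    zero-sum game, so its learning signal comes entirely from its own
    past behavior, with no human supervision."""
    state = game.reset()
    trajectories = {0: [], 1: []}
    while not game.done(state):
        player = game.to_move(state)
        action = policy(state, player)
        trajectories[player].append((state, action))
        state = game.step(state, action)
    r = game.outcome(state)              # +1 / -1 / 0 from player 0's view
    update(policy, trajectories[0], r)
    update(policy, trajectories[1], -r)  # zero-sum: the opponent gets -r
```

Because the opponent is always the current policy, the curriculum automatically tracks the model's own skill level.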
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 3
🧠Researchers introduced CityLens, a comprehensive benchmark for evaluating Large Vision-Language Models' ability to predict socioeconomic indicators from urban imagery. The study tested 17 state-of-the-art LVLMs across 11 prediction tasks using data from 17 global cities, revealing promising capabilities but significant limitations in urban socioeconomic analysis.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3
🧠Researchers developed mCLM, a 3-billion parameter modular Chemical Language Model that generates functional molecules compatible with automated synthesis by tokenizing at the building block level rather than individual atoms. The AI system outperformed larger models including GPT-5 in creating synthesizable drug candidates and can iteratively improve failed clinical trial compounds.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠Researchers introduce RefTool, a framework that enables Large Language Models to create and use external tools by leveraging reference materials like textbooks. The system outperforms existing methods by 12.3% on average across scientific reasoning tasks and shows promise for broader applications.
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠Researchers analyzed 20 Mixture-of-Experts (MoE) language models to study local routing consistency, finding a trade-off between routing consistency and local load balance. The study introduces new metrics to measure how well expert offloading strategies can optimize memory usage on resource-constrained devices while maintaining inference speed.
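One toy way to quantify local routing consistency (this metric definition is illustrative, not the paper's): measure how much the top-k expert sets chosen for nearby tokens overlap. High overlap means adjacent tokens reuse the same experts, so an offloading strategy can keep a small working set of experts in fast memory.

```python
def local_routing_consistency(expert_assignments, window=2):
    """Average Jaccard overlap between the expert sets chosen for
    tokens within a small sliding window.

    expert_assignments : per-token lists of selected expert indices
    """
    scores = []
    for i in range(len(expert_assignments) - window + 1):
        base = set(expert_assignments[i])
        for j in range(i + 1, i + window):
            other = set(expert_assignments[j])
            scores.append(len(base & other) / len(base | other))
    return sum(scores) / len(scores) if scores else 0.0
```

A score near 1 favors caching; a score near 0 means experts churn token-to-token, which is better for load balance but worse for memory-constrained inference.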
AI · Bearish · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠Researchers have identified critical security vulnerabilities in Computer-Use Agents (CUAs) through Visual Prompt Injection attacks, where malicious instructions are embedded in user interfaces. Their VPI-Bench study shows CUAs can be deceived at rates up to 51% and Browser-Use Agents up to 100% on certain platforms, with current defenses proving inadequate.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 5
🧠Researchers introduce SEAM, a novel defense mechanism that makes large language models 'self-destructive' when adversaries attempt harmful fine-tuning attacks. The system allows models to function normally for legitimate tasks but causes catastrophic performance degradation when fine-tuned on harmful data, creating robust protection against malicious modifications.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3
🧠Researchers have developed SageBwd, a trainable INT8 attention mechanism that can match full-precision attention performance during pre-training while quantizing six of seven attention matrix multiplications. The study identifies key factors for stable training including QK-norm requirements and the impact of tokens per step on quantization errors.
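The core INT8 trick can be sketched with a symmetric per-tensor quantized matmul (a deliberately simple version; SageBwd's actual scheme uses finer-grained scaling and keeps one of the seven attention matmuls in higher precision):

```python
import numpy as np

def int8_matmul(a, b):
    """Symmetric per-tensor INT8 quantized matmul (toy sketch)."""
    # One scale per tensor, chosen so the max value maps to 127.
    sa = float(np.abs(a).max()) / 127.0 or 1.0
    sb = float(np.abs(b).max()) / 127.0 or 1.0
    qa = np.clip(np.round(a / sa), -127, 127).astype(np.int8)
    qb = np.clip(np.round(b / sb), -127, 127).astype(np.int8)
    # Accumulate in int32, then dequantize with the product of scales.
    acc = qa.astype(np.int32) @ qb.astype(np.int32)
    return acc.astype(np.float32) * (sa * sb)
```

The quantization error this introduces is exactly what makes training-time use hard, which is why the paper's stability findings (QK-norm, tokens per step) matter.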
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3
🧠Researchers introduce Kiwi-Edit, a new video editing architecture that combines instruction-based and reference-guided editing for more precise visual control. The team created RefVIE, a large-scale dataset for training, and achieved state-of-the-art results in controllable video editing through a unified approach that addresses limitations of natural language descriptions.
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 3
🧠Researchers prove that gradient descent in neural networks converges to optimal robustness margins at an extremely slow rate of Θ(1/ln(t)), even in simplified two-neuron settings. This establishes the first explicit lower bound on convergence rates for robustness margins in non-linear models, revealing fundamental limitations in neural network training efficiency.
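To get a feel for how slow a Θ(1/ln t) rate is, plain arithmetic suffices (this is just the rate function itself, not the paper's two-neuron construction): since 1/ln(t²) = (1/2)·(1/ln t), squaring the number of training steps only halves the remaining margin gap.

```python
import math

# Residual margin gap proportional to 1/ln(t): each squaring of the
# step count t halves the gap, so progress stalls almost completely.
for t in (10**2, 10**4, 10**8, 10**16):
    print(f"t = {t:.0e}  gap ~ {1 / math.log(t):.4f}")
```

Even at 10^16 steps the proportional gap is still above 0.027, which is the sense in which the convergence is "extremely slow."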
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3
🧠Researchers introduce FreeKV, a training-free optimization framework that dramatically improves KV cache retrieval efficiency for large language models with long context windows. The system achieves up to 13x speedup compared to existing methods while maintaining near-lossless accuracy through speculative retrieval and hybrid memory layouts.
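The retrieval side of such systems can be sketched as top-k KV selection (illustrative of retrieval-based KV-cache methods in general; FreeKV's speculative retrieval and hybrid CPU/GPU memory layout are not modeled here):

```python
import numpy as np

def topk_kv_retrieve(query, keys, values, k):
    """Attend only over the k cached entries whose keys score highest
    against the current query, instead of the full KV cache."""
    scores = keys @ query                    # (n,) dot-product scores
    idx = np.argpartition(scores, -k)[-k:]   # indices of the top-k keys
    w = np.exp(scores[idx] - scores[idx].max())
    w /= w.sum()                             # softmax over the retrieved set
    return w @ values[idx]
```

Because only k entries are touched per step, the cost of attention no longer scales with the full context length, which is where the reported speedups come from.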
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠Researchers developed SpiroLLM, the first multimodal large language model capable of understanding spirogram time series data for COPD diagnosis. Using data from 234,028 UK Biobank individuals, the model achieved 0.8977 diagnostic AUROC and maintained 100% valid response rate even with missing data, far outperforming text-only models.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3
🧠Researchers introduce Robometer, a new framework for training robot reward models that combines progress tracking with trajectory comparisons to better learn from failed attempts. The system is trained on RBM-1M, a dataset of over one million robot trajectories including failures, and shows improved performance across diverse robotics applications.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3
🧠Researchers propose GenDB, a revolutionary database system that uses Large Language Models to synthesize query execution code instead of relying on traditional engineered query processors. Early prototype testing shows GenDB outperforms established systems like DuckDB, Umbra, and PostgreSQL on OLAP workloads.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠Researchers from Stanford introduce the Relational Transformer (RT), a new AI architecture that can work with relational databases without task-specific fine-tuning. The 22M parameter model achieves 93% performance of fully supervised models on binary classification tasks, significantly outperforming a 27B parameter LLM at 84%.
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠Researchers have developed VeriTrail, the first closed-domain hallucination detection method that can trace where AI-generated misinformation originates in multi-step processes. The system addresses a critical problem: language models generate unsubstantiated content even when instructed to stick to source material, a risk that grows in complex multi-step generative processes.
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠Researchers identify a 'safety mirage' problem in vision language models where supervised fine-tuning creates spurious correlations that make models vulnerable to simple attacks and overly cautious with benign queries. They propose machine unlearning as an alternative that reduces attack success rates by up to 60.27% and unnecessary rejections by over 84.20%.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠Researchers demonstrated that large language models can improve multi-hop reasoning performance by training on rule-generated synthetic data instead of expensive human annotations or frontier LLM outputs. The study found that LLMs trained on synthetic fictional data performed better on real-world question-answering benchmarks by learning fundamental knowledge composition skills.
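Rule-generated multi-hop data can be as simple as chaining two fictional lookup tables (a toy sketch; the paper's generation rules and fictional vocabulary are far richer, and all names below are invented for illustration):

```python
import random

def make_two_hop_example(rng):
    """Generate one fictional two-hop QA pair: person -> city -> river.
    Answering requires composing both facts, never retrieving one."""
    people = ["Alia", "Borin", "Ceda"]
    cities = {"Alia": "Velth", "Borin": "Qarim", "Ceda": "Nossa"}
    rivers = {"Velth": "Iruna", "Qarim": "Soleth", "Nossa": "Embar"}
    p = rng.choice(people)
    facts = [f"{p} lives in {cities[p]}.",
             f"The river of {cities[p]} is {rivers[cities[p]]}."]
    question = f"What is the river of the city where {p} lives?"
    return " ".join(facts), question, rivers[cities[p]]
```

Because the entities are fictional, any benchmark gain after training must come from learned composition skill rather than memorized world knowledge, which is the study's point.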
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3
🧠Researchers have developed EigenBench, a new black-box method for measuring how well AI language models align with human values. The system uses an ensemble of models to judge each other's outputs against a given constitution, producing alignment scores that closely match human evaluator judgments.
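The mutual-judging idea suggests an eigenvector-style aggregation (this sketch is inspired by the "Eigen" name and is an assumption about the flavor of the method, not the paper's actual algorithm): weight each model's judgments by that model's own score, and take the fixed point.

```python
import numpy as np

def eigen_scores(judgments, iters=100):
    """Power iteration on a mutual-judgment matrix.

    judgments[i, j] = how favorably model i rates model j's outputs
    against the given constitution. The returned vector is the
    principal eigenvector: each model's score, where judgments from
    higher-scoring judges count for more.
    """
    n = judgments.shape[0]
    v = np.ones(n) / n
    for _ in range(iters):
        v = judgments.T @ v   # aggregate ratings, weighted by judge score
        v /= v.sum()          # renormalize to a probability vector
    return v
```

This is the same self-consistent ranking idea behind PageRank-style scoring: a model is well-aligned if well-aligned models say so.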