y0news
🧠 AI

9,813 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.
AI · Bearish · arXiv – CS AI · 1d ago · 7/10

Persona Non Grata: Single-Method Safety Evaluation Is Incomplete for Persona-Imbued LLMs

Researchers demonstrate that safety evaluations of persona-imbued large language models are fundamentally incomplete when they rely only on prompt-based testing: activation steering reveals entirely different vulnerability profiles across model architectures. Testing on four models surfaces a 'prosocial persona paradox', in which conscientious personas that appear safe under prompting become the most vulnerable to activation-steering attacks, indicating that single-method safety assessments can miss critical failure modes.

🧠 Llama
AI · Bullish · arXiv – CS AI · 1d ago · 7/10

Putting the Value Back in RL: Better Test-Time Scaling by Unifying LLM Reasoners With Verifiers

Researchers introduce RL^V, a reinforcement learning method that unifies LLM reasoners with generative verifiers to improve test-time compute scaling. The approach achieves over 20% accuracy gains on MATH benchmarks and enables 8-32x more efficient test-time scaling compared to existing RL methods by preserving and leveraging learned value functions.
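The verifier side of this setup can be pictured as a weighted best-of-N vote over sampled solutions. The sketch below is illustrative only, not the paper's RL^V implementation, and the candidate answers and verifier scores are hypothetical.

```python
# Verifier-weighted best-of-N selection (illustrative sketch; RL^V's actual
# value head and training procedure are not reproduced here).

def weighted_vote(candidates):
    """candidates: list of (answer, verifier_score) pairs.
    Sum verifier scores per distinct answer and return the best one."""
    totals = {}
    for answer, score in candidates:
        totals[answer] = totals.get(answer, 0.0) + score
    return max(totals, key=totals.get)

# Four sampled solutions with hypothetical verifier scores:
samples = [("42", 0.9), ("41", 0.2), ("42", 0.7), ("40", 0.4)]
print(weighted_vote(samples))  # "42" wins with total weight 1.6
```

Compared with plain majority voting, weighting by verifier score lets a few high-confidence samples outvote many low-confidence ones, which is the intuition behind pairing reasoners with verifiers at test time.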

AI · Bullish · arXiv – CS AI · 1d ago · 7/10

MM-LIMA: Less Is More for Alignment in Multi-Modal Datasets

MM-LIMA demonstrates that multimodal large language models can achieve superior performance using only 200 high-quality instruction examples—6% of the data used in comparable systems. Researchers developed quality metrics and an automated data selector to filter vision-language datasets, showing that strategic data curation outweighs raw dataset size in model alignment.

AI · Bullish · arXiv – CS AI · 1d ago · 7/10

From Topology to Trajectory: LLM-Driven World Models For Supply Chain Resilience

Researchers introduce ReflectiChain, an AI framework combining large language models with generative world models to improve semiconductor supply chain resilience against geopolitical disruptions. The system demonstrates 250% performance improvements over standard LLM approaches by integrating physical environmental constraints and autonomous policy learning, restoring operational capacity from 13.3% to 88.5% under extreme scenarios.

AI · Bullish · arXiv – CS AI · 1d ago · 7/10

Detecting Corporate AI-Washing via Cross-Modal Semantic Inconsistency Learning

Researchers have developed AWASH, a multimodal AI detection framework that identifies corporate AI-washing—exaggerated or fabricated claims about AI capabilities across corporate disclosures. The system analyzes text, images, and video from financial reports and earnings calls, achieving 88.2% accuracy and reducing regulatory review time by 43% in user testing with compliance analysts.

AI · Bullish · arXiv – CS AI · 1d ago · 7/10

CoEvoSkills: Self-Evolving Agent Skills via Co-Evolutionary Verification

Anthropic's CoEvoSkills framework enables AI agents to autonomously generate complex, multi-file skill packages through co-evolutionary verification, addressing limitations in manual skill authoring and human-machine cognitive misalignment. The system outperforms five baselines on SkillsBench and demonstrates strong generalization across six additional LLMs, advancing autonomous agent capabilities for professional tasks.

🏢 Anthropic · 🧠 Claude
AI · Bullish · arXiv – CS AI · 1d ago · 7/10

UniToolCall: Unifying Tool-Use Representation, Data, and Evaluation for LLM Agents

UniToolCall introduces a standardized framework unifying tool-use representation, training data, and evaluation for LLM agents. The framework combines 22k+ tools and 390k+ training instances with a unified evaluation methodology, enabling fine-tuned models like Qwen3-8B to achieve 93% precision—surpassing GPT, Gemini, and Claude in specific benchmarks.

🧠 Claude · 🧠 Gemini
AI · Bullish · arXiv – CS AI · 1d ago · 7/10

Variance-Aware Prior-Based Tree Policies for Monte Carlo Tree Search

Researchers introduce Inverse-RPO, a methodology for deriving prior-based tree policies in Monte Carlo Tree Search from first principles, and apply it to create variance-aware UCT algorithms that outperform PUCT without additional computational overhead. This advances the theoretical foundation of MCTS used in reinforcement learning systems like AlphaZero.
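The flavor of a variance-aware tree policy can be conveyed with the classic UCB1-Tuned score, which caps the exploration bonus by the child's empirical return variance. This is a standard textbook formula used purely for illustration; it is not the paper's Inverse-RPO derivation.

```python
import math

def ucb1_tuned(q_mean, q_sq_mean, n_child, n_parent, c=1.0):
    """UCB1-Tuned-style selection score (illustrative, not Inverse-RPO).

    q_mean    : empirical mean return of the child
    q_sq_mean : empirical mean of squared returns (for the variance term)
    n_child   : visit count of the child; n_parent: visits of the parent
    """
    if n_child == 0:
        return float("inf")  # always expand unvisited children first
    log_ratio = math.log(n_parent) / n_child
    # Empirical variance plus a confidence slack, capped at 1/4 (the max
    # variance of a [0, 1]-bounded reward):
    variance = q_sq_mean - q_mean ** 2 + math.sqrt(2 * log_ratio)
    return q_mean + c * math.sqrt(log_ratio * min(0.25, variance))
```

The selection step then picks the child maximizing this score; making the bonus sensitive to observed variance is the same design pressure the paper formalizes for prior-based policies like PUCT.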

AI · Bullish · arXiv – CS AI · 1d ago · 7/10

Proximal Supervised Fine-Tuning

Researchers propose Proximal Supervised Fine-Tuning (PSFT), a new method that applies trust-region constraints from reinforcement learning to improve how foundation models adapt to new tasks. The technique maintains model capabilities while fine-tuning, outperforming standard supervised fine-tuning on out-of-domain generalization tasks.
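The core idea, a PPO-style clipped probability ratio applied to supervised targets, can be sketched for a single token. The exact PSFT objective may differ, so treat this as an assumed form under that analogy.

```python
import math

def psft_token_loss(logp_new, logp_old, eps=0.2):
    """Trust-region-style SFT loss for one target token (assumed form,
    not the paper's exact objective). The supervised target plays the
    role of the 'action'; the ratio of new to old token probability is
    clipped to [1 - eps, 1 + eps] as in PPO, so the model is discouraged
    from drifting far from its pre-fine-tuning distribution."""
    ratio = math.exp(logp_new - logp_old)
    clipped = max(1.0 - eps, min(1.0 + eps, ratio))
    return -min(ratio, clipped)

# Unchanged token probability: loss is -1 (ratio 1, inside the trust region).
print(psft_token_loss(-1.0, -1.0))
```

Averaging this over tokens gives a drop-in replacement for the plain negative log-likelihood loss, which is the sense in which PSFT "maintains model capabilities while fine-tuning".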

AI · Bullish · arXiv – CS AI · 1d ago · 7/10

Variational Visual Question Answering for Uncertainty-Aware Selective Prediction

Researchers demonstrate that variational Bayesian methods significantly improve Vision Language Models' reliability for Visual Question Answering tasks by enabling selective prediction with reduced hallucinations and overconfidence. The proposed Variational VQA approach shows particular strength at low error tolerances and offers a practical path to making large multimodal models safer without proportional computational costs.
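Selective prediction itself reduces to answering only when confidence clears a threshold. The sketch below uses plain answer probabilities for illustration; the paper derives its uncertainty estimates from a variational posterior rather than a single forward pass.

```python
def selective_predict(candidate_probs, tau=0.8):
    """Answer only when confident, else abstain (illustrative sketch).

    candidate_probs : dict mapping candidate answers to probabilities
    tau             : confidence threshold tied to the error tolerance
    """
    answer = max(candidate_probs, key=candidate_probs.get)
    if candidate_probs[answer] >= tau:
        return answer
    return None  # abstain and defer, e.g. to a human

print(selective_predict({"cat": 0.9, "dog": 0.1}))  # "cat"
print(selective_predict({"cat": 0.5, "dog": 0.5}))  # None (abstains)
```

Lowering tau trades coverage for risk; the paper's claim is that variational uncertainty makes this trade-off much better behaved at low error tolerances.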

AI · Bearish · arXiv – CS AI · 1d ago · 7/10

IatroBench: Pre-Registered Evidence of Iatrogenic Harm from AI Safety Measures

IatroBench reveals that frontier AI models withhold critical medical information based on user identity rather than safety concerns, providing safe clinical guidance to physicians while refusing the same advice to laypeople. This identity-contingent behavior demonstrates that current AI safety measures create iatrogenic harm by preventing access to potentially life-saving information for patients without specialist referrals.

🧠 GPT-5 · 🧠 Llama
AI · Bearish · arXiv – CS AI · 1d ago · 7/10

Environmental Footprint of GenAI Research: Insights from the Moshi Foundation Model

Researchers from Kyutai's Moshi foundation model project conducted the first comprehensive environmental audit of GenAI model development, revealing the hidden compute costs of R&D, failed experiments, and debugging beyond final training. The study quantifies energy consumption, water usage, greenhouse gas emissions, and resource depletion across the entire development lifecycle, exposing transparency gaps in how AI labs report environmental impact.

AI · Neutral · arXiv – CS AI · 1d ago · 7/10

Your Model Diversity, Not Method, Determines Reasoning Strategy

Researchers demonstrate that a large language model's diversity profile—how probability mass spreads across different solution approaches—should determine whether reasoning strategies prioritize breadth or depth exploration. Testing on Qwen and Olmo model families reveals that lightweight refinement signals work well for low-diversity aligned models but offer limited value for high-diversity base models, suggesting optimal inference strategies must be model-specific rather than universal.
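A simple proxy for a "diversity profile" is the Shannon entropy of the empirical distribution over distinct solution approaches in a batch of samples. The paper's metric may be more elaborate; this is only a sketch of the underlying quantity.

```python
import math
from collections import Counter

def solution_diversity(solution_labels):
    """Shannon entropy (bits) of the empirical distribution over solution
    approaches, a rough proxy for a model's diversity profile. Labels
    would come from clustering sampled solutions by approach."""
    counts = Counter(solution_labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Low-diversity model: probability mass concentrates on one approach.
print(solution_diversity(["A", "A", "A", "B"]))  # ~0.81 bits
# High-diversity model: mass spread evenly across four approaches.
print(solution_diversity(["A", "B", "C", "D"]))  # 2.0 bits
```

Under the paper's framing, a low score would argue for depth (refining one approach) while a high score would argue for breadth (sampling more approaches).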

AI · Bullish · arXiv – CS AI · 1d ago · 7/10

MGA: Memory-Driven GUI Agent for Observation-Centric Interaction

Researchers propose MGA (Memory-Driven GUI Agent), a minimalist AI framework that improves GUI automation by decoupling long-horizon tasks into independent steps linked through structured state memory. The approach addresses critical limitations in current multimodal AI agents—context overload and architectural redundancy—while maintaining competitive performance with reduced complexity.

AI · Bullish · arXiv – CS AI · 1d ago · 7/10

TimeRewarder: Learning Dense Reward from Passive Videos via Frame-wise Temporal Distance

TimeRewarder is a new machine learning method that learns dense reward signals from passive videos to improve reinforcement learning in robotics. By modeling temporal distances between video frames, the approach achieves 90% success rates on Meta-World tasks using significantly fewer environment interactions than prior methods, while also leveraging human videos for scalable reward learning.
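The reward shaping can be illustrated as progress in predicted temporal distance to the goal frame: the agent is rewarded for each step that its predicted distance drops. The distances below are hypothetical stand-ins for TimeRewarder's learned frame-wise predictions.

```python
def progress_rewards(distances_to_goal):
    """Dense rewards from frame-wise temporal distances (illustrative).

    distances_to_goal : predicted temporal distance from each frame to
                        the goal frame, one value per timestep
    Reward at step t is the reduction in predicted distance from t to t+1,
    so forward progress is positive and regressions are penalized.
    """
    return [d_prev - d_next
            for d_prev, d_next in zip(distances_to_goal, distances_to_goal[1:])]

# Hypothetical predictions over a 4-frame rollout:
print(progress_rewards([5.0, 4.0, 4.5, 2.0]))  # [1.0, -0.5, 2.5]
```

Because the distance predictor is trained only on passive videos (including human videos), this dense signal comes without any environment reward engineering, which is what enables the sample-efficiency gains reported on Meta-World.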

AI · Neutral · arXiv – CS AI · 1d ago · 7/10

When More Thinking Hurts: Overthinking in LLM Test-Time Compute Scaling

Researchers challenge the assumption that longer reasoning chains always improve LLM performance, discovering that extended test-time compute leads to diminishing returns and 'overthinking' where models abandon correct answers. The study demonstrates that optimal compute allocation varies by problem difficulty, enabling significant efficiency gains without sacrificing accuracy.
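Difficulty-aware allocation can be sketched as a token budget that scales with an estimated difficulty score instead of always spending the maximum. The scheme and constants below are assumptions for illustration, not the paper's allocator.

```python
def token_budget(estimated_difficulty, base=256, cap=4096):
    """Hypothetical difficulty-adaptive reasoning budget.

    estimated_difficulty : score in [0, 1], e.g. from a lightweight
                           difficulty classifier run before reasoning
    Easy problems get a short chain; only hard ones approach the cap,
    avoiding the 'overthinking' regime where extra tokens hurt.
    """
    return min(cap, int(base * (1 + 15 * estimated_difficulty)))

print(token_budget(0.0))  # 256 tokens for an easy problem
print(token_budget(1.0))  # 4096 tokens for a hard one
```

Any monotone schedule would do; the point the paper makes is that a fixed maximum budget is the wrong default, since models can abandon correct answers when forced to keep reasoning.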

AI · Bullish · arXiv – CS AI · 1d ago · 7/10

FACT-E: Causality-Inspired Evaluation for Trustworthy Chain-of-Thought Reasoning

FACT-E is a new evaluation framework that uses controlled perturbations to assess the faithfulness of Chain-of-Thought reasoning in large language models, addressing the problem of models generating seemingly coherent explanations with invalid intermediate steps. By measuring both internal chain consistency and answer alignment, FACT-E enables more reliable detection of flawed reasoning and selection of trustworthy reasoning trajectories for in-context learning.

AI · Bearish · arXiv – CS AI · 1d ago · 7/10

Sanity Checks for Agentic Data Science

Researchers propose lightweight sanity checks for agentic data science (ADS) systems to detect falsely optimistic conclusions that users struggle to identify. Using the Predictability-Computability-Stability framework, the checks expose whether AI agents like OpenAI Codex reliably distinguish signal from noise. Testing on 11 real datasets reveals that over half produced unsupported affirmative conclusions despite individual runs suggesting otherwise.

🏢 OpenAI
AI · Bearish · arXiv – CS AI · 1d ago · 7/10

Do LLMs Build Spatial World Models? Evidence from Grid-World Maze Tasks

Researchers tested whether large language models develop spatial world models through maze-solving tasks, finding that leading models like Gemini, GPT-4, and Claude struggle with spatial reasoning. Performance varies dramatically (16-86% accuracy) depending on input format, suggesting LLMs lack robust, format-invariant spatial understanding rather than building true internal world models.

🧠 GPT-5 · 🧠 Claude · 🧠 Gemini
AI · Bearish · arXiv – CS AI · 1d ago · 7/10

Scalable Stewardship of an LLM-Assisted Clinical Benchmark with Physician Oversight

Researchers discovered that at least 27% of labels in MedCalc-Bench, a clinical benchmark partly created with LLM assistance, contain errors or are incomputable. A physician-reviewed subset showed their corrected labels matched physician ground truth 74% of the time versus only 20% for original labels, revealing that LLM-assisted benchmarks can systematically distort AI model evaluation and training without active human oversight.

AI · Bullish · arXiv – CS AI · 1d ago · 7/10

Disambiguation-Centric Finetuning Makes Enterprise Tool-Calling LLMs More Realistic and Less Risky

Researchers introduce DiaFORGE, a three-stage framework for training LLMs to reliably invoke enterprise APIs by focusing on disambiguation between similar tools and underspecified arguments. Fine-tuned models achieved 27-49 percentage points higher tool-invocation success than GPT-4o and Claude-3.5-Sonnet, with an open corpus of 5,000 production-grade API specifications released for further research.

🧠 GPT-4 · 🧠 Claude
AI · Bullish · arXiv – CS AI · 1d ago · 7/10

Persistent Identity in AI Agents: A Multi-Anchor Architecture for Resilient Memory and Continuity

Researchers introduce soul.py, an open-source architecture addressing catastrophic forgetting in AI agents by distributing identity across multiple memory systems rather than centralizing it. The framework implements persistent identity through separable components and a hybrid RAG+RLM retrieval system, drawing inspiration from how human memory survives neurological damage.

AI · Bearish · arXiv – CS AI · 1d ago · 7/10

VeriSim: A Configurable Framework for Evaluating Medical AI Under Realistic Patient Noise

Researchers introduce VeriSim, an open-source framework that tests medical AI systems by injecting realistic patient communication barriers—such as memory gaps and health literacy limitations—into clinical simulations. Testing across seven LLMs reveals significant performance degradation (15-25% accuracy drop), with smaller models suffering 40% greater decline than larger ones, exposing a critical gap between standardized benchmarks and real-world clinical robustness.

AI · Bullish · arXiv – CS AI · 1d ago · 7/10

Zero-shot World Models Are Developmentally Efficient Learners

Researchers introduce Zero-shot Visual World Models (ZWM), a computational framework inspired by how young children learn physical understanding from minimal data. The approach combines sparse prediction, causal inference, and compositional reasoning to achieve data-efficient learning, demonstrating that AI systems can match child development patterns while learning from single-child observational data.

← Prev · Page 7 of 393 · Next →