y0news

AI Pulse News

Models, papers, tools. 17,257 articles with AI-powered sentiment analysis and key takeaways.

AI · Bearish · arXiv – CS AI · Mar 5 · 6/10

Preference Leakage: A Contamination Problem in LLM-as-a-judge

Researchers have identified 'preference leakage,' a contamination problem in LLM-as-a-judge systems where evaluator models show bias toward related data generator models. The study found this bias occurs when judge and generator LLMs share relationships like being the same model, having inheritance connections, or belonging to the same model family.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

Difficult Examples Hurt Unsupervised Contrastive Learning: A Theoretical Perspective

New research reveals that difficult training examples, which are crucial for supervised learning, actually hurt performance in unsupervised contrastive learning. The study provides a theoretical framework and empirical evidence showing that removing these difficult examples can improve performance on downstream classification tasks.
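The intervention the paper studies can be sketched in a few lines: given per-example contrastive losses, discard the highest-loss ("difficult") fraction before averaging. This is a toy sketch only; the loss values and drop fraction below are made up, not taken from the paper.

```python
# Toy sketch: drop the hardest (highest-loss) examples from an
# unsupervised contrastive batch before averaging the loss.
# Loss values and drop_frac are placeholders for illustration.

def filtered_mean_loss(losses, drop_frac=0.2):
    """Average per-example losses after discarding the top drop_frac."""
    n_keep = max(1, int(len(losses) * (1 - drop_frac)))
    kept = sorted(losses)[:n_keep]           # keep the easiest examples
    return sum(kept) / len(kept)

batch_losses = [0.20, 0.30, 0.25, 5.00, 0.22]  # one "difficult" example
plain_mean = sum(batch_losses) / len(batch_losses)
filtered = filtered_mean_loss(batch_losses)    # outlier excluded
```

Dropping the single outlier moves the batch loss from roughly 1.19 to roughly 0.24; the paper's claim is that training without such examples improves downstream accuracy.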

AI · Bearish · arXiv – CS AI · Mar 5 · 6/10

Why Do AI Agents Systematically Fail at Cloud Root Cause Analysis?

Research reveals that AI agents used for cloud system root cause analysis fail systematically due to architectural flaws rather than individual model limitations. A study analyzing 1,675 agent runs across five LLM models identified 12 failure types, with hallucinated data interpretation and incomplete exploration being the most common issues that persist regardless of model capability.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

The Geometry of Reasoning: Flowing Logics in Representation Space

Researchers propose a geometric framework showing how large language models 'think' through representation space as flows, with logical statements acting as controllers of these flows' velocities. The study provides evidence that LLMs can internalize logical invariants through next-token prediction training, challenging the 'stochastic parrot' criticism and suggesting universal representational laws underlying machine understanding.

AI · Neutral · arXiv – CS AI · Mar 5 · 6/10

Cognition Envelopes for Bounded Decision Making in Autonomous UAS Operations

Researchers introduce 'Cognition Envelopes' as a new framework to constrain AI decision-making in autonomous systems, addressing errors like hallucinations in Large Language Models and Vision-Language Models. The approach is demonstrated through autonomous drone search and rescue missions, establishing reasoning boundaries to complement traditional safety measures.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Can a Small Model Learn to Look Before It Leaps? Dynamic Learning and Proactive Correction for Hallucination Detection

Researchers propose LEAP, a new framework for detecting AI hallucinations using efficient small models that can dynamically adapt verification strategies. The system uses a teacher-student approach where a powerful model trains smaller ones to detect false outputs, addressing a critical barrier to safe AI deployment in production environments.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

SpatialBench: Benchmarking Multimodal Large Language Models for Spatial Cognition

Researchers introduce SpatialBench, a comprehensive benchmark for evaluating spatial cognition in multimodal large language models (MLLMs). The framework reveals that while MLLMs excel at perceptual grounding, they struggle with symbolic reasoning, causal inference, and planning compared to humans, who demonstrate more goal-directed spatial abstraction.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

ERDES: A Benchmark Video Dataset for Retinal Detachment and Macular Status Classification in Ocular Ultrasound

Researchers have released ERDES, the first open-access dataset of ocular ultrasound videos for detecting retinal detachment and macular status using machine learning. The dataset addresses a critical gap in automated medical diagnosis by enabling AI models to classify retinal detachment severity, which is essential for determining surgical urgency.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

Training High-Level Schedulers with Execution-Feedback Reinforcement Learning for Long-Horizon GUI Automation

Researchers developed CES, a multi-agent framework using reinforcement learning to improve GUI automation for long-horizon tasks. The system uses a Coordinator for planning, State Tracker for context management, and can integrate with any low-level Executor model to significantly enhance performance on complex automated tasks.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

GraphMERT: Efficient and Scalable Distillation of Reliable Knowledge Graphs from Unstructured Data

Researchers introduce GraphMERT, an 80M-parameter AI model that efficiently extracts reliable knowledge graphs from unstructured text data. The system outperforms much larger language models like Qwen3-32B in generating factually accurate and semantically valid knowledge graphs, achieving 69.8% FActScore versus 40.2% for the baseline.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

LeanTutor: Towards a Verified AI Mathematical Proof Tutor

Researchers have developed LeanTutor, a proof-of-concept AI system that combines Large Language Models with theorem provers to create a mathematically verified proof tutor. The system features three modules for autoformalization, proof-checking, and natural language feedback, evaluated using PeanoBench, a new dataset of 371 Peano Arithmetic proofs.

AI · Neutral · arXiv – CS AI · Mar 5 · 6/10

From Privacy to Trust in the Agentic Era: A Taxonomy of Challenges in Trustworthy Federated Learning Through the Lens of Trust Report 2.0

Researchers propose a Trustworthy Federated Learning (TFL) framework that treats trust as a continuously maintained system condition rather than a static property, addressing challenges in AI systems with autonomous decision-making. The framework introduces Trust Report 2.0 as a privacy-preserving coordination blueprint for multi-stakeholder governance in federated learning deployments.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

Emotion-Gradient Metacognitive RSI (Part I): Theoretical Foundations and Single-Agent Architecture

Researchers introduce the Emotion-Gradient Metacognitive Recursive Self-Improvement (EG-MRSI) framework, a theoretical architecture for AI systems that can safely modify their own learning algorithms. The framework integrates metacognition, emotion-based motivation, and self-modification with formal safety constraints, representing foundational research toward safe artificial general intelligence.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

R1-Code-Interpreter: LLMs Reason with Code via Supervised and Multi-stage Reinforcement Learning

Researchers developed R1-Code-Interpreter, a large language model that uses multi-stage reinforcement learning to autonomously generate code for step-by-step reasoning across diverse tasks. The 14B parameter model achieves 72.4% accuracy on test tasks, outperforming GPT-4o variants and demonstrating emergent self-checking capabilities through code generation.

Mentions: 🏢 Hugging Face · 🧠 GPT-4
AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

ToolVQA: A Dataset for Multi-step Reasoning VQA with External Tools

Researchers introduce ToolVQA, a large-scale multimodal dataset with 23K instances designed to improve AI models' ability to use external tools for visual question answering. The dataset features real-world contexts and multi-step reasoning tasks, with fine-tuned 7B models outperforming GPT-3.5-turbo on various benchmarks.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Robustness of Agentic AI Systems via Adversarially-Aligned Jacobian Regularization

Researchers introduce Adversarially-Aligned Jacobian Regularization (AAJR), a new method to improve the robustness of autonomous AI agent systems by controlling sensitivity along adversarial directions rather than globally. This approach maintains better performance while ensuring stability in multi-agent AI ecosystems compared to existing methods.
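The core idea, controlling sensitivity along one chosen direction rather than globally, can be illustrated with a finite-difference directional derivative. Everything below (the toy model, the directions, and the penalty weight `lam`) is hypothetical; it shows the general shape of a direction-aligned Jacobian penalty, not AAJR as specified in the paper.

```python
# Sketch: penalize a model's sensitivity along a specific
# (adversarial) input direction instead of everywhere at once.
# Model, directions, and lam are toy choices, not the paper's.

def directional_sensitivity(f, x, direction, eps=1e-4):
    """Central finite-difference estimate of |J(x) . direction|."""
    x_plus = [xi + eps * di for xi, di in zip(x, direction)]
    x_minus = [xi - eps * di for xi, di in zip(x, direction)]
    return abs(f(x_plus) - f(x_minus)) / (2 * eps)

def model(x):
    # Toy "model": very sensitive to x[0], barely sensitive to x[1].
    return 3.0 * x[0] + 0.01 * x[1]

adv_dir = [1.0, 0.0]    # hypothetical adversarial direction
x = [0.5, 0.5]

# Regularized objective = task_loss + lam * sensitivity along adv_dir;
# only the penalty term is computed here.
lam = 0.1
penalty = lam * directional_sensitivity(model, x, adv_dir)
```

A global Jacobian penalty would also shrink the harmless sensitivity along `[0.0, 1.0]`; restricting the penalty to `adv_dir` leaves that direction untouched, which is the trade-off the paper's method targets.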

AI · Neutral · arXiv – CS AI · Mar 5 · 6/10

Benchmarking MLLM-based Web Understanding: Reasoning, Robustness and Safety

Researchers introduced WebRRSBench, a comprehensive benchmark evaluating multimodal large language models' reasoning, robustness, and safety capabilities for web understanding tasks. Testing 11 MLLMs on 3,799 QA pairs from 729 websites revealed significant gaps in compositional reasoning, UI robustness, and safety-critical action recognition.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

ZipMap: Linear-Time Stateful 3D Reconstruction with Test-Time Training

Researchers introduce ZipMap, a new AI model for 3D reconstruction that achieves linear-time processing while maintaining accuracy comparable to slower quadratic-time methods. The system can reconstruct over 700 frames in under 10 seconds on a single H100 GPU, making it more than 20x faster than current state-of-the-art approaches like VGGT.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

Synthetic emotions and consciousness: exploring architectural boundaries

Researchers propose an architectural framework for implementing emotion-like AI systems while deliberately avoiding features associated with consciousness. The study introduces risk-reduction constraints and engineering principles to create sophisticated emotional AI without triggering consciousness-related safety concerns.

AI · Bearish · arXiv – CS AI · Mar 5 · 7/10

Efficient Refusal Ablation in LLM through Optimal Transport

Researchers developed a new AI safety attack method using optimal transport theory that achieves 11% higher success rates in bypassing language model safety mechanisms compared to existing approaches. The study reveals that AI safety refusal mechanisms are localized to specific network layers rather than distributed throughout the model, suggesting current alignment methods may be more vulnerable than previously understood.
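For background, refusal ablation in its simplest form removes a learned "refusal direction" from a model's hidden activations by projecting it out. The sketch below shows only that projection step with invented two-dimensional vectors; the paper's optimal-transport formulation and its reported 11% gain are not reproduced here.

```python
# Background sketch: directional ablation removes the component of an
# activation along a (hypothetical) refusal direction via projection.
# Vectors are made up; this is not the paper's optimal-transport method.

def ablate_direction(activation, direction):
    """Return `activation` with its component along `direction` removed."""
    norm = sum(d * d for d in direction) ** 0.5
    unit = [d / norm for d in direction]
    dot = sum(a * u for a, u in zip(activation, unit))
    return [a - dot * u for a, u in zip(activation, unit)]

refusal_dir = [0.6, 0.8]   # hypothetical refusal direction
h = [2.0, 1.0]             # hypothetical hidden state
h_ablated = ablate_direction(h, refusal_dir)
```

After ablation the hidden state is orthogonal to the refusal direction, which is consistent with the paper's finding that refusal behavior is concentrated in specific, removable components rather than spread through the model.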

Mentions: 🏢 Perplexity · 🧠 Llama
AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

RoboCasa365: A Large-Scale Simulation Framework for Training and Benchmarking Generalist Robots

Researchers have released RoboCasa365, a large-scale simulation benchmark featuring 365 household tasks across 2,500 kitchen environments with over 600 hours of human demonstration data. The platform is designed to train and evaluate generalist robots for everyday tasks, providing insights into factors affecting robot performance and generalization capabilities.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

RANGER: Sparsely-Gated Mixture-of-Experts with Adaptive Retrieval Re-ranking for Pathology Report Generation

Researchers introduce RANGER, a new AI framework using sparsely-gated Mixture-of-Experts architecture for generating pathology reports from medical images. The system achieves superior performance on standard benchmarks by enabling dynamic expert specialization and reducing noise through adaptive retrieval re-ranking.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

Dissecting Quantization Error: A Concentration-Alignment Perspective

Researchers introduce Concentration-Alignment Transforms (CAT), a new method to reduce quantization error in large language and vision models by improving both weight/activation concentration and alignment. The technique consistently matches or outperforms existing quantization methods at 4-bit precision across several LLMs.
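The error being "dissected" can be made concrete with plain round-to-nearest symmetric quantization: one outlier weight stretches the quantization scale so far that small weights collapse to zero, which is why concentration matters. A minimal sketch with made-up weights (this shows the baseline error, not the CAT method):

```python
# Sketch of 4-bit round-to-nearest symmetric quantization and the
# error it introduces. Weights are invented; this illustrates the
# problem CAT targets, not the CAT transform itself.

def quantize_dequantize(weights, bits=4):
    """Quantize to signed integers, then map back to floats."""
    qmax = 2 ** (bits - 1) - 1                 # 7 for 4-bit
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) * scale for w in weights]

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

weights = [0.01, -0.02, 0.03, 1.5, -0.015, 0.02]  # one outlier: 1.5
deq = quantize_dequantize(weights)
err = mse(weights, deq)
```

Because the outlier sets `scale` to about 0.214, every small weight rounds to the zero level; better concentration (fewer outliers relative to the bulk) shrinks the scale and preserves them.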

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

World Properties without World Models: Recovering Spatial and Temporal Structure from Co-occurrence Statistics in Static Word Embeddings

Research shows that static word embeddings like GloVe and Word2Vec can recover substantial geographic and temporal information from text co-occurrence patterns alone, challenging assumptions that such capabilities require sophisticated world models in large language models. The study found these simple embeddings could predict city coordinates and historical birth years with high accuracy, suggesting that linear probe recoverability doesn't necessarily indicate advanced internal representations.
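A linear probe in this sense is just least-squares regression from embedding coordinates to a property value. The sketch below uses a synthetic one-dimensional "embedding dimension" correlated with birth years; the paper probes real GloVe/Word2Vec vectors, and the numbers here are fabricated for illustration only.

```python
# Toy illustration of linear-probe recoverability: if a scalar property
# is (nearly) linearly encoded in an embedding coordinate, ordinary
# least squares recovers it. All data below is synthetic.

def fit_linear_probe(xs, ys):
    """Closed-form 1-D least squares; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

emb = [0.12, 0.45, 0.78, 0.91, 0.33]       # synthetic embedding coord
year = [1820, 1870, 1920, 1940, 1850]      # roughly linear in emb

slope, intercept = fit_linear_probe(emb, year)
pred = [slope * e + intercept for e in emb]
```

The probe fits these synthetic points almost exactly, which mirrors the paper's point: high probe accuracy shows the information is linearly present in the representation, not that the model built a sophisticated world model.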

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Dual-Modality Multi-Stage Adversarial Safety Training: Robustifying Multimodal Web Agents Against Cross-Modal Attacks

Researchers developed DMAST, a new training framework that protects multimodal web agents from cross-modal attacks where adversaries inject malicious content into webpages to deceive both visual and text processing channels. The method uses adversarial training through a three-stage pipeline and significantly outperforms existing defenses while doubling task completion efficiency.

◆ AI Mentions
🏢 OpenAI 94× · 🏢 Nvidia 66× · 🧠 GPT-5 47× · 🧠 Claude 45× · 🧠 Gemini 40× · 🏢 Anthropic 40× · 🧠 ChatGPT 25× · 🧠 GPT-4 19× · 🧠 Llama 18× · 🏢 Meta 11× · 🧠 Opus 10× · 🏢 xAI 9× · 🏢 Google 9× · 🧠 Sonnet 8× · 🏢 Perplexity 7× · 🏢 Hugging Face 7× · 🧠 Grok 6× · 🏢 Microsoft 6× · 🏢 Cohere 2× · 🧠 DALL·E 1×
▲ Trending Tags
1. #ai 630
2. #iran 578
3. #market 437
4. #geopolitical 401
5. #trump 137
6. #security 118
7. #openai 94
8. #artificial-intelligence 82
9. #nvidia 65
10. #inflation 56
11. #fed 53
12. #google 52
13. #china 49
14. #meta 44
15. #machine-learning 39
Tag Sentiment
#ai: 630 articles
#iran: 578 articles
#market: 437 articles
#geopolitical: 401 articles
#trump: 137 articles
#security: 118 articles
#openai: 94 articles
#artificial-intelligence: 82 articles
#nvidia: 65 articles
#inflation: 56 articles
Tag Connections
#geopolitical ↔ #iran: 275
#iran ↔ #market: 182
#geopolitical ↔ #market: 154
#iran ↔ #trump: 93
#ai ↔ #artificial-intelligence: 69
#ai ↔ #market: 68
#geopolitical ↔ #trump: 53
#market ↔ #trump: 52
#ai ↔ #openai: 44
#market ↔ #nvidia: 42
© 2026 y0.exchange