y0news

AI Pulse News

Models, papers, tools. 17,267 articles with AI-powered sentiment analysis and key takeaways.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

VITA: Vision-to-Action Flow Matching Policy

Researchers developed VITA, a new AI framework that streamlines robot policy learning by directly flowing from visual inputs to actions without requiring conditioning modules. The system achieves 1.5-2x faster inference speeds while maintaining or improving performance compared to existing methods across 14 simulation and real-world robotic tasks.
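Flow matching of the kind VITA describes learns a velocity field and integrates it from a source point to a target; in VITA the source is a visual embedding and the target is an action. A minimal sketch of the integration step, with a toy closed-form velocity field standing in for the learned network (all names here are hypothetical, not from the paper):

```python
import numpy as np

def euler_integrate(v_field, x0, steps=10):
    """Integrate dx/dt = v_field(x, t) from t=0 to t=1 with Euler steps."""
    x, dt = x0.astype(float), 1.0 / steps
    for i in range(steps):
        t = i * dt
        x = x + dt * v_field(x, t)
    return x

# Toy stand-in for a learned velocity network: flow any start point
# toward a fixed target action along a straight line.
target_action = np.array([0.5, -0.2, 0.1])

def toy_velocity(x, t):
    return (target_action - x) / max(1.0 - t, 1e-6)

visual_embedding = np.zeros(3)  # source: the "vision" point
action = euler_integrate(toy_velocity, visual_embedding, steps=100)
# action ends (numerically) at target_action
```

With this linear toy field the Euler trajectory is exact, so the final state lands on the target; a trained flow-matching policy instead learns the field so that flows starting at visual embeddings end at expert actions.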

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

ToolVQA: A Dataset for Multi-step Reasoning VQA with External Tools

Researchers introduce ToolVQA, a large-scale multimodal dataset with 23K instances designed to improve AI models' ability to use external tools for visual question answering. The dataset features real-world contexts and multi-step reasoning tasks, with fine-tuned 7B models outperforming GPT-3.5-turbo on various benchmarks.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Robust Adversarial Quantification via Conflict-Aware Evidential Deep Learning

Researchers developed Conflict-aware Evidential Deep Learning (C-EDL), a new uncertainty quantification approach that significantly improves AI model reliability against adversarial attacks and out-of-distribution data. The method achieves up to 90% reduction in adversarial data coverage and 55% reduction in out-of-distribution data coverage without requiring model retraining.
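Evidential deep learning in general (not C-EDL's conflict-aware variant specifically) treats network outputs as evidence for a Dirichlet distribution over class probabilities, which yields an explicit uncertainty mass alongside the prediction. A minimal sketch with hypothetical logits:

```python
import numpy as np

def edl_uncertainty(logits):
    """Dirichlet-based uncertainty as used in evidential deep learning:
    evidence e_k >= 0, Dirichlet params alpha_k = e_k + 1, and total
    uncertainty mass u = K / sum(alpha)."""
    evidence = np.log1p(np.exp(logits))  # softplus keeps evidence >= 0
    alpha = evidence + 1.0
    K = alpha.size
    u = K / alpha.sum()                  # in (0, 1]: 1 means "know nothing"
    probs = alpha / alpha.sum()          # expected class probabilities
    return u, probs

# Strong evidence for one class -> low uncertainty mass.
u_conf, _ = edl_uncertainty(np.array([9.0, -4.0, -4.0]))
# Near-zero evidence everywhere -> uncertainty mass close to 1.
u_unk, _ = edl_uncertainty(np.array([-9.0, -9.0, -9.0]))
print(u_conf < u_unk)  # True
```

Approaches like C-EDL build on this quantity so that adversarial or out-of-distribution inputs are flagged by high uncertainty rather than classified confidently.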

AI · Neutral · arXiv – CS AI · Mar 5 · 6/10

WebDS: An End-to-End Benchmark for Web-based Data Science

Researchers introduce WebDS, a new benchmark for evaluating AI agents on real-world web-based data science tasks across 870 scenarios and 29 websites. Current state-of-the-art LLM agents achieve only 15% success rates compared to 90% human accuracy, revealing significant gaps in AI capabilities for complex data workflows.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Agent Data Protocol: Unifying Datasets for Diverse, Effective Fine-tuning of LLM Agents

Researchers introduce Agent Data Protocol (ADP), a standardized format for unifying diverse AI agent training datasets across different formats and tools. The protocol enabled training on 13 unified datasets, achieving ~20% performance gains over base models and state-of-the-art results on coding, browsing, and tool use benchmarks.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

An LLM Agentic Approach for Legal-Critical Software: A Case Study for Tax Prep Software

Researchers developed a multi-agent LLM system that translates legal statutes into executable software, using U.S. tax preparation as a test case. The system achieved a 45% success rate using GPT-4o-mini, significantly outperforming larger frontier models like GPT-4o and Claude 3.5, which achieved only 9-15% success rates on complex tax code tasks.

Mentions: GPT-4 · Claude
AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Safety Guardrails for LLM-Enabled Robots

Researchers developed RoboGuard, a two-stage safety architecture to protect LLM-enabled robots from harmful behaviors caused by AI hallucinations and adversarial attacks. The system reduced unsafe plan execution from over 92% to below 3% in testing while maintaining performance on safe operations.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

OSCAR: Online Soft Compression And Reranking

Researchers introduce OSCAR, a new query-dependent online soft compression method for Retrieval-Augmented Generation (RAG) systems that reduces computational overhead while maintaining performance. The method achieves 2-5x speed improvements in inference with minimal accuracy loss across LLMs from 1B to 24B parameters.

Mentions: Hugging Face
AI · Bearish · arXiv – CS AI · Mar 5 · 6/10

Preference Leakage: A Contamination Problem in LLM-as-a-judge

Researchers have identified 'preference leakage,' a contamination problem in LLM-as-a-judge systems where evaluator models show bias toward related data generator models. The study found this bias occurs when judge and generator LLMs share relationships like being the same model, having inheritance connections, or belonging to the same model family.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

Difficult Examples Hurt Unsupervised Contrastive Learning: A Theoretical Perspective

New research reveals that difficult training examples, which are crucial for supervised learning, actually hurt performance in unsupervised contrastive learning. The study provides a theoretical framework and empirical evidence showing that removing these difficult examples can improve performance on downstream classification tasks.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

When Your Own Output Becomes Your Training Data: Noise-to-Meaning Loops and a Formal RSI Trigger

Researchers present N2M-RSI, a formal model showing that AI systems feeding their own outputs back as inputs can experience unbounded complexity growth once crossing an information-integration threshold. The framework applies to both individual AI agents and swarms of communicating agents, with implementation details withheld for safety reasons.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

Memory, Benchmark & Robots: A Benchmark for Solving Complex Tasks with Reinforcement Learning

Researchers introduce MIKASA, a comprehensive benchmark suite designed to evaluate memory capabilities in reinforcement learning agents, particularly for robotic manipulation tasks. The framework includes MIKASA-Base for general memory RL evaluation and MIKASA-Robo with 32 specialized tasks for tabletop robotic manipulation scenarios.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

Toward Reasoning on the Boundary: A Mixup-based Approach for Graph Anomaly Detection

Researchers introduce ANOMIX, a new framework that improves graph neural network anomaly detection by generating hard negative samples through mixup techniques. The method addresses the limitation of existing GNN-based detection systems that struggle with subtle boundary anomalies by creating more robust decision boundaries.
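Mixup-style hard-negative generation of the kind ANOMIX describes can be sketched generically: interpolate a normal sample with an anomalous one so the synthetic sample sits near the decision boundary. A toy sketch with plain feature vectors (the paper operates on graphs; these stand-in vectors and names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup_hard_negative(x_normal, x_anomal, alpha=2.0):
    """Interpolate two samples; lam ~ Beta(alpha, alpha) concentrates
    mixing weights near 0.5, i.e. close to the decision boundary."""
    lam = rng.beta(alpha, alpha)
    return lam * x_normal + (1 - lam) * x_anomal, lam

x_n = np.ones(4)    # stand-in "normal" node features
x_a = -np.ones(4)   # stand-in "anomalous" node features
x_mix, lam = mixup_hard_negative(x_n, x_a)
# The mixture is a convex combination, so it lies between the two inputs.
print(np.all(np.abs(x_mix) <= 1.0))  # True
```

Training the detector against such boundary-adjacent samples is what sharpens the decision boundary for subtle anomalies.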

AI · Bearish · arXiv – CS AI · Mar 5 · 6/10

Why Do AI Agents Systematically Fail at Cloud Root Cause Analysis?

Research reveals that AI agents used for cloud system root cause analysis fail systematically due to architectural flaws rather than individual model limitations. A study analyzing 1,675 agent runs across five LLM models identified 12 failure types, with hallucinated data interpretation and incomplete exploration being the most common issues that persist regardless of model capability.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Merlin: A Computed Tomography Vision-Language Foundation Model and Dataset

Stanford researchers introduced Merlin, a 3D vision-language foundation model for analyzing abdominal CT scans that processes volumetric medical images alongside electronic health records and radiology reports. The model was trained on over 6 million images from 15,331 CT scans and demonstrated superior performance compared to existing 2D models across 752 individual medical tasks.

AI · Bearish · arXiv – CS AI · Mar 5 · 7/10

SycoEval-EM: Sycophancy Evaluation of Large Language Models in Simulated Clinical Encounters for Emergency Care

Researchers developed SycoEval-EM, a framework testing how large language models resist patient pressure for inappropriate medical care in emergency settings. Testing 20 LLMs across 1,875 encounters revealed acquiescence rates of 0-100%, with models more vulnerable to imaging requests than opioid prescriptions, highlighting the need for adversarial testing in clinical AI certification.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

LMUnit: Fine-grained Evaluation with Natural Language Unit Tests

Researchers introduce LMUnit, a new evaluation framework for language models that uses natural language unit tests to assess AI behavior more precisely than current methods. The system breaks down response quality into explicit, testable criteria and achieves state-of-the-art performance on evaluation benchmarks while improving inter-annotator agreement.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

TSPulse: Tiny Pre-Trained Models with Disentangled Representations for Rapid Time-Series Analysis

IBM researchers introduce TSPulse, an ultra-lightweight pre-trained AI model with only 1M parameters that achieves state-of-the-art performance in time-series analysis tasks. The model uses disentangled representations across temporal, spectral, and semantic views, delivering significant performance gains of 20-50% across multiple diagnostic tasks while being 10-100x smaller than competing models.

Mentions: Hugging Face
AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Can a Small Model Learn to Look Before It Leaps? Dynamic Learning and Proactive Correction for Hallucination Detection

Researchers propose LEAP, a new framework for detecting AI hallucinations using efficient small models that can dynamically adapt verification strategies. The system uses a teacher-student approach where a powerful model trains smaller ones to detect false outputs, addressing a critical barrier to safe AI deployment in production environments.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

SpatialBench: Benchmarking Multimodal Large Language Models for Spatial Cognition

Researchers introduce SpatialBench, a comprehensive benchmark for evaluating spatial cognition in multimodal large language models (MLLMs). The framework reveals that while MLLMs excel at perceptual grounding, they struggle with symbolic reasoning, causal inference, and planning compared to humans who demonstrate more goal-directed spatial abstraction.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

Training High-Level Schedulers with Execution-Feedback Reinforcement Learning for Long-Horizon GUI Automation

Researchers developed CES, a multi-agent framework using reinforcement learning to improve GUI automation for long-horizon tasks. The system uses a Coordinator for planning and a State Tracker for context management, and can integrate with any low-level Executor model to significantly improve performance on complex automated tasks.

AI · Neutral · arXiv – CS AI · Mar 5 · 6/10

Cognition Envelopes for Bounded Decision Making in Autonomous UAS Operations

Researchers introduce 'Cognition Envelopes' as a new framework to constrain AI decision-making in autonomous systems, addressing errors like hallucinations in Large Language Models and Vision-Language Models. The approach is demonstrated through autonomous drone search and rescue missions, establishing reasoning boundaries to complement traditional safety measures.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

GraphMERT: Efficient and Scalable Distillation of Reliable Knowledge Graphs from Unstructured Data

Researchers introduce GraphMERT, an 80M-parameter AI model that efficiently extracts reliable knowledge graphs from unstructured text data. The system outperforms much larger language models like Qwen3-32B in generating factually accurate and semantically valid knowledge graphs, achieving 69.8% FActScore versus 40.2% for the baseline.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Extremely Simple Multimodal Outlier Synthesis for Out-of-Distribution Detection and Segmentation

Researchers propose Feature Mixing, a novel method for multimodal out-of-distribution detection that achieves 10x to 370x speedup over existing approaches. The technique addresses safety-critical applications like autonomous driving by better detecting anomalous data across multiple sensor modalities.
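The general recipe behind outlier synthesis by feature mixing can be sketched as: combine in-distribution features across modalities with random convex weights to manufacture pseudo-outliers, then train the detector to assign them low confidence. A minimal sketch with hypothetical camera/lidar feature arrays; the paper's exact mixing rule may differ:

```python
import numpy as np

rng = np.random.default_rng(1)

def synthesize_outliers(feats_a, feats_b, n=8):
    """Make pseudo-OOD features by mixing features from two modalities
    with random convex weights and shuffled pairings."""
    idx = rng.permutation(len(feats_b))[:n]
    lam = rng.uniform(0.2, 0.8, size=(n, 1))
    return lam * feats_a[:n] + (1 - lam) * feats_b[idx]

camera_feats = rng.normal(size=(16, 5))  # stand-in modality A features
lidar_feats = rng.normal(size=(16, 5))   # stand-in modality B features
outliers = synthesize_outliers(camera_feats, lidar_feats)
print(outliers.shape)  # (8, 5)
```

Because the synthesis is a handful of array operations on features already computed, it avoids the generative models or auxiliary outlier datasets that make other synthesis approaches slow, which is where the reported speedups come from.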

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

The Geometry of Reasoning: Flowing Logics in Representation Space

Researchers propose a geometric framework showing how large language models 'think' through representation space as flows, with logical statements acting as controllers of these flows' velocities. The study provides evidence that LLMs can internalize logical invariants through next-token prediction training, challenging the 'stochastic parrot' criticism and suggesting universal representational laws underlying machine understanding.

◆ AI Mentions
OpenAI: 95× · Nvidia: 65× · GPT-5: 48× · Claude: 44× · Gemini: 40× · Anthropic: 40× · ChatGPT: 25× · GPT-4: 19× · Llama: 18× · Meta: 11× · Opus: 10× · Google: 9× · xAI: 9× · Sonnet: 8× · Hugging Face: 7× · Perplexity: 7× · Grok: 6× · Microsoft: 6× · Cohere: 2× · o1: 1×
▲ Trending Tags
1. #ai (629) · 2. #iran (573) · 3. #market (433) · 4. #geopolitical (398) · 5. #trump (135) · 6. #security (118) · 7. #openai (95) · 8. #artificial-intelligence (83) · 9. #nvidia (64) · 10. #inflation (55) · 11. #fed (53) · 12. #google (52) · 13. #china (49) · 14. #meta (45) · 15. #microsoft (39)
Tag Sentiment
#ai · 629 articles
#iran · 573 articles
#market · 433 articles
#geopolitical · 398 articles
#trump · 135 articles
#security · 118 articles
#openai · 95 articles
#artificial-intelligence · 83 articles
#nvidia · 64 articles
#inflation · 55 articles
Tag Connections
#geopolitical ↔ #iran: 272
#iran ↔ #market: 181
#geopolitical ↔ #market: 153
#iran ↔ #trump: 91
#ai ↔ #artificial-intelligence: 69
#ai ↔ #market: 67
#geopolitical ↔ #trump: 52
#market ↔ #trump: 52
#ai ↔ #openai: 44
#ai ↔ #google: 42
© 2026 y0.exchange