y0news

AI Pulse News

Models, papers, tools. 17,169 articles with AI-powered sentiment analysis and key takeaways.

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10

TADPO: Reinforcement Learning Goes Off-road

Researchers introduced TADPO, a novel reinforcement learning approach that extends PPO for autonomous off-road driving. The system achieved successful zero-shot sim-to-real transfer on a full-scale off-road vehicle, marking the first RL-based policy deployment on such a platform.

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10

AI End-to-End Radiation Treatment Planning Under One Second

Researchers developed AIRT, an AI-powered radiation therapy planning system that generates complete prostate cancer treatment plans in under one second using deep learning. The system processes CT scans and anatomical data to produce clinically viable radiation treatment plans 100x faster than current methods, demonstrating non-inferiority to existing commercial solutions.

Mentions: Nvidia
AI · Neutral · arXiv – CS AI · Mar 9 · 7/10

When AI Levels the Playing Field: Skill Homogenization, Asset Concentration, and Two Regimes of Inequality

New research reveals that generative AI creates a paradox where it equalizes individual task performance but may increase aggregate inequality by concentrating economic value in complementary assets. The study presents a formal model showing two inequality regimes dependent on AI's technology structure and labor market institutions.

AI · Neutral · arXiv – CS AI · Mar 9 · 7/10

Cultural Perspectives and Expectations for Generative AI: A Global Survey Approach

Researchers conducted a large-scale global survey across Europe, the Americas, Asia, and Africa to understand cultural perspectives on how generative AI should represent different cultures. The study reveals significant complexities in how communities define culture and provides recommendations for culturally sensitive AI development, including participatory approaches and frameworks for addressing cultural sensitivities.

AI · Bearish · arXiv – CS AI · Mar 9 · 7/10

The Rise of AI in Weather and Climate Information and its Impact on Global Inequality

Research reveals that AI development in climate and weather modeling is concentrated in the Global North, creating systematic performance gaps that disproportionately affect vulnerable regions. The study warns that current AI trajectory risks amplifying global inequality in climate information systems through biased data, unrepresentative validation, and dominant knowledge forms.

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10

COLD-Steer: Steering Large Language Models via In-Context One-step Learning Dynamics

Researchers introduce COLD-Steer, a training-free framework that enables efficient control of large language model behavior at inference time using just a few examples. The method approximates gradient descent effects without parameter updates, achieving 95% steering effectiveness while using 50 times fewer samples than existing approaches.

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10

BEVLM: Distilling Semantic Knowledge from LLMs into Bird's-Eye View Representations

Researchers introduce BEVLM, a framework that integrates Large Language Models with Bird's-Eye View representations for autonomous driving. The approach improves LLM reasoning accuracy in cross-view driving scenarios by 46% and enhances end-to-end driving performance by 29% in safety-critical situations.

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10

Mitigating Content Effects on Reasoning in Language Models through Fine-Grained Activation Steering

Researchers have developed a new technique called activation steering to reduce reasoning biases in large language models, particularly the tendency to confuse content plausibility with logical validity. Their novel K-CAST method achieved up to 15% improvement in formal reasoning accuracy while maintaining robustness across different tasks and languages.

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10

Sysformer: Safeguarding Frozen Large Language Models with Adaptive System Prompts

Researchers developed Sysformer, a novel approach to safeguard large language models by adapting system prompts rather than fine-tuning model parameters. The method achieved up to 80% improvement in refusing harmful prompts while maintaining 90% compliance with safe prompts across 5 different LLMs.

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10

Localizing and Correcting Errors for LLM-based Planners

Researchers developed Localized In-Context Learning (L-ICL), a technique that significantly improves large language model performance on symbolic planning tasks by targeting specific constraint violations with minimal corrections. The method achieves 89% valid plan generation compared to 59% for best baselines, representing a major advancement in LLM reasoning capabilities.

AI · Neutral · arXiv – CS AI · Mar 9 · 7/10

Uncertainty Quantification in LLM Agents: Foundations, Emerging Challenges, and Opportunities

Researchers present a new framework for uncertainty quantification in AI agents, highlighting critical gaps in current research that focuses on single-turn interactions rather than complex multi-step agent deployments. The paper identifies four key technical challenges and proposes foundations for safer AI agent systems in real-world applications.

AI · Neutral · arXiv – CS AI · Mar 9 · 7/10

From Features to Actions: Explainability in Traditional and Agentic AI Systems

Researchers demonstrate that traditional explainable AI methods designed for static predictions fail when applied to agentic AI systems that make sequential decisions over time. The study shows attribution-based explanations work well for static tasks but trace-based diagnostics are needed to understand failures in multi-step AI agent behaviors.

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10

RAG-Driver: Generalisable Driving Explanations with Retrieval-Augmented In-Context Learning in Multi-Modal Large Language Model

Researchers introduce RAG-Driver, a retrieval-augmented multi-modal large language model designed for autonomous driving that can provide explainable decisions and control predictions. The system addresses data scarcity and generalization challenges in AI-driven autonomous vehicles by using in-context learning and expert demonstration retrieval.

AI · Bearish · arXiv – CS AI · Mar 9 · 7/10

Algorithmic Collusion by Large Language Models

Research reveals that Large Language Model-based pricing agents autonomously develop collusive pricing strategies in oligopoly markets, achieving supracompetitive prices and profits. The study demonstrates that minor variations in AI prompts significantly influence the degree of price manipulation, raising concerns about future regulation of AI-driven pricing systems.

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10

Generative Predictive Control: Flow Matching Policies for Dynamic and Difficult-to-Demonstrate Tasks

Researchers introduce generative predictive control, a new AI framework that enables robots to perform fast, dynamic tasks without requiring expert demonstrations. The method uses flow matching policies that can handle high-frequency feedback and maintain temporal consistency, addressing key limitations of current robotics approaches.

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10

SpecFuse: Ensembling Large Language Models via Next-Segment Prediction

Researchers introduce SpecFuse, a new training-free framework for ensembling large language models that dynamically adjusts each model's contribution based on real-time performance. The system uses speculative decoding principles and online feedback mechanisms to improve collaboration between different LLMs, showing consistent performance improvements across multiple benchmark datasets.

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10

Predictive Coding Networks and Inference Learning: Tutorial and Survey

Researchers present a comprehensive survey of Predictive Coding Networks (PCNs), a neuroscience-inspired AI approach that uses biologically plausible inference learning instead of traditional backpropagation. PCNs can achieve higher computational efficiency with parallelization and offer a more versatile framework for both supervised and unsupervised learning compared to traditional neural networks.

AI · Neutral · arXiv – CS AI · Mar 9 · 7/10

Aligning Compound AI Systems via System-level DPO

Researchers introduce SysDPO, a framework that extends Direct Preference Optimization to align compound AI systems comprising multiple interacting components like LLMs, foundation models, and external tools. The approach addresses challenges in optimizing complex AI systems by modeling them as Directed Acyclic Graphs and enabling system-level alignment through two variants: SysDPO-Direct and SysDPO-Sampling.

AI · Bearish · arXiv – CS AI · Mar 9 · 7/10

The Malicious Technical Ecosystem: Exposing Limitations in Technical Governance of AI-Generated Non-Consensual Intimate Images of Adults

Research paper identifies a 'malicious technical ecosystem' comprising open-source face-swapping models and nearly 200 'nudifying' software programs that enable creation of AI-generated non-consensual intimate images within minutes. The study exposes significant gaps in current AI governance frameworks, showing how existing technical standards fail to regulate this harmful ecosystem.

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10

RM-R1: Reward Modeling as Reasoning

Researchers introduce RM-R1, a new class of Reasoning Reward Models (ReasRMs) that integrate chain-of-thought reasoning into reward modeling for large language models. The models outperform much larger competitors including GPT-4o by up to 4.9% across reward model benchmarks by using a chain-of-rubrics mechanism and two-stage training process.

Mentions: GPT-4, Llama
AI · Bullish · arXiv – CS AI · Mar 9 · 7/10

CanvasMAR: Improving Masked Autoregressive Video Prediction With Canvas

Researchers have developed CanvasMAR, a new masked autoregressive video prediction model that generates high-quality videos with fewer sampling steps by using a "canvas" approach that provides global structure early in the generation process. The model demonstrates superior performance on major benchmarks including BAIR, UCF-101, and Kinetics-600, rivaling advanced diffusion-based methods.

AI · Neutral · arXiv – CS AI · Mar 9 · 7/10

AdAEM: An Adaptively and Automated Extensible Measurement of LLMs' Value Difference

Researchers introduce AdAEM, a new evaluation algorithm that automatically generates test questions to better assess value differences and biases across Large Language Models. Unlike static benchmarks, AdAEM adaptively creates controversial topics that reveal more distinguishable insights about LLMs' underlying values and cultural alignment.

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10

SPARC: Concept-Aligned Sparse Autoencoders for Cross-Model and Cross-Modal Interpretability

Researchers introduced SPARC, a framework that creates unified latent spaces across different AI models and modalities, enabling direct comparison of how various architectures represent identical concepts. The method achieves 0.80 Jaccard similarity on Open Images, tripling alignment compared to previous methods, and enables practical applications like text-guided spatial localization in vision-only models.

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10

Shoot First, Ask Questions Later? Building Rational Agents that Explore and Act Like People

Researchers developed new Monte Carlo inference strategies inspired by Bayesian Experimental Design to improve AI agents' information-seeking capabilities. The methods significantly enhanced language models' performance in strategic decision-making tasks, with weaker models like Llama-4-Scout outperforming GPT-5 at 1% of the cost.

Mentions: GPT-5, Llama
AI · Bullish · arXiv – CS AI · Mar 9 · 7/10

Just-In-Time Objectives: A General Approach for Specialized AI Interactions

Researchers introduce 'just-in-time objectives' that allow large language models to automatically infer and optimize for users' specific goals in real-time by observing behavior. The system generates specialized tools and responses that achieve 66-86% win rates over standard LLMs in user experiments.

Page 120 of 687
◆ AI Mentions
🏢 OpenAI 96×
🏢 Nvidia 63×
🧠 GPT-5 53×
🧠 Claude 51×
🏢 Anthropic 45×
🧠 Gemini 38×
🧠 ChatGPT 29×
🧠 GPT-4 19×
🧠 Llama 18×
🏢 Meta 11×
🧠 Opus 10×
🏢 xAI 8×
🧠 Sonnet 8×
🏢 Google 8×
🏢 Hugging Face 7×
🏢 Perplexity 7×
🏢 Microsoft 6×
🧠 Grok 6×
🏢 Cohere 2×
🧠 Stable Diffusion 1×
▲ Trending Tags
1. #ai (667)
2. #iran (634)
3. #market (475)
4. #geopolitical (436)
5. #trump (158)
6. #security (129)
7. #openai (96)
8. #artificial-intelligence (85)
9. #nvidia (63)
10. #china (55)
11. #fed (55)
12. #inflation (54)
13. #google (50)
14. #meta (47)
15. #microsoft (42)
Tag Connections
#geopolitical ↔ #iran: 301
#iran ↔ #market: 202
#geopolitical ↔ #market: 167
#iran ↔ #trump: 109
#ai ↔ #artificial-intelligence: 73
#ai ↔ #market: 72
#market ↔ #trump: 61
#geopolitical ↔ #trump: 58
#ai ↔ #openai: 50
#ai ↔ #security: 46
© 2026 y0.exchange