y0news

AI Pulse News

Models, papers, tools. 22,678 articles with AI-powered sentiment analysis and key takeaways.

🧠 AI · Neutral · Fortune Crypto · Mar 17 · 7/10

AI is making productivity obsolete. The leaders who thrive next will have something machines can’t touch

AI is fundamentally changing how professional value is measured by making traditional productivity metrics obsolete. Leaders must now focus on uniquely human capabilities that machines cannot replicate as the definition of workplace worth shifts away from pure output.

🤖 AI × Crypto · Bearish · The Block · Mar 17 · 6/10

Cango posts $452.8 million net loss in first year as bitcoin miner

Cango reported a $452.8 million net loss in its first full year as a bitcoin mining operation. The company has been selling bitcoin to repay debt and fund its transition into AI services.

$BTC
📰 General · Bullish · Daily Hodl · Mar 17 · 6/10

‘Closer to the End of This Correction’: Morgan Stanley CIO Outlines Equity Market Predictions Amid Drawdown

Morgan Stanley CIO Mike Wilson believes U.S. equity markets are nearing the end of their current correction phase after months of economic and geopolitical pressures. The investment bank's chief strategist suggests the stock market sell-off began well before recent events and may be approaching a turning point.

🧠 AI · Bullish · MarkTechPost · Mar 17 · 6/10

Google AI Releases WAXAL: A Multilingual African Speech Dataset for Training Automatic Speech Recognition and Text-to-Speech Models

Google AI has released WAXAL, an open multilingual speech dataset covering 24 African languages to improve Automatic Speech Recognition and Text-to-Speech systems. The release targets the severe underrepresentation of African languages in speech technology training corpora.

🏢 Google
📰 General · Neutral · Fortune Crypto · Mar 17 · 7/10

Boards protected CEO bonuses as tariffs threatened business. Now, as Iran disrupts trade, CEOs may get more protection

An analysis of 50 companies revealed that CEOs in the lowest-performing tier still received 87% of their target bonuses despite poor performance. Boards are expected to implement additional compensation protection measures as Iran-related economic disruptions threaten business operations.

🤖 AI × Crypto · Neutral · CoinTelegraph · Mar 17 · 6/10

Messari’s new CEO is doubling down on AI as firm cuts staff

Messari has appointed Diran Li as its new CEO, who is positioning the crypto data and research firm as an AI-first company. The strategic pivot comes alongside staff cuts as the company focuses on serving institutional clients through AI-powered research and products.

🧠 AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

A Dual-Path Generative Framework for Zero-Day Fraud Detection in Banking Systems

Researchers propose a dual-path AI framework combining Variational Autoencoders and Wasserstein GANs for real-time fraud detection in banking systems. The system achieves sub-50ms detection latency while maintaining GDPR compliance through selective explainability mechanisms for high-uncertainty transactions.
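The generative path of such a system can be illustrated very loosely: a model fit only on normal transactions reconstructs them well, so a previously unseen (zero-day) fraud pattern shows up as high reconstruction error. The sketch below is not the paper's method (which uses VAEs and Wasserstein GANs); a simple PCA projection stands in for the autoencoder, and all names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 500 "normal" transactions living near a 2-D subspace of R^8.
basis = rng.normal(size=(2, 8))
normal = rng.normal(size=(500, 2)) @ basis + 0.05 * rng.normal(size=(500, 8))

# Fit a linear "autoencoder" (top-2 principal components) on normal data only.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:2]  # rows span the learned "normal" subspace

def reconstruction_error(x):
    """Squared error after projecting onto the learned normal subspace."""
    centered = x - mean
    recon = centered @ components.T @ components
    return np.sum((centered - recon) ** 2, axis=-1)

# A novel pattern falls off the subspace and scores far higher.
fraud = rng.normal(size=(5, 8)) * 3.0
print(reconstruction_error(normal).mean(), reconstruction_error(fraud).mean())
```

Because scoring is a single projection, this style of detector is cheap enough for the tight (sub-50ms) latency budgets the summary mentions.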

🧠 AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

Think First, Diffuse Fast: Improving Diffusion Language Model Reasoning via Autoregressive Plan Conditioning

Researchers developed plan conditioning, a training-free method that significantly improves diffusion language model reasoning by prepending short natural-language plans from autoregressive models. The technique improved performance by 11.6 percentage points on math problems and 12.8 points on coding tasks, bringing diffusion models to competitive levels with autoregressive models.

🧠 Llama
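The mechanism is simple to state: a small autoregressive model drafts a brief plan, which is prepended to the diffusion model's input before denoising begins. A schematic of that interface (every function here is a hypothetical stub, not the paper's code):

```python
def draft_plan(problem: str) -> str:
    """Stub for a small autoregressive planner; a real system would call an
    AR language model with a 'write a short plan' instruction."""
    return "Plan: 1) isolate x; 2) divide both sides; 3) check the answer."

def condition_on_plan(problem: str, plan: str) -> str:
    """Prepend the AR-drafted plan so the diffusion LM denoises with the
    high-level steps already fixed in its context (training-free)."""
    return f"{plan}\nProblem: {problem}\nSolution:"

problem = "Solve 3x + 6 = 0."
prompt = condition_on_plan(problem, draft_plan(problem))
print(prompt)
```

The appeal is that neither model is retrained: the plan simply becomes part of the conditioning context.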
🧠 AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

Distilling Deep Reinforcement Learning into Interpretable Fuzzy Rules: An Explainable AI Framework

Researchers developed a Hierarchical Takagi-Sugeno-Kang Fuzzy Classifier System that converts opaque deep reinforcement learning agents into human-readable IF-THEN rules, achieving 81.48% fidelity in tests. The framework addresses the critical explainability problem in AI systems used for safety-critical applications by providing interpretable rules that humans can verify and understand.

🧠 AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

Multi-hop Reasoning and Retrieval in Embedding Space: Leveraging Large Language Models with Knowledge

Researchers propose EMBRAG, a new framework that combines large language models with knowledge graphs to improve reasoning accuracy and reduce hallucinations. The system generates multiple logical rules from queries and applies them in embedding space, achieving state-of-the-art performance on knowledge graph question-answering benchmarks.

🧠 AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

DOVA: Deliberation-First Multi-Agent Orchestration for Autonomous Research Automation

Researchers introduce DOVA (Deep Orchestrated Versatile Agent), a multi-agent AI platform that improves research automation through deliberation-first orchestration and hybrid collaborative reasoning. The system reduces inference costs by 40-60% on simple tasks while maintaining deep reasoning capabilities for complex research requiring multi-source synthesis.

🧠 AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

From Refusal Tokens to Refusal Control: Discovering and Steering Category-Specific Refusal Directions

Researchers developed a method to control AI safety refusal behavior using categorical refusal tokens in Llama 3 8B, enabling fine-grained control over when models refuse harmful versus benign requests. The technique uses steering vectors that can be applied during inference without additional training, improving safety while reducing over-refusal of harmless prompts.

🧠 Llama
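For readers unfamiliar with activation steering, the generic recipe behind such work (a toy sketch with synthetic activations, not the paper's code) is: estimate a "refusal direction" as the difference of mean hidden activations on refusal-triggering versus benign prompts, then add or subtract a scaled copy of that direction at inference.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # toy hidden size

# Synthetic activations: hidden states on refusal-triggering vs. benign prompts.
true_dir = rng.normal(size=d)
harmful = rng.normal(size=(50, d)) + true_dir
benign = rng.normal(size=(50, d))

# Difference-of-means estimate of the refusal direction (unit norm).
refusal_dir = harmful.mean(axis=0) - benign.mean(axis=0)
refusal_dir /= np.linalg.norm(refusal_dir)

def steer(hidden, direction, alpha):
    """Add (alpha > 0) or subtract (alpha < 0) the direction at inference."""
    return hidden + alpha * direction

h = rng.normal(size=d)
h_more_refusal = steer(h, refusal_dir, +4.0)
h_less_refusal = steer(h, refusal_dir, -4.0)
print(h_more_refusal @ refusal_dir > h @ refusal_dir)  # True: projection rises
```

The category-specific twist in the summary amounts to estimating one such direction per harm category rather than a single global one.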
🧠 AI · Neutral · arXiv – CS AI · Mar 17 · 6/10

MESD: Detecting and Mitigating Procedural Bias in Intersectional Groups

Researchers propose MESD (Multi-category Explanation Stability Disparity), a new metric to detect procedural bias in AI models across intersectional groups. They also introduce the UEF framework, which balances utility, explanation quality, and fairness in machine learning systems.

🧠 AI · Neutral · arXiv – CS AI · Mar 17 · 6/10

The AI Fiction Paradox

A new research paper identifies the 'AI-Fiction Paradox': AI models need fiction for training data yet struggle to generate quality fiction themselves. The paper outlines three core challenges: narrative causation involving temporal paradoxes, informational revaluation that conflicts with current attention mechanisms, and multi-scale emotional architecture that current AI cannot orchestrate effectively.

🧠 AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

EviAgent: Evidence-Driven Agent for Radiology Report Generation

Researchers introduce EviAgent, a new AI system for automated radiology report generation that provides transparent, evidence-driven analysis. The system addresses key limitations of current medical AI models by offering traceable decision-making and integrating external domain knowledge, outperforming existing specialized medical models in testing.

🧠 AI · Neutral · arXiv – CS AI · Mar 17 · 6/10

Supervised Fine-Tuning versus Reinforcement Learning: A Study of Post-Training Methods for Large Language Models

A comprehensive research study examines the relationship between Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) methods for improving Large Language Models after pre-training. The research identifies emerging trends toward hybrid post-training approaches that combine both methods, analyzing applications from 2023-2025 to establish when each method is most effective.

🧠 AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

GRPO and Reflection Reward for Mathematical Reasoning in Large Language Models

Researchers propose GRPO (Group Relative Policy Optimization) combined with reflection reward mechanisms to enhance mathematical reasoning in large language models. The four-stage framework encourages self-reflective capabilities during training and demonstrates state-of-the-art performance over existing methods like supervised fine-tuning and LoRA.
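The group-relative idea at the heart of GRPO is compact enough to show directly (a generic sketch of the advantage computation, not the paper's code): several responses are sampled per prompt, and each response's advantage is its reward standardized against the group's mean and standard deviation, so the group itself serves as the baseline and no learned value function is needed.

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-8):
    """Standardize rewards within one prompt's group of sampled responses.

    Advantage = (reward - group mean) / group std. The group replaces a
    learned critic as the baseline, which is GRPO's core simplification.
    """
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

# One math prompt, four sampled solutions scored by a verifier (1 = correct).
advantages = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
print(advantages)  # positive for correct answers, negative for wrong ones
```

These advantages then weight a clipped policy-gradient update, as in PPO; the reflection reward described above would simply be folded into each response's scalar reward.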

🧠 AI · Neutral · arXiv – CS AI · Mar 17 · 6/10

Relationship-Aware Safety Unlearning for Multimodal LLMs

Researchers propose a new framework for improving safety in multimodal AI models by targeting unsafe relationships between objects rather than removing entire concepts. The approach uses parameter-efficient edits to suppress dangerous combinations while preserving benign uses of the same objects and relations.

🧠 AI · Neutral · arXiv – CS AI · Mar 17 · 6/10

Why Do LLM-based Web Agents Fail? A Hierarchical Planning Perspective

Researchers propose a hierarchical planning framework to analyze why LLM-based web agents fail at complex navigation tasks. The study reveals that while structured PDDL plans outperform natural language plans, low-level execution and perceptual grounding remain the primary bottlenecks rather than high-level reasoning.

🧠 AI · Neutral · arXiv – CS AI · Mar 17 · 6/10

Contests with Spillovers: Incentivizing Content Creation with GenAI

Researchers propose the Content Creation with Spillovers (CCS) model to address how GenAI and LLMs create positive spillovers where creators' content can be reused by others, potentially undermining individual incentives. They introduce Provisional Allocation mechanisms to guarantee equilibrium existence and develop approximation algorithms to maximize social welfare in content creation ecosystems.

🧠 AI · Neutral · arXiv – CS AI · Mar 17 · 6/10

AgentProcessBench: Diagnosing Step-Level Process Quality in Tool-Using Agents

Researchers introduce AgentProcessBench, the first benchmark for evaluating step-level effectiveness in AI tool-using agents, comprising 1,000 trajectories and 8,509 human-labeled annotations. The benchmark reveals that current AI models struggle with distinguishing neutral and erroneous actions in tool execution, and that process-level signals can significantly enhance test-time performance.

🧠 AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

Argumentation for Explainable and Globally Contestable Decision Support with LLMs

Researchers introduce ArgEval, a new framework that enhances Large Language Model decision-making through structured argumentation and global contestability. Unlike previous approaches limited to binary choices and local corrections, ArgEval maps entire decision spaces and builds reusable argumentation frameworks that can be globally modified to prevent repeated mistakes.

🧠 AI · Neutral · arXiv – CS AI · Mar 17 · 6/10

Dynamic Theory of Mind as a Temporal Memory Problem: Evidence from Large Language Models

Research reveals that Large Language Models struggle with dynamic Theory of Mind tasks, particularly tracking how others' beliefs change over time. While LLMs can infer current beliefs effectively, they fail to maintain and retrieve prior belief states after updates occur, showing patterns consistent with human cognitive biases.

🧠 AI · Neutral · arXiv – CS AI · Mar 17 · 6/10

Gradient Atoms: Unsupervised Discovery, Attribution and Steering of Model Behaviors via Sparse Decomposition of Training Gradients

Researchers introduce Gradient Atoms, an unsupervised method that decomposes AI model training gradients to discover interpretable behaviors without requiring predefined queries. The technique can identify model behaviors like refusal patterns and arithmetic capabilities, while also serving as effective steering vectors to control model outputs.
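The sparse-decomposition idea can be sketched in miniature (a toy stand-in, not the paper's method; real systems use proper dictionary learning): treat each per-example training gradient as a vector and approximate it as a combination of a few dictionary "atoms", then read off which atoms a gradient activates. Here the dictionary is fixed and random, and atom selection is a crude pick-top-k-correlations plus least-squares refit.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_atoms, k = 32, 8, 2  # gradient dim, dictionary size, sparsity

# Toy dictionary of unit-norm "atoms"; one gradient built from atoms 1 and 5.
atoms = rng.normal(size=(n_atoms, d))
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)
grad = 2.0 * atoms[1] + 1.0 * atoms[5]

def sparse_code(g, atoms, k):
    """Keep the k atoms most correlated with g, then least-squares refit.
    A crude stand-in for proper sparse coding (e.g. orthogonal matching pursuit)."""
    corr = atoms @ g
    support = np.argsort(-np.abs(corr))[:k]
    coef, *_ = np.linalg.lstsq(atoms[support].T, g, rcond=None)
    code = np.zeros(len(atoms))
    code[support] = coef
    return code

code = sparse_code(grad, atoms, k)
print(np.flatnonzero(code))  # indices of the atoms this gradient activates
```

In the paper's setting the recovered atoms would correspond to behaviors (e.g. refusal, arithmetic), and an atom's direction can double as a steering vector.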

🧠 AI · Bearish · arXiv – CS AI · Mar 17 · 6/10

BrainBench: Exposing the Commonsense Reasoning Gap in Large Language Models

Researchers introduced BrainBench, a new benchmark revealing significant gaps in commonsense reasoning among leading LLMs. Even the best model (Claude Opus 4.6) achieved only 80.3% accuracy on 100 brainteaser questions, while GPT-4o scored just 39.7%, exposing fundamental reasoning deficits across frontier AI models.

🧠 GPT-4 · 🧠 Claude · 🧠 Opus
Page 458 of 908
◆ AI Mentions
🏢 OpenAI · 109×
🧠 Claude · 65×
🏢 Nvidia · 59×
🏢 Anthropic · 57×
🧠 Gemini · 54×
🧠 Llama · 54×
🧠 GPT-5 · 46×
🧠 GPT-4 · 34×
🏢 Meta · 32×
🧠 ChatGPT · 28×
🏢 Perplexity · 27×
🏢 Hugging Face · 22×
🏢 xAI · 11×
🧠 Opus · 11×
🧠 Grok · 9×
🧠 Sonnet · 8×
🏢 Google · 6×
🧠 Stable Diffusion · 4×
🏢 Microsoft · 3×
🧠 o1 · 3×
▲ Trending Tags
1. #machine-learning (336)
2. #ai (313)
3. #market (203)
4. #bitcoin (193)
5. #reinforcement-learning (160)
6. #iran (133)
7. #ai-safety (131)
8. #geopolitics (124)
9. #language-models (122)
10. #ai-infrastructure (114)
11. #geopolitical-risk (109)
12. #neural-networks (108)
13. #inflation (94)
14. #openai (94)
15. #trump (91)
Tag Sentiment
#machine-learning · 336 articles
#ai · 313 articles
#market · 203 articles
#bitcoin · 193 articles
#reinforcement-learning · 160 articles
#iran · 133 articles
#ai-safety · 131 articles
#geopolitics · 124 articles
#language-models · 122 articles
#ai-infrastructure · 114 articles
Tag Connections
#bitcoin ↔ #market · 48
#geopolitical ↔ #iran · 39
#china ↔ #trump · 33
#geopolitics ↔ #iran · 31
#bitcoin ↔ #trading · 29
#iran ↔ #trump · 27
#ai ↔ #artificial-intelligence · 26
#ai ↔ #openai · 26
#geopolitical-risk ↔ #strait-of-hormuz · 25
#ai ↔ #market · 25
© 2026 y0.exchange