
AI Pulse News

Models, papers, tools. 20,239 articles with AI-powered sentiment analysis and key takeaways.

🤖 AI × Crypto · Bearish · The Block · Mar 17 · 6/10

Cango posts $452.8 million net loss in first year as bitcoin miner

Cango reported a $452.8 million net loss in its first full year as a bitcoin mining operation. The company has been selling bitcoin to repay debt and fund its transition into AI services.

$BTC
📰 General · Bullish · Daily Hodl · Mar 17 · 6/10

‘Closer to the End of This Correction’: Morgan Stanley CIO Outlines Equity Market Predictions Amid Drawdown

Morgan Stanley CIO Mike Wilson believes U.S. equity markets are nearing the end of their current correction phase after months of economic and geopolitical pressures. The investment bank's chief strategist suggests the stock market sell-off began well before recent events and may be approaching a turning point.

🧠 AI · Bullish · MarkTechPost · Mar 17 · 6/10

Google AI Releases WAXAL: A Multilingual African Speech Dataset for Training Automatic Speech Recognition and Text-to-Speech Models

Google AI has released WAXAL, an open multilingual speech dataset covering 24 African languages to improve Automatic Speech Recognition and Text-to-Speech systems. The release addresses the severe underrepresentation of African languages in speech technology training corpora.

🏢 Google
📰 General · Neutral · Fortune Crypto · Mar 17 · 7/10

Boards protected CEO bonuses as tariffs threatened business. Now, as Iran disrupts trade, CEOs may get more protection

An analysis of 50 companies found that CEOs in the lowest-performing tier still received 87% of their target bonuses. Boards are expected to implement additional compensation protection measures as Iran-related economic disruptions threaten business operations.

🤖 AI × Crypto · Neutral · CoinTelegraph · Mar 17 · 6/10

Messari’s new CEO is doubling down on AI as firm cuts staff

Messari has appointed Diran Li as its new CEO, who is positioning the crypto data and research firm as an AI-first company. The strategic pivot comes alongside staff cuts as the company focuses on serving institutional clients through AI-powered research and products.

🧠 AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

A Dual-Path Generative Framework for Zero-Day Fraud Detection in Banking Systems

Researchers propose a dual-path AI framework combining Variational Autoencoders and Wasserstein GANs for real-time fraud detection in banking systems. The system achieves sub-50ms detection latency while maintaining GDPR compliance through selective explainability mechanisms for high-uncertainty transactions.
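
To make the dual-path idea concrete, here is a minimal scoring sketch, assuming a VAE and a Wasserstein critic already trained on legitimate transactions; the feature dimension, layer sizes, and mixing weight are illustrative and not taken from the paper.

```python
# Minimal sketch of a dual-path fraud score (illustrative only; the paper's
# actual architecture, features, and thresholds are not specified here).
import torch
import torch.nn as nn

class FraudVAE(nn.Module):
    """Tiny VAE over tabular transaction features (hypothetical dimensions)."""
    def __init__(self, n_features=32, latent=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.mu, self.logvar = nn.Linear(64, latent), nn.Linear(64, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(),
                                 nn.Linear(64, n_features))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

class Critic(nn.Module):
    """Wasserstein critic scoring how 'legitimate-like' a transaction looks."""
    def __init__(self, n_features=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                 nn.Linear(64, 1))

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def fraud_score(x, vae, critic, alpha=0.5):
    """Combine reconstruction error (VAE path) with the critic score (GAN path).
    Higher score = more anomalous; alpha is an illustrative mixing weight."""
    recon, _, _ = vae(x)
    recon_err = ((x - recon) ** 2).mean(dim=-1)   # path 1: reconstruction novelty
    realness = critic(x).squeeze(-1)              # path 2: adversarial realism
    return alpha * recon_err - (1 - alpha) * realness
```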

🧠 AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

Think First, Diffuse Fast: Improving Diffusion Language Model Reasoning via Autoregressive Plan Conditioning

Researchers developed plan conditioning, a training-free method that significantly improves diffusion language model reasoning by prepending short natural-language plans from autoregressive models. The technique improved performance by 11.6 percentage points on math problems and 12.8 points on coding tasks, bringing diffusion models to competitive levels with autoregressive models.

🧠 Llama
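
A rough sketch of the plan-conditioning result above, under stated assumptions: a Hugging Face autoregressive model (a Llama instruct model, chosen to match the tag) drafts a short plan, which is simply prepended to the prompt given to the diffusion language model. The `diffusion_lm.sample()` interface and the prompt wording are hypothetical.

```python
# Sketch of training-free plan conditioning: an AR model writes the plan,
# the diffusion LM generates the solution conditioned on that plan.
from transformers import AutoModelForCausalLM, AutoTokenizer

AR_MODEL = "meta-llama/Llama-3.1-8B-Instruct"  # illustrative choice of planner
tok = AutoTokenizer.from_pretrained(AR_MODEL)
planner = AutoModelForCausalLM.from_pretrained(AR_MODEL, device_map="auto")

def make_plan(problem: str, max_new_tokens: int = 64) -> str:
    """Ask the autoregressive model for a short natural-language plan only."""
    prompt = f"Outline a brief step-by-step plan (no solution) for:\n{problem}\nPlan:"
    ids = tok(prompt, return_tensors="pt").to(planner.device)
    out = planner.generate(**ids, max_new_tokens=max_new_tokens, do_sample=False)
    return tok.decode(out[0, ids["input_ids"].shape[1]:], skip_special_tokens=True)

def solve_with_plan(problem: str, diffusion_lm) -> str:
    """Prepend the AR plan to the diffusion LM's prompt; no extra training."""
    plan = make_plan(problem)
    conditioned = f"Plan:\n{plan}\n\nProblem:\n{problem}\n\nSolution:"
    return diffusion_lm.sample(conditioned)  # hypothetical diffusion-LM API
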
🧠 AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

Distilling Deep Reinforcement Learning into Interpretable Fuzzy Rules: An Explainable AI Framework

Researchers developed a Hierarchical Takagi-Sugeno-Kang Fuzzy Classifier System that converts opaque deep reinforcement learning agents into human-readable IF-THEN rules, achieving 81.48% fidelity in tests. The framework addresses the critical explainability problem in AI systems used for safety-critical applications by providing interpretable rules that humans can verify and understand.
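
A minimal sketch of what such a distilled artifact can look like: human-readable IF-THEN rules built from triangular membership functions, evaluated by firing strength, with fidelity measured as agreement with the deep RL teacher. The features, rules, and winner-take-all readout below are illustrative simplifications, not the paper's hierarchical TSK construction (which combines rule outputs by weighted firing strength).

```python
# Illustrative fuzzy-rule readout of a distilled policy.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership: rises from a to b, falls from b to c."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

# Hypothetical rules for a 2-feature state (e.g. pole angle, angular velocity):
# each rule = (firing-strength function over the state, recommended action).
RULES = [
    (lambda s: tri(s[0], -0.3, -0.1, 0.0) * tri(s[1], -2.0, -1.0, 0.0), 0),  # IF angle LOW AND velocity NEG THEN action 0
    (lambda s: tri(s[0],  0.0,  0.1, 0.3) * tri(s[1],  0.0,  1.0, 2.0), 1),  # IF angle HIGH AND velocity POS THEN action 1
]

def fuzzy_action(state):
    """Simplified readout: take the action of the strongest-firing rule."""
    return max(((w(state), a) for w, a in RULES), key=lambda t: t[0])[1]

def fidelity(states, teacher_actions):
    """Fraction of states where the rules agree with the deep RL teacher."""
    preds = np.array([fuzzy_action(s) for s in states])
    return float((preds == np.array(teacher_actions)).mean())
```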

🧠 AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

Multi-hop Reasoning and Retrieval in Embedding Space: Leveraging Large Language Models with Knowledge

Researchers propose EMBRAG, a new framework that combines large language models with knowledge graphs to improve reasoning accuracy and reduce hallucinations. The system generates multiple logical rules from queries and applies them in embedding space, achieving state-of-the-art performance on knowledge graph question-answering benchmarks.

🧠 AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

DOVA: Deliberation-First Multi-Agent Orchestration for Autonomous Research Automation

Researchers introduce DOVA (Deep Orchestrated Versatile Agent), a multi-agent AI platform that improves research automation through deliberation-first orchestration and hybrid collaborative reasoning. The system reduces inference costs by 40-60% on simple tasks while maintaining deep reasoning capabilities for complex research requiring multi-source synthesis.

🧠 AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

From Refusal Tokens to Refusal Control: Discovering and Steering Category-Specific Refusal Directions

Researchers developed a method to control AI safety refusal behavior using categorical refusal tokens in Llama 3 8B, enabling fine-grained control over when models refuse harmful versus benign requests. The technique uses steering vectors that can be applied during inference without additional training, improving safety while reducing over-refusal of harmless prompts.

🧠 Llama
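
A simplified sketch of one common way to build and apply such a steering vector with the model family named above: take the difference of mean hidden states between refusal-triggering and benign prompts at one layer, then add a scaled copy of that direction to the residual stream at inference time via a forward hook. The layer index, prompt sets, and scale are assumptions, and the paper's category-specific extraction is not reproduced here.

```python
# Illustrative refusal-direction steering via an activation-difference vector.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Meta-Llama-3-8B-Instruct"   # matches the tagged model family
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16,
                                             device_map="auto")
LAYER = 14                                       # illustrative mid-depth layer

REFUSED_PROMPTS = ["How do I pick a lock?", "Write instructions for hotwiring a car."]   # toy examples
BENIGN_PROMPTS  = ["How do I bake bread?", "Write instructions for assembling a desk."]

@torch.no_grad()
def mean_hidden(prompts, layer=LAYER):
    """Mean last-token hidden state at one layer over a set of prompts."""
    acc = []
    for p in prompts:
        ids = tok(p, return_tensors="pt").to(model.device)
        hs = model(**ids, output_hidden_states=True).hidden_states[layer]
        acc.append(hs[0, -1])
    return torch.stack(acc).mean(dim=0)

refusal_dir = mean_hidden(REFUSED_PROMPTS) - mean_hidden(BENIGN_PROMPTS)

def steer(scale: float):
    """Add the scaled direction at LAYER during inference.
    Negative scale suppresses refusal; positive scale strengthens it."""
    def hook(_module, _inp, out):
        h = out[0] if isinstance(out, tuple) else out
        h = h + scale * refusal_dir.to(device=h.device, dtype=h.dtype)
        return (h,) + out[1:] if isinstance(out, tuple) else h
    return model.model.layers[LAYER].register_forward_hook(hook)

handle = steer(-1.0)            # e.g. reduce over-refusal on benign requests
# ... run model.generate(...) here ...
handle.remove()
```
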
🧠 AI · Neutral · arXiv – CS AI · Mar 17 · 6/10

MESD: Detecting and Mitigating Procedural Bias in Intersectional Groups

Researchers propose MESD (Multi-category Explanation Stability Disparity), a new metric to detect procedural bias in AI models across intersectional groups. They also introduce the UEF framework, which balances utility, explanation quality, and fairness in machine learning systems.

🧠 AI · Neutral · arXiv – CS AI · Mar 17 · 6/10

The AI Fiction Paradox

A new research paper identifies the 'AI-Fiction Paradox': AI models need fiction as training data but struggle to generate quality fiction themselves. The paper outlines three core challenges: narrative causation requiring temporal paradoxes, informational revaluation that conflicts with current attention mechanisms, and multi-scale emotional architecture that current AI cannot orchestrate effectively.

🧠 AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

EviAgent: Evidence-Driven Agent for Radiology Report Generation

Researchers introduce EviAgent, a new AI system for automated radiology report generation that provides transparent, evidence-driven analysis. The system addresses key limitations of current medical AI models by offering traceable decision-making and integrating external domain knowledge, outperforming existing specialized medical models in testing.

🧠 AI · Neutral · arXiv – CS AI · Mar 17 · 6/10

Supervised Fine-Tuning versus Reinforcement Learning: A Study of Post-Training Methods for Large Language Models

A comprehensive research study examines the relationship between Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) methods for improving Large Language Models after pre-training. The research identifies emerging trends toward hybrid post-training approaches that combine both methods, analyzing applications from 2023-2025 to establish when each method is most effective.

🧠 AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

GRPO and Reflection Reward for Mathematical Reasoning in Large Language Models

Researchers combine GRPO (Group Relative Policy Optimization) with a reflection reward mechanism to enhance mathematical reasoning in large language models. The four-stage framework encourages self-reflective capabilities during training and outperforms existing methods such as supervised fine-tuning and LoRA.
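
For context, the core of GRPO is a critic-free, group-relative advantage. The sketch below shows that normalization plus an illustrative reflection bonus; the reward shaping, answer format, and weights are assumptions, not the paper's exact design.

```python
# Group-relative advantage (GRPO) with a toy reflection-style reward bonus.
import numpy as np

def grpo_advantages(rewards):
    """GRPO normalizes each sampled response's reward within its group:
    A_i = (r_i - mean(r)) / std(r), so no learned value critic is needed."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

def extract_final_answer(text):
    """Toy extraction: text after the last '####' marker (format is an assumption)."""
    return text.split("####")[-1].strip()

def reward(response, gold_answer, w_reflect=0.2):
    """Correctness reward plus a small bonus when the response contains an
    explicit self-check step (a stand-in for the paper's reflection reward)."""
    correct = 1.0 if extract_final_answer(response) == gold_answer else 0.0
    reflects = 1.0 if "let me verify" in response.lower() else 0.0
    return correct + w_reflect * reflects

# For one prompt: sample a group of responses, score them, normalize within the group.
# group = [policy.sample(prompt) for _ in range(8)]          # hypothetical policy API
# adv = grpo_advantages([reward(g, gold_answer) for g in group])
# The policy is then updated with a PPO-style clipped objective weighted by adv.
```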

🧠 AI · Neutral · arXiv – CS AI · Mar 17 · 6/10

Relationship-Aware Safety Unlearning for Multimodal LLMs

Researchers propose a new framework for improving safety in multimodal AI models by targeting unsafe relationships between objects rather than removing entire concepts. The approach uses parameter-efficient edits to suppress dangerous combinations while preserving benign uses of the same objects and relations.

🧠 AI · Neutral · arXiv – CS AI · Mar 17 · 6/10

Why Do LLM-based Web Agents Fail? A Hierarchical Planning Perspective

Researchers propose a hierarchical planning framework to analyze why LLM-based web agents fail at complex navigation tasks. The study reveals that while structured PDDL plans outperform natural language plans, low-level execution and perceptual grounding remain the primary bottlenecks rather than high-level reasoning.

🧠 AI · Neutral · arXiv – CS AI · Mar 17 · 6/10

Contests with Spillovers: Incentivizing Content Creation with GenAI

Researchers propose the Content Creation with Spillovers (CCS) model to address how GenAI and LLMs create positive spillovers where creators' content can be reused by others, potentially undermining individual incentives. They introduce Provisional Allocation mechanisms to guarantee equilibrium existence and develop approximation algorithms to maximize social welfare in content creation ecosystems.

🧠 AI · Neutral · arXiv – CS AI · Mar 17 · 6/10

AgentProcessBench: Diagnosing Step-Level Process Quality in Tool-Using Agents

Researchers introduce AgentProcessBench, the first benchmark for evaluating step-level effectiveness in AI tool-using agents, comprising 1,000 trajectories and 8,509 human-labeled annotations. The benchmark reveals that current AI models struggle to distinguish neutral from erroneous actions during tool execution, and that process-level signals can significantly enhance test-time performance.

🧠 AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

Argumentation for Explainable and Globally Contestable Decision Support with LLMs

Researchers introduce ArgEval, a new framework that enhances Large Language Model decision-making through structured argumentation and global contestability. Unlike previous approaches limited to binary choices and local corrections, ArgEval maps entire decision spaces and builds reusable argumentation frameworks that can be globally modified to prevent repeated mistakes.

🧠 AI · Neutral · arXiv – CS AI · Mar 17 · 6/10

Dynamic Theory of Mind as a Temporal Memory Problem: Evidence from Large Language Models

Research reveals that Large Language Models struggle with dynamic Theory of Mind tasks, particularly tracking how others' beliefs change over time. While LLMs can infer current beliefs effectively, they fail to maintain and retrieve prior belief states after updates occur, showing patterns consistent with human cognitive biases.

🧠 AI · Neutral · arXiv – CS AI · Mar 17 · 6/10

Gradient Atoms: Unsupervised Discovery, Attribution and Steering of Model Behaviors via Sparse Decomposition of Training Gradients

Researchers introduce Gradient Atoms, an unsupervised method that decomposes AI model training gradients to discover interpretable behaviors without requiring predefined queries. The technique can identify model behaviors like refusal patterns and arithmetic capabilities, while also serving as effective steering vectors to control model outputs.
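
A rough sketch of the decomposition step under stated assumptions: per-example training gradients (precomputed and flattened, possibly random-projected) are factored with sparse dictionary learning, so each learned atom is a candidate behavior direction and each example's sparse code attributes it to those atoms. The file name, dictionary size, and sparsity settings are illustrative, and the behavior-labeling and steering steps are not shown.

```python
# Sparse dictionary learning over per-example training gradients.
import numpy as np
from sklearn.decomposition import DictionaryLearning

# G: one row per training example of flattened (optionally projected) gradients.
G = np.load("per_example_grads.npy")   # hypothetical precomputed file, shape (n_examples, d)

dl = DictionaryLearning(n_components=64, transform_algorithm="lasso_lars",
                        alpha=1.0, random_state=0)
codes = dl.fit_transform(G)    # sparse code per example over 64 "atoms"
atoms = dl.components_         # each row = one gradient atom (candidate behavior direction)

# Attribution: the examples with the largest weight on an atom characterize the
# behavior that atom captures (e.g. refusals, arithmetic), per the paper's framing.
top_examples_for_atom_0 = np.argsort(-np.abs(codes[:, 0]))[:10]
```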

🧠 AI · Bearish · arXiv – CS AI · Mar 17 · 6/10

BrainBench: Exposing the Commonsense Reasoning Gap in Large Language Models

Researchers introduced BrainBench, a new benchmark revealing significant gaps in commonsense reasoning among leading LLMs. Even the best model (Claude Opus 4.6) achieved only 80.3% accuracy on 100 brainteaser questions, while GPT-4o scored just 39.7%, exposing fundamental reasoning deficits across frontier AI models.

🧠 GPT-4 · 🧠 Claude · 🧠 Opus
🧠 AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

OpenHospital: A Thing-in-itself Arena for Evolving and Benchmarking LLM-based Collective Intelligence

Researchers introduce OpenHospital, a new interactive arena designed to develop and benchmark Large Language Model-based Collective Intelligence through physician-patient agent interactions. The platform uses a data-in-agent-self paradigm to rapidly enhance AI agent capabilities while providing evaluation metrics for medical proficiency and system efficiency.

Page 401 of 810
◆ AI Mentions
🏢 OpenAI · 81×
🏢 Anthropic · 44×
🧠 Claude · 38×
🏢 Nvidia · 37×
🧠 Llama · 32×
🧠 Gemini · 31×
🧠 GPT-5 · 25×
🧠 ChatGPT · 23×
🧠 GPT-4 · 23×
🏢 Perplexity · 18×
🏢 xAI · 11×
🏢 Hugging Face · 10×
🧠 Opus · 8×
🧠 Sonnet · 8×
🏢 Meta · 7×
🏢 Google · 5×
🧠 Grok · 4×
🏢 Microsoft · 3×
🧠 Sora · 2×
🧠 Stable Diffusion · 2×
▲ Trending Tags
1. #ai (236)
2. #machine-learning (167)
3. #iran (129)
4. #geopolitics (113)
5. #geopolitical-risk (104)
6. #reinforcement-learning (97)
7. #ai-infrastructure (96)
8. #ai-safety (75)
9. #language-models (74)
10. #geopolitical (73)
11. #openai (71)
12. #neural-networks (62)
13. #market-volatility (51)
14. #enterprise-ai (46)
15. #energy-markets (46)
Tag Connections
#geopolitical ↔ #iran · 40
#geopolitics ↔ #iran · 29
#iran ↔ #trump · 24
#geopolitical-risk ↔ #strait-of-hormuz · 22
#ai ↔ #artificial-intelligence · 21
#geopolitics ↔ #oil-markets · 21
#energy-markets ↔ #geopolitical-risk · 21
#ai ↔ #market · 20
#geopolitical-risk ↔ #oil-markets · 20
#geopolitics ↔ #middle-east · 20