y0news

AI Pulse News

Models, papers, tools. 15,754 articles with AI-powered sentiment analysis and key takeaways.

AI · Bullish · OpenAI News · Apr 13 · 7/10

Enterprises power agentic workflows in Cloudflare Agent Cloud with OpenAI

Cloudflare has integrated OpenAI's GPT-5.4 and Codex models into its Agent Cloud platform, enabling enterprises to build and deploy AI agents for production workloads. This integration combines Cloudflare's infrastructure and security capabilities with OpenAI's advanced language models to streamline agentic AI development at enterprise scale.

🏢 OpenAI · 🧠 GPT-5
AI · Bullish · arXiv – CS AI · Apr 13 · 7/10

CSAttention: Centroid-Scoring Attention for Accelerating LLM Inference

Researchers introduce CSAttention, a training-free sparse attention method that accelerates LLM inference by 4.6x for long-context applications. The technique optimizes the offline-prefill/online-decode workflow by precomputing query-centric lookup tables, enabling faster token generation without sacrificing accuracy even at 95% sparsity levels.
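The summary above sketches the recipe: group keys into centroids offline, score the incoming query against the few centroids at decode time, and run full attention only over keys in the best-matching clusters. A toy NumPy sketch of that general idea — not the paper's actual algorithm; every name here is invented for illustration:

```python
import numpy as np

def centroid_sparse_attention(q, K, V, n_clusters=4, top_clusters=1, seed=0):
    """Toy centroid-scoring sparse attention (illustrative only)."""
    rng = np.random.default_rng(seed)
    n, d = K.shape
    # offline "prefill": crude k-means over keys builds the lookup table
    centroids = K[rng.choice(n, n_clusters, replace=False)].copy()
    for _ in range(10):
        assign = np.argmin(((K[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
        for c in range(n_clusters):
            if (assign == c).any():
                centroids[c] = K[assign == c].mean(axis=0)
    # online "decode": score the query against centroids, not all keys
    counts = np.bincount(assign, minlength=n_clusters)
    scores = np.where(counts > 0, centroids @ q, -np.inf)
    keep = np.argsort(scores)[-top_clusters:]
    idx = np.where(np.isin(assign, keep))[0]
    # dense attention only over the selected subset of keys
    logits = K[idx] @ q / np.sqrt(d)
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return w @ V[idx], idx

rng = np.random.default_rng(1)
K = rng.normal(size=(64, 8))
V = rng.normal(size=(64, 8))
out, idx = centroid_sparse_attention(rng.normal(size=8), K, V)
# only the keys in idx were touched during decode
```

The precomputation is query-agnostic, which is what makes the method training-free: the centroid table is built once per prefill and reused for every decoded token.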

AI · Bullish · arXiv – CS AI · Apr 13 · 7/10

Distributionally Robust Token Optimization in RLHF

Researchers propose Distributionally Robust Token Optimization (DRTO), a method combining reinforcement learning from human feedback with robust optimization to improve large language model consistency across distribution shifts. The approach demonstrates 9.17% improvement on GSM8K and 2.49% on MathQA benchmarks, addressing LLM vulnerabilities to minor input variations.

AI · Bearish · arXiv – CS AI · Apr 13 · 7/10

Robust Reasoning Benchmark

Researchers have developed a 14-technique perturbation pipeline to test the robustness of large language models' reasoning capabilities on mathematical problems. Testing reveals that while frontier models maintain resilience, open-weight models experience catastrophic accuracy collapses of up to 55%, and all tested models degrade when solving sequential problems in a single context window, suggesting fundamental architectural limitations in current reasoning systems.

🧠 Claude · 🧠 Opus
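A perturbation pipeline of this kind composes answer-preserving rewrites of a problem so that any accuracy drop is attributable to surface brittleness rather than changed semantics. A minimal sketch of two such perturbations, assuming a GSM8K-style word problem — the specific operations and names are invented here, not taken from the paper:

```python
import random

def rename_entities(problem, mapping):
    """Swap surface names; the arithmetic is untouched."""
    for old, new in mapping.items():
        problem = problem.replace(old, new)
    return problem

def add_distractor(problem, distractor):
    """Insert an irrelevant fact before the final question sentence."""
    head, _, question = problem.rpartition(". ")
    return f"{head}. {distractor} {question}" if head else f"{distractor} {problem}"

def perturb(problem, rng):
    """Apply a random subset of answer-preserving perturbations."""
    ops = [
        lambda p: rename_entities(p, {"Alice": "Priya", "Bob": "Omar"}),
        lambda p: add_distractor(p, "The weather that day was sunny."),
    ]
    for op in ops:
        if rng.random() < 0.8:
            problem = op(problem)
    return problem

rng = random.Random(0)
base = "Alice has 3 apples. Bob gives her 4 more. How many apples does Alice have?"
variants = {perturb(base, rng) for _ in range(8)}
```

Because every variant has the same ground-truth answer, a model's accuracy spread across the variant set directly measures robustness.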
AI · Bullish · arXiv – CS AI · Apr 13 · 7/10

AlphaLab: Autonomous Multi-Agent Research Across Optimization Domains with Frontier LLMs

AlphaLab is an autonomous research system using frontier LLMs to automate experimental cycles across computational domains. Without human intervention, it explores datasets, validates frameworks, and runs large-scale experiments while accumulating domain knowledge—achieving 4.4x speedups in CUDA optimization, 22% lower validation loss in LLM pretraining, and 23-25% improvements in traffic forecasting.

🧠 GPT-5 · 🧠 Claude · 🧠 Opus
AI · Neutral · arXiv – CS AI · Apr 13 · 7/10

Medical Reasoning with Large Language Models: A Survey and MR-Bench

Researchers present a comprehensive survey of medical reasoning in large language models, introducing MR-Bench, a clinical benchmark derived from real hospital data. The study reveals a significant performance gap between exam-style tasks and authentic clinical decision-making, highlighting that robust medical reasoning requires more than factual recall in safety-critical healthcare applications.

AI · Bearish · arXiv – CS AI · Apr 13 · 7/10

Re-Mask and Redirect: Exploiting Denoising Irreversibility in Diffusion Language Models

Researchers demonstrate a critical vulnerability in diffusion-based language models where safety mechanisms can be bypassed by re-masking committed refusal tokens and injecting affirmative prefixes, achieving 76-82% attack success rates without gradient optimization. The findings reveal that dLLM safety relies on a fragile architectural assumption rather than robust adversarial defenses.

AI · Bearish · arXiv – CS AI · Apr 13 · 7/10

From Dispersion to Attraction: Spectral Dynamics of Hallucination Across Whisper Model Scales

Researchers propose the Spectral Sensitivity Theorem to explain hallucinations in large ASR models like Whisper, identifying a phase transition between dispersive and attractor regimes. Analysis of model eigenspectra reveals that intermediate models experience structural breakdown while large models compress information, decoupling from acoustic evidence and increasing hallucination risk.

AI · Bullish · arXiv – CS AI · Apr 13 · 7/10

Dynamic sparsity in tree-structured feed-forward layers at scale

Researchers demonstrate that tree-structured sparse feed-forward layers can replace dense MLPs in large transformer models while maintaining performance, activating less than 5% of parameters per token. The work reveals an emergent auto-pruning mechanism where hard routing progressively converts dynamic sparsity into static structure, offering a scalable approach to reducing computational costs in language models beyond 1 billion parameters.
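The core mechanism described above is hard routing: an input descends a small decision tree, and only the reached leaf's MLP parameters are touched, leaving the rest of the layer inactive for that token. A toy NumPy sketch under that assumption — the layer shapes and names are invented here and are much smaller than anything in the paper:

```python
import numpy as np

def tree_ffn(x, routers, leaf_W1, leaf_W2):
    """Toy tree-structured feed-forward layer with hard routing.

    Each internal node holds a hyperplane; the token goes left or
    right by sign, and only the reached leaf's small MLP runs.
    """
    depth = len(routers)
    leaf = 0
    for level in range(depth):
        go_right = x @ routers[level][leaf] > 0
        leaf = leaf * 2 + int(go_right)
    h = np.maximum(x @ leaf_W1[leaf], 0.0)  # ReLU inside the chosen leaf only
    return leaf_W2[leaf] @ h, leaf

rng = np.random.default_rng(0)
d, hidden, depth = 16, 8, 3                     # 2**3 = 8 leaves
routers = [rng.normal(size=(2**l, d)) for l in range(depth)]
leaf_W1 = rng.normal(size=(2**depth, d, hidden))
leaf_W2 = rng.normal(size=(2**depth, d, hidden))
out, leaf = tree_ffn(rng.normal(size=d), routers, leaf_W1, leaf_W2)
# active per token: depth router vectors plus one leaf MLP out of eight
```

The design choice worth noting is that the routing decisions are discrete, so the per-token compute is fixed and small regardless of how many leaves the full layer holds — which is also what lets hard routing "freeze" into the static structure the summary mentions.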

AI · Neutral · arXiv – CS AI · Apr 13 · 7/10

Drift and selection in LLM text ecosystems

Researchers develop a mathematical framework showing how AI-generated text recursively shapes training corpora through drift and selection mechanisms. The study demonstrates that unfiltered reuse of generated content degrades linguistic diversity, while selective publication based on quality metrics can preserve structural complexity in training data.
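The drift half of this dynamic can be shown with a toy neutral-drift simulation, assuming token-level resampling stands in for "a model trained on the corpus generates the next corpus" — this is an illustrative caricature, not the paper's framework:

```python
import random

def regenerate(corpus, rng):
    """One 'model generation': new documents are produced by resampling
    tokens from the current corpus, then fed back as training data."""
    pool = [t for doc in corpus for t in doc]
    return [[rng.choice(pool) for _ in range(len(doc))] for doc in corpus]

def diversity(corpus):
    """Fraction of token slots occupied by distinct token types."""
    toks = [t for doc in corpus for t in doc]
    return len(set(toks)) / len(toks)

rng = random.Random(0)
corpus = [[rng.randrange(1000) for _ in range(20)] for _ in range(50)]
start = diversity(corpus)
for _ in range(15):
    corpus = regenerate(corpus, rng)
# unfiltered recursive reuse drifts toward a smaller vocabulary
end = diversity(corpus)
```

Each round is a bootstrap of the previous corpus, so rare tokens are lost and never recovered — the degradation the summary attributes to unfiltered reuse. Selective publication corresponds to filtering which generated documents re-enter `corpus`, which is where the preservation result comes in.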

AI · Neutral · arXiv – CS AI · Apr 13 · 7/10

Mapping generative AI use in the human brain: divergent neural, academic, and mental health profiles of functional versus socio-emotional AI use

A neuroimaging study of 222 university students reveals that generative AI use produces divergent brain and mental health outcomes depending on usage patterns: functional AI use correlates with better academic performance and larger prefrontal regions, while socio-emotional AI use is associated with depression, anxiety, and smaller social-processing brain areas. The findings suggest AI's impact on the developing brain is highly context-dependent, requiring differentiated approaches to maximize educational benefits while minimizing mental health risks.

AI · Neutral · arXiv – CS AI · Apr 13 · 7/10

SAGE: A Service Agent Graph-guided Evaluation Benchmark

Researchers introduce SAGE, a comprehensive benchmark for evaluating Large Language Models in customer service automation that uses dynamic dialogue graphs and adversarial testing to assess both intent classification and action execution. Testing across 27 LLMs reveals a critical 'Execution Gap' where models correctly identify user intents but fail to perform appropriate follow-up actions, plus an 'Empathy Resilience' phenomenon where models maintain polite facades despite underlying logical failures.

AI · Neutral · arXiv – CS AI · Apr 13 · 7/10

PilotBench: A Benchmark for General Aviation Agents with Safety Constraints

Researchers introduce PilotBench, a benchmark evaluating large language models on safety-critical aviation tasks using 708 real-world flight trajectories. The study reveals a fundamental trade-off: traditional forecasters achieve superior numerical precision (7.01 MAE) while LLMs provide better instruction-following (86-89%) but with significantly degraded prediction accuracy (11-14 MAE), exposing brittleness in implicit physics reasoning for embodied AI applications.

AI · Bullish · arXiv – CS AI · Apr 13 · 7/10

Advantage-Guided Diffusion for Model-Based Reinforcement Learning

Researchers propose Advantage-Guided Diffusion (AGD-MBRL), a novel approach that improves model-based reinforcement learning by using advantage estimates to guide diffusion models during trajectory generation. The method addresses the short-horizon myopia problem in existing diffusion-based world models and demonstrates 2x performance improvements over current baselines on MuJoCo control tasks.

AI · Bullish · arXiv – CS AI · Apr 13 · 7/10

OpenKedge: Governing Agentic Mutation with Execution-Bound Safety and Evidence Chains

OpenKedge introduces a protocol that governs AI agent actions through declarative intent proposals and execution contracts rather than allowing autonomous systems to directly mutate state. The system creates cryptographic evidence chains linking intent, policy decisions, and outcomes, enabling deterministic auditability and safer multi-agent coordination at scale.
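An evidence chain of the kind described — each record's hash covering the previous link so that intent, policy decision, and outcome become tamper-evident — can be sketched with a plain hash chain. This is a generic construction under our own field names, not OpenKedge's actual record format:

```python
import hashlib
import json

def append_record(chain, record):
    """Append a record whose hash covers the previous link's hash,
    making the intent -> policy -> outcome sequence tamper-evident."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"prev": prev, **record}, sort_keys=True)
    chain.append({**record, "prev": prev,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})
    return chain

def verify(chain):
    """Recompute every link; any edited field breaks the chain."""
    prev = "0" * 64
    for link in chain:
        body = {k: v for k, v in link.items() if k != "hash"}
        payload = json.dumps({**body, "prev": prev}, sort_keys=True)
        if link["prev"] != prev or \
           hashlib.sha256(payload.encode()).hexdigest() != link["hash"]:
            return False
        prev = link["hash"]
    return True

chain = []
append_record(chain, {"type": "intent", "action": "transfer", "amount": 10})
append_record(chain, {"type": "policy", "decision": "allow", "rule": "limit<100"})
append_record(chain, {"type": "outcome", "status": "committed"})
ok = verify(chain)
chain[1]["decision"] = "deny"   # tampering with the policy record
tampered_ok = verify(chain)
```

Canonical JSON (`sort_keys=True`) matters here: without a deterministic serialization, honest records could fail verification simply because key order changed between writer and verifier.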

AI · Bullish · arXiv – CS AI · Apr 13 · 7/10

From Business Events to Auditable Decisions: Ontology-Governed Graph Simulation for Enterprise AI

Researchers introduce LOM-action, an enterprise AI system that grounds LLM-based decisions in business ontologies and event-driven simulations rather than unrestricted knowledge spaces. The approach achieves 93.82% accuracy with 98.74% F1 scores on decision chains, vastly outperforming larger models like DeepSeek-V3.2, while maintaining complete audit trails for enterprise compliance.

AI · Bullish · arXiv – CS AI · Apr 13 · 7/10

The Two-Stage Decision-Sampling Hypothesis: Understanding the Emergence of Self-Reflection in RL-Trained LLMs

Researchers introduce the Two-Stage Decision-Sampling Hypothesis to explain how reinforcement learning enables self-reflection capabilities in large language models, demonstrating that RL's superior performance stems from improved decision-making rather than generation quality. The theory shows that reward gradients distribute asymmetrically across policy components, explaining why RL succeeds where supervised fine-tuning fails.

AI · Bullish · arXiv – CS AI · Apr 13 · 7/10

SkillFactory: Self-Distillation For Learning Cognitive Behaviors

SkillFactory is a novel fine-tuning method that enables language models to learn cognitive behaviors like verification and backtracking without requiring distillation from stronger models. The approach uses self-rearranged training samples during supervised fine-tuning to prime models for subsequent reinforcement learning, resulting in better generalization and robustness.

AI · Bearish · arXiv – CS AI · Apr 13 · 7/10

On the Limits of Layer Pruning for Generative Reasoning in Large Language Models

Research demonstrates that layer pruning—a compression technique for large language models—effectively reduces model size while maintaining classification performance, but critically fails to preserve generative reasoning capabilities like arithmetic and code generation. Even with extensive post-training on 400B tokens, models cannot recover lost reasoning abilities, revealing fundamental limitations in current compression approaches.
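Layer pruning in this setting typically ranks residual blocks by how little they change their input and drops the most redundant ones. A toy sketch of that criterion on a stack of random residual blocks — a generic illustration of the technique, not this paper's pipeline or ranking rule:

```python
import numpy as np

def block(x, W):
    """Residual block: x + ReLU(x @ W)."""
    return x + np.maximum(x @ W, 0.0)

def prune_by_similarity(x, weights, n_drop):
    """Rank layers by cosine similarity between block input and output
    and drop the n_drop most input-like (least effective) layers."""
    h, sims = x, []
    for W in weights:
        out = block(h, W)
        sims.append((h @ out) / (np.linalg.norm(h) * np.linalg.norm(out)))
        h = out
    drop = set(np.argsort(sims)[-n_drop:])   # highest similarity = most redundant
    return [W for i, W in enumerate(weights) if i not in drop]

def forward(x, ws):
    for W in ws:
        x = block(x, W)
    return x

rng = np.random.default_rng(0)
weights = [rng.normal(scale=0.05, size=(16, 16)) for _ in range(12)]
x = rng.normal(size=16)
kept = prune_by_similarity(x, weights, n_drop=4)
full, pruned = forward(x, weights), forward(x, kept)
```

The summary's finding is precisely that this kind of similarity-based criterion can leave classification-style outputs nearly intact while the multi-step transformations needed for generative reasoning do not survive the removed layers.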

AI · Bullish · arXiv – CS AI · Apr 13 · 7/10

Commanding Humanoid by Free-form Language: A Large Language Action Model with Unified Motion Vocabulary

Researchers introduce Humanoid-LLA, a Large Language Action Model enabling humanoid robots to execute complex physical tasks from natural language commands. The system combines a unified motion vocabulary, physics-aware controller, and reinforcement learning to achieve both language understanding and real-world robot control, demonstrating improved performance on Unitree G1 and Booster T1 humanoids.

AI · Bearish · arXiv – CS AI · Apr 13 · 7/10

Semantic Intent Fragmentation: A Single-Shot Compositional Attack on Multi-Agent AI Pipelines

Researchers demonstrate Semantic Intent Fragmentation (SIF), a novel attack on LLM orchestration systems where a single legitimate request causes AI systems to decompose tasks into individually benign subtasks that collectively violate security policies. The attack succeeds in 71% of enterprise scenarios while bypassing existing safety mechanisms, though plan-level information-flow tracking can detect all attacks before execution.
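The defense mentioned at the end — plan-level information-flow tracking — amounts to propagating data labels across subtask dependencies before execution and rejecting plans where sensitive data can reach a blocked sink, even though every individual step looks benign. A hypothetical sketch with invented field names (`reads`, `sink`, `labels`), not the paper's implementation:

```python
def flows(plan):
    """Propagate source labels through the plan's data dependencies."""
    labels = {}
    for step in plan:
        inherited = set(step.get("labels", set()))
        for dep in step.get("reads", []):
            inherited |= labels.get(dep, set())
        labels[step["id"]] = inherited
    return labels

def violates(plan, policy):
    """Flag the plan if any step sends sensitive-labeled data to a blocked sink."""
    labels = flows(plan)
    return any(step.get("sink") in policy["blocked_sinks"]
               and labels[step["id"]] & policy["sensitive"]
               for step in plan)

policy = {"sensitive": {"customer_pii"}, "blocked_sinks": {"external_email"}}
plan = [
    {"id": "t1", "labels": {"customer_pii"}},                 # benign: read a CRM record
    {"id": "t2", "reads": ["t1"]},                            # benign: summarize it
    {"id": "t3", "reads": ["t2"], "sink": "external_email"},  # benign in isolation
]
flagged = violates(plan, policy)
```

No single step violates policy, but the composed flow (PII → summary → external email) does — which is why the check has to run over the whole plan before any step executes.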

AI · Bullish · arXiv – CS AI · Apr 13 · 7/10

Unmasking Puppeteers: Leveraging Biometric Leakage to Disarm Impersonation in AI-based Videoconferencing

Researchers have developed a biometric leakage defense system that detects impersonation attacks in AI-based videoconferencing by analyzing pose-expression latents rather than reconstructed video. The method uses a contrastive encoder to isolate persistent identity cues, successfully flagging identity swaps in real-time across multiple talking-head generation models.

AI · Bullish · arXiv – CS AI · Apr 13 · 7/10

Webscale-RL: Automated Data Pipeline for Scaling RL Data to Pretraining Levels

Researchers introduce Webscale-RL, a data pipeline that converts large-scale pre-training documents into 1.2 million diverse question-answer pairs for reinforcement learning training. The approach enables RL models to reach pre-training-level performance with up to 100x fewer tokens, addressing a critical bottleneck in scaling RL data and potentially advancing more efficient language model development.

AI · Bullish · arXiv – CS AI · Apr 13 · 7/10

Revitalizing Black-Box Interpretability: Actionable Interpretability for LLMs via Proxy Models

Researchers propose a cost-effective proxy model framework that uses smaller, efficient models to approximate the interpretability explanations of expensive Large Language Models (LLMs), achieving over 90% fidelity at just 11% of computational cost. The framework includes verification mechanisms and demonstrates practical applications in prompt compression and data cleaning, making interpretability tools viable for real-world LLM development.

AI · Bullish · arXiv – CS AI · Apr 13 · 7/10

Listener-Rewarded Thinking in VLMs for Image Preferences

Researchers introduce a listener-augmented reinforcement learning framework for training vision-language models to better align with human visual preferences. By using an independent frozen model to evaluate and validate reasoning chains, the approach achieves 67.4% accuracy on ImageReward benchmarks and demonstrates significant improvements in out-of-distribution generalization.

🏢 Hugging Face
© 2026 y0.exchange