y0news

AI Pulse News

Models, papers, tools. 16,035 articles with AI-powered sentiment analysis and key takeaways.

AI · Neutral · arXiv – CS AI · Mar 27 · 7/10

How Pruning Reshapes Features: Sparse Autoencoder Analysis of Weight-Pruned Language Models

Researchers conducted the first systematic study of how weight pruning affects language model representations using Sparse Autoencoders across multiple models and pruning methods. The study reveals that rare features survive pruning better than common ones, suggesting pruning acts as implicit feature selection that preserves specialized capabilities while removing generic features.

🧠 Llama
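As a rough illustration of the kind of analysis described (not the paper's actual pipeline), one can compare how often each SAE feature fires before and after pruning and split the survival ratio by feature rarity. Everything here is synthetic: the activation matrices, firing probabilities, and the median split are stand-ins for activations from real trained SAEs.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tokens, n_features = 10_000, 512

# Hypothetical binary SAE feature activations for a base and a
# weight-pruned model; a real study would use SAEs trained on each
# model's activations. Each feature fires with its own base rate.
fire_prob = rng.random(n_features) * 0.2
base = rng.random((n_tokens, n_features)) < fire_prob
pruned = base & (rng.random((n_tokens, n_features)) < 0.9)

base_rate = base.mean(axis=0)      # how often each feature fires
pruned_rate = pruned.mean(axis=0)
survival = np.divide(pruned_rate, base_rate,
                     out=np.zeros_like(pruned_rate),
                     where=base_rate > 0)

# Median split: mean survival of rare vs common features.
rare = survival[base_rate < np.median(base_rate)].mean()
common = survival[base_rate >= np.median(base_rate)].mean()
print(f"rare: {rare:.2f}, common: {common:.2f}")
```

On real models the paper's claim is that the rare-feature survival ratio comes out higher; the synthetic data above only demonstrates the bookkeeping, not that result.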
AI · Bearish · arXiv – CS AI · Mar 27 · 7/10

Shape and Substance: Dual-Layer Side-Channel Attacks on Local Vision-Language Models

Researchers discovered significant privacy vulnerabilities in local Vision-Language Models that use Dynamic High-Resolution preprocessing. The dual-layer attack framework can exploit execution-time variations and cache patterns to infer sensitive information about processed images, even when models run locally for privacy.

AI · Bullish · arXiv – CS AI · Mar 27 · 7/10

GlowQ: Group-Shared LOw-Rank Approximation for Quantized LLMs

Researchers propose GlowQ, a new quantization technique for large language models that reduces memory overhead and latency while maintaining accuracy. The method uses group-shared low-rank approximation to optimize deployment of quantized LLMs, showing significant performance improvements over existing approaches.

🏢 Perplexity
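A minimal sketch of the general idea, assuming nothing about GlowQ's actual algorithm: quantize each column group of a weight matrix to low precision, then fit one low-rank SVD correction to that group's quantization residual, so the correction factors are shared across the group. Group size, rank, and bit width below are illustrative placeholders.

```python
import numpy as np

def quantize_group_lowrank(W, group_size=64, rank=4, bits=4):
    """Illustrative group quantization with a shared low-rank
    correction fitted to each group's quantization residual."""
    qmax = 2 ** (bits - 1) - 1
    out = np.empty_like(W)
    for g in range(0, W.shape[1], group_size):
        block = W[:, g:g + group_size]
        scale = max(np.abs(block).max() / qmax, 1e-8)
        deq = np.clip(np.round(block / scale), -qmax - 1, qmax) * scale
        # Best rank-`rank` approximation of the residual, shared by
        # every column in the group (SVD gives the optimal factors).
        U, s, Vt = np.linalg.svd(block - deq, full_matrices=False)
        out[:, g:g + group_size] = deq + (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return out

rng = np.random.default_rng(1)
W = rng.standard_normal((256, 256)).astype(np.float32)
err_plain = np.linalg.norm(W - quantize_group_lowrank(W, rank=0))
err_lr = np.linalg.norm(W - quantize_group_lowrank(W, rank=4))
print(err_lr < err_plain)  # the correction can only shrink the residual
```

Because the SVD correction is the optimal low-rank fit to the residual, reconstruction error never increases relative to plain group quantization; the memory trade-off is the shared factors versus per-column corrections.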
AI · Bullish · arXiv – CS AI · Mar 27 · 7/10

AD-CARE: A Guideline-grounded, Modality-agnostic LLM Agent for Real-world Alzheimer's Disease Diagnosis with Multi-cohort Assessment, Fairness Analysis, and Reader Study

Researchers developed AD-CARE, an AI agent that uses large language models to diagnose Alzheimer's disease from incomplete medical data across multiple modalities. The system achieved 84.9% diagnostic accuracy across 10,303 cases and improved physician decision-making speed and accuracy in clinical studies.

AI · Bullish · arXiv – CS AI · Mar 27 · 7/10

LLM4AD: Large Language Models for Autonomous Driving -- Concept, Review, Benchmark, Experiments, and Future Trends

Researchers have published a comprehensive review of Large Language Models for Autonomous Driving (LLM4AD), introducing new benchmarks and conducting real-world experiments on autonomous vehicle platforms. The paper explores how LLMs can enhance perception, decision-making, and motion control in self-driving cars, while identifying key challenges including latency, security, and safety concerns.

AI · Bullish · arXiv – CS AI · Mar 27 · 7/10

Sketch2Simulation: Automating Flowsheet Generation via Multi Agent Large Language Models

Researchers developed an end-to-end multi-agent AI system that automatically converts hand-drawn process engineering diagrams into executable simulation models for Aspen HYSYS software. The framework achieved high accuracy with connection consistency above 0.93 and stream consistency above 0.96 across four chemical engineering case studies of varying complexity.

AI · Bullish · arXiv – CS AI · Mar 27 · 7/10

GoldiCLIP: The Goldilocks Approach for Balancing Explicit Supervision for Language-Image Pretraining

Researchers developed GoldiCLIP, a data-efficient vision-language model that achieves state-of-the-art performance using only 30 million images, 300x less data than leading methods. The framework combines three key innovations, including text-conditioned self-distillation, VQA-integrated encoding, and uncertainty-based loss weighting, to significantly improve image-text retrieval tasks.

AI · Bullish · arXiv – CS AI · Mar 27 · 7/10

A Wireless World Model for AI-Native 6G Networks

Researchers introduce the Wireless World Model (WWM), a multi-modal AI framework for 6G networks that predicts wireless channel evolution by understanding electromagnetic wave propagation through 3D geometry. The model demonstrates superior performance across five downstream tasks and real-world measurements, outperforming existing foundation models.

AI · Bearish · arXiv – CS AI · Mar 27 · 7/10

A Decade-Scale Benchmark Evaluating LLMs' Clinical Practice Guidelines Detection and Adherence in Multi-turn Conversations

Researchers introduced CPGBench, a benchmark evaluating how well Large Language Models detect and follow clinical practice guidelines in healthcare conversations. The study found that while LLMs can detect 71-90% of clinical recommendations, they only adhere to guidelines 22-63% of the time, revealing significant gaps for safe medical deployment.

AI · Neutral · arXiv – CS AI · Mar 27 · 7/10

WebTestBench: Evaluating Computer-Use Agents towards End-to-End Automated Web Testing

Researchers introduced WebTestBench, a new benchmark for evaluating automated web testing using AI agents and large language models. The study reveals significant gaps between current AI capabilities and industrial deployment needs, with LLMs struggling with test completeness, defect detection, and long-term interaction reliability.

AI · Bullish · arXiv – CS AI · Mar 27 · 7/10

Train at Moving Edge: Online-Verified Prompt Selection for Efficient RL Training of Large Reasoning Model

Researchers propose HIVE, a new framework for training large language models more efficiently in reinforcement learning by selecting high-utility prompts before rollout. The method uses historical reward data and prompt entropy to identify the 'learning edge' where models learn most effectively, significantly reducing computational overhead without performance loss.
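A toy sketch of prompt selection on a "learning edge", using an invented scoring rule rather than HIVE's actual method: favor prompts whose historical success rate is near 50% (neither trivially solved nor hopeless) and whose rollouts are high-entropy. All names and numbers below are hypothetical.

```python
def learning_edge_score(success_rate, entropy, target=0.5):
    """Hypothetical utility score (not HIVE's formula): prompts near
    the target success rate with high rollout entropy score highest."""
    return (1.0 - abs(success_rate - target) / target) * entropy

# Toy history per prompt: (historical success rate, mean rollout entropy).
history = {
    "p1": (0.95, 0.2),  # nearly always solved -> little left to learn
    "p2": (0.50, 1.4),  # on the learning edge -> highest utility
    "p3": (0.02, 1.1),  # nearly always failed -> reward too sparse
}
ranked = sorted(history, key=lambda p: learning_edge_score(*history[p]),
                reverse=True)
print(ranked[0])  # "p2" ranks first under these toy numbers
```

Selecting only the top-scoring prompts before rollout is what saves compute: low-utility prompts are skipped rather than rolled out and discarded after the fact.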

AI · Neutral · arXiv – CS AI · Mar 27 · 7/10

Does Explanation Correctness Matter? Linking Computational XAI Evaluation to Human Understanding

A user study with 200 participants found that while explanation correctness in AI systems affects human understanding, the relationship is not linear: performance drops significantly at 70% correctness but does not degrade further below that threshold. The research challenges the assumption that higher computational correctness metrics automatically translate to better human comprehension of AI decisions.

AI · Neutral · arXiv – CS AI · Mar 27 · 7/10

CRAFT: Grounded Multi-Agent Coordination Under Partial Information

Researchers introduce CRAFT, a multi-agent benchmark that evaluates how well large language models coordinate through natural language communication under partial information constraints. The study finds that stronger reasoning abilities don't reliably translate to better coordination, with smaller open-weight models often matching or outperforming frontier systems in collaborative tasks.

AI · Bearish · arXiv – CS AI · Mar 27 · 7/10

PIDP-Attack: Combining Prompt Injection with Database Poisoning Attacks on Retrieval-Augmented Generation Systems

Researchers have developed PIDP-Attack, a new cybersecurity threat that combines prompt injection with database poisoning to manipulate AI responses in Retrieval-Augmented Generation (RAG) systems. The attack method demonstrated 4-16% higher success rates than existing techniques across multiple benchmark datasets and eight different large language models.

AI · Neutral · arXiv – CS AI · Mar 27 · 7/10

Imperative Interference: Social Register Shapes Instruction Topology in Large Language Models

Research reveals that large language models process instructions differently across languages due to social register variations, with imperative commands carrying different obligatory force in different speech communities. The study found that declarative rewording of instructions reduces cross-linguistic variance by 81% and suggests models treat instructions as social acts rather than technical specifications.

AI · Neutral · arXiv – CS AI · Mar 27 · 7/10

Closing the Confidence-Faithfulness Gap in Large Language Models

Researchers have identified a fundamental issue in large language models where verbalized confidence scores don't align with actual accuracy due to orthogonal encoding of these signals. They discovered a 'Reasoning Contamination Effect' where simultaneous reasoning disrupts confidence calibration, and developed a two-stage adaptive steering pipeline to improve alignment.

AI · Bearish · arXiv – CS AI · Mar 27 · 7/10

The System Prompt Is the Attack Surface: How LLM Agent Configuration Shapes Security and Creates Exploitable Vulnerabilities

Research reveals that LLM system prompt configuration creates massive security vulnerabilities, with the same model's phishing detection rates ranging from 1% to 97% based solely on prompt design. The study PhishNChips demonstrates that more specific prompts can paradoxically weaken AI security by replacing robust multi-signal reasoning with exploitable single-signal dependencies.

AI · Neutral · arXiv – CS AI · Mar 27 · 7/10

AI Security in the Foundation Model Era: A Comprehensive Survey from a Unified Perspective

Researchers propose a unified framework for AI security threats that categorizes attacks based on four directional interactions between data and models. The comprehensive taxonomy addresses vulnerabilities in foundation models through four categories: data-to-data, data-to-model, model-to-data, and model-to-model attacks.

AI · Neutral · arXiv – CS AI · Mar 27 · 7/10

Shaping the Future of Mathematics in the Age of AI

A research paper examines how AI is rapidly transforming mathematics across five key areas: values, practice, teaching, technology, and ethics. The authors provide recommendations for the mathematical community to maintain intellectual autonomy and shape their field's future in the age of artificial intelligence.

AI · Bullish · Fortune Crypto · Mar 27 · 7/10

Exclusive: Anthropic acknowledges testing new AI model representing ‘step change’ in capabilities, after accidental data leak reveals its existence

Anthropic accidentally revealed through a publicly accessible draft blog post that it is testing a new AI model called 'Mythos' which represents a significant advancement in capabilities beyond their current offerings. The company has acknowledged the testing after the accidental data leak exposed the previously undisclosed model's existence.

🏢 Anthropic
AI · Bearish · Fortune Crypto · Mar 27 · 7/10

Exclusive: Anthropic left details of an unreleased model, exclusive CEO retreat, sitting in an unsecured data trove in a significant security lapse

Anthropic experienced a significant security breach where sensitive information including details of unreleased AI models, unpublished blog drafts, and exclusive CEO retreat information was left accessible through an unsecured content management system. This represents a major data security lapse for one of the leading AI companies.

🏢 Anthropic
AI × Crypto · Neutral · DL News · Mar 27 · 7/10

Why the industry’s biggest miner just sold $1bn Bitcoin to chase AI

MARA Holdings, one of the largest Bitcoin mining companies, sold $1 billion worth of Bitcoin to fund a strategic pivot into artificial intelligence operations. This move reflects a broader trend among Bitcoin miners diversifying into AI infrastructure to capitalize on the growing demand for AI computing power.

$BTC
AI · Bullish · TechCrunch – AI · Mar 27 · 7/10

Anthropic wins injunction against Trump administration over Defense Department saga

A federal judge has ruled in favor of AI company Anthropic, ordering the Trump administration to rescind recent restrictions placed on the company related to Defense Department dealings. The injunction represents a legal victory for Anthropic against government regulatory action.

🏢 Anthropic
AI · Bullish · The Verge – AI · Mar 27 · 7/10

Judge sides with Anthropic to temporarily block the Pentagon’s ban

A federal judge granted Anthropic a preliminary injunction against the Pentagon's blacklisting, ruling that the company was designated as a supply chain risk due to its 'hostile manner through the press.' The injunction temporarily blocks the ban while the lawsuit proceeds, with the judge citing potential First Amendment violations.

🏢 Anthropic
General · Neutral · Fortune Crypto · Mar 26 · 🔥 8/10

Trump extends his deadline for Iran to reopen the Strait of Hormuz to April 6

Trump has extended his deadline for Iran to reopen the Strait of Hormuz to April 6, backing down from his previous position. The article characterizes this as Trump 'chickening out' on his earlier threats regarding the crucial oil shipping route.

◆ AI Mentions
🏢 OpenAI 56× · 🏢 Anthropic 56× · 🏢 Nvidia 52× · 🧠 Claude 45× · 🧠 GPT-5 41× · 🧠 Gemini 38× · 🧠 ChatGPT 37× · 🧠 GPT-4 30× · 🧠 Llama 23× · 🏢 Meta 10× · 🧠 Opus 9× · 🏢 Hugging Face 8× · 🏢 Perplexity 7× · 🧠 Sonnet 7× · 🏢 Google 7× · 🧠 Grok 6× · 🏢 xAI 6× · 🏢 Microsoft 4× · 🏢 Cohere 2× · 🧠 Haiku 1×
▲ Trending Tags
1. #iran (589) · 2. #ai (565) · 3. #market (404) · 4. #geopolitical (357) · 5. #trump (154) · 6. #geopolitics (130) · 7. #security (115) · 8. #geopolitical-risk (102) · 9. #market-volatility (74) · 10. #sanctions (74) · 11. #artificial-intelligence (68) · 12. #inflation (66) · 13. #middle-east (64) · 14. #openai (56) · 15. #china (52)
Tag Sentiment
#iran · 589 articles
#ai · 565 articles
#market · 404 articles
#geopolitical · 357 articles
#trump · 154 articles
#geopolitics · 130 articles
#security · 115 articles
#geopolitical-risk · 102 articles
#market-volatility · 74 articles
#sanctions · 74 articles
Tag Connections
#geopolitical ↔ #iran · 236
#iran ↔ #market · 177
#geopolitical ↔ #market · 146
#iran ↔ #trump · 100
#market ↔ #trump · 61
#ai ↔ #artificial-intelligence · 58
#geopolitical ↔ #trump · 52
#ai ↔ #market · 51
#ai ↔ #security · 43
#ai ↔ #google · 39
© 2026 y0.exchange