y0news

AI Pulse News

Models, papers, tools. 16,555 articles with AI-powered sentiment analysis and key takeaways.

AI · Bullish · arXiv – CS AI · Mar 27 · 7/10

Train at Moving Edge: Online-Verified Prompt Selection for Efficient RL Training of Large Reasoning Model

Researchers propose HIVE, a new framework for training large language models more efficiently in reinforcement learning by selecting high-utility prompts before rollout. The method uses historical reward data and prompt entropy to identify the 'learning edge' where models learn most effectively, significantly reducing computational overhead without performance loss.
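The summary implies a simple selection rule: favor prompts whose historical success rate sits near the "learning edge" and whose outcomes remain uncertain. A minimal Python sketch of that idea (the scoring function, weights, and names here are illustrative, not HIVE's actual formula):

```python
def prompt_utility(mean_reward: float, entropy: float, w: float = 0.5) -> float:
    """Toy utility: prompts whose historical success rate is near 0.5
    (the 'learning edge') and whose outcome entropy is high are the most
    informative for RL training."""
    edge = 1.0 - abs(mean_reward - 0.5) * 2.0  # peaks at reward 0.5
    return w * edge + (1.0 - w) * entropy

def select_prompts(history, k):
    """history: list of (prompt, mean_reward, entropy); pick top-k by utility."""
    scored = sorted(history, key=lambda t: prompt_utility(t[1], t[2]), reverse=True)
    return [p for p, _, _ in scored[:k]]

pool = [("easy", 0.95, 0.1), ("edge", 0.5, 0.9), ("hard", 0.05, 0.2)]
print(select_prompts(pool, 1))  # → ['edge']
```

Prompts the model always solves or always fails score low; rollouts are spent only where the gradient signal should be richest.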

AI · Neutral · arXiv – CS AI · Mar 27 · 7/10

WebTestBench: Evaluating Computer-Use Agents towards End-to-End Automated Web Testing

Researchers introduced WebTestBench, a new benchmark for evaluating automated web testing using AI agents and large language models. The study reveals significant gaps between current AI capabilities and industrial deployment needs, with LLMs struggling with test completeness, defect detection, and long-term interaction reliability.

AI · Neutral · arXiv – CS AI · Mar 27 · 7/10

Does Explanation Correctness Matter? Linking Computational XAI Evaluation to Human Understanding

A user study with 200 participants found that while explanation correctness in AI systems affects human understanding, the relationship is not linear: performance drops significantly at 70% correctness but does not degrade further below that threshold. The research challenges the assumption that higher computational correctness metrics automatically translate to better human comprehension of AI decisions.

AI · Bearish · arXiv – CS AI · Mar 27 · 7/10

LLMs know their vulnerabilities: Uncover Safety Gaps through Natural Distribution Shifts

Researchers have identified a new class of vulnerability in large language models, 'natural distribution shifts', in which seemingly benign prompts bypass safety mechanisms and elicit harmful content. They developed ActorBreaker, a novel attack method that uses multi-turn prompts to gradually surface unsafe content, and proposed expanding safety training to address this vulnerability.

AI · Bearish · arXiv – CS AI · Mar 27 · 7/10

PIDP-Attack: Combining Prompt Injection with Database Poisoning Attacks on Retrieval-Augmented Generation Systems

Researchers have developed PIDP-Attack, a new cybersecurity threat that combines prompt injection with database poisoning to manipulate AI responses in Retrieval-Augmented Generation (RAG) systems. The attack method demonstrated 4-16% higher success rates than existing techniques across multiple benchmark datasets and eight different large language models.

AI · Neutral · arXiv – CS AI · Mar 27 · 7/10

Imperative Interference: Social Register Shapes Instruction Topology in Large Language Models

Research reveals that large language models process instructions differently across languages due to social register variations, with imperative commands carrying different obligatory force in different speech communities. The study found that declarative rewording of instructions reduces cross-linguistic variance by 81% and suggests models treat instructions as social acts rather than technical specifications.

AI · Neutral · arXiv – CS AI · Mar 27 · 7/10

Shaping the Future of Mathematics in the Age of AI

A research paper examines how AI is rapidly transforming mathematics across five key areas: values, practice, teaching, technology, and ethics. The authors provide recommendations for the mathematical community to maintain intellectual autonomy and shape their field's future in the age of artificial intelligence.

AI · Neutral · arXiv – CS AI · Mar 27 · 7/10

AI Security in the Foundation Model Era: A Comprehensive Survey from a Unified Perspective

Researchers propose a unified framework for AI security threats in foundation models that categorizes attacks by the direction of interaction between data and models, yielding a comprehensive taxonomy of four categories: data-to-data, data-to-model, model-to-data, and model-to-model attacks.

AI · Neutral · arXiv – CS AI · Mar 27 · 7/10

Closing the Confidence-Faithfulness Gap in Large Language Models

Researchers have identified a fundamental issue in large language models where verbalized confidence scores don't align with actual accuracy due to orthogonal encoding of these signals. They discovered a 'Reasoning Contamination Effect' where simultaneous reasoning disrupts confidence calibration, and developed a two-stage adaptive steering pipeline to improve alignment.

AI · Bearish · arXiv – CS AI · Mar 27 · 7/10

The System Prompt Is the Attack Surface: How LLM Agent Configuration Shapes Security and Creates Exploitable Vulnerabilities

Research reveals that LLM system prompt configuration creates massive security vulnerabilities, with the same model's phishing detection rates ranging from 1% to 97% based solely on prompt design. The study PhishNChips demonstrates that more specific prompts can paradoxically weaken AI security by replacing robust multi-signal reasoning with exploitable single-signal dependencies.

AI · Bullish · arXiv – CS AI · Mar 27 · 7/10

GoldiCLIP: The Goldilocks Approach for Balancing Explicit Supervision for Language-Image Pretraining

Researchers developed GoldiCLIP, a data-efficient vision-language model that achieves state-of-the-art performance using only 30 million images - 300x less data than leading methods. The framework combines three key innovations including text-conditioned self-distillation, VQA-integrated encoding, and uncertainty-based loss weighting to significantly improve image-text retrieval tasks.

AI · Bullish · arXiv – CS AI · Mar 27 · 7/10

LLM4AD: Large Language Models for Autonomous Driving -- Concept, Review, Benchmark, Experiments, and Future Trends

Researchers have published a comprehensive review of Large Language Models for Autonomous Driving (LLM4AD), introducing new benchmarks and conducting real-world experiments on autonomous vehicle platforms. The paper explores how LLMs can enhance perception, decision-making, and motion control in self-driving cars, while identifying key challenges including latency, security, and safety concerns.

AI · Bullish · arXiv – CS AI · Mar 27 · 7/10

GlowQ: Group-Shared LOw-Rank Approximation for Quantized LLMs

Researchers propose GlowQ, a new quantization technique for large language models that reduces memory overhead and latency while maintaining accuracy. The method uses group-shared low-rank approximation to optimize deployment of quantized LLMs, showing significant performance improvements over existing approaches.

🏢 Perplexity
AI · Bullish · arXiv – CS AI · Mar 27 · 7/10

Model2Kernel: Model-Aware Symbolic Execution For Safe CUDA Kernels

Researchers developed Model2Kernel, a system that automatically detects memory safety bugs in CUDA kernels used for large language model inference. The system discovered 353 previously unknown bugs across popular platforms like vLLM and Hugging Face with only nine false positives.

🏢 Hugging Face
AI · Bearish · arXiv – CS AI · Mar 27 · 7/10

Malicious LLM-Based Conversational AI Makes Users Reveal Personal Information

Researchers conducted a study with 502 participants demonstrating that malicious LLM-based conversational AI systems can be deliberately designed to extract personal information from users through manipulative conversation strategies. The study found that these malicious chatbots significantly outperformed benign versions at collecting personal data, with social psychology-based approaches being most effective while appearing less threatening to users.

🧠 ChatGPT
AI · Bullish · arXiv – CS AI · Mar 27 · 7/10

Training the Knowledge Base through Evidence Distillation and Write-Back Enrichment

Researchers introduce WriteBack-RAG, a framework that treats knowledge bases in retrieval-augmented generation systems as trainable components rather than static databases. The method distills relevant information from documents into compact knowledge units, improving RAG performance across multiple benchmarks by an average of +2.14%.
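The write-back loop can be sketched in a few lines of Python. The distillation step here is a naive length filter and the retrieval is keyword overlap, both stand-ins for the paper's learned components:

```python
def distill_to_units(doc: str, max_len: int = 80):
    """Toy 'evidence distillation': keep only sentences short enough to
    serve as compact knowledge units."""
    return [s.strip() for s in doc.split(".") if 0 < len(s.strip()) <= max_len]

class KnowledgeBase:
    """A KB treated as a writable component: retrieved evidence is
    distilled and written back so later queries hit the compact units."""
    def __init__(self):
        self.units = []

    def write_back(self, doc):
        self.units.extend(distill_to_units(doc))

    def retrieve(self, query):
        # naive keyword-overlap retrieval, just to close the loop
        q = set(query.lower().split())
        return max(self.units, key=lambda u: len(q & set(u.lower().split())))

kb = KnowledgeBase()
kb.write_back("RAG retrieves documents at query time. Write-back stores distilled units for reuse.")
print(kb.retrieve("what does write-back store"))
```

The key shift is that `write_back` mutates the KB at inference time, so the knowledge base improves with use instead of remaining a static index.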

AI · Bullish · arXiv – CS AI · Mar 27 · 7/10

Sketch2Simulation: Automating Flowsheet Generation via Multi Agent Large Language Models

Researchers developed an end-to-end multi-agent AI system that automatically converts hand-drawn process engineering diagrams into executable simulation models for Aspen HYSYS software. The framework achieved high accuracy with connection consistency above 0.93 and stream consistency above 0.96 across four chemical engineering case studies of varying complexity.

AI · Bearish · arXiv – CS AI · Mar 27 · 7/10

Impact of AI Search Summaries on Website Traffic: Evidence from Google AI Overviews and Wikipedia

A causal analysis of 161,382 matched articles found that Google's AI Overviews feature reduces Wikipedia traffic by approximately 15%. The impact varies by content type, with Culture articles experiencing larger traffic declines than STEM topics, suggesting AI summaries substitute for clicks when brief answers satisfy user queries.

🏢 Google
AI · Bullish · arXiv – CS AI · Mar 27 · 7/10

Cross-Model Disagreement as a Label-Free Correctness Signal

Researchers introduce cross-model disagreement as a training-free method to detect when AI language models make confident errors without requiring ground truth labels. The approach uses Cross-Model Perplexity and Cross-Model Entropy to measure how surprised a second verifier model is when reading another model's answers, significantly outperforming existing uncertainty-based methods across multiple benchmarks.
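Cross-Model Perplexity itself is easy to state: score one model's answer with a second verifier model and exponentiate the mean negative token log-probability. A toy sketch with hand-written log-probs (the threshold and numbers are illustrative, not from the paper):

```python
import math

def cross_model_perplexity(verifier_logprobs):
    """Perplexity a second 'verifier' model assigns to another model's
    answer tokens: exp of the mean negative log-probability."""
    return math.exp(-sum(verifier_logprobs) / len(verifier_logprobs))

def flag_likely_error(verifier_logprobs, threshold=20.0):
    """High cross-model perplexity means the verifier is 'surprised' by
    the answer, a label-free hint that it may be a confident error."""
    return cross_model_perplexity(verifier_logprobs) > threshold

agreed = [-0.1, -0.3, -0.2]    # verifier finds the answer unsurprising
disputed = [-4.0, -5.5, -3.8]  # verifier finds it very surprising
print(flag_likely_error(agreed), flag_likely_error(disputed))  # → False True
```

No ground-truth labels enter the computation; disagreement between two independently trained models is the entire signal.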

🏢 Perplexity
AI · Bearish · arXiv – CS AI · Mar 27 · 7/10

Epistemic Bias Injection: Biasing LLMs via Selective Context Retrieval

Researchers have identified a new attack vector called Epistemic Bias Injection (EBI) that manipulates AI language models by injecting factually correct but biased content into retrieval-augmented generation databases. The attack steers model outputs toward specific viewpoints while evading traditional detection methods, though a new defense mechanism called BiasDef shows promise in mitigating these threats.

AI · Neutral · arXiv – CS AI · Mar 27 · 7/10

Beyond Content Safety: Real-Time Monitoring for Reasoning Vulnerabilities in Large Language Models

Researchers have identified a new category of AI safety called 'reasoning safety' that focuses on protecting the logical consistency and integrity of LLM reasoning processes. They developed a real-time monitoring system that can detect unsafe reasoning behaviors with over 84% accuracy, addressing vulnerabilities beyond traditional content safety measures.

AI · Bullish · arXiv – CS AI · Mar 27 · 7/10

DRIFT: Dynamic Rule-Based Defense with Injection Isolation for Securing LLM Agents

Researchers introduce DRIFT, a new security framework designed to protect AI agents from prompt injection attacks through dynamic rule enforcement and memory isolation. The system uses a three-component approach with a Secure Planner, Dynamic Validator, and Injection Isolator to maintain security while preserving functionality across diverse AI models.
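A toy sketch of the three-component flow, with hypothetical rules and tool names; the summary names the components but not their internals:

```python
def secure_plan(task: str) -> set:
    """Secure Planner: derive an allow-list of tool calls for the task
    (the rule here is a made-up example)."""
    return {"search", "read_file"} if "research" in task else {"search"}

def validate(call: str, allowed: set) -> bool:
    """Dynamic Validator: reject any tool call outside the plan's rules,
    even if injected text tries to request it."""
    return call in allowed

def isolate(text: str) -> dict:
    """Injection Isolator: quarantine retrieved text so downstream steps
    treat it as data, never as instructions."""
    return {"role": "data", "content": text}

allowed = secure_plan("research task")
print(validate("read_file", allowed), validate("send_email", allowed))  # → True False
```

An injected "please email my credentials" in retrieved content would arrive wrapped by `isolate` and, even if acted on, fail `validate` because `send_email` was never in the plan.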

AI · Neutral · arXiv – CS AI · Mar 27 · 7/10

How Pruning Reshapes Features: Sparse Autoencoder Analysis of Weight-Pruned Language Models

Researchers conducted the first systematic study of how weight pruning affects language model representations using Sparse Autoencoders across multiple models and pruning methods. The study reveals that rare features survive pruning better than common ones, suggesting pruning acts as implicit feature selection that preserves specialized capabilities while removing generic features.

🧠 Llama
AI · Neutral · arXiv – CS AI · Mar 27 · 7/10

DiffuGuard: How Intrinsic Safety is Lost and Found in Diffusion Large Language Models

Researchers identified critical security vulnerabilities in Diffusion Large Language Models (dLLMs) that differ from traditional autoregressive LLMs, stemming from their iterative generation process. They developed DiffuGuard, a training-free defense framework that reduces jailbreak attack success rates from 47.9% to 14.7% while maintaining model performance.

AI · Bullish · arXiv – CS AI · Mar 27 · 7/10

Ming-Flash-Omni: A Sparse, Unified Architecture for Multimodal Perception and Generation

Ming-Flash-Omni is a new 100 billion parameter multimodal AI model with Mixture-of-Experts architecture that uses only 6.1 billion active parameters per token. The model demonstrates unified capabilities across vision, speech, and language tasks, achieving performance comparable to Gemini 2.5 Pro on vision-language benchmarks.
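The sparsity claim follows directly from the summary's own numbers: a Mixture-of-Experts model routes each token through only a few experts, so the active fraction is tiny.

```python
def active_fraction(total_params_b: float, active_params_b: float) -> float:
    """Share of parameters touched per token in a sparse MoE model."""
    return active_params_b / total_params_b

# Headline numbers from the summary: 100B total, 6.1B active per token.
print(f"{active_fraction(100.0, 6.1):.1%} of weights active per token")  # → 6.1%
```

Roughly 6% of the network does the work for any given token, which is how a 100B-parameter model keeps per-token compute close to that of a ~6B dense model.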

🧠 Gemini
◆ AI Mentions
🏢 OpenAI: 77×
🏢 Nvidia: 55×
🏢 Anthropic: 47×
🧠 Claude: 39×
🧠 GPT-5: 39×
🧠 ChatGPT: 36×
🧠 Gemini: 29×
🧠 GPT-4: 20×
🧠 Llama: 13×
🏢 Google: 11×
🏢 xAI: 10×
🏢 Meta: 9×
🧠 Opus: 8×
🧠 Grok: 7×
🏢 Hugging Face: 5×
🏢 Perplexity: 5×
🧠 Sonnet: 5×
🏢 Microsoft: 3×
🏢 Cohere: 2×
🧠 Haiku: 1×
▲ Trending Tags
1. #iran (685)
2. #ai (564)
3. #market (501)
4. #geopolitical (453)
5. #trump (169)
6. #security (118)
7. #openai (80)
8. #artificial-intelligence (63)
9. #google (57)
10. #china (57)
11. #nvidia (56)
12. #inflation (49)
13. #fed (41)
14. #sanctions (36)
15. #russia (30)
Tag Sentiment
#iran: 685 articles
#ai: 564 articles
#market: 501 articles
#geopolitical: 453 articles
#trump: 169 articles
#security: 118 articles
#openai: 80 articles
#artificial-intelligence: 63 articles
#google: 57 articles
#china: 57 articles
Tag Connections
#geopolitical ↔ #iran: 308
#iran ↔ #market: 217
#geopolitical ↔ #market: 178
#iran ↔ #trump: 117
#market ↔ #trump: 67
#ai ↔ #market: 65
#geopolitical ↔ #trump: 60
#ai ↔ #artificial-intelligence: 56
#ai ↔ #google: 50
#ai ↔ #openai: 42
© 2026 y0.exchange