y0news

AI Pulse News

Models, papers, tools. 18,996 articles with AI-powered sentiment analysis and key takeaways.

AI · Neutral · arXiv – CS AI · Apr 13 · 6/10

PerMix-RLVR: Preserving Persona Expressivity under Verifiable-Reward Alignment

Researchers introduce PerMix-RLVR, a training method that lets large language models retain persona expressivity while preserving performance on objective tasks. The approach targets a fundamental trade-off in reinforcement learning with verifiable rewards (RLVR): aligned models gain accuracy on objective tasks but become less responsive to persona prompts.

AI · Neutral · arXiv – CS AI · Apr 13 · 6/10

ASTRA: Adaptive Semantic Tree Reasoning Architecture for Complex Table Question Answering

Researchers introduce ASTRA, a new architecture designed to improve how large language models process and reason about complex tables through adaptive semantic tree structures. The method combines tree-based navigation with symbolic code execution to achieve state-of-the-art performance on table question-answering benchmarks, addressing fundamental limitations in how tables are currently serialized for LLMs.

AI · Neutral · arXiv – CS AI · Apr 13 · 6/10

Noise-Aware In-Context Learning for Hallucination Mitigation in ALLMs

Researchers propose Noise-Aware In-Context Learning (NAICL), a plug-and-play method to reduce hallucinations in auditory large language models without expensive fine-tuning. The approach uses a noise prior library to guide models toward more conservative outputs, achieving a 37% reduction in hallucination rates while establishing a new benchmark for evaluating audio understanding systems.

AI · Neutral · arXiv – CS AI · Apr 13 · 6/10

Leave My Images Alone: Preventing Multi-Modal Large Language Models from Analyzing Images via Visual Prompt Injection

Researchers introduce ImageProtector, a user-side defense mechanism that embeds imperceptible perturbations into images to prevent multi-modal large language models from analyzing them. When adversaries attempt to extract sensitive information from protected images, MLLMs are induced to refuse analysis, though potential countermeasures exist that may partially mitigate the technique's effectiveness.

AI · Neutral · arXiv – CS AI · Apr 13 · 6/10

CONDESION-BENCH: Conditional Decision-Making of Large Language Models in Compositional Action Space

Researchers introduce CONDESION-BENCH, a new benchmark for evaluating how large language models make decisions in complex, real-world scenarios with compositional actions and conditional constraints. The benchmark addresses limitations in existing decision-making frameworks by incorporating variable-level, contextual, and allocation-level restrictions that better reflect actual decision-making environments.

AI · Bullish · arXiv – CS AI · Apr 13 · 6/10

Learning Vision-Language-Action World Models for Autonomous Driving

Researchers present VLA-World, a vision-language-action model that combines predictive world modeling with reflective reasoning for autonomous driving. The system generates future frames guided by action trajectories and then reasons over imagined scenarios to refine predictions, achieving state-of-the-art performance on planning and future-generation benchmarks.

AI · Neutral · arXiv – CS AI · Apr 13 · 6/10

CLIP-Inspector: Model-Level Backdoor Detection for Prompt-Tuned CLIP via OOD Trigger Inversion

Researchers introduce CLIP-Inspector, a backdoor detection method for prompt-tuned CLIP models that reconstructs hidden triggers using out-of-distribution images to identify if a model has been maliciously compromised. The technique achieves 94% detection accuracy and enables post-hoc model repair, addressing critical security vulnerabilities in outsourced machine learning services.

AI · Bullish · arXiv – CS AI · Apr 13 · 6/10

Interactive ASR: Towards Human-Like Interaction and Semantic Coherence Evaluation for Agentic Speech Recognition

Researchers propose Interactive ASR, a new framework that combines semantic-aware evaluation using LLM-as-a-Judge with multi-turn interactive correction to improve automatic speech recognition beyond traditional word error rate metrics. The approach simulates human-like interaction, enabling iterative refinement of recognition outputs across English, Chinese, and code-switching datasets.

AI · Neutral · arXiv – CS AI · Apr 13 · 6/10

Structuring versus Problematizing: How LLM-based Agents Scaffold Learning in Diagnostic Reasoning

Researchers developed PharmaSim Switch, an AI-powered educational platform that uses large language models to scaffold diagnostic reasoning in pharmacy technician training through two distinct pedagogical approaches: structuring and problematizing. A 63-student experiment found both methods effective, with structuring promoting more accurate participation and problematizing encouraging deeper constructive engagement, suggesting hybrid scaffolding strategies optimize learning outcomes.

AI · Bearish · arXiv – CS AI · Apr 13 · 6/10

GRM: Utility-Aware Jailbreak Attacks on Audio LLMs via Gradient-Ratio Masking

Researchers introduce GRM, a frequency-selective jailbreak framework that exploits vulnerabilities in audio large language models while maintaining utility preservation. By strategically perturbing specific frequency bands rather than entire spectrums, GRM achieves 88.46% jailbreak success rates with better trade-offs between attack effectiveness and transcription quality compared to existing methods.
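GRM's code is not reproduced here, but the core idea of perturbing selected frequency bands while leaving the rest of the spectrum untouched can be illustrated with a short NumPy sketch. Everything specific below — the band limits, the perturbation scale `eps`, and the Gaussian noise model — is an assumption for illustration, not GRM's actual attack or gradient-ratio masking procedure:

```python
import numpy as np

def band_limited_perturbation(audio, sr, band=(2000.0, 4000.0), eps=0.01, seed=0):
    """Add complex noise only to rFFT bins inside `band` (Hz); bins outside
    the band are left untouched, so most transcribable content survives.
    Illustrative only -- a real attack would optimize the perturbation."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sr)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    noise = rng.standard_normal(mask.sum()) + 1j * rng.standard_normal(mask.sum())
    spectrum[mask] += eps * np.max(np.abs(spectrum)) * noise
    return np.fft.irfft(spectrum, n=len(audio))

sr = 16_000
t = np.arange(sr) / sr
clean = np.sin(2 * np.pi * 440 * t)          # 1 s, 440 Hz test tone
adv = band_limited_perturbation(clean, sr)    # tone bin lies outside the band
```

Restricting the edit to a band is what buys the utility trade-off: energy outside the chosen band — here, the 440 Hz tone itself — is numerically unchanged after the round trip.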

AI · Neutral · arXiv – CS AI · Apr 13 · 6/10

Visually-Guided Policy Optimization for Multimodal Reasoning

Researchers propose Visually-Guided Policy Optimization (VGPO), a framework that enhances vision-language models' ability to focus on visual information during reasoning tasks. The method addresses a fundamental limitation where text-dominated VLMs suffer from weak visual attention and temporal visual forgetting, improving performance on multimodal reasoning and visual-dependent tasks.

AI · Bullish · arXiv – CS AI · Apr 13 · 6/10

The AI Codebase Maturity Model: From Assisted Coding to Self-Sustaining Systems

Researchers present the AI Codebase Maturity Model (ACMM), a 5-level framework for systematically evolving codebases from basic AI-assisted coding to self-sustaining systems. Validated through a 4-month case study of KubeStellar Console, the model demonstrates that AI system intelligence depends primarily on surrounding infrastructure—testing, metrics, and feedback loops—rather than the AI model itself.

🏢 Microsoft · 🧠 Claude · 🧠 Copilot
AI · Neutral · arXiv – CS AI · Apr 13 · 6/10

Yes, But Not Always. Generative AI Needs Nuanced Opt-in

A research paper proposes that generative AI licensing requires nuanced, conditional consent rather than binary opt-in/opt-out frameworks. The study argues inference-time verification can better balance rights holders' interests with AI developers' capabilities, using music licensing as a practical case study to demonstrate how contextual consent conditions can be enforced.

AI · Bullish · arXiv – CS AI · Apr 13 · 6/10

RecaLLM: Addressing the Lost-in-Thought Phenomenon with Explicit In-Context Retrieval

Researchers introduce RecaLLM, a post-trained language model that addresses the 'lost-in-thought' phenomenon where retrieval performance degrades during extended reasoning chains. The model interleaves explicit in-context retrieval with reasoning steps and achieves strong performance on long-context benchmarks using training data significantly shorter than existing approaches.

AI · Bullish · arXiv – CS AI · Apr 13 · 6/10

BERT-as-a-Judge: A Robust Alternative to Lexical Methods for Efficient Reference-Based LLM Evaluation

Researchers introduce BERT-as-a-Judge, a lightweight alternative to LLM-based evaluation methods that assesses generative model outputs with greater accuracy than lexical approaches while requiring significantly less computational overhead. The method demonstrates that existing lexical evaluation techniques poorly correlate with human judgment across 36 models and 15 tasks, establishing a practical middle ground between rigid rule-based and expensive LLM-judge evaluation paradigms.
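The paper's fine-tuned judge model is not available in this digest, but the reference-based scoring interface it implies — encode candidate and reference, compare embeddings, threshold the similarity — can be sketched. The bag-of-words `toy_encode` below is a deliberate stand-in for a BERT encoder, and the 0.5 threshold is an arbitrary assumption:

```python
import math
from collections import Counter

def toy_encode(text):
    """Stand-in for a BERT encoder: a bag-of-words count vector.
    A real judge would pool transformer embeddings here instead."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def judge(candidate, reference, encode=toy_encode, threshold=0.5):
    """Reference-based verdict: accept iff embedding similarity clears a threshold."""
    score = cosine(encode(candidate), encode(reference))
    return score, score >= threshold
```

The design point is that the encoder is pluggable: swapping `toy_encode` for a small fine-tuned BERT gives the "middle ground" the paper describes — far cheaper than an LLM judge, far more meaning-aware than exact-match or n-gram overlap.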

AI · Bullish · arXiv – CS AI · Apr 13 · 6/10

VISOR: Agentic Visual Retrieval-Augmented Generation via Iterative Search and Over-horizon Reasoning

Researchers introduce VISOR, a new agentic visual retrieval-augmented generation system that improves how AI models reason over multi-page visual documents. By addressing key technical challenges in evidence gathering and context management, VISOR achieves state-of-the-art results on complex visual reasoning tasks.

AI · Bullish · arXiv – CS AI · Apr 13 · 6/10

VisionFoundry: Teaching VLMs Visual Perception with Synthetic Images

Researchers introduce VisionFoundry, a synthetic data generation pipeline that uses LLMs and text-to-image models to create targeted training data for vision-language models. The approach addresses VLMs' weakness in visual perception tasks and demonstrates 7-10% improvements on benchmark tests without requiring human annotation or reference images.

AI · Neutral · arXiv – CS AI · Apr 13 · 6/10

Seeing is Believing: Robust Vision-Guided Cross-Modal Prompt Learning under Label Noise

Researchers introduce VisPrompt, a framework that improves prompt learning for vision-language models by injecting visual semantic information to enhance robustness against label noise. The approach keeps pre-trained models frozen while adding minimal trainable parameters, demonstrating superior performance across seven benchmark datasets under both synthetic and real-world noisy conditions.

AI · Bullish · arXiv – CS AI · Apr 13 · 6/10

Chain-in-Tree: Back to Sequential Reasoning in LLM Tree Search

Researchers introduce Chain-in-Tree (CiT), a framework that optimizes large language model tree search by selectively branching only when necessary rather than at every step. The approach reduces computational overhead by 75-85% on math reasoning tasks with minimal accuracy loss, making inference-time scaling more practical for resource-constrained deployments.
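The savings come from gating expansion: branch `width`-ways only at steps flagged as ambiguous, and continue a single chain everywhere else. The sketch below shows that control flow on a toy counting problem; the gate, step sampler, and budget numbers are illustrative assumptions, not CiT's actual branching criterion:

```python
def chain_in_tree(root, step_fn, needs_branching, is_terminal, width=3, max_depth=8):
    """Breadth-wise search that branches `width`-ways only at steps the gate
    flags as ambiguous; everywhere else it continues a single greedy chain."""
    frontier = [root]
    expansions = 0
    for _ in range(max_depth):
        next_frontier = []
        for state in frontier:
            if is_terminal(state):
                return state, expansions
            k = width if needs_branching(state) else 1
            children = step_fn(state, k)      # sample k candidate next steps
            expansions += len(children)
            next_frontier.extend(children)
        frontier = next_frontier
    return frontier[0], expansions            # budget exhausted: best effort

# Toy problem: count up from 0; pretend multiples of 4 are "ambiguous" steps.
goal, cost = chain_in_tree(
    root=0,
    step_fn=lambda s, k: [s + i + 1 for i in range(k)],
    needs_branching=lambda s: s % 4 == 0,
    is_terminal=lambda s: s >= 10,
)
```

With full branching every node would spawn `width` children at every depth; gating most steps to a single continuation is where the reported 75–85% reduction in overhead would come from.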

AI · Bullish · arXiv – CS AI · Apr 13 · 6/10

Sample-Efficient Neurosymbolic Deep Reinforcement Learning

Researchers propose a neuro-symbolic deep reinforcement learning approach that integrates logical rules and symbolic knowledge to improve sample efficiency and generalization in RL systems. The method transfers partial policies from simple tasks to complex ones, reducing training data requirements and improving performance in sparse-reward environments compared to existing baselines.

AI · Neutral · arXiv – CS AI · Apr 13 · 6/10

Reasoning in a Combinatorial and Constrained World: Benchmarking LLMs on Natural-Language Combinatorial Optimization

Researchers introduce NLCO, a benchmark for evaluating large language models on natural-language combinatorial optimization problems without external solvers or code generation. Testing across modern LLMs shows that while high-performing models handle small instances well, performance degrades sharply as problem complexity grows, with graph-structured and bottleneck-objective problems proving especially challenging.

AI · Neutral · arXiv – CS AI · Apr 13 · 6/10

ReplicatorBench: Benchmarking LLM Agents for Replicability in Social and Behavioral Sciences

Researchers introduce ReplicatorBench, a comprehensive benchmark for evaluating AI agents' ability to replicate scientific research claims in social and behavioral sciences. The study reveals that current LLM agents excel at designing and executing experiments but struggle significantly with data retrieval, highlighting critical gaps in autonomous research validation capabilities.

AI · Neutral · arXiv – CS AI · Apr 13 · 6/10

TRU: Targeted Reverse Update for Efficient Multimodal Recommendation Unlearning

Researchers propose TRU (Targeted Reverse Update), a machine unlearning framework designed to efficiently remove user data from multimodal recommendation systems without full retraining. The method addresses non-uniform data influence across ranking behavior, modality branches, and network layers through coordinated interventions, achieving better performance than existing approximate unlearning approaches.

AI · Neutral · arXiv – CS AI · Apr 13 · 6/10

ASPECT: Analogical Semantic Policy Execution via Language Conditioned Transfer

Researchers introduce ASPECT, a novel reinforcement learning framework that uses large language models as semantic operators to enable zero-shot transfer learning across novel tasks. By conditioning a text-based VAE on LLM-generated task descriptions, the approach allows agents to reuse policies on structurally similar but previously unseen tasks without discrete category constraints.

AI · Neutral · arXiv – CS AI · Apr 13 · 6/10

Mitigating Extrinsic Gender Bias for Bangla Classification Tasks

Researchers have developed RandSymKL, a debiasing technique for Bangla language models that mitigates gender bias in classification tasks like sentiment analysis and hate speech detection. The study introduces four manually annotated benchmark datasets with gender-perturbation testing and demonstrates that the approach effectively reduces bias while maintaining competitive accuracy compared to existing methods.

◆ AI Mentions: OpenAI 77× · Anthropic 46× · Claude 39× · Nvidia 34× · Gemini 25× · GPT-5 21× · ChatGPT 21× · GPT-4 20× · Llama 19× · Perplexity 14× · xAI 9× · Opus 9× · Meta 6× · Sonnet 6× · Hugging Face 5× · Google 5× · Grok 4× · Microsoft 3× · Haiku 2× · o1 1×
▲ Trending Tags: 1. #ai (242) · 2. #geopolitical-risk (240) · 3. #geopolitics (220) · 4. #iran (194) · 5. #market-volatility (131) · 6. #middle-east (124) · 7. #sanctions (92) · 8. #oil-markets (86) · 9. #energy-markets (85) · 10. #inflation (81) · 11. #geopolitical (75) · 12. #machine-learning (67) · 13. #openai (65) · 14. #ai-infrastructure (64) · 15. #strait-of-hormuz (58)
Tag Sentiment
#ai — 242 articles
#geopolitical-risk — 240 articles
#geopolitics — 220 articles
#iran — 194 articles
#market-volatility — 131 articles
#middle-east — 124 articles
#sanctions — 92 articles
#oil-markets — 86 articles
#energy-markets — 85 articles
#inflation — 81 articles
Tag Connections
#geopolitics ↔ #iran — 65
#geopolitical-risk ↔ #market-volatility — 47
#geopolitics ↔ #oil-markets — 43
#geopolitical ↔ #iran — 42
#geopolitics ↔ #middle-east — 41
#geopolitical-risk ↔ #middle-east — 40
#geopolitical-risk ↔ #oil-markets — 35
#oil-markets ↔ #strait-of-hormuz — 29
#iran ↔ #trump — 29
#energy-markets ↔ #geopolitical-risk — 29
© 2026 y0.exchange