y0news

AI Pulse News

Models, papers, tools. 22,233 articles with AI-powered sentiment analysis and key takeaways.

AI · Bullish · arXiv – CS AI · Mar 27 · 6/10

UniAI-GraphRAG: Synergizing Ontology-Guided Extraction, Multi-Dimensional Clustering, and Dual-Channel Fusion for Robust Multi-Hop Reasoning

Researchers have developed UniAI-GraphRAG, an enhanced framework that improves on existing GraphRAG systems for complex reasoning and multi-hop queries. The framework introduces three key innovations: ontology-guided extraction, multi-dimensional clustering, and dual-channel fusion, and outperforms mainstream solutions such as LightRAG on benchmark tests.

AI · Bullish · arXiv – CS AI · Mar 27 · 5/10

Evaluating Language Models for Harmful Manipulation

AI · Bullish · arXiv – CS AI · Mar 27 · 6/10

EcoThink: A Green Adaptive Inference Framework for Sustainable and Accessible Agents

Researchers have developed EcoThink, an energy-aware AI framework that reduces inference energy consumption by 40.4% on average while maintaining performance. The system uses adaptive routing to skip unnecessary computation for simple queries while preserving deep reasoning for complex tasks, addressing sustainability concerns in large language model deployment.
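
The adaptive-routing idea can be sketched in a few lines. Everything below is illustrative: the difficulty heuristic and threshold are stand-ins for whatever learned router the paper actually uses.

```python
def estimate_difficulty(query: str) -> float:
    """Toy proxy for query difficulty: longer, multi-clause questions
    are treated as harder. A real system would use a learned router."""
    clauses = query.count(",") + query.count(" and ") + 1
    return min(1.0, (len(query.split()) * clauses) / 50)

def route(query: str, threshold: float = 0.5) -> str:
    """Skip deep reasoning for easy queries to save energy."""
    if estimate_difficulty(query) < threshold:
        return "shallow"  # single cheap forward pass
    return "deep"         # full reasoning path

print(route("What is 2+2?"))  # → shallow
```

The design point is that the router itself must be far cheaper than the computation it saves, otherwise the energy win disappears.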

AI · Bullish · arXiv – CS AI · Mar 27 · 6/10

Voxtral TTS

Voxtral TTS is a new multilingual text-to-speech AI model that can generate natural speech from just 3 seconds of reference audio. In human evaluations, it achieved a 68.4% win rate over ElevenLabs Flash v2.5 for voice cloning, demonstrating superior naturalness and expressivity.

AI · Bullish · arXiv – CS AI · Mar 27 · 6/10

R-C2: Cycle-Consistent Reinforcement Learning Improves Multimodal Reasoning

Researchers introduce R-C2, a reinforcement learning framework that improves multimodal AI reasoning by enforcing consistency between visual and textual representations. The system uses cycle-consistent training to resolve internal conflicts between modalities, achieving up to a 7.6-point improvement in reasoning accuracy without requiring additional labeled data.
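
Cycle-consistent training in general can be illustrated with a round-trip reward: map a representation to the other modality and back, then penalize drift. The mappings and distance below are toy stand-ins, not the paper's actual objective.

```python
def cycle_consistency_reward(forward, backward, x, dist):
    """Reward round-trip agreement: map x to the other modality and
    back, and penalize any drift. `forward`/`backward` stand in for
    the vision→text and text→vision mappings."""
    return -dist(x, backward(forward(x)))

# Toy modalities: "visual" = floats, "textual" = rounded ints.
to_text = lambda v: [round(f) for f in v]
to_visual = lambda t: [float(i) for i in t]
l1 = lambda a, b: sum(abs(x - y) for x, y in zip(a, b))

print(cycle_consistency_reward(to_text, to_visual, [1.0, 2.0], l1))  # no drift → reward 0
print(cycle_consistency_reward(to_text, to_visual, [1.4, 2.3], l1))  # ≈ -0.7, drift penalized
```

Used as an RL reward, this signal needs no labels: the model is its own supervisor, which matches the summary's "without additional labeled data" claim.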

AI · Bearish · arXiv – CS AI · Mar 27 · 6/10

Back to Basics: Revisiting ASR in the Age of Voice Agents

Researchers introduced WildASR, a multilingual diagnostic benchmark revealing that current ASR systems suffer severe performance degradation in real-world conditions despite achieving near-human accuracy on curated tests. The study found that ASR models often hallucinate plausible but unspoken content under degraded inputs, creating safety risks for voice agents.

AI · Bullish · arXiv – CS AI · Mar 27 · 6/10

X-OPD: Cross-Modal On-Policy Distillation for Capability Alignment in Speech LLMs

Researchers propose X-OPD, a Cross-Modal On-Policy Distillation framework to improve Speech Large Language Models by aligning them with text-based counterparts. The method uses token-level feedback from teacher models to bridge performance gaps in end-to-end speech systems while preserving inherent capabilities.

AI · Bullish · arXiv – CS AI · Mar 27 · 6/10

TRAJEVAL: Decomposing Code Agent Trajectories for Fine-Grained Diagnosis

Researchers introduce TRAJEVAL, a diagnostic framework that breaks down AI code agent performance into three stages (search, read, edit) to identify specific failure points rather than just binary pass/fail outcomes. The framework analyzed 16,758 trajectories and found that real-time feedback based on trajectory signals improved state-of-the-art models by 2.2-4.6 percentage points while reducing costs by 20-31%.
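
The search → read → edit decomposition can be sketched as a toy diagnoser that reports the first failing stage instead of a binary pass/fail verdict. The `Step` type and the scoring rule are illustrative, not TRAJEVAL's actual trajectory format.

```python
from dataclasses import dataclass

@dataclass
class Step:
    kind: str      # "search", "read", or "edit"
    success: bool

def diagnose(trajectory: list[Step]) -> str:
    """Return the first stage at which the agent went wrong.
    Stage order follows the search → read → edit decomposition
    described in the summary; the success criterion is a toy stand-in."""
    for stage in ("search", "read", "edit"):
        steps = [s for s in trajectory if s.kind == stage]
        if steps and not any(s.success for s in steps):
            return f"failed at {stage}"
    return "pass"

traj = [Step("search", True), Step("read", False), Step("edit", True)]
print(diagnose(traj))  # → failed at read
```

The value of stage-level labels is that they can be fed back to the agent in real time, which is how the summary's 2.2-4.6 point gains are framed.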

Mentions: GPT-5

AI · Bullish · arXiv – CS AI · Mar 27 · 6/10

Experiential Reflective Learning for Self-Improving LLM Agents

Researchers introduce Experiential Reflective Learning (ERL), a framework that enables AI agents to improve performance by learning from past experiences and generating transferable heuristics. The method shows a 7.8% improvement in success rates on the Gaia2 benchmark compared to baseline approaches.

AI · Bullish · arXiv – CS AI · Mar 27 · 6/10

Reconstructing Spiking Neural Networks Using a Single Neuron with Autapses

Researchers propose TDA-SNN, a novel spiking neural network framework that uses a single neuron with time-delayed autapses to reconstruct traditional multilayer architectures. The approach significantly reduces neuron count and memory requirements while maintaining competitive performance, though at the cost of increased temporal latency.

AI · Bullish · arXiv – CS AI · Mar 27 · 6/10

Scalable Object Relation Encoding for Better 3D Spatial Reasoning in Large Language Models

Researchers introduce QuatRoPE, a novel positional embedding method that improves 3D spatial reasoning in Large Language Models by encoding object relations more efficiently. The method maintains linear scalability with the number of objects and preserves LLMs' original capabilities through the Isolated Gated RoPE Extension.

AI · Bullish · arXiv – CS AI · Mar 27 · 6/10

Evaluating Fine-Tuned LLM Model For Medical Transcription With Small Low-Resource Languages Validated Dataset

Researchers successfully fine-tuned LLaMA 3.1-8B for medical transcription in Finnish, a low-resource language, achieving strong semantic similarity despite low n-gram overlap. The study used simulated clinical conversations from students and demonstrates the feasibility of privacy-oriented domain-specific language models for clinical documentation in underrepresented languages.

AI · Bullish · arXiv – CS AI · Mar 27 · 6/10

AIP: Agent Identity Protocol for Verifiable Delegation Across MCP and A2A

Researchers introduce Agent Identity Protocol (AIP) with Invocation-Bound Capability Tokens (IBCTs) to address the lack of authentication in AI agent communications via Model Context Protocol and Agent-to-Agent protocols. The protocol achieved 100% attack rejection rate in testing with minimal performance overhead of 0.086% in real deployments.
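
An invocation-bound capability token can be sketched with a keyed MAC over the agent identity, tool name, argument digest, and a one-time nonce, so the token is useless for any other invocation. The field names and HMAC scheme below are illustrative assumptions, not the AIP wire format.

```python
import hashlib
import hmac
import json
import secrets

SECRET = b"demo-issuer-key"  # stand-in for the issuer's real key material

def issue_token(agent: str, tool: str, args: dict) -> dict:
    """Bind a capability token to one specific invocation:
    (agent, tool, argument digest, one-time nonce)."""
    payload = {
        "agent": agent,
        "tool": tool,
        "args_sha256": hashlib.sha256(
            json.dumps(args, sort_keys=True).encode()).hexdigest(),
        "nonce": secrets.token_hex(8),
    }
    mac = hmac.new(SECRET, json.dumps(payload, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return {"payload": payload, "mac": mac}

def verify(token: dict, agent: str, tool: str, args: dict) -> bool:
    """Reject tampered payloads and any reuse against different
    agents, tools, or arguments."""
    p = token["payload"]
    expected = hmac.new(SECRET, json.dumps(p, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["mac"]):
        return False
    digest = hashlib.sha256(
        json.dumps(args, sort_keys=True).encode()).hexdigest()
    return p["agent"] == agent and p["tool"] == tool \
        and p["args_sha256"] == digest

tok = issue_token("planner", "swap", {"amount": 10})
print(verify(tok, "planner", "swap", {"amount": 10}))   # True
print(verify(tok, "planner", "swap", {"amount": 999}))  # False: altered arguments
```

Because the MAC covers the argument digest, replaying a captured token with modified arguments fails, which is the class of attack the 100% rejection figure refers to.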

Mentions: Gemini

AI · Bullish · arXiv – CS AI · Mar 27 · 6/10

Reaching Beyond the Mode: RL for Distributional Reasoning in Language Models

Researchers developed a multi-answer reinforcement learning approach that trains language models to generate multiple plausible answers with confidence estimates in a single forward pass, rather than collapsing to one dominant answer. The method shows improved diversity and accuracy across question-answering, medical diagnosis, and coding benchmarks while being more computationally efficient than existing approaches.

AI · Neutral · arXiv – CS AI · Mar 27 · 6/10

NeuroVLM-Bench: Evaluation of Vision-Enabled Large Language Models for Clinical Reasoning in Neurological Disorders

Researchers benchmarked 20 multimodal AI models on neuroimaging tasks using MRI and CT scans, finding that while technical attributes like imaging modality are nearly solved, diagnostic reasoning remains challenging. Gemini-2.5-Pro and GPT-5-Chat showed strongest diagnostic performance, while open-source MedGemma-1.5-4B demonstrated promising results under few-shot prompting.

Mentions: Meta, GPT-5, Gemini

AI · Bullish · arXiv – CS AI · Mar 27 · 6/10

Evaluating adaptive and generative AI-based feedback and recommendations in a knowledge-graph-integrated programming learning system

Researchers developed a framework integrating large language models with knowledge graphs to provide programming feedback and exercise recommendations. The hybrid GenAI-adaptive approach outperformed traditional adaptive learning and GenAI-only modes, producing more correct code submissions and fewer incomplete attempts across 4,956 code submissions.

AI · Bullish · arXiv – CS AI · Mar 27 · 6/10

Self-Corrected Image Generation with Explainable Latent Rewards

Researchers introduce xLARD, a self-correcting framework for text-to-image generation that uses multimodal large language models to provide explainable feedback and improve alignment with complex prompts. The system employs a lightweight corrector that refines latent representations based on structured feedback, addressing challenges in generating images that match fine-grained semantics and spatial relations.

AI · Neutral · arXiv – CS AI · Mar 27 · 6/10

Efficient Detection of Bad Benchmark Items with Novel Scalability Coefficients

Researchers introduce a new nonparametric method called signed isotonic R² for efficiently detecting problematic items in AI benchmarks and assessments. The method outperforms traditional diagnostic techniques across major AI datasets including GSM8K and MMLU, offering a lightweight solution for improving evaluation quality.
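
Plain isotonic R² (without the paper's signing refinement) can be sketched with the pool-adjacent-violators algorithm: order respondents by ability, fit the best monotone curve to an item's scores, and measure how well that curve explains the variance. All names and the example data here are illustrative.

```python
def pav(y):
    """Pool-adjacent-violators: least-squares nondecreasing fit."""
    out = []
    for v in y:
        out.append([v, 1])  # [block mean, block weight]
        while len(out) > 1 and out[-2][0] > out[-1][0]:
            m2, w2 = out.pop()
            m1, w1 = out.pop()
            out.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2])
    fit = []
    for m, w in out:
        fit.extend([m] * w)
    return fit

def isotonic_r2(item_scores):
    """R² of the best monotone fit to an item's scores, with
    respondents assumed pre-sorted by ability. Near 1 suggests a
    well-behaved item; the paper's signed variant additionally
    flags decreasing items, which this toy version does not."""
    fit = pav(item_scores)
    mean = sum(item_scores) / len(item_scores)
    ss_tot = sum((v - mean) ** 2 for v in item_scores)
    ss_res = sum((v - f) ** 2 for v, f in zip(item_scores, fit))
    return 1.0 if ss_tot == 0 else 1 - ss_res / ss_tot

good = [0, 0, 0, 1, 1, 1]   # accuracy rises with ability → R² = 1
bad  = [1, 0, 1, 0, 1, 0]   # no monotone relationship → R² ≈ 0
print(isotonic_r2(good), isotonic_r2(bad))
```

The appeal of a method in this family is that it is nonparametric and cheap: no item-response model needs to be fitted to flag a problematic benchmark item.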

AI · Bullish · arXiv – CS AI · Mar 27 · 6/10

Large Language Models as Optimization Controllers: Adaptive Continuation for SIMP Topology Optimization

Researchers developed a framework using large language models (LLMs) as adaptive controllers for SIMP topology optimization, replacing fixed-schedule continuation with real-time parameter adjustments. The LLM agent achieved 5.7% to 18.1% better performance than baseline methods across multiple 2D and 3D engineering problems.

AI · Neutral · arXiv – CS AI · Mar 27 · 6/10

Do LLMs Know What They Know? Measuring Metacognitive Efficiency with Signal Detection Theory

Researchers introduce a new framework to evaluate how well Large Language Models understand their own knowledge limitations, finding that traditional confidence metrics miss key differences between models. The study reveals that models with similar accuracy can differ sharply in metacognitive ability: the capacity to know what they don't know.
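
One standard signal-detection-style metacognition measure is the probability that a correct answer receives higher confidence than an incorrect one (a type-2 AUROC). The paper's exact efficiency metric may differ; this sketch just shows how equal accuracy can hide unequal self-knowledge.

```python
def discrimination_auc(confidences, correct):
    """Probability that a randomly chosen correct answer gets higher
    confidence than a randomly chosen incorrect one (ties count 0.5).
    1.0 = perfect self-knowledge, 0.5 = none."""
    pos = [c for c, ok in zip(confidences, correct) if ok]
    neg = [c for c, ok in zip(confidences, correct) if not ok]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Two toy models with identical 50% accuracy:
acc = [True, True, False, False]
well_calibrated = [0.9, 0.8, 0.2, 0.1]   # confident exactly when right
miscalibrated   = [0.5, 0.5, 0.5, 0.5]   # no self-knowledge
print(discrimination_auc(well_calibrated, acc))  # → 1.0
print(discrimination_auc(miscalibrated, acc))    # → 0.5
```

The two toy models above score identically on accuracy but sit at opposite ends of this metric, which is precisely the distinction the summary says plain confidence metrics miss.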

Mentions: Llama

AI · Bullish · arXiv – CS AI · Mar 27 · 6/10

SAVe: Self-Supervised Audio-visual Deepfake Detection Exploiting Visual Artifacts and Audio-visual Misalignment

Researchers developed SAVe, a self-supervised AI framework that detects audio-visual deepfakes by learning from authentic videos rather than synthetic ones. The system identifies visual artifacts and audio-visual misalignment patterns to detect manipulated content, showing strong cross-dataset generalization capabilities.

AI · Neutral · arXiv – CS AI · Mar 27 · 6/10

Factors Influencing the Quality of AI-Generated Code: A Synthesis of Empirical Evidence

A systematic literature review of 24 studies reveals that AI-generated code quality depends on multiple factors including prompt design, task specification, and developer expertise. The research shows variable outcomes for code correctness, security, and maintainability, indicating that AI-assisted development requires careful human oversight and validation.

AI · Bullish · arXiv – CS AI · Mar 27 · 6/10

Photon: Speedup Volume Understanding with Efficient Multimodal Large Language Models

Photon is a new framework that efficiently processes 3D medical imaging for AI visual question answering by using variable-length token sequences and adaptive compression. The system reduces computational costs while maintaining accuracy through instruction-conditioned token scheduling and custom gradient propagation techniques.

AI · Bearish · arXiv – CS AI · Mar 27 · 6/10

Probing the Lack of Stable Internal Beliefs in LLMs

Research reveals that large language models (LLMs) struggle to maintain consistent internal beliefs or goals across multi-turn conversations, failing to stay consistent when context is not explicitly restated. This limitation poses significant challenges for persona-driven AI systems that require stable personality traits and behavioral patterns.

AI · Bearish · arXiv – CS AI · Mar 27 · 6/10

MolQuest: A Benchmark for Agentic Evaluation of Abductive Reasoning in Chemical Structure Elucidation

Researchers introduce MolQuest, a new benchmark for evaluating AI models' ability to perform complex chemical structure elucidation through multi-step reasoning. Even state-of-the-art AI models achieve only 50% accuracy on this real-world scientific task, revealing significant limitations in current AI systems' strategic reasoning capabilities.

◆ AI Mentions: OpenAI 97×, Claude 64×, Gemini 57×, Nvidia 55×, Llama 54×, Anthropic 51×, GPT-5 45×, GPT-4 34×, Meta 28×, Perplexity 28×, ChatGPT 25×, Hugging Face 22×, xAI 15×, Opus 11×, Grok 9×, Sonnet 8×, Google 5×, Stable Diffusion 4×, o1 3×, Microsoft 2×
▲ Trending Tags: #machine-learning (336), #ai (279), #reinforcement-learning (160), #ai-safety (131), #geopolitics (126), #iran (122), #language-models (122), #market (120), #bitcoin (119), #ai-infrastructure (115), #geopolitical-risk (109), #neural-networks (108), #openai (82), #benchmark (77), #deep-learning (77)
Tag Connections: #geopolitics ↔ #iran (32), #bitcoin ↔ #market (29), #ai ↔ #artificial-intelligence (29), #geopolitical ↔ #iran (28), #geopolitical-risk ↔ #strait-of-hormuz (25), #iran ↔ #trump (24), #ai ↔ #google (23), #china ↔ #trump (22), #geopolitical-risk ↔ #oil-markets (22), #geopolitics ↔ #oil-markets (22)
© 2026 y0.exchange