y0news

AI Pulse News

Models, papers, tools. 20,250 articles with AI-powered sentiment analysis and key takeaways.

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

Learning from Partial Chain-of-Thought via Truncated-Reasoning Self-Distillation

Researchers introduce Truncated-Reasoning Self-Distillation (TRSD), a post-training method that enables AI language models to maintain accuracy while using shorter reasoning traces. The technique reduces computational costs by training models to produce correct answers from partial reasoning, achieving significant inference-time efficiency gains without sacrificing performance.
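As described, the core trick is a data-construction one: train the model to emit the final answer from a prefix of its own chain-of-thought. A minimal sketch of that idea (the function name, prompt format, and truncation fractions are illustrative, not from the paper):

```python
def make_truncated_examples(question, reasoning_steps, answer,
                            keep_fracs=(0.25, 0.5, 0.75)):
    """Build self-distillation pairs: the model must produce the final
    answer from a truncated prefix of its own reasoning trace."""
    examples = []
    for frac in keep_fracs:
        k = max(1, int(len(reasoning_steps) * frac))  # keep at least one step
        prefix = " ".join(reasoning_steps[:k])
        prompt = f"{question}\nReasoning so far: {prefix}\nAnswer:"
        examples.append({"input": prompt, "target": answer})
    return examples
```

Fine-tuning on such pairs is what would let the model commit to an answer early at inference time, spending fewer reasoning tokens.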

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

PREBA: Surgical Duration Prediction via PCA-Weighted Retrieval-Augmented LLMs and Bayesian Averaging Aggregation

Researchers developed PREBA, a retrieval-augmented framework that uses PCA-weighted retrieval and Bayesian averaging to improve surgical duration prediction accuracy by up to 40% using large language models. The system grounds LLM predictions in institution-specific clinical data without requiring computationally intensive training, achieving performance competitive with supervised machine learning methods.

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

Learning Retrieval Models with Sparse Autoencoders

Researchers introduce SPLARE, a new method that uses sparse autoencoders (SAEs) to improve learned sparse retrieval in language models. The technique outperforms existing vocabulary-based approaches in multilingual and out-of-domain settings, with SPLARE-7B achieving top results on multilingual retrieval benchmarks.
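Learned sparse retrieval scores documents by a dot product over wide, mostly-zero codes, and a sparse-autoencoder encoder produces exactly that kind of code. A rough sketch of the scoring path, with toy stand-in weights (nothing here is SPLARE's actual encoder):

```python
def relu(x):
    return x if x > 0 else 0.0

def sparse_encode(dense, W, b):
    """Project a dense embedding into a wide, mostly-zero code:
    one dimension per learned latent feature; ReLU keeps it sparse."""
    return [relu(sum(w * x for w, x in zip(row, dense)) + b_i)
            for row, b_i in zip(W, b)]

def score(q_code, d_code):
    """Retrieval score: dot product over the two sparse codes,
    as in SPLADE-style learned sparse retrieval."""
    return sum(q * d for q, d in zip(q_code, d_code))
```

Because most entries are zero, the dot product touches only the few active latents, which is what makes an inverted-index implementation practical.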

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

FedTreeLoRA: Reconciling Statistical and Functional Heterogeneity in Federated LoRA Fine-Tuning

Researchers propose FedTreeLoRA, a new framework for privacy-preserving fine-tuning of large language models that addresses both statistical and functional heterogeneity across federated learning clients. The method uses tree-structured aggregation to allow layer-wise specialization while maintaining shared consensus on foundational layers, significantly outperforming existing personalized federated learning approaches.
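One way to picture tree-structured aggregation: designated consensus layers are averaged across all clients, while the remaining layers are averaged only within a client's branch of the tree. A toy sketch under that reading (the clustering and the shared/personalized layer split are assumptions, not the paper's exact scheme):

```python
def aggregate(client_updates, clusters, shared_layers):
    """Tree-style aggregation sketch: layers in `shared_layers` are averaged
    over all clients (global consensus); all other layers are averaged only
    within each client's cluster (layer-wise specialization)."""
    def mean(vectors):
        return [sum(v) / len(vectors) for v in zip(*vectors)]

    global_avg = {l: mean([u[l] for u in client_updates.values()])
                  for l in shared_layers}
    merged = {}
    for cid, update in client_updates.items():
        peers = [client_updates[p] for p in clusters[cid]]
        merged[cid] = {l: global_avg[l] if l in shared_layers
                       else mean([p[l] for p in peers])
                       for l in update}
    return merged
```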

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

From Stochastic Answers to Verifiable Reasoning: Interpretable Decision-Making with LLM-Generated Code

Researchers propose a new framework that uses LLMs as code generators rather than per-instance evaluators for high-stakes decision-making, creating interpretable and reproducible AI systems. The approach generates executable decision logic once instead of querying LLMs for each prediction, demonstrated through venture capital founder screening with competitive performance while maintaining full transparency.

🧠 GPT-4
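The pattern described above is "generate once, execute many times": the LLM emits decision logic as reviewable source code, and every instance is then scored by running that same code. A toy illustration of the execution side (the rule and its features are invented for the example, not the paper's screening criteria):

```python
# Hypothetical decision logic an LLM would emit ONCE as auditable code,
# instead of being queried per applicant; thresholds are illustrative.
GENERATED_RULE = """
def screen(applicant):
    score = 0
    score += 2 if applicant["prior_exits"] > 0 else 0
    score += 1 if applicant["years_experience"] >= 5 else 0
    return "advance" if score >= 2 else "pass"
"""

def compile_rule(source):
    """Execute the generated source in an isolated namespace and return
    the callable, so every prediction reuses the identical logic."""
    ns = {}
    exec(source, ns)
    return ns["screen"]
```

Since the logic is frozen code rather than a fresh LLM sample, predictions are reproducible and the criteria can be inspected line by line.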
AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

Pragma-VL: Towards a Pragmatic Arbitration of Safety and Helpfulness in MLLMs

Researchers introduce Pragma-VL, a new alignment algorithm for Multimodal Large Language Models that balances safety and helpfulness by improving visual risk perception and using contextual arbitration. The method outperforms existing baselines by 5-20% on multimodal safety benchmarks while maintaining general AI capabilities in mathematics and reasoning.

AI · Neutral · arXiv – CS AI · Mar 17 · 6/10

Not All Queries Need Rewriting: When Prompt-Only LLM Refinement Helps and Hurts Dense Retrieval

Research reveals that LLM query rewriting in RAG systems shows highly domain-dependent performance, degrading retrieval effectiveness by 9% in financial domains while improving it by 5.1% in scientific contexts. The study identifies that effectiveness depends on whether rewriting improves or worsens lexical alignment between queries and domain-specific terminology.
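One way to operationalize that finding is to gate the rewrite on a cheap lexical-alignment check against a domain vocabulary, keeping the rewrite only when alignment does not drop. This is an inference from the summary, not the paper's method:

```python
def lexical_alignment(query, domain_vocab):
    """Fraction of query terms that appear in the domain's vocabulary."""
    terms = set(query.lower().split())
    return len(terms & domain_vocab) / max(1, len(terms))

def choose_query(original, rewritten, domain_vocab):
    """Keep the LLM rewrite only when it does not hurt lexical alignment
    with domain-specific terminology; otherwise fall back to the original."""
    if lexical_alignment(rewritten, domain_vocab) >= lexical_alignment(original, domain_vocab):
        return rewritten
    return original
```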

AI · Neutral · arXiv – CS AI · Mar 17 · 6/10

Evidence-based Distributional Alignment for Large Language Models

Researchers propose Evi-DA, an evidence-based technique that improves how large language models predict population response distributions across different cultures and domains. The method uses World Values Survey data and reinforcement learning to achieve up to 44% improvement in accuracy compared to existing approaches.

AI · Neutral · arXiv – CS AI · Mar 17 · 6/10

Feature-level Interaction Explanations in Multimodal Transformers

Researchers introduce FL-I2MoE, a new Mixture-of-Experts layer for multimodal Transformers that explicitly identifies synergistic and redundant cross-modal feature interactions. The method provides more interpretable explanations for how different data modalities contribute to AI decision-making compared to existing approaches.

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

LUMINA: Laplacian-Unifying Mechanism for Interpretable Neurodevelopmental Analysis via Quad-Stream GCN

Researchers developed LUMINA, a new Graph Convolutional Network architecture that improves AI-driven diagnosis of neurodevelopmental disorders using fMRI brain data. The system achieved 84.66% accuracy for ADHD and 88.41% for autism spectrum disorder detection by addressing traditional GCN limitations in capturing neural connection dynamics.

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

PolyGLU: State-Conditional Activation Routing in Transformer Feed-Forward Networks

Researchers introduce PolyGLU, a new transformer architecture that enables dynamic routing among multiple activation functions, mimicking biological neural diversity. The 597M-parameter PolychromaticLM model shows emergent specialization patterns and achieves strong performance despite training on significantly fewer tokens than comparable models.

🏢 Nvidia
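Mechanically, "routing among activation functions" can be read as a softmax-gated mixture over a small bank of nonlinearities. A minimal sketch (the three-function bank and the gating inputs are illustrative, not PolyGLU's design):

```python
import math

def poly_activation(x, gate_logits):
    """Pass a pre-activation through a softmax-weighted mixture of several
    activation functions instead of one fixed nonlinearity."""
    acts = [
        lambda v: max(0.0, v),               # ReLU
        lambda v: v / (1.0 + math.exp(-v)),  # SiLU
        lambda v: math.tanh(v),              # tanh
    ]
    exps = [math.exp(g) for g in gate_logits]
    z = sum(exps)
    weights = [e / z for e in exps]          # softmax over the bank
    return sum(w * f(x) for w, f in zip(weights, acts))
```

In a full model the gate logits would be state-conditional (computed from the hidden state), which is what lets different tokens or layers specialize to different nonlinearities.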
AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

Thinking in Uncertainty: Mitigating Hallucinations in MLRMs with Latent Entropy-Aware Decoding

Researchers propose Latent Entropy-Aware Decoding (LEAD), a new method to reduce hallucinations in multimodal large reasoning models by switching between continuous and discrete token embeddings based on entropy states. The technique addresses issues where transition words correlate with high-entropy states that lead to unreliable outputs in visual question answering tasks.
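The switching signal is the entropy of the next-token distribution. A sketch of the rule (the threshold value and mode names are arbitrary here, not LEAD's calibrated settings):

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def decode_mode(probs, threshold=1.0):
    """Sketch of entropy-aware switching: in high-entropy (uncertain)
    states keep reasoning in continuous latent space; in low-entropy
    states commit to a discrete token."""
    return "latent" if entropy(probs) > threshold else "discrete"
```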

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

Ethical Fairness without Demographics in Human-Centered AI

Researchers introduce Flare, a new AI fairness framework that ensures ethical outcomes without requiring demographic data, addressing privacy and regulatory concerns in human-centered AI applications. The system uses Fisher Information to detect hidden biases and includes a novel evaluation metric suite called BHE for measuring ethical fairness beyond traditional statistical measures.

🏢 Meta
AI · Neutral · arXiv – CS AI · Mar 17 · 6/10

Bridging Protocol and Production: Design Patterns for Deploying AI Agents with Model Context Protocol

Researchers identify three critical gaps in the Model Context Protocol (MCP) that prevent AI agents from operating safely at production scale, despite MCP having over 10,000 active servers and 97 million monthly SDK downloads. The paper proposes three new mechanisms to address missing identity propagation, adaptive tool budgeting, and structured error semantics based on enterprise deployment experience.

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

GPrune-LLM: Generalization-Aware Structured Pruning for Large Language Models

Researchers introduce GPrune-LLM, a new structured pruning framework that improves compression of large language models by addressing calibration bias and cross-task generalization issues. The method partitions neurons into behavior-consistent modules and uses adaptive metrics based on distribution sensitivity, showing consistent improvements in post-compression performance.

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

Outcome-Aware Tool Selection for Semantic Routers: Latency-Constrained Learning Without LLM Inference

Researchers propose Outcome-Aware Tool Selection (OATS), a method to improve tool selection in LLM inference gateways by interpolating tool embeddings toward successful query centroids without adding latency. The approach improves tool selection accuracy on benchmarks while maintaining single-digit millisecond CPU processing times.
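The centroid trick is simple enough to sketch end to end: each tool's embedding is pulled slightly toward the mean embedding of queries it handled successfully, and routing stays a nearest-embedding lookup with no LLM call. Names and the interpolation weight below are illustrative:

```python
def interpolate(tool_emb, success_centroid, alpha=0.2):
    """Nudge a tool's embedding toward the centroid of queries it answered
    successfully; done offline, so routing latency is unchanged."""
    return [(1 - alpha) * t + alpha * c
            for t, c in zip(tool_emb, success_centroid)]

def route(query_emb, tool_embs):
    """Pick the tool whose embedding has highest cosine similarity
    to the query embedding."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb)
    return max(tool_embs, key=lambda name: cos(query_emb, tool_embs[name]))
```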

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

NormCode Canvas: Making LLM Agentic Workflows Development Sustainable via Case-Based Reasoning

NormCode Canvas v1.1.3 introduces a case-based reasoning system for LLM agentic workflows using a semi-formal planning language called NormCode. The deployed system demonstrates multi-step AI task automation across presentation generation, code assistance, and plan compilation with self-sustaining capabilities.

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

Resolving Interference (RI): Disentangling Models for Improved Model Merging

Researchers have developed Resolving Interference (RI), a new framework that improves AI model merging by reducing cross-task interference when combining specialized models. The method makes models functionally orthogonal to other tasks using only unlabeled data, improving merging performance by up to 3.8% and generalization by up to 2.3%.
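"Functionally orthogonal to other tasks" suggests a projection step: remove from one task's weight update its components along the other tasks' update directions before merging. A minimal sketch of that projection (applied here to flat vectors; RI's actual disentanglement procedure may differ):

```python
def project_out(v, others):
    """Subtract from task vector v its projection onto each other task's
    direction, reducing cross-task interference before merging (sketch)."""
    for u in others:
        uu = sum(x * x for x in u)
        if uu == 0:
            continue
        coef = sum(x * y for x, y in zip(v, u)) / uu
        v = [x - coef * y for x, y in zip(v, u)]
    return v
```

With a single other direction the result is exactly orthogonal to it; with several non-orthogonal directions this sequential pass is only approximate.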

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

MR-GNF: Multi-Resolution Graph Neural Forecasting on Ellipsoidal Meshes for Efficient Regional Weather Prediction

Researchers developed MR-GNF, a lightweight AI model that performs regional weather forecasting using multi-resolution graph neural networks on ellipsoidal meshes. The model achieves competitive accuracy with traditional numerical weather prediction systems while using significantly less computational resources (under 80 GPU-hours on a single RTX 6000 Ada).

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

NCCL EP: Towards a Unified Expert Parallel Communication API for NCCL

Researchers have developed NCCL EP, a new communication library for Mixture-of-Experts (MoE) AI model architectures that improves GPU-initiated communication performance. The library provides unified APIs supporting both low-latency inference and high-throughput training modes, built entirely on NVIDIA's NCCL Device API.

🏢 Nvidia
AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

Locatability-Guided Adaptive Reasoning for Image Geo-Localization with Vision-Language Models

Researchers introduce Geo-ADAPT, a new AI framework using Vision-Language Models for image geo-localization that adapts reasoning depth based on image complexity. The system uses an Optimized Locatability Score and specialized dataset to achieve state-of-the-art performance while reducing AI hallucinations.

AI · Neutral · arXiv – CS AI · Mar 17 · 6/10

QuarkMedBench: A Real-World Scenario Driven Benchmark for Evaluating Large Language Models

Researchers introduced QuarkMedBench, a new benchmark for evaluating large language models on real-world medical queries using over 20,000 queries across clinical care scenarios. The benchmark addresses limitations of current medical AI evaluations that rely on multiple-choice questions by using an automated scoring framework that achieves 91.8% concordance with clinical expert assessments.

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

REFINE-DP: Diffusion Policy Fine-tuning for Humanoid Loco-manipulation via Reinforcement Learning

Researchers developed REFINE-DP, a hierarchical framework that combines diffusion policies with reinforcement learning to enable humanoid robots to perform complex loco-manipulation tasks. The system achieves over 90% success rate in simulation and demonstrates smooth autonomous execution in real-world environments for tasks like door traversal and object transport.

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

Knowledge Distillation for Large Language Models

Researchers developed a resource-efficient framework for compressing large language models using knowledge distillation and chain-of-thought reinforcement learning. The method successfully compressed Qwen 3B to 0.5B while retaining 70-95% of performance across English, Spanish, and coding tasks, making AI models more suitable for resource-constrained deployments.
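The distillation objective behind this kind of compression is typically a temperature-softened KL term that pulls the student's token distribution toward the teacher's. A generic sketch (this is the standard Hinton-style loss, not necessarily the paper's exact objective):

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a logit vector."""
    exps = [math.exp(l / T) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def kd_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence from the softened teacher distribution to the
    softened student distribution: the core distillation signal."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

In practice this term is summed per token and mixed with the ordinary cross-entropy loss; higher temperatures expose more of the teacher's "dark knowledge" about near-miss tokens.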

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

Retrieval-Feedback-Driven Distillation and Preference Alignment for Efficient LLM-based Query Expansion

Researchers developed a framework to make large language model-based query expansion more efficient by distilling knowledge from powerful teacher models into compact student models. The approach uses retrieval feedback and preference alignment to maintain 97% of the original performance while dramatically reducing inference costs.

Page 403 of 810
◆ AI Mentions
🏢 OpenAI 82× · 🏢 Anthropic 44× · 🧠 Claude 39× · 🏢 Nvidia 37× · 🧠 Llama 32× · 🧠 Gemini 31× · 🧠 GPT-5 25× · 🧠 GPT-4 23× · 🧠 ChatGPT 23× · 🏢 Perplexity 18× · 🏢 xAI 11× · 🏢 Hugging Face 10× · 🧠 Sonnet 8× · 🧠 Opus 8× · 🏢 Meta 7× · 🏢 Google 5× · 🧠 Grok 4× · 🏢 Microsoft 3× · 🧠 Sora 2× · 🧠 Stable Diffusion 2×
▲ Trending Tags
1. #ai (233) · 2. #machine-learning (167) · 3. #iran (125) · 4. #geopolitics (114) · 5. #geopolitical-risk (107) · 6. #ai-infrastructure (97) · 7. #reinforcement-learning (97) · 8. #ai-safety (76) · 9. #language-models (74) · 10. #openai (72) · 11. #geopolitical (67) · 12. #neural-networks (62) · 13. #market-volatility (53) · 14. #enterprise-ai (49) · 15. #energy-markets (48)
Tag Connections
#geopolitical ↔ #iran (38) · #geopolitics ↔ #iran (29) · #iran ↔ #trump (24) · #geopolitical-risk ↔ #strait-of-hormuz (23) · #energy-markets ↔ #geopolitical-risk (22) · #geopolitics ↔ #oil-markets (21) · #ai ↔ #artificial-intelligence (21) · #geopolitical-risk ↔ #oil-markets (21) · #geopolitics ↔ #middle-east (20) · #ai ↔ #google (19)
© 2026 y0.exchange