y0news

AI Pulse News

Models, papers, tools. 17,621 articles with AI-powered sentiment analysis and key takeaways.

🧠 AI · Bullish · arXiv – CS AI · Mar 47/102

DMTrack: Spatio-Temporal Multimodal Tracking via Dual-Adapter

Researchers introduce DMTrack, a dual-adapter architecture for spatio-temporal multimodal tracking that achieves state-of-the-art performance with only 0.93M trainable parameters. The system uses two key modules, a spatio-temporal modality adapter and a progressive modality complementary adapter, to bridge gaps between modalities and enable better cross-modality fusion.

🧠 AI · Neutral · arXiv – CS AI · Mar 47/102

No Answer Needed: Predicting LLM Answer Accuracy from Question-Only Linear Probes

Researchers developed linear probes that can predict whether large language models will answer questions correctly by analyzing neural activations before any answer is generated. The method works across different model sizes and generalizes to out-of-distribution datasets, though it struggles with mathematical reasoning tasks.
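The probing setup can be sketched as a small logistic-regression probe over pre-answer hidden activations. The activation vectors, labels, and training loop below are illustrative stand-ins, not the paper's implementation:

```python
import math

def train_linear_probe(acts, labels, lr=0.1, epochs=200):
    """Fit w, b so that sigmoid(w.x + b) approximates P(answer correct).

    acts: activation vectors captured BEFORE any answer is generated.
    labels: 1 if the model later answered correctly, else 0.
    """
    dim = len(acts[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in zip(acts, labels):
            logit = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-logit))
            g = p - y  # gradient of log-loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def probe_predict(w, b, x):
    """Predicted probability that the model will answer correctly."""
    logit = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-logit))
```

Because the probe is linear and reads only question-time activations, it is cheap enough to run on every query before committing compute to generation.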

🧠 AI · Bullish · arXiv – CS AI · Mar 47/103

The Choice of Divergence: A Neglected Key to Mitigating Diversity Collapse in Reinforcement Learning with Verifiable Reward

Researchers identify a critical flaw in reinforcement-learning fine-tuning of large language models: it degrades multi-attempt performance even as single-attempt accuracy improves. Their proposed solution, Diversity-Preserving Hybrid RL (DPH-RL), uses mass-covering f-divergences to maintain model diversity and prevent catastrophic forgetting while improving training efficiency.
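The divergence distinction the paper leans on can be stated compactly: the reverse KL implicit in standard RL fine-tuning is mode-seeking, while the forward KL (one member of the mass-covering family the title refers to) penalizes the policy for dropping answers the reference model supports:

```latex
% Reverse KL: zero-forcing / mode-seeking -- collapses diversity
D_{\mathrm{KL}}(\pi_\theta \,\|\, \pi_{\mathrm{ref}})
  = \mathbb{E}_{y \sim \pi_\theta}\!\left[\log \tfrac{\pi_\theta(y)}{\pi_{\mathrm{ref}}(y)}\right]

% Forward KL: mass-covering -- keeps the support of the reference policy
D_{\mathrm{KL}}(\pi_{\mathrm{ref}} \,\|\, \pi_\theta)
  = \mathbb{E}_{y \sim \pi_{\mathrm{ref}}}\!\left[\log \tfrac{\pi_{\mathrm{ref}}(y)}{\pi_\theta(y)}\right]
```

Intuitively, the forward direction samples from the reference, so any answer the reference assigns mass to but the fine-tuned policy has abandoned incurs a large penalty.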

🧠 AI · Bullish · arXiv – CS AI · Mar 46/103

SiNGER: A Clearer Voice Distills Vision Transformers Further

Researchers introduce SiNGER, a new knowledge distillation framework for Vision Transformers that suppresses harmful high-norm artifacts while preserving informative signals. The technique uses nullspace-guided perturbation and LoRA-based adapters to achieve state-of-the-art performance in downstream tasks.

🧠 AI · Bullish · arXiv – CS AI · Mar 46/102

ScaleDoc: Scaling LLM-based Predicates over Large Document Collections

ScaleDoc is a new system that enables efficient evaluation of LLM-based semantic predicates over large document collections by combining offline document representation with lightweight online filtering. It achieves a 2× speedup and cuts expensive LLM calls by up to 85% through contrastive learning and adaptive cascade mechanisms.
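The cascade idea can be sketched in a few lines: trust a cheap score derived from the offline representation when it is confident, and spend an LLM call only on the uncertain middle band. The `cheap_score` and `llm_predicate` callables and the thresholds are hypothetical, not ScaleDoc's API:

```python
def cascade_filter(docs, cheap_score, llm_predicate, accept_t=0.8, reject_t=0.2):
    """Adaptive cascade over an LLM predicate.

    cheap_score(doc)  -> float in [0, 1], from a lightweight offline model.
    llm_predicate(doc) -> bool, the expensive ground-truth-ish call.
    """
    results, llm_calls = {}, 0
    for doc in docs:
        s = cheap_score(doc)
        if s >= accept_t:
            results[doc] = True           # confidently passes the predicate
        elif s <= reject_t:
            results[doc] = False          # confidently fails it
        else:
            results[doc] = llm_predicate(doc)  # expensive fallback
            llm_calls += 1
    return results, llm_calls
```

Widening the gap between the two thresholds trades LLM cost for accuracy on the ambiguous documents.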

🧠 AI · Bullish · arXiv – CS AI · Mar 47/104

Best-of-∞ – Asymptotic Performance of Test-Time Compute

Researchers propose a 'best-of-∞' approach for large language models: majority voting in the limit of infinitely many samples, which characterizes the ceiling of test-time compute but cannot be run directly. They develop an adaptive generation scheme that dynamically selects the number of samples based on answer agreement, and extend the framework to weighted ensembles of multiple LLMs.
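The agreement-based stopping rule can be sketched as follows; `sample_fn` is a hypothetical stand-in for one LLM generation reduced to its final answer, and the vote-share margin is an illustrative criterion, not the paper's exact rule:

```python
from collections import Counter

def adaptive_majority(sample_fn, min_votes=5, max_votes=50, margin=0.25):
    """Approximate best-of-infinity: keep sampling answers until the leading
    answer's vote share clears 50% + margin, then stop early."""
    votes = Counter()
    for n in range(1, max_votes + 1):
        votes[sample_fn()] += 1
        if n >= min_votes:
            top, count = votes.most_common(1)[0]
            if count / n >= 0.5 + margin:
                return top, n  # strong agreement: no need for more samples
    return votes.most_common(1)[0][0], max_votes
```

Easy questions, where samples agree immediately, terminate at `min_votes`, so the expensive budget is concentrated on contested questions.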

🧠 AI · Neutral · arXiv – CS AI · Mar 47/103

Bridging Kolmogorov Complexity and Deep Learning: Asymptotically Optimal Description Length Objectives for Transformers

Researchers introduce a theoretical framework connecting Kolmogorov complexity to Transformer neural networks through asymptotically optimal description length objectives. The work demonstrates computational universality of Transformers and proposes a variational objective that achieves optimal compression, though current optimization methods struggle to find such solutions from random initialization.

🧠 AI · Bullish · arXiv – CS AI · Mar 47/102

Fine-Tuning Diffusion Models via Intermediate Distribution Shaping

Researchers present P-GRAFT, a new method for fine-tuning diffusion models by shaping distributions at intermediate noise levels, showing improved performance on text-to-image generation tasks. The framework achieves an 8.81% relative improvement over the base Stable Diffusion v2 model on popular benchmarks.

🧠 AI · Neutral · arXiv – CS AI · Mar 46/103

Death of the Novel(ty): Beyond n-Gram Novelty as a Metric for Textual Creativity

Research analyzing 8,618 expert annotations shows that n-gram novelty, commonly used to evaluate AI text generation, is insufficient as a measure of textual creativity. While n-gram novelty correlates positively with creativity, 91% of expressions with high n-gram novelty were not judged creative by experts, and higher novelty in open-source LLMs correlates with lower pragmatic quality.

🧠 AI · Bullish · arXiv – CS AI · Mar 47/103

LaDiR: Latent Diffusion Enhances LLMs for Text Reasoning

Researchers introduce LaDiR (Latent Diffusion Reasoner), a novel framework that combines continuous latent representation with iterative refinement capabilities to enhance Large Language Models' reasoning abilities. The system uses a Variational Autoencoder to encode reasoning steps and a latent diffusion model for parallel generation of diverse reasoning trajectories, showing improved accuracy and interpretability in mathematical reasoning benchmarks.

🧠 AI · Bullish · arXiv – CS AI · Mar 47/103

Mitigating Over-Refusal in Aligned Large Language Models via Inference-Time Activation Energy

Researchers introduce Energy Landscape Steering (ELS), a new framework that reduces false refusals in AI safety-aligned language models without compromising security. The method uses an external Energy-Based Model to dynamically guide model behavior during inference, improving compliance from 57.3% to 82.6% on safety benchmarks.
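The steering step can be caricatured as gradient descent on an external energy function over a hidden state. The `energy_grad` callable, step size, and iteration count below are hypothetical; the paper's actual EBM and update rule are not specified here:

```python
def steer_hidden_state(h, energy_grad, step=0.1, iters=5):
    """Nudge a hidden-state vector downhill on an external energy landscape.

    energy_grad(h) -> gradient of the (hypothetical) EBM's energy at h.
    """
    for _ in range(iters):
        g = energy_grad(h)
        h = [hi - step * gi for hi, gi in zip(h, g)]
    return h
```

Because the energy model is external and applied only at inference time, the aligned model's weights are untouched, which is what lets the method reduce false refusals without retraining.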

🧠 AI · Neutral · arXiv – CS AI · Mar 47/103

Spectrum Tuning: Post-Training for Distributional Coverage and In-Context Steerability

Researchers introduce Spectrum Tuning, a new post-training method that improves AI language models' ability to generate diverse outputs and follow in-context steering instructions. The technique addresses limitations in current post-training approaches that reduce models' distributional coverage and flexibility when tasks require multiple valid answers rather than single correct responses.

🧠 AI · Bullish · arXiv – CS AI · Mar 46/103

Self-Aug: Query and Entropy Adaptive Decoding for Large Vision-Language Models

Researchers developed a new training-free decoding strategy for Large Vision-Language Models that reduces hallucinations by using query-adaptive visual augmentation and entropy-based token selection. The method showed significant improvements in factual consistency across four LVLMs and seven benchmarks compared to existing approaches.
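The entropy gate can be sketched as choosing between two next-token distributions based on how uncertain the base view is. The two-distribution setup and the threshold are illustrative, not the paper's exact decision rule:

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def entropy_gated_argmax(base_probs, augmented_probs, threshold=1.0):
    """When the base distribution is uncertain (high entropy), defer to the
    query-augmented visual view; otherwise trust the base model."""
    chosen = augmented_probs if entropy(base_probs) > threshold else base_probs
    return max(range(len(chosen)), key=lambda i: chosen[i])
```

The appeal of such a gate is that it is training-free: it only inspects distributions the model already produces at decode time.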

🧠 AI · Neutral · arXiv – CS AI · Mar 46/103

Narrow Finetuning Leaves Clearly Readable Traces in Activation Differences

Researchers found that narrow finetuning of Large Language Models leaves detectable traces in model activations that can reveal information about the training domain. The study demonstrates that these biases can be used to understand what data was used for finetuning and suggests mixing pretraining data into finetuning to reduce these traces.
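A minimal version of the diagnostic is to compare mean activations of the base and finetuned models on the same inputs and look at which dimensions moved most. The vectors and helper names below are illustrative, not the paper's tooling:

```python
def mean_activation_shift(base_acts, tuned_acts):
    """Per-dimension mean difference between base-model and finetuned-model
    activations on the SAME inputs; large shifts hint at the finetuning domain."""
    n, dim = len(base_acts), len(base_acts[0])
    shift = [0.0] * dim
    for b, t in zip(base_acts, tuned_acts):
        for i in range(dim):
            shift[i] += (t[i] - b[i]) / n
    return shift

def most_shifted_dims(shift, k=2):
    """Dimensions whose activations moved the most under finetuning."""
    return sorted(range(len(shift)), key=lambda i: abs(shift[i]), reverse=True)[:k]
```

The paper's suggested countermeasure, mixing pretraining data into finetuning, amounts to shrinking exactly this kind of systematic shift.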

🧠 AI · Bullish · arXiv – CS AI · Mar 46/105

Curriculum Learning for Efficient Chain-of-Thought Distillation via Structure-Aware Masking and GRPO

Researchers developed a three-stage curriculum learning framework that improves Chain-of-Thought reasoning distillation from large language models to smaller ones. The method enables Qwen2.5-3B-Base to achieve an 11.29% accuracy improvement while reducing output length by 27.4% through progressive skill acquisition and Group Relative Policy Optimization.

🧠 AI · Bullish · arXiv – CS AI · Mar 46/102

Multimodal Multi-Agent Ransomware Analysis Using AutoGen

Researchers developed a multimodal multi-agent ransomware analysis framework using AutoGen that combines static, dynamic, and network data sources for improved ransomware detection. The system achieved 0.936 Macro-F1 score for family classification and demonstrated stable convergence over 100 epochs with a final composite score of 0.88.

🧠 AI · Neutral · arXiv – CS AI · Mar 47/103

Every Language Model Has a Forgery-Resistant Signature

Researchers have discovered that language models produce outputs with unique geometric signatures that lie on high-dimensional ellipses, which can be used to identify the source model. This signature is forgery-resistant and naturally occurring, potentially enabling cryptographic-like verification of AI model outputs.

🧠 AI · Neutral · arXiv – CS AI · Mar 47/102

WARP: Weight Teleportation for Attack-Resilient Unlearning Protocols

Researchers introduce WARP, a new defense mechanism for machine unlearning protocols that protects against privacy attacks where adversaries can exploit differences between pre- and post-unlearning AI models. The technique reduces attack success rates by up to 92% while maintaining model accuracy on retained data.

🧠 AI · Bullish · arXiv – CS AI · Mar 47/103

Dual Randomized Smoothing: Beyond Global Noise Variance

Researchers propose a dual Randomized Smoothing framework that overcomes limitations of standard neural network robustness certification by using input-dependent noise variances instead of global ones. The method achieves strong performance at both small and large radii with gains of 15-20% on CIFAR-10 and 8-17% on ImageNet, while adding only 60% computational overhead.
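Standard randomized smoothing can be sketched with a Monte-Carlo vote; the per-input `sigma` argument is the knob the paper generalizes, while this sketch otherwise follows the classic Cohen-et-al-style recipe with the empirical vote share standing in for a proper confidence lower bound:

```python
import random
from statistics import NormalDist

def smoothed_certify(classifier, x, sigma, n=2000, seed=0):
    """Vote under N(0, sigma^2) input noise and certify an L2 radius of
    sigma * Phi^{-1}(p_top) for the winning class (0 if no majority)."""
    rng = random.Random(seed)
    votes = {}
    for _ in range(n):
        label = classifier([xi + rng.gauss(0.0, sigma) for xi in x])
        votes[label] = votes.get(label, 0) + 1
    top = max(votes, key=votes.get)
    p = min(votes[top] / n, 1.0 - 1.0 / (2 * n))  # keep inv_cdf inside (0, 1)
    radius = sigma * NormalDist().inv_cdf(p) if p > 0.5 else 0.0
    return top, radius
```

With a single global `sigma`, small-radius and large-radius certification fight each other; letting `sigma` depend on the input is what the dual framework exploits.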

🧠 AI · Bullish · arXiv – CS AI · Mar 47/103

FAST: Topology-Aware Frequency-Domain Distribution Matching for Coreset Selection

Researchers propose FAST, a new DNN-free framework for coreset selection that compresses large datasets into representative subsets for training deep neural networks. The method uses frequency-domain distribution matching and achieves 9.12% average accuracy improvement while reducing power consumption by 96.57% compared to existing methods.

🧠 AI · Bullish · arXiv – CS AI · Mar 47/104

VeriStruct: AI-assisted Automated Verification of Data-Structure Modules in Verus

VeriStruct is a new AI framework that automates formal verification of complex data structure modules in the Verus programming language. The system achieved a 99.2% success rate in verifying 128 out of 129 functions across eleven Rust data structure modules, representing significant progress in AI-assisted formal verification.

🧠 AI · Bullish · arXiv – CS AI · Mar 46/104

xLLM Technical Report

xLLM is a new open-source Large Language Model inference framework that delivers significantly improved performance for enterprise AI deployments. The framework achieves 1.7-2.2x higher throughput compared to existing solutions like MindIE and vLLM-Ascend through novel architectural optimizations including decoupled service-engine design and intelligent scheduling.

🧠 AI · Bearish · arXiv – CS AI · Mar 47/104

Zero-Permission Manipulation: Can We Trust Large Multimodal Model Powered GUI Agents?

Researchers discovered a critical security vulnerability in AI-powered GUI agents on Android, where malicious apps can hijack agent actions without requiring dangerous permissions. The 'Action Rebinding' attack exploits timing gaps between AI observation and action, achieving 100% success rates in tests across six popular Android GUI agents.
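One natural mitigation for the observe/act timing gap (a sketch of the general idea, not a defense proposed in the paper) is to fingerprint the UI state at observation time and re-verify it immediately before dispatching the action:

```python
import hashlib

def fingerprint(ui_tree: str) -> str:
    """Stable hash of a serialized UI state (e.g. an accessibility tree dump)."""
    return hashlib.sha256(ui_tree.encode()).hexdigest()

def safe_dispatch(observed_ui, current_ui_fn, action_fn):
    """Re-read the screen just before acting; abort if it changed since the
    agent's observation, closing the window an Action Rebinding attack needs."""
    if fingerprint(current_ui_fn()) != fingerprint(observed_ui):
        return "aborted: UI changed since observation"
    return action_fn()
```

The check must run atomically with the dispatch to be meaningful; done earlier, it just moves the exploitable gap rather than closing it.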

🧠 AI · Bullish · arXiv – CS AI · Mar 47/103

Nightjar: Dynamic Adaptive Speculative Decoding for Large Language Models Serving

Nightjar is a new adaptive speculative decoding framework for large language models that dynamically adjusts to system load conditions. It achieves 27.29% higher throughput and up to 20.18% lower latency by intelligently enabling or disabling speculation based on workload demands.
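A load-aware speculation toggle can be sketched with an exponential moving average of draft acceptance plus a queue-depth check; the class, signals, and thresholds below are illustrative, not Nightjar's implementation:

```python
class SpeculationController:
    """Enable speculative decoding only when the system has headroom and
    draft tokens are being accepted often enough to pay for themselves."""

    def __init__(self, max_queue=8, min_accept=0.6, ema=0.9):
        self.max_queue, self.min_accept, self.ema = max_queue, min_accept, ema
        self.accept_rate = 1.0  # optimistic start

    def observe(self, accepted, proposed):
        """Fold one batch's draft-token acceptance into the running estimate."""
        if proposed:
            self.accept_rate = (self.ema * self.accept_rate
                                + (1 - self.ema) * accepted / proposed)

    def should_speculate(self, queue_len):
        """Speculate under light load with a healthy acceptance rate."""
        return queue_len <= self.max_queue and self.accept_rate >= self.min_accept
```

Under heavy load the draft model competes with the target model for the same accelerator, so disabling speculation at high queue depth is the part of the design that protects throughput.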

🧠 AI · Bullish · arXiv – CS AI · Mar 47/104

Learning Contextual Runtime Monitors for Safe AI-Based Autonomy

Researchers introduce a novel framework for learning context-aware runtime monitors for AI-based control systems in autonomous vehicles. The approach uses contextual multi-armed bandits to select the best controller for current conditions rather than averaging outputs, providing theoretical safety guarantees and improved performance in simulated driving scenarios.

Page 153 of 705
◆ AI Mentions
🏢 OpenAI 103×
🏢 Nvidia 58×
🧠 GPT-5 39×
🧠 Gemini 38×
🧠 Claude 37×
🏢 Anthropic 36×
🧠 ChatGPT 20×
🧠 Llama 19×
🧠 GPT-4 18×
🏢 Meta 11×
🏢 Perplexity 9×
🧠 Sonnet 9×
🏢 xAI 9×
🧠 Opus 8×
🏢 Microsoft 7×
🏢 Google 7×
🏢 Hugging Face 6×
🧠 Grok 5×
🧠 o1 2×
🧠 Stable Diffusion 1×
▲ Trending Tags
1. #ai 502
2. #iran 476
3. #market 328
4. #geopolitical 298
5. #trump 108
6. #openai 98
7. #security 93
8. #geopolitics 84
9. #geopolitical-risk 71
10. #inflation 68
11. #artificial-intelligence 67
12. #nvidia 57
13. #machine-learning 56
14. #google 46
15. #fed 45
Tag Connections
#geopolitical ↔ #iran 213
#iran ↔ #market 138
#geopolitical ↔ #market 113
#iran ↔ #trump 80
#ai ↔ #artificial-intelligence 54
#ai ↔ #market 48
#geopolitical ↔ #trump 40
#market ↔ #trump 40
#ai ↔ #openai 39
#ai ↔ #google 36
© 2026 y0.exchange