
AI Pulse News

Models, papers, tools. 17,638 articles with AI-powered sentiment analysis and key takeaways.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠

Train Once, Answer All: Many Pretraining Experiments for the Cost of One

Researchers developed a method to conduct multiple AI training experiments simultaneously within a single pretraining run, reducing computational costs while maintaining research validity. The approach was validated across ten experiments using models up to 2.7B parameters trained on 210B tokens, with minimal impact on training dynamics.

AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 5
🧠

Robust Fine-Tuning from Non-Robust Pretrained Models: Mitigating Suboptimal Transfer With Epsilon-Scheduling

Researchers identified that fine-tuning non-robust pretrained AI models with robust objectives can lead to poor performance, termed 'suboptimal transfer.' They propose Epsilon-Scheduling, a novel training technique that adjusts perturbation strength during training to improve both task adaptation and adversarial robustness.
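
As a rough sketch of the scheduling idea (the linear warmup shape, warmup fraction, and the single-step FGSM attack below are illustrative assumptions, not the paper's exact recipe), the perturbation budget can simply be ramped up over the course of adversarial fine-tuning instead of being fixed:

```python
import torch
import torch.nn.functional as F

def epsilon_schedule(step: int, total_steps: int, eps_max: float, warmup_frac: float = 0.5) -> float:
    """Ramp the perturbation budget from 0 to eps_max over the first warmup_frac of
    fine-tuning, then hold it constant (the paper's actual schedule may differ)."""
    warmup_steps = max(1, int(warmup_frac * total_steps))
    return eps_max * min(1.0, step / warmup_steps)

def fgsm_example(model, x, y, eps):
    """Single-step adversarial example under the current epsilon budget."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x_adv + eps * grad.sign()).detach()

# Inside the fine-tuning loop, the scheduled epsilon replaces a fixed budget:
#   eps = epsilon_schedule(step, total_steps, eps_max=8 / 255)
#   loss = F.cross_entropy(model(fgsm_example(model, x, y, eps)), y)
```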

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠

Beyond Single-Modal Analytics: A Framework for Integrating Heterogeneous LLM-Based Query Systems for Multi-Modal Data

Researchers introduce Meta Engine, a unified semantic query system that integrates multiple specialized LLM-based query systems to handle multi-modal data analysis. The system addresses fragmentation in current semantic query tools by combining specialized systems through five key components, achieving 3-24x better performance than existing baselines.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠

HEAPr: Hessian-based Efficient Atomic Expert Pruning in Output Space

Researchers introduce HEAPr, a novel pruning algorithm for Mixture-of-Experts (MoE) language models that decomposes experts into atomic components for more precise pruning. The method achieves nearly lossless compression at 20-25% pruning ratios while reducing computational costs by approximately 20%.
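
For intuition only, a generic Hessian-based saliency criterion can be sketched as follows; treating one output row of an expert weight matrix as an "atomic" unit and using a diagonal-Hessian, OBD-style score are my simplifying assumptions, not HEAPr's actual decomposition:

```python
import torch

def row_saliency(weight: torch.Tensor, hessian_diag: torch.Tensor) -> torch.Tensor:
    """OBD-style score: estimated loss increase from zeroing a row, 0.5 * sum_i H_ii * w_i^2.
    Here one output row of an expert plays the role of an 'atomic' unit."""
    return 0.5 * (hessian_diag * weight.pow(2)).sum(dim=1)

def prune_expert_rows(expert_weights, expert_hessian_diags, prune_ratio: float = 0.2):
    """Rank atomic units across all experts and zero out the lowest-saliency fraction."""
    all_scores = torch.cat([row_saliency(w, h) for w, h in zip(expert_weights, expert_hessian_diags)])
    cutoff = torch.quantile(all_scores, prune_ratio)
    pruned = []
    for w, h in zip(expert_weights, expert_hessian_diags):
        keep = (row_saliency(w, h) >= cutoff).unsqueeze(1).to(w.dtype)
        pruned.append(w * keep)            # zero the rows below the global cutoff
    return pruned
```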

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠

Bridging Draft Policy Misalignment: Group Tree Optimization for Speculative Decoding

Researchers introduce Group Tree Optimization (GTO), a new training method that improves speculative decoding for large language models by aligning draft model training with actual decoding policies. GTO achieves 7.4% better acceptance length and 7.7% additional speedup over existing state-of-the-art methods across multiple benchmarks and LLMs.
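
For context on what "acceptance length" means here: in standard speculative decoding, a small draft model proposes several tokens and the target model keeps each with probability min(1, p_target/p_draft). GTO changes how the draft model is trained so that more proposals survive this test; the verification step itself, sketched below, is the usual one and is independent of GTO:

```python
import torch

def accepted_length(draft_probs: torch.Tensor, target_probs: torch.Tensor,
                    draft_tokens, seed: int = 0) -> int:
    """Standard speculative-decoding verification. draft_probs / target_probs have shape
    (n_draft, vocab); draft_tokens is the list of drafted token ids. Verification stops
    at the first rejection; the count of kept tokens is the acceptance length."""
    gen = torch.Generator().manual_seed(seed)
    n_accepted = 0
    for i, tok in enumerate(draft_tokens):
        ratio = target_probs[i, tok] / draft_probs[i, tok].clamp(min=1e-12)
        if torch.rand(1, generator=gen).item() < ratio.item():
            n_accepted += 1                 # token kept: target agrees often enough
        else:
            break                           # token rejected: stop and resample from target
    return n_accepted
```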

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3
🧠

Advancing Universal Deep Learning for Electronic-Structure Hamiltonian Prediction of Materials

Researchers developed NextHAM, a deep learning method for predicting electronic-structure Hamiltonians of materials, offering significant computational efficiency advantages over traditional DFT methods. The system introduces a neural E(3)-symmetry architecture and a new dataset, Materials-HAM-SOC, with 17,000 material structures spanning 68 elements.

AI · Bearish · arXiv – CS AI · Mar 3 · 7/10 · 3
🧠

Untargeted Jailbreak Attack

Researchers have developed a new 'untargeted jailbreak attack' (UJA) that can compromise AI safety systems in large language models with over 80% success rate using only 100 optimization iterations. This gradient-based attack method expands the search space by maximizing unsafety probability without fixed target responses, outperforming existing attacks by over 30%.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠

Uni-X: Mitigating Modality Conflict with a Two-End-Separated Architecture for Unified Multimodal Models

Researchers introduce Uni-X, a novel architecture for unified multimodal AI models that addresses gradient conflicts between vision and text processing. The X-shaped design uses modality-specific processing at input/output layers while sharing middle layers, achieving superior efficiency and matching 7B parameter models with only 3B parameters.
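
Structurally, the X-shape can be sketched as modality-specific stacks at the two ends with a shared trunk in the middle. The layer counts, dimensions, and use of plain encoder layers below are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

def stack(n_layers: int, d_model: int) -> nn.ModuleList:
    return nn.ModuleList([nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
                          for _ in range(n_layers)])

class UniXStyleBackbone(nn.Module):
    """X-shaped layout: modality-specific layers at the input and output ends,
    shared layers in the middle where gradients from both modalities meet."""
    def __init__(self, d_model=512, n_edge=2, n_shared=8, text_vocab=32000, image_vocab=8192):
        super().__init__()
        self.embed = nn.ModuleDict({"text": nn.Embedding(text_vocab, d_model),
                                    "image": nn.Embedding(image_vocab, d_model)})
        self.entry = nn.ModuleDict({"text": stack(n_edge, d_model), "image": stack(n_edge, d_model)})
        self.shared = stack(n_shared, d_model)
        self.exit = nn.ModuleDict({"text": stack(n_edge, d_model), "image": stack(n_edge, d_model)})
        self.head = nn.ModuleDict({"text": nn.Linear(d_model, text_vocab),
                                   "image": nn.Linear(d_model, image_vocab)})

    def forward(self, tokens: torch.Tensor, modality: str) -> torch.Tensor:
        h = self.embed[modality](tokens)
        for blk in self.entry[modality]:
            h = blk(h)                      # modality-specific processing at the input end
        for blk in self.shared:
            h = blk(h)                      # shared middle trunk
        for blk in self.exit[modality]:
            h = blk(h)                      # modality-specific processing at the output end
        return self.head[modality](h)
```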

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠

Distribution-Aligned Decoding for Efficient LLM Task Adaptation

Researchers introduce SVDecode, a new method for adapting large language models to specific tasks without extensive fine-tuning. The technique uses steering vectors during decoding to align output distributions with task requirements, improving accuracy by up to 5 percentage points while adding minimal computational overhead.
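
A rough sketch of the steering-vector mechanic follows; the alpha scale, the difference-of-means construction, and applying the shift only at the last position are assumptions standing in for the paper's distribution-alignment procedure:

```python
import torch
import torch.nn as nn

def build_steering_vector(task_hidden: torch.Tensor, generic_hidden: torch.Tensor) -> torch.Tensor:
    """One common recipe: mean difference between hidden states collected on task-specific
    prompts and on generic prompts (each of shape (n_examples, d_model))."""
    return task_hidden.mean(dim=0) - generic_hidden.mean(dim=0)

def steered_next_token_logits(hidden_states: torch.Tensor, lm_head: nn.Linear,
                              steering_vec: torch.Tensor, alpha: float = 2.0) -> torch.Tensor:
    """Shift the last-position hidden state toward the task direction before the
    vocabulary projection, nudging the output distribution toward task-aligned tokens."""
    h_last = hidden_states[:, -1, :] + alpha * steering_vec
    return lm_head(h_last)
```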

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠

BWCache: Accelerating Video Diffusion Transformers through Block-Wise Caching

Researchers have developed BWCache, a training-free method that accelerates Diffusion Transformer (DiT) video generation by up to 6× through block-wise feature caching and reuse. The technique exploits computational redundancy in DiT blocks across timesteps while maintaining visual quality, addressing a key bottleneck in real-world AI video generation applications.
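
The caching idea fits in a few lines: keep each block's last input and output, and if the input at the current timestep has barely changed, return the cached output instead of recomputing. The relative-change test and the 5% threshold below are assumptions, not BWCache's exact similarity indicator:

```python
import torch

class BlockWiseCache:
    """Cache each DiT block's output and reuse it at the next timestep when the
    block's input has changed little since the last computation."""
    def __init__(self, threshold: float = 0.05):
        self.threshold = threshold
        self.inputs, self.outputs = {}, {}

    def run_block(self, idx: int, block, x: torch.Tensor) -> torch.Tensor:
        prev = self.inputs.get(idx)
        if prev is not None and prev.shape == x.shape:
            rel_change = (x - prev).norm() / (prev.norm() + 1e-8)
            if rel_change < self.threshold:
                return self.outputs[idx]    # reuse cached output, skip the block's compute
        out = block(x)                      # recompute and refresh the cache
        self.inputs[idx], self.outputs[idx] = x.detach(), out.detach()
        return out

# In the denoising loop, each block call `x = block(x)` becomes `x = cache.run_block(i, block, x)`.
```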

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3
🧠

RACE Attention: A Strictly Linear-Time Attention for Long-Sequence Training

Researchers introduce RACE Attention, a new linear-time alternative to traditional Softmax Attention that can process up to 75 million tokens in a single pass, compared to current GPU-optimized implementations that fail beyond 4 million tokens. The technology uses angular similarity and Gaussian random projections to achieve dramatic efficiency gains while maintaining performance across language modeling and classification tasks.
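
To make the two ingredients concrete, here is a toy linear-time attention built from Gaussian random projections and angular (sign) similarity. This is my illustration of the general idea, SimHash-style bucketing of keys with per-bucket value aggregation, not the paper's estimator:

```python
import torch

def bucketed_angular_attention(q, k, v, n_proj: int = 8, seed: int = 0):
    """q, k: (n, d); v: (n, d_v). Keys and queries are hashed by the sign pattern of
    Gaussian random projections (an angular-similarity hash); values are pre-aggregated
    per bucket, so each query reads its bucket's average in O(1) instead of attending
    to all n keys."""
    n, d = k.shape
    gen = torch.Generator().manual_seed(seed)
    proj = torch.randn(d, n_proj, generator=gen)                  # shared Gaussian projections
    bit_weights = 2 ** torch.arange(n_proj)                       # sign pattern -> bucket id
    k_ids = (((k @ proj) > 0).long() * bit_weights).sum(dim=1)
    q_ids = (((q @ proj) > 0).long() * bit_weights).sum(dim=1)
    n_buckets = 2 ** n_proj
    v_sum = torch.zeros(n_buckets, v.shape[1]).index_add_(0, k_ids, v)
    count = torch.zeros(n_buckets).index_add_(0, k_ids, torch.ones(n))
    bucket_mean = v_sum / count.clamp(min=1.0).unsqueeze(1)       # average value per bucket
    return bucket_mean[q_ids]                                     # one lookup per query: linear time
```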

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 5
🧠

Expressive Power of Implicit Models: Rich Equilibria and Test-Time Scaling

Researchers provide mathematical proof that implicit models can achieve greater expressive power through increased test-time computation, explaining how these memory-efficient architectures can match larger explicit networks. The study validates this scaling property across image reconstruction, scientific computing, operations research, and LLM reasoning domains.
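
The "more test-time compute, richer equilibria" point is easiest to see in a deep-equilibrium-style layer, where the output is defined implicitly as a fixed point and the solver's iteration count is a test-time knob. A minimal sketch, with the architecture of f and the naive solver as illustrative choices:

```python
import torch
import torch.nn as nn

class ImplicitLayer(nn.Module):
    """Deep-equilibrium-style layer: the output is a fixed point z* = f(z*, x)."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(2 * dim, dim), nn.Tanh())

    def forward(self, x: torch.Tensor, n_iters: int = 30) -> torch.Tensor:
        z = torch.zeros_like(x)
        for _ in range(n_iters):            # more iterations = more test-time compute
            z = self.f(torch.cat([z, x], dim=-1))
        return z

# The same trained weights can be run at different test-time budgets:
#   layer = ImplicitLayer(); y_cheap = layer(x, n_iters=5); y_precise = layer(x, n_iters=100)
```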

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠

SwiReasoning: Switch-Thinking in Latent and Explicit for Pareto-Superior Reasoning LLMs

Researchers introduce SwiReasoning, a training-free framework that improves large language model reasoning by dynamically switching between explicit chain-of-thought and latent reasoning modes. The method achieves 1.8%-3.1% accuracy improvements and 57%-79% better token efficiency across mathematics, STEM, coding, and general benchmarks.
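
One plausible way to implement the switch (the entropy signal and the threshold below are my assumptions; the paper defines its own confidence criterion) is to check how peaked the next-token distribution is and only commit to explicit chain-of-thought tokens when the model is confident:

```python
import torch

def choose_reasoning_mode(next_token_probs: torch.Tensor, entropy_threshold: float = 2.5) -> str:
    """Return 'explicit' (emit a chain-of-thought token) when the next-token distribution
    is confident, else 'latent' (keep reasoning in hidden space without committing a token).
    next_token_probs is a 1-D probability vector over the vocabulary."""
    entropy = -(next_token_probs * next_token_probs.clamp(min=1e-12).log()).sum()
    return "explicit" if entropy.item() < entropy_threshold else "latent"
```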

AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 3
🧠

What Scales in Cross-Entropy Scaling Law?

Researchers discovered that the traditional cross-entropy scaling law for large language models breaks down at very large scales because only one component (error-entropy) actually follows power-law scaling, while other components remain constant. This finding explains why model performance improvements become less predictable as models grow larger and establishes a new error-entropy scaling law for a clearer understanding of LLM development.
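
The decomposition can be illustrated with a quick fit: model the loss as a power-law term plus a constant floor, L(N) = a·N^(−α) + c; if only the first term scales (the paper's finding), the floor c eventually dominates. The data below is synthetic, generated from the model itself purely to show the fitting procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

def loss_model(n_params, a, alpha, c):
    # Cross-entropy as a power-law "error-entropy" term plus a non-scaling constant floor.
    return a * np.power(n_params, -alpha) + c

rng = np.random.default_rng(0)
sizes = np.logspace(8, 10, 8)                                     # 1e8 .. 1e10 parameters
losses = loss_model(sizes, 12.0, 0.12, 1.6) + rng.normal(0, 0.005, sizes.shape)  # synthetic data
(a, alpha, c), _ = curve_fit(loss_model, sizes, losses, p0=[10.0, 0.1, 1.0], maxfev=20000)
print(f"fitted exponent alpha={alpha:.3f}, irreducible floor c={c:.3f}")
```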

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠

Tiny but Mighty: A Software-Hardware Co-Design Approach for Efficient Multimodal Inference on Battery-Powered Small Devices

Researchers developed NANOMIND, a software-hardware framework that optimizes Large Multimodal Models for battery-powered devices by breaking them into modular components and mapping each to optimal accelerators. The system achieves 42.3% energy reduction and enables 20.8 hours of operation running LLaVA-OneVision on a compact device without network connectivity.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3
🧠

MorphArtGrasp: Morphology-Aware Cross-Embodiment Dexterous Hand Articulation Generation for Grasping

MorphArtGrasp is a new AI framework that enables dexterous robotic hands to grasp objects across different hand designs without extensive retraining. The system achieves 91.9% success rate in simulation and 87% in real-world tests by using morphology-aware learning to adapt grasping strategies to different robotic hand configurations.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠

Emergent Coordination in Multi-Agent Language Models

Researchers developed an information-theoretic framework to measure when multi-agent AI systems exhibit coordinated behavior beyond individual agents. The study found that specific prompt designs can transform collections of AI agents into coordinated collectives that mirror human group intelligence principles.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3
🧠

Value Flows

Researchers have developed Value Flows, a new reinforcement learning method that uses flow-based models to estimate complete return distributions rather than single scalar values. The approach achieves 1.3x improvement in success rates across 62 benchmark tasks by better identifying states with high return uncertainty for improved decision-making.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠

Relational Transformer: Toward Zero-Shot Foundation Models for Relational Data

Researchers from Stanford introduce the Relational Transformer (RT), a new AI architecture that can work with relational databases without task-specific fine-tuning. The 22M parameter model achieves 93% of the performance of fully supervised models on binary classification tasks, significantly outperforming a 27B parameter LLM, which reaches 84%.

AI · Bearish · arXiv – CS AI · Mar 3 · 7/10 · 3
🧠

Adaptive Attacks on Trusted Monitors Subvert AI Control Protocols

Research reveals that AI control protocols designed to prevent harmful behavior from untrusted LLM agents can be systematically defeated through adaptive attacks targeting monitor models. The study demonstrates that frontier models can evade safety measures by embedding prompt injections in their outputs, with existing protocols like Defer-to-Resample actually amplifying these attacks.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3
🧠

GAR: Generative Adversarial Reinforcement Learning for Formal Theorem Proving

Researchers introduce GAR (Generative Adversarial Reinforcement Learning), a new AI training framework that jointly trains problem generators and solvers in an adversarial loop for formal theorem proving. The method shows significant improvements in mathematical proof capabilities, with models achieving 4.20% average relative improvement on benchmark tests.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3
🧠

Ctrl-World: A Controllable Generative World Model for Robot Manipulation

Researchers have developed Ctrl-World, a controllable generative world model that enables robot policies to be evaluated and improved through simulation rather than costly real-world testing. The model, trained on 95k trajectories, can generate consistent 20+ second simulations and improves policy success rates by 44.7% through synthetic data generation.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3
🧠

On the Reasoning Abilities of Masked Diffusion Language Models

New research demonstrates that Masked Diffusion Models (MDMs) for text generation are computationally equivalent to chain-of-thought augmented transformers in finite-precision settings. The study proves MDMs can solve all reasoning problems that CoT transformers can, while being more efficient for certain problem classes due to parallel generation capabilities.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3
🧠

UrbanVerse: Scaling Urban Simulation by Watching City-Tour Videos

UrbanVerse introduces a data-driven system that converts city-tour videos into realistic urban simulation environments for training AI agents like delivery robots. The system includes 100K+ annotated 3D urban assets and shows significant improvements in navigation success rates, with +30.1% better performance in real-world transfers.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠

LightMem: Lightweight and Efficient Memory-Augmented Generation

Researchers introduce LightMem, a new memory system for Large Language Models that mimics human memory structure with three stages: sensory, short-term, and long-term memory. The system achieves up to 7.7% better QA accuracy while reducing token usage by up to 106x and API calls by up to 159x compared to existing methods.
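
The three-stage structure can be sketched as a small container class. The capacities, the crude salience filter, and the join-based consolidation below are placeholders; a real system would use the LLM itself to compress long-term entries:

```python
from collections import deque

class ThreeStageMemory:
    """Sensory buffer -> short-term window -> consolidated long-term store,
    so the prompt assembled for the LLM stays small."""
    def __init__(self, sensory_size=4, short_term_size=16):
        self.sensory = deque(maxlen=sensory_size)
        self.short_term = deque(maxlen=short_term_size)
        self.long_term = []

    def observe(self, message: str) -> None:
        self.sensory.append(message)                       # raw, quickly overwritten
        if len(message.split()) >= 3:                      # crude salience filter (assumption)
            if len(self.short_term) == self.short_term.maxlen:
                self._consolidate()
            self.short_term.append(message)

    def _consolidate(self) -> None:
        # Compact the oldest half of short-term memory into one long-term entry.
        oldest = [self.short_term.popleft() for _ in range(len(self.short_term) // 2)]
        self.long_term.append(" | ".join(oldest))

    def build_context(self, query: str, k: int = 2) -> str:
        # Naive retrieval: long-term entries sharing the most words with the query.
        scored = sorted(self.long_term,
                        key=lambda e: len(set(e.split()) & set(query.split())), reverse=True)
        return "\n".join(scored[:k] + list(self.short_term))
```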

Page 157 of 706
◆ AI Mentions
🏢 OpenAI · 103×
🏢 Nvidia · 58×
🧠 GPT-5 · 37×
🧠 Gemini · 34×
🧠 Claude · 34×
🏢 Anthropic · 33×
🧠 ChatGPT · 19×
🧠 GPT-4 · 14×
🧠 Llama · 14×
🏢 xAI · 9×
🏢 Meta · 9×
🧠 Sonnet · 8×
🏢 Perplexity · 8×
🏢 Google · 7×
🏢 Microsoft · 7×
🧠 Opus · 7×
🧠 Grok · 5×
🏢 Hugging Face · 4×
🧠 o1 · 2×
🏢 Cohere · 1×
▲ Trending Tags
#iran (470) · #ai (454) · #market (321) · #geopolitical (288) · #trump (107) · #openai (98) · #geopolitics (92) · #security (86) · #geopolitical-risk (76) · #inflation (69) · #artificial-intelligence (60) · #nvidia (56) · #machine-learning (50) · #sanctions (46) · #fed (44)
Tag Sentiment
#iran · 470 articles
#ai · 454 articles
#market · 321 articles
#geopolitical · 288 articles
#trump · 107 articles
#openai · 98 articles
#geopolitics · 92 articles
#security · 86 articles
#geopolitical-risk · 76 articles
#inflation · 69 articles
Tag Connections
#geopolitical ↔ #iran · 206
#iran ↔ #market · 137
#geopolitical ↔ #market · 110
#iran ↔ #trump · 80
#ai ↔ #artificial-intelligence · 48
#ai ↔ #market · 46
#market ↔ #trump · 40
#geopolitical ↔ #trump · 40
#ai ↔ #openai · 39
#ai ↔ #google · 35