y0news

AI Pulse News

Models, papers, tools. 17,638 articles with AI-powered sentiment analysis and key takeaways.

🧠 AI · Bullish · arXiv – CS AI · Mar 37/104

Emergent Coordination in Multi-Agent Language Models

Researchers developed an information-theoretic framework to measure when multi-agent AI systems exhibit coordinated behavior beyond individual agents. The study found that specific prompt designs can transform collections of AI agents into coordinated collectives that mirror human group intelligence principles.
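The paper's exact framework is not reproduced here, but the basic information-theoretic ingredient, how much one agent's behavior tells you about another's, can be illustrated with plain empirical mutual information (a toy sketch; the function and action lists are illustrative, not from the paper):

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Empirical mutual information (in bits) between two discrete
    sequences, e.g. the action histories of two agents."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

# Perfectly coordinated agents: one action pins down the other.
print(mutual_information(["L", "R", "L", "R"], ["L", "R", "L", "R"]))  # 1.0
# Unrelated behavior: no shared information.
print(mutual_information(["L", "R", "L", "R"], ["L", "L", "R", "R"]))  # 0.0
```

Coordination "beyond individual agents" would additionally require multivariate measures such as synergy, which this pairwise quantity does not capture.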

🧠 AI · Bullish · arXiv – CS AI · Mar 37/103

Value Flows

Researchers have developed Value Flows, a new reinforcement learning method that uses flow-based models to estimate complete return distributions rather than single scalar values. The approach achieves 1.3x improvement in success rates across 62 benchmark tasks by better identifying states with high return uncertainty for improved decision-making.
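Why a full return distribution helps: two states can share the same expected return yet differ sharply in risk, which a scalar critic cannot distinguish. A toy illustration of that gap (not the paper's flow-based estimator; the return samples are invented):

```python
import statistics

# Monte-Carlo return samples for two states with equal means.
returns = {
    "safe":  [10, 10, 11, 9, 10],
    "risky": [0, 20, 30, -10, 10],
}

def scalar_value(rs):
    """What a standard critic learns: a single expected return."""
    return statistics.mean(rs)

def return_uncertainty(rs):
    """Extra signal a distributional critic exposes: return spread."""
    return statistics.stdev(rs)

for state, rs in returns.items():
    print(state, scalar_value(rs), round(return_uncertainty(rs), 2))
```

Both states score 10 under a scalar critic, but the distributional view flags "risky" as a high-uncertainty state worth different treatment.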

🧠 AI · Bullish · arXiv – CS AI · Mar 37/103

MorphArtGrasp: Morphology-Aware Cross-Embodiment Dexterous Hand Articulation Generation for Grasping

MorphArtGrasp is a new AI framework that enables dexterous robotic hands to grasp objects across different hand designs without extensive retraining. The system achieves 91.9% success rate in simulation and 87% in real-world tests by using morphology-aware learning to adapt grasping strategies to different robotic hand configurations.

🧠 AI · Bullish · arXiv – CS AI · Mar 37/104

Relational Transformer: Toward Zero-Shot Foundation Models for Relational Data

Researchers from Stanford introduce the Relational Transformer (RT), a new AI architecture that can work with relational databases without task-specific fine-tuning. The 22M parameter model achieves 93% of the performance of fully supervised models on binary classification tasks, significantly outperforming a 27B parameter LLM at 84%.

🧠 AI · Bullish · arXiv – CS AI · Mar 37/104

LightMem: Lightweight and Efficient Memory-Augmented Generation

Researchers introduce LightMem, a new memory system for Large Language Models that mimics human memory structure with three stages: sensory, short-term, and long-term memory. The system achieves up to 7.7% better QA accuracy while reducing token usage by up to 106x and API calls by up to 159x compared to existing methods.
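The three-stage structure described above can be sketched as a toy tiered buffer (class name and sizes are invented; LightMem's actual summarization and consolidation are far more involved):

```python
from collections import deque

class TieredMemory:
    """Toy sensory -> short-term -> long-term memory. Illustrative only."""

    def __init__(self, sensory_size=3, short_size=4):
        self.sensory = deque(maxlen=sensory_size)   # raw recent turns
        self.short_term = deque(maxlen=short_size)  # condensed entries
        self.long_term = []                         # consolidated store

    def observe(self, turn: str):
        if len(self.sensory) == self.sensory.maxlen:
            # The oldest raw turn is condensed before it falls out.
            self._condense(self.sensory[0])
        self.sensory.append(turn)

    def _condense(self, turn: str):
        if len(self.short_term) == self.short_term.maxlen:
            self.long_term.append(self.short_term.popleft())
        self.short_term.append(turn[:40])  # stand-in for real summarization

    def context(self):
        """Compact prompt context instead of the full transcript."""
        return list(self.long_term[-3:]) + list(self.short_term) + list(self.sensory)

mem = TieredMemory()
for i in range(30):
    mem.observe(f"turn {i}")
print(len(mem.context()))  # 10 entries instead of the full 30-turn transcript
```

The token savings in the paper come from this kind of staged compression: only a bounded, condensed context ever reaches the model.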

🧠 AI · Bullish · arXiv – CS AI · Mar 37/103

A cross-species neural foundation model for end-to-end speech decoding

Researchers developed a new Brain-to-Text (BIT) framework that uses cross-species neural foundation models to decode speech from brain activity with significantly improved accuracy. The system reduces word error rates from 24.69% to 10.22% compared to previous methods and enables seamless translation of both attempted and imagined speech into text.

🧠 AI · Neutral · arXiv – CS AI · Mar 37/104

How Do LLMs Use Their Depth?

New research reveals that large language models use a "Guess-then-Refine" framework, starting with high-frequency token predictions in early layers and refining them with contextual information in deeper layers. The study provides detailed insights into layer-wise computation dynamics through multiple-choice tasks, fact recall analysis, and part-of-speech predictions.

🧠 AI · Bullish · arXiv – CS AI · Mar 37/103

Cache What Lasts: Token Retention for Memory-Bounded KV Cache in LLMs

Researchers propose TRIM-KV, a novel approach that learns token importance for memory-bounded LLM inference through lightweight retention gates, addressing the quadratic cost of self-attention and growing key-value cache issues. The method outperforms existing eviction baselines across multiple benchmarks and provides insights into LLM interpretability through learned retention scores.
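TRIM-KV's learned retention gates are not reproduced here, but the eviction pattern they drive, keeping the most recent tokens plus the highest-scoring older ones within a fixed memory budget, can be sketched generically (the scores are hand-picked stand-ins for learned gate values):

```python
import numpy as np

def evict_kv(keys, values, scores, budget, keep_recent=2):
    """Keep at most `budget` cached tokens: the most recent ones plus
    the highest-scoring older tokens. Returns the pruned cache and the
    kept positions."""
    n = len(scores)
    recent = list(range(max(0, n - keep_recent), n))
    older = [i for i in range(n) if i not in recent]
    older.sort(key=lambda i: scores[i], reverse=True)
    keep = sorted(older[: budget - len(recent)] + recent)
    return keys[keep], values[keep], keep

keys = np.arange(8)[:, None].repeat(4, axis=1).astype(float)  # 8 tokens, dim 4
values = keys.copy()
scores = np.array([0.9, 0.1, 0.8, 0.2, 0.3, 0.7, 0.4, 0.5])
k, v, kept = evict_kv(keys, values, scores, budget=5)
print(kept)  # [0, 2, 5, 6, 7]
```

The point of learning the scores, rather than using attention-weight heuristics, is that importance is judged by a token's future usefulness, not its past attention.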

🧠 AI · Neutral · arXiv – CS AI · Mar 37/103

InnoGym: Benchmarking the Innovation Potential of AI Agents

Researchers introduce InnoGym, the first benchmark designed to evaluate AI agents' innovation potential rather than just correctness. The framework measures both performance gains and methodological novelty across 18 real-world engineering and scientific tasks, revealing that while AI agents can generate novel approaches, they lack the robustness to turn them into significant performance improvements.

🧠 AI · Bullish · arXiv – CS AI · Mar 37/102

RMAAT: Astrocyte-Inspired Memory Compression and Replay for Efficient Long-Context Transformers

Researchers introduce RMAAT (Recurrent Memory Augmented Astromorphic Transformer), a new architecture inspired by brain astrocyte cells that addresses the quadratic complexity problem in Transformer models for long sequences. The system uses recurrent memory tokens and adaptive compression to achieve linear complexity while maintaining competitive accuracy on benchmark tests.

🧠 AI · Neutral · arXiv – CS AI · Mar 37/103

Towards Transferable Defense Against Malicious Image Edits

Researchers propose TDAE, a new defense framework that protects images from malicious AI-powered edits by using imperceptible perturbations and coordinated image-text optimization. The system employs FlatGrad Defense Mechanism for visual protection and Dynamic Prompt Defense for textual enhancement, achieving better cross-model transferability than existing methods.

🧠 AI · Bullish · arXiv – CS AI · Mar 37/104

AgentOCR: Reimagining Agent History via Optical Self-Compression

Researchers introduce AgentOCR, a framework that converts AI agent interaction histories from text to compressed visual format, reducing token usage by over 50% while maintaining 95% performance. The system uses visual caching and adaptive compression to address memory bottlenecks in large language model deployments.

🧠 AI · Neutral · arXiv – CS AI · Mar 37/102

Learn-to-Distance: Distance Learning for Detecting LLM-Generated Text

Researchers developed a new algorithm called Learn-to-Distance (L2D) that can detect AI-generated text from models like GPT, Claude, and Gemini with significantly improved accuracy. The method uses adaptive distance learning between original and rewritten text, achieving 54.3% to 75.4% relative improvements over existing detection methods in extensive testing.
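L2D's learned distance metric is not detailed in this summary, but the signal rewrite-based detectors exploit is easy to demonstrate: an LLM tends to change machine-written text less than human-written text when asked to rewrite it. A sketch using a fixed character-level similarity and hard-coded stand-in rewrites (a real pipeline would call an LLM and learn the metric):

```python
import difflib

def rewrite_similarity(original: str, rewritten: str) -> float:
    """Similarity in [0, 1] between a text and its LLM rewrite."""
    return difflib.SequenceMatcher(None, original, rewritten).ratio()

def looks_machine_written(original, rewritten, threshold=0.8):
    """High rewrite similarity suggests the original was model-generated."""
    return rewrite_similarity(original, rewritten) >= threshold

# Stand-in examples; `*_rewrite` would normally come from an LLM call.
ai_text = "The results demonstrate significant improvements across benchmarks."
ai_rewrite = "The results demonstrate significant improvements across all benchmarks."
human_text = "honestly the numbers looked way better than we thought lol"
human_rewrite = "The measured results exceeded our initial expectations."

print(looks_machine_written(ai_text, ai_rewrite))       # True
print(looks_machine_written(human_text, human_rewrite)) # False
```

Learning the distance function, rather than fixing it as above, is what the reported relative improvements are attributed to.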

🧠 AI · Bullish · arXiv – CS AI · Mar 37/102

ButterflyMoE: Sub-Linear Ternary Experts via Structured Butterfly Orbits

ButterflyMoE introduces a breakthrough approach to reduce memory requirements for Mixture-of-Experts models by 150× through geometric parameterization instead of storing independent weight matrices. The method uses shared ternary prototypes with learned rotations to achieve sub-linear memory scaling, enabling deployment of multiple experts on edge devices.

🧠 AI · Bullish · arXiv – CS AI · Mar 37/104

HalluGuard: Demystifying Data-Driven and Reasoning-Driven Hallucinations in LLMs

Researchers introduce HalluGuard, a new framework that identifies and addresses both data-driven and reasoning-driven hallucinations in Large Language Models. The system achieved state-of-the-art performance across 10 benchmarks and 9 LLM backbones, offering a unified approach to improve AI reliability in critical domains like healthcare and law.

🧠 AI · Bullish · arXiv – CS AI · Mar 37/104

A Learnable Wavelet Transformer for Long-Short Equity Trading and Risk-Adjusted Return Optimization

Researchers developed WaveLSFormer, a wavelet-based Transformer model that directly generates market-neutral long/short trading portfolios from financial time series data. The AI system achieved a 60.7% cumulative return and 2.16 Sharpe ratio across six industry groups, significantly outperforming traditional ML models like LSTM and standard Transformers.
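WaveLSFormer learns its wavelets; the classic fixed Haar transform shows the kind of trend/noise split such a front end produces on a price series (illustrative only, not the paper's learnable transform):

```python
import numpy as np

def haar_step(x):
    """One level of the Haar wavelet transform: split a series into a
    smooth trend (approximation) and local fluctuations (detail)."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2)   # low-frequency trend
    detail = (even - odd) / np.sqrt(2)   # high-frequency movement
    return approx, detail

prices = [100, 101, 103, 102, 105, 107, 106, 108]
trend, noise = haar_step(prices)
print(np.round(trend, 2))
print(np.round(noise, 2))
```

The transform is invertible, so a model can attend to trend and noise bands separately without losing information; making the filter coefficients learnable is the paper's twist on this.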

🧠 AI · Neutral · arXiv – CS AI · Mar 37/103

Reward Models Inherit Value Biases from Pretraining

A comprehensive study of 10 leading reward models reveals they inherit significant value biases from their base language models, with Llama-based models preferring 'agency' values while Gemma-based models favor 'communion' values. This bias persists even when using identical preference data and training processes, suggesting that the choice of base model fundamentally shapes AI alignment outcomes.

🧠 AI · Bullish · arXiv – CS AI · Mar 37/103

CSRv2: Unlocking Ultra-Sparse Embeddings

CSRv2 introduces a new training approach for ultra-sparse embeddings that reduces inactive neurons from 80% to 20% while delivering 14% accuracy gains. The method achieves 7x speedup over existing approaches and up to 300x improvements in compute and memory efficiency compared to dense embeddings.
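The mechanics of an ultra-sparse embedding can be sketched with simple top-k sparsification (a generic illustration of where the compute and memory savings come from, not CSRv2's trained sparsity):

```python
import numpy as np

def topk_sparsify(emb, k):
    """Keep only the k largest-magnitude activations; zero the rest.
    An ultra-sparse vector can be stored as (index, value) pairs."""
    idx = np.argsort(np.abs(emb))[-k:]
    sparse = np.zeros_like(emb)
    sparse[idx] = emb[idx]
    return sparse, idx

rng = np.random.default_rng(0)
dense = rng.standard_normal(1024)
sparse, active = topk_sparsify(dense, k=32)
print(np.count_nonzero(sparse), "of", dense.size, "dimensions active")
```

Similarity search against such vectors only touches the active entries, which is the source of the large efficiency gains over dense embeddings; CSRv2's contribution is training so that the inactive neurons stay useful rather than dying.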

🧠 AI · Bullish · arXiv – CS AI · Mar 37/104

Beyond Single-Modal Analytics: A Framework for Integrating Heterogeneous LLM-Based Query Systems for Multi-Modal Data

Researchers introduce Meta Engine, a unified semantic query system that integrates multiple specialized LLM-based query systems to handle multi-modal data analysis. The system addresses fragmentation in current semantic query tools by combining specialized systems through five key components, achieving 3-24x better performance than existing baselines.

🧠 AI · Neutral · arXiv – CS AI · Mar 37/103

When Agents "Misremember" Collectively: Exploring the Mandela Effect in LLM-based Multi-Agent Systems

Researchers have identified and studied the 'Mandela effect' in AI multi-agent systems, where groups of AI agents collectively develop false memories or misremember information. The study introduces MANBENCH, a benchmark to evaluate this phenomenon, and proposes mitigation strategies that achieved a 74.40% reduction in false collective memories.

🧠 AI · Bullish · arXiv – CS AI · Mar 37/103

WAXAL: A Large-Scale Multilingual African Language Speech Corpus

Researchers have released WAXAL, a large-scale multilingual speech dataset covering 24 Sub-Saharan African languages representing over 100 million speakers. The dataset includes 1,250 hours of transcribed speech for ASR and 235 hours of high-quality recordings for TTS, released under CC-BY-4.0 license to advance inclusive AI technologies.

🧠 AI · Neutral · arXiv – CS AI · Mar 37/104

Trojans in Artificial Intelligence (TrojAI) Final Report

IARPA's TrojAI program investigated AI Trojans: malicious backdoors hidden in AI models that can cause system failures or allow unauthorized control. The multi-year initiative developed detection methods through weight analysis and trigger inversion, while identifying ongoing challenges in AI security that require continued research.

🧠 AI · Bullish · arXiv – CS AI · Mar 37/103

AceGRPO: Adaptive Curriculum Enhanced Group Relative Policy Optimization for Autonomous Machine Learning Engineering

Researchers introduce AceGRPO, a new reinforcement learning framework for Autonomous Machine Learning Engineering that addresses behavioral stagnation in current LLM-based agents. The Ace-30B model trained with this method achieves 100% valid submission rate on MLE-Bench-Lite and matches performance of proprietary frontier models while outperforming larger open-source alternatives.
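The "Group Relative" part of GRPO-style methods is standard: each sampled response is scored against the mean and spread of its own group, so no learned value baseline is needed. A minimal sketch of that advantage computation (generic GRPO, not AceGRPO's adaptive curriculum):

```python
import statistics

def group_relative_advantages(rewards, eps=1e-8):
    """Normalize each response's reward within its sampling group.
    Positive advantage: better than the group average; negative: worse."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Rewards for 4 responses sampled from the same prompt.
adv = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
print([round(a, 2) for a in adv])  # [1.0, -1.0, -1.0, 1.0]
```

The behavioral stagnation the paper targets arises when all responses in a group earn similar rewards, which drives these advantages toward zero; the adaptive curriculum is its proposed remedy.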

🧠 AI · Bullish · arXiv – CS AI · Mar 37/103

Dream2Learn: Structured Generative Dreaming for Continual Learning

Researchers introduce Dream2Learn (D2L), a continual learning framework that enables AI models to generate synthetic training data from their own internal representations, mimicking human dreaming for knowledge consolidation. The system creates novel 'dreamed classes' using diffusion models to improve forward knowledge transfer and prevent catastrophic forgetting in neural networks.

🧠 AI · Bullish · arXiv – CS AI · Mar 37/103

Large Language Model-Assisted UAV Operations and Communications: A Multifaceted Survey and Tutorial

Researchers have published a comprehensive survey exploring the integration of Large Language Models (LLMs) with Uncrewed Aerial Vehicles (UAVs), proposing a unified framework for intelligent drone operations. The study examines how LLMs can enhance UAV capabilities including swarm coordination, navigation, mission planning, and human-drone interaction through advanced reasoning and multimodal processing.

◆ AI Mentions
🏢 OpenAI 101 · 🏢 Nvidia 58 · 🧠 GPT-5 37 · 🧠 Gemini 34 · 🧠 Claude 34 · 🏢 Anthropic 34 · 🧠 ChatGPT 19 · 🧠 Llama 14 · 🧠 GPT-4 14 · 🏢 Meta 9 · 🏢 xAI 9 · 🏢 Perplexity 8 · 🧠 Sonnet 8 · 🏢 Microsoft 7 · 🧠 Opus 7 · 🏢 Google 7 · 🧠 Grok 5 · 🏢 Hugging Face 4 · 🧠 o1 2 · 🧠 Copilot 1
▲ Trending Tags
#iran (469) · #ai (453) · #market (320) · #geopolitical (287) · #trump (107) · #openai (96) · #geopolitics (94) · #security (86) · #geopolitical-risk (78) · #inflation (69) · #artificial-intelligence (61) · #nvidia (56) · #machine-learning (50) · #sanctions (46) · #google (44)
Tag Connections
#geopolitical ↔ #iran: 205 · #iran ↔ #market: 137 · #geopolitical ↔ #market: 110 · #iran ↔ #trump: 80 · #ai ↔ #artificial-intelligence: 48 · #ai ↔ #market: 46 · #geopolitical ↔ #trump: 40 · #market ↔ #trump: 40 · #ai ↔ #openai: 38 · #ai ↔ #google: 35
© 2026 y0.exchange