y0news

AI Pulse News

Models, papers, tools. 17,046 articles with AI-powered sentiment analysis and key takeaways.

🧠 AI · Bullish · arXiv – CS AI · Mar 12 · 7/10

The DMA Streaming Framework: Kernel-Level Buffer Orchestration for High-Performance AI Data Paths

Researchers have developed dmaplane, a Linux kernel module that provides buffer orchestration for AI workloads, addressing the gap between efficient data transport and proper buffer management. The system integrates RDMA, GPU memory management, and NUMA-aware allocation to optimize high-performance AI data paths at the kernel level.

🧠 AI · Neutral · arXiv – CS AI · Mar 12 · 7/10

How to Count AIs: Individuation and Liability for AI Agents

A legal research paper proposes the 'Algorithmic Corporation' (A-corp) framework to address the challenge of identifying and assigning liability for AI agents' actions as millions of autonomous AIs proliferate across the economy. The A-corp structure would create legally recognizable entities owned by humans but operated by AIs, enabling both accountability and legal recourse when AI agents cause harm.

🧠 AI · Bearish · arXiv – CS AI · Mar 12 · 7/10

Safety Under Scaffolding: How Evaluation Conditions Shape Measured Safety

A large-scale study of 62,808 AI safety evaluations across six frontier models reveals that deployment scaffolding architectures can significantly impact measured safety, with map-reduce scaffolding degrading safety performance. The research found that evaluation format (multiple-choice vs open-ended) affects safety scores more than scaffold architecture itself, and safety rankings vary dramatically across different models and configurations.

🧠 AI · Neutral · arXiv – CS AI · Mar 12 · 7/10

Architecture-Aware LLM Inference Optimization on AMD Instinct GPUs: A Comprehensive Benchmark and Deployment Study

Researchers conducted comprehensive benchmarks of LLM inference on AMD Instinct MI325X GPUs, testing models from 235B to 1 trillion parameters. The study reveals that architecture-aware optimization is critical, with different model types requiring specific configurations for optimal performance on AMD hardware.

🧠 Llama
🧠 AI · Bearish · arXiv – CS AI · Mar 12 · 7/10

Targeted Bit-Flip Attacks on LLM-Based Agents

Researchers have introduced Flip-Agent, the first targeted bit-flip attack framework specifically designed to exploit LLM-based agents by manipulating hardware faults. The attack can manipulate both final outputs and tool invocations in multi-stage AI agent pipelines, revealing critical security vulnerabilities in these systems.

🧠 AI · Bullish · arXiv – CS AI · Mar 12 · 7/10

Training Language Models via Neural Cellular Automata

Researchers developed a method using neural cellular automata (NCA) to generate synthetic data for pre-training language models, achieving up to 6% improvement in downstream performance with only 164M synthetic tokens. This approach outperformed traditional pre-training on 1.6B natural language tokens while being more computationally efficient and transferring well to reasoning benchmarks.
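
The summary doesn't say how a cellular automaton yields text-like data; as a rough, hypothetical sketch (the architecture, sizes, and token readout below are assumptions, not taken from the paper), a 1D neural cellular automaton can iterate a local update rule and project its cell states to token logits:

    # Hypothetical sketch: an untrained 1D neural cellular automaton whose cell
    # states are decoded into token ids for synthetic pre-training data.
    import torch
    import torch.nn as nn

    class TextNCA(nn.Module):
        def __init__(self, channels=32, vocab_size=256):
            super().__init__()
            # Local perception: each cell sees itself and its two neighbours.
            self.perceive = nn.Conv1d(channels, channels * 3, kernel_size=3, padding=1)
            # Per-cell update network (1x1 convolutions).
            self.update = nn.Sequential(
                nn.Conv1d(channels * 3, 64, kernel_size=1), nn.ReLU(),
                nn.Conv1d(64, channels, kernel_size=1),
            )
            self.readout = nn.Linear(channels, vocab_size)   # cell state -> token logits

        def forward(self, state, steps=16):
            for _ in range(steps):
                delta = self.update(self.perceive(state))
                mask = (torch.rand_like(state[:, :1]) < 0.5).float()  # stochastic updates
                state = state + delta * mask
            logits = self.readout(state.transpose(1, 2))      # (batch, length, vocab)
            probs = torch.softmax(logits, dim=-1)
            tokens = torch.multinomial(probs.flatten(0, 1), num_samples=1)
            return tokens.view(probs.shape[0], probs.shape[1])

    nca = TextNCA()
    synthetic_tokens = nca(torch.randn(4, 32, 128))           # (4, 128) token ids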

🧠 AI · Bullish · arXiv – CS AI · Mar 12 · 7/10

HTMuon: Improving Muon via Heavy-Tailed Spectral Correction

Researchers have developed HTMuon, an improved optimization algorithm for training large language models that builds upon the existing Muon optimizer. HTMuon addresses limitations in Muon's weight spectra by incorporating heavy-tailed spectral corrections, showing up to 0.98 perplexity reduction in LLaMA pretraining experiments.

🏢 Perplexity
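
Muon-style optimizers act on the spectrum of the momentum matrix, effectively flattening its singular values; one speculative reading of a "heavy-tailed spectral correction" is to reshape those singular values with a power law instead of flattening them. The sketch below is an illustrative guess, not the paper's algorithm:

    # Hypothetical spectral correction step on a momentum matrix. The exponent
    # and normalization are illustrative, not HTMuon's actual update rule.
    import torch

    def heavy_tailed_spectral_step(momentum, tail_exponent=0.5):
        U, S, Vh = torch.linalg.svd(momentum, full_matrices=False)
        S_corrected = S.pow(tail_exponent)                    # keep a heavy-tailed profile
        S_corrected = S_corrected / S_corrected.max().clamp_min(1e-12)
        return U @ torch.diag(S_corrected) @ Vh               # use as the parameter update

    update = heavy_tailed_spectral_step(torch.randn(512, 256))
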
🧠 AI · Neutral · arXiv – CS AI · Mar 12 · 7/10

Multi-Agent Memory from a Computer Architecture Perspective: Visions and Challenges Ahead

Researchers propose treating multi-agent AI memory as a computer architecture problem, introducing a three-layer memory hierarchy and identifying critical protocol gaps. The paper highlights multi-agent memory consistency as the most pressing challenge for building scalable collaborative AI systems.
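
To make the computer-architecture analogy concrete, here is a minimal cache-style sketch of a three-layer agent memory with spill-down and promote-on-access. Layer names, capacities, and the eviction rule are illustrative assumptions rather than the paper's design:

    # Cache-style sketch of a three-layer memory hierarchy for agents.
    from collections import OrderedDict

    class MemoryHierarchy:
        def __init__(self, l1_capacity=8, l2_capacity=64):
            self.l1_capacity, self.l2_capacity = l1_capacity, l2_capacity
            self.l1 = OrderedDict()    # per-agent working memory (fast, tiny)
            self.l2 = OrderedDict()    # shared session memory (medium)
            self.l3 = {}               # persistent store (large, slow)

        def write(self, key, value):
            self.l1[key] = value
            self.l1.move_to_end(key)
            if len(self.l1) > self.l1_capacity:    # spill least-recently-used downward
                k, v = self.l1.popitem(last=False)
                self.l2[k] = v
            if len(self.l2) > self.l2_capacity:
                k, v = self.l2.popitem(last=False)
                self.l3[k] = v

        def read(self, key):
            for layer in (self.l1, self.l2, self.l3):
                if key in layer:
                    value = layer.pop(key)
                    self.write(key, value)         # promote on access
                    return value
            return None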

🧠 AI · Neutral · arXiv – CS AI · Mar 12 · 7/10

Dissecting Chronos: Sparse Autoencoders Reveal Causal Feature Hierarchies in Time Series Foundation Models

Researchers applied sparse autoencoders to analyze Chronos-T5-Large, a 710M parameter time series foundation model, revealing how different layers process temporal data. The study found that mid-encoder layers contain the most causally important features for change detection, while early layers handle frequency patterns and final layers compress semantic concepts.
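
The analysis follows the standard sparse-autoencoder recipe: a wide ReLU encoder/decoder trained with an L1 sparsity penalty on hidden activations collected from the model, after which individual features are inspected and ablated. A minimal sketch, with the hook point and sizes assumed:

    # Standard sparse autoencoder trained on intermediate activations.
    import torch
    import torch.nn as nn

    class SparseAutoencoder(nn.Module):
        def __init__(self, d_model=1024, d_features=8192):
            super().__init__()
            self.encoder = nn.Linear(d_model, d_features)
            self.decoder = nn.Linear(d_features, d_model)

        def forward(self, activations):
            features = torch.relu(self.encoder(activations))  # sparse feature codes
            return self.decoder(features), features

    sae = SparseAutoencoder()
    opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
    l1_coeff = 1e-3

    def train_step(acts):                  # acts: (batch, d_model) hidden states
        recon, feats = sae(acts)
        loss = ((recon - acts) ** 2).mean() + l1_coeff * feats.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()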

🧠 AI · Bullish · arXiv – CS AI · Mar 12 · 7/10

KernelSkill: A Multi-Agent Framework for GPU Kernel Optimization

Researchers developed KernelSkill, a multi-agent framework that optimizes GPU kernel performance using expert knowledge rather than trial-and-error approaches. The system achieved 100% success rates and significant speedups (1.92x to 5.44x) over existing methods, addressing a critical bottleneck in AI system efficiency.

🧠 AI · Bullish · arXiv – CS AI · Mar 12 · 7/10

Hybrid Self-evolving Structured Memory for GUI Agents

Researchers developed HyMEM, a brain-inspired hybrid memory system that significantly improves GUI agents' ability to interact with computers. The system uses graph-based structured memory combining symbolic nodes with trajectory embeddings, enabling smaller 7B/8B models to match or exceed performance of larger closed-source models like GPT-4o.

🧠 GPT-4
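
As a rough illustration of a graph-structured memory that pairs symbolic GUI states with trajectory embeddings (field names and the retrieval rule are assumptions, not HyMEM's actual design):

    # Sketch: symbolic nodes linked into trajectories, retrieved by embedding similarity.
    from dataclasses import dataclass, field
    import numpy as np

    @dataclass
    class MemoryNode:
        screen: str                        # symbolic state, e.g. "settings_page"
        action: str                        # symbolic action, e.g. "click(wifi_toggle)"
        embedding: np.ndarray              # embedding of this trajectory step
        successors: list = field(default_factory=list)

    class GraphMemory:
        def __init__(self):
            self.nodes = []

        def add(self, node, parent=None):
            self.nodes.append(node)
            if parent is not None:
                parent.successors.append(node)      # keep trajectory structure

        def retrieve(self, query_embedding, k=3):
            def cos(a, b):
                return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
            return sorted(self.nodes,
                          key=lambda n: cos(n.embedding, query_embedding),
                          reverse=True)[:k]
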
🧠 AI · Bullish · arXiv – CS AI · Mar 12 · 7/10

ES-dLLM: Efficient Inference for Diffusion Large Language Models by Early-Skipping

Researchers developed ES-dLLM, a training-free inference acceleration framework that speeds up diffusion large language models by selectively skipping tokens in early layers based on importance scoring. The method achieves 5.6x to 16.8x speedup over vanilla implementations while maintaining generation quality, offering a promising alternative to autoregressive models.

🏢 Nvidia
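
A rough sketch of the early-skipping idea as summarized: score token positions, update only the important ones in the first few layers, and run the later layers in full. The importance proxy, the layer split, and treating each layer as position-wise are simplifying assumptions:

    # Illustrative early-skipping forward pass over a stack of layers.
    import torch

    def forward_with_early_skip(layers, hidden, skip_layers=8, keep_ratio=0.5):
        # hidden: (batch, seq_len, d_model) token states at one denoising step
        importance = hidden.norm(dim=-1)                   # proxy importance score
        k = max(1, int(keep_ratio * hidden.shape[1]))
        keep_idx = importance.topk(k, dim=1).indices       # positions to keep computing
        idx = keep_idx.unsqueeze(-1).expand(-1, -1, hidden.shape[-1])

        for i, layer in enumerate(layers):
            if i < skip_layers:
                updated = layer(torch.gather(hidden, 1, idx))   # compute kept tokens only
                hidden = hidden.scatter(1, idx, updated)        # the rest pass through
            else:
                hidden = layer(hidden)                          # full computation later on
        return hidden

    # e.g. layers = [torch.nn.Linear(64, 64) for _ in range(12)]
    # out = forward_with_early_skip(layers, torch.randn(2, 32, 64))
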
🧠 AI · Bearish · arXiv – CS AI · Mar 12 · 7/10

Amnesia: Adversarial Semantic Layer Specific Activation Steering in Large Language Models

Researchers have developed 'Amnesia,' a lightweight adversarial attack that bypasses safety mechanisms in open-weight Large Language Models by manipulating internal transformer states. The attack enables generation of harmful content without requiring fine-tuning or additional training, highlighting vulnerabilities in current LLM safety measures.

🧠 AI · Bullish · arXiv – CS AI · Mar 12 · 7/10

Are Video Reasoning Models Ready to Go Outside?

Researchers propose ROVA, a new training framework that improves vision-language models' robustness in real-world conditions, delivering accuracy gains of up to 24%. The framework addresses performance degradation from weather, occlusion, and camera motion, which can cause accuracy drops of up to 35% in current models.

🧠 AI · Bullish · arXiv – CS AI · Mar 12 · 7/10

Detecting and Eliminating Neural Network Backdoors Through Active Paths with Application to Intrusion Detection

Researchers have developed a new method to detect and eliminate backdoor triggers in neural networks using active path analysis. The approach shows promising results in experiments with machine learning models used for intrusion detection, addressing a critical cybersecurity vulnerability.

🧠 AI · Bullish · arXiv – CS AI · Mar 12 · 7/10

Repurposing Backdoors for Good: Ephemeral Intrinsic Proofs for Verifiable Aggregation in Cross-silo Federated Learning

Researchers propose a novel lightweight architecture for verifiable aggregation in federated learning that uses backdoor injection as intrinsic proofs instead of expensive cryptographic methods. The approach achieves over 1000x speedup compared to traditional cryptographic baselines while maintaining high detection rates against malicious servers.
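
A minimal sketch of the intrinsic-proof idea: a client plants a secret trigger-label pair during local training, then verifies with a single forward pass that the aggregated global model still responds to it, implying the client's update was actually included. Trigger construction and the confidence threshold are illustrative assumptions:

    # Verification by checking the planted trigger, instead of cryptographic proofs.
    import torch

    def make_trigger(input_shape, seed=1234, target_class=7):
        g = torch.Generator().manual_seed(seed)      # secret known only to the client
        return torch.rand(input_shape, generator=g), target_class

    def verify_aggregation(global_model, input_shape, min_confidence=0.9):
        trigger, target = make_trigger(input_shape)
        with torch.no_grad():
            probs = torch.softmax(global_model(trigger.unsqueeze(0)), dim=-1)
        return probs[0, target].item() >= min_confidence   # one forward pass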

🧠 AI · Bullish · arXiv – CS AI · Mar 12 · 7/10

The Curse and Blessing of Mean Bias in FP4-Quantized LLM Training

Researchers have identified a simple solution to training instability in 4-bit quantized large language models by removing mean bias, which causes the dominant spectral anisotropy. This mean-subtraction technique substantially improves FP4 training performance while being hardware-efficient, potentially enabling more accessible low-bit LLM training.
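
A minimal sketch of the mean-subtraction trick: center the tensor before quantizing to a 4-bit grid and add the mean back afterwards. The uniform grid below is a simulated stand-in for real FP4 formats, and the integration into training is omitted:

    # Simulated 4-bit quantization with and without mean subtraction.
    import torch

    def quantize_4bit_sim(x, levels=16):
        scale = x.abs().max().clamp_min(1e-12) / (levels / 2 - 1)
        return torch.round(x / scale).clamp(-(levels // 2), levels // 2 - 1) * scale

    def quantize_mean_subtracted(x):
        mu = x.mean()                      # the mean bias behind the spectral anisotropy
        return quantize_4bit_sim(x - mu) + mu

    w = torch.randn(4096, 4096) + 0.3      # weight-like tensor with a mean offset
    err_plain = (quantize_4bit_sim(w) - w).pow(2).mean()
    err_centered = (quantize_mean_subtracted(w) - w).pow(2).mean()   # smaller error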

🧠 AI · Bearish · arXiv – CS AI · Mar 12 · 7/10

Naïve Exposure of Generative AI Capabilities Undermines Deepfake Detection

Researchers demonstrate that commercial AI chatbot interfaces inadvertently expose capabilities that allow adversaries to bypass deepfake detection systems using only policy-compliant prompts. The study reveals that current deepfake detectors fail against semantic-preserving image refinement techniques enabled by widely accessible AI systems.

🧠 AI · Bullish · arXiv – CS AI · Mar 12 · 7/10

Gradient Flow Drifting: Generative Modeling via Wasserstein Gradient Flows of KDE-Approximated Divergences

Researchers introduce Gradient Flow Drifting, a new mathematical framework for generative AI models that connects the Drifting Model to Wasserstein gradient flows of KL divergence under kernel density estimation. The framework includes a mixed-divergence strategy to avoid mode collapse and extends to Riemannian manifolds for improved semantic space applications.

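For reference, the textbook form of a Wasserstein gradient flow of a KDE-approximated KL divergence, which the framework appears to build on (the paper's exact objective and its mixed-divergence variant may differ):

    \[
    \partial_t \rho_t = \nabla \cdot \Big( \rho_t \, \nabla \tfrac{\delta}{\delta \rho} \, \mathrm{KL}(\rho_t \,\|\, p_{\mathrm{data}}) \Big),
    \qquad
    \dot{x}_i = \nabla_x \log \hat{p}_{\mathrm{data}}(x_i) - \nabla_x \log \hat{\rho}_t(x_i),
    \]
    where both densities are replaced by kernel density estimates,
    \[
    \hat{\rho}_t(x) = \frac{1}{N} \sum_{j=1}^{N} k_h(x - x_j), \qquad
    \hat{p}_{\mathrm{data}}(x) = \frac{1}{M} \sum_{m=1}^{M} k_h(x - y_m).
    \]
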
🧠 AI · Bullish · arXiv – CS AI · Mar 12 · 7/10

Taking Shortcuts for Categorical VQA Using Super Neurons

Researchers introduce Super Neurons (SNs), a new method that probes raw activations in Vision Language Models to improve classification performance while achieving up to 5.10x speedup. Unlike Sparse Attention Vectors, SNs can identify discriminative neurons in shallow layers, enabling extreme early exiting from the first layer at the first generated token.
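
As a hypothetical illustration of probing raw activations for early exit (the neuron-selection and scoring rules below are assumptions, not the paper's method): pick, per class, the neurons whose first-layer activations best separate that class on a small calibration set, then classify new inputs from those activations alone.

    # Select class-selective "super neurons" and classify from first-layer activations.
    import numpy as np

    def select_super_neurons(acts, labels, per_class=16):
        # acts: (n_samples, n_neurons) activations at the first layer, first token
        # labels: (n_samples,) integer class labels
        chosen = {}
        for c in np.unique(labels):
            gap = acts[labels == c].mean(0) - acts[labels != c].mean(0)
            chosen[c] = np.argsort(-gap)[:per_class]        # most class-selective neurons
        return chosen

    def classify(act, chosen):
        # Early exit: score each class by its selected neurons and stop immediately.
        scores = {c: act[idx].mean() for c, idx in chosen.items()}
        return max(scores, key=scores.get)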

🧠 AI · Neutral · arXiv – CS AI · Mar 12 · 7/10

Simulation-in-the-Reasoning (SiR): A Conceptual Framework for Empirically Grounded AI in Autonomous Transportation

Researchers propose Simulation-in-the-Reasoning (SiR), a framework that embeds domain-specific simulators into Large Language Model reasoning processes for autonomous transportation systems. The approach transforms LLM reasoning from hypothetical text generation into empirically-grounded, falsifiable hypothesis testing through executable simulation experiments.

🧠 AI · Bearish · arXiv – CS AI · Mar 12 · 7/10

Compatibility at a Cost: Systematic Discovery and Exploitation of MCP Clause-Compliance Vulnerabilities

Researchers have identified critical security vulnerabilities in the Model Context Protocol (MCP), a new standard for AI agent interoperability. The study reveals that MCP's flexible compatibility features create attack surfaces that enable silent prompt injection, denial-of-service attacks, and other exploits across multi-language SDK implementations.

🧠 AI · Bearish · arXiv – CS AI · Mar 12 · 7/10

MCP-in-SoS: Risk assessment framework for open-source MCP servers

Researchers have developed a risk assessment framework for open-source Model Context Protocol (MCP) servers, revealing significant security vulnerabilities through static code analysis. The study found many MCP servers contain exploitable weaknesses that compromise confidentiality, integrity, and availability, highlighting the need for secure-by-design development as these tools become widely adopted for LLM agents.

🧠 AI · Bullish · arXiv – CS AI · Mar 12 · 7/10

Adaptive Activation Cancellation for Hallucination Mitigation in Large Language Models

Researchers developed Adaptive Activation Cancellation (AAC), a real-time framework that reduces hallucinations in large language models by identifying and suppressing problematic neural activations during inference. The method requires no fine-tuning or external knowledge and preserves model capabilities while improving factual accuracy across multiple model scales including LLaMA 3-8B.

🏢 Perplexity
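
Mechanically, suppressing selected activations at inference time can be done with a forward hook; which units count as hallucination-associated is the paper's contribution and is simply assumed to be given in this sketch:

    # Damp a given set of hidden units during generation via a forward hook.
    import torch

    def attach_cancellation_hook(module, unit_indices, strength=1.0):
        def hook(_module, _inputs, output):
            hidden = output[0] if isinstance(output, tuple) else output
            hidden[..., unit_indices] *= (1.0 - strength)   # suppress flagged units
            return (hidden, *output[1:]) if isinstance(output, tuple) else hidden
        return module.register_forward_hook(hook)

    # Hypothetical usage on a LLaMA-style model:
    # handle = attach_cancellation_hook(model.model.layers[20].mlp, [11, 503, 2047])
    # ...generate as usual, then handle.remove()
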
🧠 AI · Bullish · arXiv – CS AI · Mar 12 · 7/10

Mashup Learning: Faster Finetuning by Remixing Past Checkpoints

Researchers propose Mashup Learning, a method that leverages historical model checkpoints to improve AI training efficiency. The technique identifies relevant past training runs, merges them, and uses the result as initialization, achieving 0.5-5% accuracy improvements while reducing training time by up to 37%.
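
A minimal sketch of the remix-then-finetune idea, assuming the relevant past runs have already been selected and that checkpoints are plain parameter state dicts; the merge here is a simple weighted average, which may differ from the paper's merging rule:

    # Average past checkpoints and use the result as the initialization for fine-tuning.
    import torch

    def mashup_init(checkpoint_paths, weights=None):
        states = [torch.load(p, map_location="cpu") for p in checkpoint_paths]
        weights = weights or [1.0 / len(states)] * len(states)
        return {key: sum(w * s[key].float() for w, s in zip(weights, states))
                for key in states[0]}

    # model.load_state_dict(mashup_init(["runA.pt", "runB.pt"]))   # then fine-tune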

Page 112 of 682
◆ AI Mentions
🏢 OpenAI · 94×
🏢 Nvidia · 66×
🧠 GPT-5 · 52×
🧠 Claude · 50×
🏢 Anthropic · 46×
🧠 Gemini · 38×
🧠 ChatGPT · 26×
🧠 GPT-4 · 18×
🧠 Llama · 16×
🏢 Meta · 10×
🧠 Opus · 10×
🏢 Google · 8×
🧠 Sonnet · 8×
🏢 xAI · 8×
🧠 Grok · 6×
🏢 Perplexity · 6×
🏢 Microsoft · 5×
🏢 Hugging Face · 5×
🏢 Cohere · 2×
🧠 DALL·E · 1×
▲ Trending Tags
1. #iran · 678
2. #ai · 649
3. #market · 511
4. #geopolitical · 468
5. #trump · 168
6. #security · 127
7. #openai · 94
8. #artificial-intelligence · 79
9. #nvidia · 66
10. #inflation · 61
11. #fed · 58
12. #china · 57
13. #google · 51
14. #microsoft · 42
15. #meta · 39
Tag Sentiment
#iran · 678 articles
#ai · 649 articles
#market · 511 articles
#geopolitical · 468 articles
#trump · 168 articles
#security · 127 articles
#openai · 94 articles
#artificial-intelligence · 79 articles
#nvidia · 66 articles
#inflation · 61 articles
Tag Connections
#geopolitical ↔ #iran · 321
#iran ↔ #market · 219
#geopolitical ↔ #market · 178
#iran ↔ #trump · 114
#ai ↔ #market · 74
#ai ↔ #artificial-intelligence · 70
#market ↔ #trump · 67
#geopolitical ↔ #trump · 61
#ai ↔ #openai · 50
#market ↔ #nvidia · 45
© 2026 y0.exchange