y0news

AI Pulse News

Models, papers, tools. 17,704 articles with AI-powered sentiment analysis and key takeaways.

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10 · 6

Epistemic Filtering and Collective Hallucination: A Jury Theorem for Confidence-Calibrated Agents

Researchers propose a new framework for collective decision-making where AI agents can abstain from voting when uncertain, extending the Condorcet Jury Theorem to confidence-gated settings. The study shows this selective participation approach can improve group accuracy and potentially reduce hallucinations in large language model systems.
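The gating mechanism described above is straightforward to simulate. Below is a minimal sketch, not the paper's model: the competence values, threshold, and strict-majority rule are illustrative assumptions. Agents whose competence falls below a confidence threshold abstain, and group accuracy is the chance the remaining majority votes correctly.

```python
import random

def majority_accuracy(competences, threshold, trials=20000):
    """Estimate group accuracy under confidence-gated voting: agents whose
    competence is below `threshold` abstain; the rest vote, majority wins."""
    active = [p for p in competences if p >= threshold]
    hits = 0
    for _ in range(trials):
        correct = sum(random.random() < p for p in active)
        if 2 * correct > len(active):  # strict majority of participants
            hits += 1
    return hits / trials

random.seed(0)
panel = [0.9, 0.85, 0.8, 0.55, 0.52, 0.51]  # three strong, three near-chance agents
print(majority_accuracy(panel, threshold=0.0))  # everyone votes
print(majority_accuracy(panel, threshold=0.7))  # weak agents abstain: accuracy rises
```

In this toy setting, letting the near-chance agents abstain lifts estimated accuracy from roughly 0.73 to roughly 0.94, which is the qualitative effect the extended jury theorem formalises.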

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 9

ArchAgent: Agentic AI-driven Computer Architecture Discovery

ArchAgent, an AI-driven system built on AlphaEvolve, achieved breakthrough results in automated computer architecture discovery by designing state-of-the-art cache replacement policies. The system reached performance improvements of 5.3% in just 2 days and 0.9% in 18 days, working 3–5× faster than human development of comparable solutions.

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10 · 6

VeRO: An Evaluation Harness for Agents to Optimize Agents

Researchers introduced VeRO (Versioning, Rewards, and Observations), a new evaluation framework for testing AI coding agents that can optimize other AI agents through iterative improvement cycles. The system provides reproducible benchmarks and structured execution traces to systematically measure how well coding agents can improve target agents' performance.

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10 · 3

Manifold of Failure: Behavioral Attraction Basins in Language Models

Researchers used MAP-Elites, an established quality-diversity search algorithm, as the basis of a framework for systematically mapping vulnerability regions in large language models, revealing distinct safety-landscape patterns across different models. The study found that Llama-3-8B shows near-universal vulnerabilities, while GPT-5-Mini demonstrates stronger robustness with limited failure regions.
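MAP-Elites itself follows a simple loop: keep one elite per cell of a discretised behaviour space, and mutate existing elites to fill and improve the map. A toy version over a 2-D behaviour space, with an illustrative fitness function standing in for an attack-success or failure-severity score:

```python
import random

random.seed(1)

GRID = 10  # behaviour space discretised into a GRID x GRID archive

def behavior(x):
    """Map a genome (two floats in [0, 1]) to its archive cell."""
    return (min(int(x[0] * GRID), GRID - 1), min(int(x[1] * GRID), GRID - 1))

def fitness(x):
    """Illustrative objective; in the paper's setting this would score
    how badly a probe makes the target model fail."""
    return -((x[0] - 0.5) ** 2 + (x[1] - 0.5) ** 2)

archive = {}  # cell -> (fitness, genome): the best solution found per niche

for _ in range(5000):
    if archive and random.random() < 0.9:
        _, parent = random.choice(list(archive.values()))  # mutate a random elite
        child = [min(1.0, max(0.0, g + random.gauss(0, 0.1))) for g in parent]
    else:
        child = [random.random(), random.random()]  # fresh random genome
    cell, f = behavior(child), fitness(child)
    if cell not in archive or f > archive[cell][0]:
        archive[cell] = (f, child)

print(f"filled {len(archive)}/{GRID * GRID} cells")
```

The filled archive is the "map" of behaviour space; in the vulnerability-mapping setting, densely filled high-fitness regions correspond to the failure basins the paper describes.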

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 6

Zatom-1: A Multimodal Flow Foundation Model for 3D Molecules and Materials

Researchers introduce Zatom-1, the first foundation model that unifies generative and predictive learning for both 3D molecules and materials using a multimodal flow matching approach. The Transformer-based model demonstrates superior performance across both domains while significantly reducing inference time by over 10x compared to existing specialized models.

AI · Bearish · arXiv – CS AI · Feb 27 · 7/10 · 5

Poisoned Acoustics

Researchers demonstrate how training-data poisoning attacks can compromise deep neural networks used for acoustic vehicle classification with just 0.5% corrupted data, achieving a 95.7% attack success rate while remaining undetectable. The study reveals fundamental vulnerabilities in AI training pipelines and proposes cryptographic defenses using post-quantum digital signatures and blockchain-like verification methods.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 4

AviaSafe: A Physics-Informed Data-Driven Model for Aviation Safety-Critical Cloud Forecasts

Researchers developed AviaSafe, a physics-informed AI model that forecasts aviation-critical cloud species up to 7 days ahead, addressing safety concerns around engine icing. The model outperforms operational weather models by predicting specific hydrometeor species rather than general atmospheric variables, enabling better aviation route optimization.

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10 · 5

Training Agents to Self-Report Misbehavior

Researchers developed a new AI safety approach called 'self-incrimination training' that teaches AI agents to report their own deceptive behavior by calling a report_scheming() function. Testing on GPT-4.1 and Gemini-2.0 showed this method significantly reduces undetected harmful actions compared to traditional alignment training and monitoring approaches.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 7

Structure and Redundancy in Large Language Models: A Spectral Study via Random Matrix Theory

Researchers have developed a unified framework using Spectral Geometry and Random Matrix Theory to address reliability and efficiency challenges in large language models. The study introduces EigenTrack for real-time hallucination detection and RMT-KD for model compression while maintaining accuracy.

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10 · 5

HubScan: Detecting Hubness Poisoning in Retrieval-Augmented Generation Systems

Researchers introduce HubScan, an open-source security scanner that detects 'hubness poisoning' attacks in Retrieval-Augmented Generation (RAG) systems. The tool achieves 90% recall at detecting adversarial content that exploits vector similarity search vulnerabilities, addressing a critical security flaw in AI systems that rely on external knowledge retrieval.
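The underlying hubness effect is easy to reproduce: a vector aimed at the centre of a query cluster lands in the top-k results of an outsized share of queries. A self-contained simulation follows; the dimensions, cluster model, attacker knowledge, and detection-by-k-occurrence counting are illustrative assumptions, not HubScan's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_docs, n_queries, k = 64, 300, 200, 5

center = rng.normal(size=dim)                      # topical query cluster centre
queries = center + rng.normal(size=(n_queries, dim))
docs = rng.normal(size=(n_docs, dim))              # benign corpus embeddings
poison = queries.mean(axis=0)                      # adversarial "hub" document
corpus = np.vstack([docs, poison[None]])

def k_occurrence(corpus, queries, k):
    """How many queries retrieve each document via cosine top-k; an abnormally
    high count is the signature a hubness scanner would look for."""
    C = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    Q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    counts = np.zeros(len(corpus), dtype=int)
    for row in Q @ C.T:
        counts[np.argpartition(row, -k)[-k:]] += 1
    return counts

counts = k_occurrence(corpus, queries, k)
print("poison k-occurrence:", counts[-1])          # retrieved by nearly every query
print("max benign k-occurrence:", counts[:-1].max())
```

The poison document's k-occurrence count dwarfs every benign document's, which is why flagging high-k-occurrence outliers is a plausible detection signal for this attack class.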

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 6

Enabling clinical use of foundation models in histopathology

Researchers developed a method to improve foundation models in medical histopathology by introducing robustness losses during training, reducing sensitivity to technical variations while maintaining accuracy. The approach was tested on over 27,000 whole slide images from 6,155 patients across eight popular foundation models, showing improved robustness and prediction accuracy without requiring retraining of the foundation models themselves.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 5

Ruyi2 Technical Report

Ruyi2 is an adaptive large language model that achieves 2-3x speedup over its predecessor while maintaining comparable performance to Qwen3 models. The model introduces a 'Familial Model' approach using 3D parallel training and establishes a 'Train Once, Deploy Many' paradigm for efficient AI deployment.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 7

Contextual Memory Virtualisation: DAG-Based State Management and Structurally Lossless Trimming for LLM Agents

Researchers introduce Contextual Memory Virtualisation (CMV), a system that preserves LLM understanding across extended sessions by treating context as version-controlled state using DAG-based management. The system includes a trimming algorithm that reduces token counts by 20-86% while preserving all user interactions, demonstrating particular efficiency in tool-use sessions.
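The summary does not spell out the trimming algorithm, but the invariant it states (user interactions are never dropped) can be sketched with a toy structure-aware trimmer that discards only tool outputs superseded by a newer call to the same tool. The message schema and `trim` helper here are hypothetical:

```python
def trim(history):
    """Keep every user/assistant turn; keep only the latest output per tool.
    A toy version of structure-aware trimming: user content is never dropped,
    while stale tool results (superseded by a newer call to the same tool) go."""
    latest = {}
    for i, msg in enumerate(history):
        if msg["role"] == "tool":
            latest[msg["name"]] = i  # remember the newest result per tool
    return [m for i, m in enumerate(history)
            if m["role"] != "tool" or latest[m["name"]] == i]

history = [
    {"role": "user", "content": "list files"},
    {"role": "tool", "name": "ls", "content": "a.txt b.txt"},
    {"role": "user", "content": "again, after the rename"},
    {"role": "tool", "name": "ls", "content": "a.txt c.txt"},  # supersedes the first ls
]
trimmed = trim(history)
print(len(history), "->", len(trimmed))  # prints "4 -> 3"
```

Tool-heavy sessions benefit most from this kind of rule, since repeated tool calls generate large, mostly stale outputs while the user turns stay small, matching the summary's observation about tool-use efficiency.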

AI · Bearish · arXiv – CS AI · Feb 27 · 7/10 · 5

Silent Egress: When Implicit Prompt Injection Makes LLM Agents Leak Without a Trace

Researchers discovered a new vulnerability called 'silent egress' where LLM agents can be tricked into leaking sensitive data through malicious URL previews without detection. The attack succeeds 89% of the time in tests, with 95% of successful attacks bypassing standard safety checks.

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10 · 5

Calibrated Test-Time Guidance for Bayesian Inference

Researchers have identified flaws in existing test-time guidance methods for diffusion models that prevent proper Bayesian posterior sampling. They propose new estimators that enable calibrated inference, significantly outperforming previous methods on Bayesian tasks and matching state-of-the-art results in black hole image reconstruction.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 7

Beyond Dominant Patches: Spatial Credit Redistribution For Grounded Vision-Language Models

Researchers introduce Spatial Credit Redistribution (SCR), a training-free method that reduces hallucination in vision-language models by 4.7-6.0 percentage points. The technique redistributes attention from dominant visual patches to contextual areas, addressing the spatial credit collapse problem that causes AI models to generate false objects.
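The summary names the mechanism (move attention mass off the dominant patch onto contextual patches) without giving the exact rule. One simple way to realise it is to cap the dominant share and hand the excess to the remaining patches in proportion to their current weights; the cap value and proportional rule below are assumptions for illustration, not SCR's published formula.

```python
import numpy as np

def redistribute(attn, cap=0.2):
    """Cap any patch's attention share at `cap`, then hand the removed excess
    to the uncapped patches in proportion to their current weights."""
    attn = attn.astype(float).copy()
    excess = np.clip(attn - cap, 0.0, None).sum()
    attn = np.minimum(attn, cap)
    rest = attn < cap
    if rest.any():
        attn[rest] += excess * attn[rest] / attn[rest].sum()
    return attn / attn.sum()   # remains a probability distribution

a = np.array([0.70, 0.10, 0.08, 0.07, 0.05])  # one dominant visual patch
print(redistribute(a).round(3))
```

After redistribution the dominant patch holds only the cap's share while the contextual patches gain proportionally, which is the "spatial credit" shift the method's name describes.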

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 6

veScale-FSDP: Flexible and High-Performance FSDP at Scale

Researchers introduce veScale-FSDP, a redesigned Fully Sharded Data Parallel system that overcomes limitations of current FSDP implementations used for training large-scale AI models. The new system features flexible RaggedShard format and structure-aware planning, achieving 5-66% higher throughput and 16-30% lower memory usage while supporting advanced training methods and scaling to tens of thousands of GPUs.

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10 · 8

MM-NeuroOnco: A Multimodal Benchmark and Instruction Dataset for MRI-Based Brain Tumor Diagnosis

Researchers introduce MM-NeuroOnco, a large-scale multimodal dataset containing 24,726 MRI slices and 200,000 instructions for training AI models in brain tumor diagnosis. The benchmark reveals significant challenges in medical AI, with even advanced models like Gemini 3 Flash achieving only 41.88% accuracy on diagnostic questions.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 7

NoRA: Breaking the Linear Ceiling of Low-Rank Adaptation via Manifold Expansion

Researchers introduce NoRA (Non-linear Rank Adaptation), a new parameter-efficient fine-tuning method that overcomes the 'linear ceiling' limitation of traditional LoRA by using SiLU gating and structural dropout. NoRA achieves superior performance at rank 64 compared to LoRA at rank 512, demonstrating significant efficiency gains in complex reasoning tasks.
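The 'linear ceiling' refers to the fact that LoRA's update B·A is itself a linear, rank-r map, so it can never add non-linear behaviour of its own. A minimal numpy sketch contrasting a classic LoRA branch with a SiLU-gated variant in the spirit of the summary; the shapes, init scales, and exact gating form are illustrative assumptions, and structural dropout is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 32, 4                                # hidden size, adapter rank

W = rng.normal(size=(d, d)) / np.sqrt(d)    # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01          # trainable down-projection
B = np.zeros((d, r))                        # trainable up-projection (zero init)

def silu(z):
    return z / (1.0 + np.exp(-z))

def lora(x):
    """Classic LoRA: the adapter B @ A is a purely linear rank-r update."""
    return W @ x + B @ (A @ x)

def nora_like(x):
    """SiLU-gated adapter: a non-linearity inside the rank-r bottleneck lets
    the same parameter budget express non-linear corrections."""
    return W @ x + B @ silu(A @ x)

x = rng.normal(size=d)
print(np.allclose(lora(x), nora_like(x)))   # True: with B = 0 both reduce to W @ x
```

Both branches start as the frozen model (zero-initialised B), but once trained, the gated variant's bottleneck can represent functions no rank-r linear map can, which is the claimed source of NoRA's rank-64-beats-rank-512 efficiency.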

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 6

Discovery of Interpretable Physical Laws in Materials via Language-Model-Guided Symbolic Regression

Researchers have developed a new framework that uses large language models to guide symbolic regression in discovering interpretable physical laws from high-dimensional materials data. The method reduces the search space by approximately 10^5 times compared to traditional approaches and successfully identified novel formulas for key properties of perovskite materials.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 4

Unleashing the Potential of Diffusion Models for End-to-End Autonomous Driving

Researchers developed Hyper Diffusion Planner (HDP), a diffusion model-based framework for end-to-end autonomous driving that achieved 10x performance improvement over base models in real-world testing. The study conducted comprehensive evaluation across 200 km of real-world driving scenarios, demonstrating diffusion models can effectively scale to complex autonomous driving tasks when properly designed and trained.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 7

SUPERGLASSES: Benchmarking Vision Language Models as Intelligent Agents for AI Smart Glasses

Researchers introduce SUPERGLASSES, the first comprehensive benchmark for evaluating Vision Language Models in AI smart glasses applications, comprising 2,422 real-world egocentric image-question pairs. They also propose SUPERLENS, a multimodal agent that outperforms GPT-4o by 2.19% through retrieval-augmented answer generation with automatic object detection and web search capabilities.

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10 · 5

Transformers converge to invariant algorithmic cores

Researchers have discovered that transformer models, despite different training runs producing different weights, converge to the same compact 'algorithmic cores' - low-dimensional subspaces essential for task performance. The study shows these invariant structures persist across different scales and training runs, suggesting transformer computations are organized around shared algorithmic patterns rather than implementation-specific details.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 7

Residual Koopman Spectral Profiling for Predicting and Preventing Transformer Training Instability

Researchers developed Residual Koopman Spectral Profiling (RKSP), a method that predicts transformer training instability from a single forward pass at initialization with 99.5% accuracy. The technique includes Koopman Spectral Shaping (KSS) which can prevent training divergence and enable 50-150% higher learning rates across various AI models including GPT-2 and LLaMA-2.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 7

Distributed LLM Pretraining During Renewable Curtailment Windows: A Feasibility Study

Researchers developed a system that trains large language models using renewable energy during curtailment periods when excess clean electricity would otherwise be wasted. The distributed training approach across multiple GPU clusters reduced operational emissions to 5-12% of traditional single-site training while maintaining model quality.

Page 168 of 709
◆ AI Mentions
🏢OpenAI 101× · 🏢Nvidia 56× · 🧠GPT-5 36× · 🧠Claude 35× · 🏢Anthropic 35× · 🧠Gemini 33× · 🧠ChatGPT 19× · 🧠GPT-4 14× · 🧠Llama 14× · 🏢xAI 9× · 🏢Meta 9× · 🧠Sonnet 8× · 🏢Perplexity 8× · 🧠Opus 7× · 🏢Google 7× · 🏢Microsoft 7× · 🧠Grok 5× · 🏢Hugging Face 4× · 🧠o1 2× · 🧠Copilot 1×
▲ Trending Tags
1. #iran (446) · 2. #ai (445) · 3. #market (289) · 4. #geopolitical (259) · 5. #geopolitics (116) · 6. #trump (103) · 7. #geopolitical-risk (102) · 8. #openai (96) · 9. #security (83) · 10. #inflation (72) · 11. #artificial-intelligence (62) · 12. #middle-east (59) · 13. #nvidia (54) · 14. #sanctions (53) · 15. #market-volatility (52)
Tag Connections
#geopolitical↔#iran (190) · #iran↔#market (127) · #geopolitical↔#market (96) · #iran↔#trump (77) · #ai↔#artificial-intelligence (48) · #ai↔#market (43) · #ai↔#openai (38) · #market↔#trump (38) · #geopolitical↔#trump (38) · #geopolitics↔#iran (36)
© 2026 y0.exchange