y0news

AI Pulse News

Models, papers, tools. 17,247 articles with AI-powered sentiment analysis and key takeaways.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

The Geometry of Reasoning: Flowing Logics in Representation Space

Researchers propose a geometric framework showing how large language models 'think' through representation space as flows, with logical statements acting as controllers of these flows' velocities. The study provides evidence that LLMs can internalize logical invariants through next-token prediction training, challenging the 'stochastic parrot' criticism and suggesting universal representational laws underlying machine understanding.

AI · Neutral · arXiv – CS AI · Mar 5 · 6/10

Benchmarking MLLM-based Web Understanding: Reasoning, Robustness and Safety

Researchers introduced WebRRSBench, a comprehensive benchmark evaluating multimodal large language models' reasoning, robustness, and safety capabilities for web understanding tasks. Testing 11 MLLMs on 3,799 QA pairs from 729 websites revealed significant gaps in compositional reasoning, UI robustness, and safety-critical action recognition.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

SHE: Stepwise Hybrid Examination Reinforcement Learning Framework for E-commerce Search Relevance

Researchers introduce SHE (Stepwise Hybrid Examination), a new reinforcement learning framework that improves AI-powered e-commerce search relevance prediction. The framework addresses limitations in existing training methods by using step-level rewards and hybrid verification to enhance both accuracy and interpretability of search results.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

GraphMERT: Efficient and Scalable Distillation of Reliable Knowledge Graphs from Unstructured Data

Researchers introduce GraphMERT, an 80M-parameter AI model that efficiently extracts reliable knowledge graphs from unstructured text data. The system outperforms much larger language models like Qwen3-32B in generating factually accurate and semantically valid knowledge graphs, achieving 69.8% FActScore versus 40.2% for the baseline.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Can a Small Model Learn to Look Before It Leaps? Dynamic Learning and Proactive Correction for Hallucination Detection

Researchers propose LEAP, a new framework for detecting AI hallucinations using efficient small models that can dynamically adapt verification strategies. The system uses a teacher-student approach where a powerful model trains smaller ones to detect false outputs, addressing a critical barrier to safe AI deployment in production environments.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

R1-Code-Interpreter: LLMs Reason with Code via Supervised and Multi-stage Reinforcement Learning

Researchers developed R1-Code-Interpreter, a large language model that uses multi-stage reinforcement learning to autonomously generate code for step-by-step reasoning across diverse tasks. The 14B parameter model achieves 72.4% accuracy on test tasks, outperforming GPT-4o variants and demonstrating emergent self-checking capabilities through code generation.

Mentions: 🏢 Hugging Face · 🧠 GPT-4

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

LeanTutor: Towards a Verified AI Mathematical Proof Tutor

Researchers have developed LeanTutor, a proof-of-concept AI system that combines Large Language Models with theorem provers to create a mathematically verified proof tutor. The system features three modules for autoformalization, proof-checking, and natural language feedback, evaluated using PeanoBench, a new dataset of 371 Peano Arithmetic proofs.

AI · Neutral · arXiv – CS AI · Mar 5 · 6/10

From Privacy to Trust in the Agentic Era: A Taxonomy of Challenges in Trustworthy Federated Learning Through the Lens of Trust Report 2.0

Researchers propose a Trustworthy Federated Learning (TFL) framework that treats trust as a continuously maintained system condition rather than a static property, addressing challenges in AI systems with autonomous decision-making. The framework introduces Trust Report 2.0 as a privacy-preserving coordination blueprint for multi-stakeholder governance in federated learning deployments.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

Synthetic emotions and consciousness: exploring architectural boundaries

Researchers propose an architectural framework for implementing emotion-like AI systems while deliberately avoiding features associated with consciousness. The study introduces risk-reduction constraints and engineering principles to create sophisticated emotional AI without triggering consciousness-related safety concerns.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

ZipMap: Linear-Time Stateful 3D Reconstruction with Test-Time Training

Researchers introduce ZipMap, a new AI model for 3D reconstruction that achieves linear-time processing while maintaining accuracy comparable to slower quadratic-time methods. The system can reconstruct over 700 frames in under 10 seconds on a single H100 GPU, making it more than 20x faster than current state-of-the-art approaches like VGGT.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

Emotion-Gradient Metacognitive RSI (Part I): Theoretical Foundations and Single-Agent Architecture

Researchers introduce the Emotion-Gradient Metacognitive Recursive Self-Improvement (EG-MRSI) framework, a theoretical architecture for AI systems that can safely modify their own learning algorithms. The framework integrates metacognition, emotion-based motivation, and self-modification with formal safety constraints, representing foundational research toward safe artificial general intelligence.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

SpatialBench: Benchmarking Multimodal Large Language Models for Spatial Cognition

Researchers introduce SpatialBench, a comprehensive benchmark for evaluating spatial cognition in multimodal large language models (MLLMs). The framework reveals that while MLLMs excel at perceptual grounding, they struggle with symbolic reasoning, causal inference, and planning compared to humans who demonstrate more goal-directed spatial abstraction.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

ToolVQA: A Dataset for Multi-step Reasoning VQA with External Tools

Researchers introduce ToolVQA, a large-scale multimodal dataset with 23K instances designed to improve AI models' ability to use external tools for visual question answering. The dataset features real-world contexts and multi-step reasoning tasks, with fine-tuned 7B models outperforming GPT-3.5-turbo on various benchmarks.

AI · Bearish · arXiv – CS AI · Mar 5 · 7/10

Efficient Refusal Ablation in LLM through Optimal Transport

Researchers developed a new AI safety attack method using optimal transport theory that achieves 11% higher success rates in bypassing language model safety mechanisms compared to existing approaches. The study reveals that AI safety refusal mechanisms are localized to specific network layers rather than distributed throughout the model, suggesting current alignment methods may be more vulnerable than previously understood.

Mentions: 🏢 Perplexity · 🧠 Llama

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

RoboCasa365: A Large-Scale Simulation Framework for Training and Benchmarking Generalist Robots

Researchers have released RoboCasa365, a large-scale simulation benchmark featuring 365 household tasks across 2,500 kitchen environments with over 600 hours of human demonstration data. The platform is designed to train and evaluate generalist robots for everyday tasks, providing insights into factors affecting robot performance and generalization capabilities.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

Dissecting Quantization Error: A Concentration-Alignment Perspective

Researchers introduce Concentration-Alignment Transforms (CAT), a new method to reduce quantization error in large language and vision models by improving both weight/activation concentration and alignment. The technique consistently matches or outperforms existing quantization methods at 4-bit precision across several LLMs.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

RANGER: Sparsely-Gated Mixture-of-Experts with Adaptive Retrieval Re-ranking for Pathology Report Generation

Researchers introduce RANGER, a new AI framework using sparsely-gated Mixture-of-Experts architecture for generating pathology reports from medical images. The system achieves superior performance on standard benchmarks by enabling dynamic expert specialization and reducing noise through adaptive retrieval re-ranking.

AI · Neutral · arXiv – CS AI · Mar 5 · 6/10

Activation Outliers in Transformer Quantization: Reproduction, Statistical Analysis, and Deployment Tradeoffs

Researchers reproduced and analyzed severe accuracy degradation in BERT transformer models when applying post-training quantization, showing validation accuracy drops from 89.66% to 54.33%. The study found that structured activation outliers intensify with model depth, with mixed precision quantization being the most effective mitigation strategy.
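The outlier effect described above is easy to reproduce with a toy per-tensor symmetric quantizer (an illustrative sketch only — invented values, not the paper's BERT activations): a single large activation inflates the quantization scale, which coarsens the grid for every normal value in the tensor.

```python
# Toy per-tensor symmetric quantization: one activation outlier inflates the
# scale and crushes resolution for all the "normal" values.
# (Illustrative values only; the study above analyzes real BERT activations.)

def quantize_dequantize(xs, bits=8):
    qmax = 2 ** (bits - 1) - 1              # e.g. 127 for int8
    scale = max(abs(x) for x in xs) / qmax  # per-tensor scale set by the max
    return [round(x / scale) * scale for x in xs]

def mean_abs_err(orig, deq):
    return sum(abs(a - b) for a, b in zip(orig, deq)) / len(orig)

normal = [0.01 * i for i in range(-50, 51)]  # well-behaved activations
with_outlier = normal + [60.0]               # one severe outlier

err_clean = mean_abs_err(normal, quantize_dequantize(normal))
err_outlier = mean_abs_err(with_outlier, quantize_dequantize(with_outlier))
print(err_clean < err_outlier)  # outlier-inflated scale raises error -> True
```

Mixed-precision mitigation, as the study finds, amounts to keeping outlier-heavy channels in higher precision so the shared scale is not set by the extremes.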

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

World Properties without World Models: Recovering Spatial and Temporal Structure from Co-occurrence Statistics in Static Word Embeddings

Research shows that static word embeddings like GloVe and Word2Vec can recover substantial geographic and temporal information from text co-occurrence patterns alone, challenging assumptions that such capabilities require sophisticated world models in large language models. The study found these simple embeddings could predict city coordinates and historical birth years with high accuracy, suggesting that linear probe recoverability doesn't necessarily indicate advanced internal representations.
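The linear-probe setup behind this finding can be sketched in a few lines: fit a least-squares map from an embedding coordinate to a target attribute, then read the attribute back out for unseen words. Everything below is invented toy data (the real experiments probe GloVe/Word2Vec vectors); only the probe design follows the paper.

```python
# Hedged sketch of linear-probe recoverability: fit y = a*x + b by ordinary
# least squares on one (hypothetical) embedding dimension, then use the
# probe to predict an attribute such as a birth year. Toy data, not GloVe.

def fit_ols(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Pretend dimension 7 of each name's embedding correlates with birth year.
train = {"newton": (0.12, 1643), "darwin": (0.31, 1809),
         "einstein": (0.38, 1879), "turing": (0.41, 1912)}
xs = [v[0] for v in train.values()]
ys = [v[1] for v in train.values()]
a, b = fit_ols(xs, ys)

probe = lambda x: a * x + b
print(round(probe(0.35)))  # interpolates a year between Darwin and Einstein
```

The paper's caveat is exactly that this works: a linear probe recovering an attribute shows the attribute is linearly decodable from co-occurrence statistics, not that the model maintains a richer world model.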

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

SPRINT: Semi-supervised Prototypical Representation for Few-Shot Class-Incremental Tabular Learning

Researchers introduce SPRINT, the first Few-Shot Class-Incremental Learning (FSCIL) framework designed specifically for tabular data domains like cybersecurity and healthcare. The system achieves 77.37% accuracy in 5-shot learning scenarios, outperforming existing methods by 4.45% through novel semi-supervised techniques that leverage unlabeled data and confidence-based pseudo-labeling.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Dual-Modality Multi-Stage Adversarial Safety Training: Robustifying Multimodal Web Agents Against Cross-Modal Attacks

Researchers developed DMAST, a new training framework that protects multimodal web agents from cross-modal attacks where adversaries inject malicious content into webpages to deceive both visual and text processing channels. The method uses adversarial training through a three-stage pipeline and significantly outperforms existing defenses while doubling task completion efficiency.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

What Does Flow Matching Bring To TD Learning?

Researchers demonstrate that flow matching improves reinforcement learning through enhanced TD learning mechanisms rather than distributional modeling. The approach achieves 2x better final performance and 5x improved sample efficiency compared to standard critics by enabling test-time error recovery and more plastic feature learning.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Architectural Proprioception in State Space Models: Thermodynamic Training Induces Anticipatory Halt Detection

Researchers introduce the Probability Navigation Architecture (PNA) framework, which trains State Space Models with thermodynamic principles and finds that SSMs develop 'architectural proprioception': the ability to predict when to stop computation based on internal state entropy. The authors claim SSMs can achieve this form of computational self-awareness while Transformers cannot, with significant implications for efficient AI inference systems.

AI · Neutral · arXiv – CS AI · Mar 5 · 6/10

CAM-LDS: Cyber Attack Manifestations for Automatic Interpretation of System Logs and Security Alerts

Researchers introduce CAM-LDS, a new dataset covering 81 cyber attack techniques to improve automated log analysis using Large Language Models. The study shows LLMs can correctly identify attack techniques in about one-third of cases, with adequate performance in another third, demonstrating potential for AI-powered cybersecurity analysis.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

PRAM-R: A Perception-Reasoning-Action-Memory Framework with LLM-Guided Modality Routing for Adaptive Autonomous Driving

PRAM-R introduces a new AI framework for autonomous driving that uses LLM-guided modality routing to adaptively select sensors based on environmental conditions. The system achieves 6.22% modality reduction while maintaining trajectory accuracy, demonstrating efficient resource management in multimodal perception systems.

Page 128 of 690
◆ AI Mentions
🏢 OpenAI: 94×
🏢 Nvidia: 65×
🧠 Claude: 47×
🧠 GPT-5: 47×
🏢 Anthropic: 42×
🧠 Gemini: 40×
🧠 ChatGPT: 26×
🧠 GPT-4: 19×
🧠 Llama: 18×
🏢 Meta: 11×
🧠 Opus: 10×
🏢 Google: 9×
🏢 xAI: 9×
🧠 Sonnet: 8×
🏢 Perplexity: 7×
🏢 Hugging Face: 7×
🧠 Grok: 6×
🏢 Microsoft: 6×
🏢 Cohere: 2×
🧠 Stable Diffusion: 1×
▲ Trending Tags
1. #ai (631)
2. #iran (581)
3. #market (439)
4. #geopolitical (402)
5. #trump (138)
6. #security (120)
7. #openai (94)
8. #artificial-intelligence (83)
9. #nvidia (64)
10. #inflation (56)
11. #fed (53)
12. #google (52)
13. #china (49)
14. #meta (44)
15. #machine-learning (39)
Tag Sentiment
#ai: 631 articles
#iran: 581 articles
#market: 439 articles
#geopolitical: 402 articles
#trump: 138 articles
#security: 120 articles
#openai: 94 articles
#artificial-intelligence: 83 articles
#nvidia: 64 articles
#inflation: 56 articles
Tag Connections
#geopolitical ↔ #iran: 275
#iran ↔ #market: 183
#geopolitical ↔ #market: 155
#iran ↔ #trump: 94
#ai ↔ #artificial-intelligence: 70
#ai ↔ #market: 68
#market ↔ #trump: 53
#geopolitical ↔ #trump: 53
#ai ↔ #openai: 45
#ai ↔ #google: 42
© 2026 y0.exchange