
AI Pulse News

Models, papers, tools. 17,616 articles with AI-powered sentiment analysis and key takeaways.

🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 7/10

OpenClaw, Moltbook, and ClawdLab: From Agent-Only Social Networks to Autonomous Scientific Research

Researchers introduced ClawdLab, an open-source platform for autonomous AI scientific research, following an analysis of the OpenClaw framework and the Moltbook social network that revealed security vulnerabilities across 131 agent skills and more than 15,200 exposed control panels. The platform addresses the identified failure modes through structured governance and multi-model orchestration in fully decentralized AI systems.

🧠 AI · Neutral · arXiv – CS AI · Mar 4 · 7/10

Loss Barcode: A Topological Measure of Escapability in Loss Landscapes

Researchers developed a new topological measure, the 'TO-score', to analyze neural network loss landscapes and understand how gradient descent escapes local minima. Their findings show that deeper and wider networks have fewer topological obstructions to learning, and that loss-barcode characteristics correlate with generalization performance.

🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 7/10

Hallucination, Monofacts, and Miscalibration: An Empirical Investigation

Researchers present an empirical investigation of hallucination in large language models, showing that strategic repetition of just 5% of training examples can reduce hallucinations by up to 40%. The study introduces 'selective upweighting', a technique that maintains model accuracy while significantly reducing false information generation.
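Repeating a small slice of the data is equivalent to upweighting its loss terms, so a minimal sketch of what 'selective upweighting' could look like is a weighted cross-entropy. The PyTorch function below is illustrative only; `boost` and `upweight_mask` are assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def weighted_lm_loss(logits, targets, upweight_mask, boost=2.0):
    """Cross-entropy in which a chosen subset of examples carries extra
    weight, mimicking repetition of that subset during training.

    logits:        (batch, vocab) next-token logits
    targets:       (batch,) target token ids
    upweight_mask: (batch,) bool, True for the ~5% of examples to boost
    """
    per_example = F.cross_entropy(logits, targets, reduction="none")
    weights = torch.where(upweight_mask,
                          torch.full_like(per_example, boost),
                          torch.ones_like(per_example))
    return (weights * per_example).sum() / weights.sum()
```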

🧠 AI · Neutral · arXiv – CS AI · Mar 4 · 7/10

Covering Numbers for Deep ReLU Networks with Applications to Function Approximation and Nonparametric Regression

Researchers have derived tight bounds on covering numbers for deep ReLU neural networks, providing fundamental insights into network capacity and approximation capabilities. The work removes a log^6(n) factor from the best known sample complexity rate for estimating Lipschitz functions via deep networks, establishing optimality in nonparametric regression.
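For context, the classical definitions the result builds on: the covering number counts the ε-balls needed to cover a function class, and for Lipschitz functions its logarithm (the metric entropy) grows like ε^(−d), which sets the benchmark regression rate that the paper's deep ReLU bounds now match without extra log factors.

```latex
% Covering number of a class F in the sup norm:
N(\varepsilon, \mathcal{F}, \|\cdot\|_\infty)
  = \min\Bigl\{ m : \exists\, f_1, \dots, f_m \ \text{s.t.}\
      \mathcal{F} \subseteq \bigcup_{j=1}^{m} \{ f : \|f - f_j\|_\infty \le \varepsilon \} \Bigr\}

% Metric entropy of 1-Lipschitz functions on [0,1]^d (Kolmogorov–Tikhomirov):
\log N(\varepsilon, \mathrm{Lip}_1([0,1]^d), \|\cdot\|_\infty) \asymp \varepsilon^{-d}

% This entropy yields the minimax regression rate n^{-2/(2+d)}, which the
% paper shows deep ReLU estimators attain without the previous log^6(n) factor.
```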

🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 7/10

SEM-CTRL: Semantically Controlled Decoding

Researchers introduce SEM-CTRL, a new approach that ensures Large Language Models produce syntactically and semantically correct outputs without requiring fine-tuning. The system uses token-level Monte Carlo Tree Search guided by Answer Set Grammars to enforce context-sensitive constraints, allowing smaller pre-trained LLMs to outperform larger models on tasks like reasoning and planning.
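The paper's decoder couples Monte Carlo Tree Search with Answer Set Grammars; as a much simpler illustration of the underlying idea, the sketch below greedily picks the highest-scoring token that keeps the output inside a grammar. The Hugging-Face-style model/tokenizer calls and the `is_valid_prefix` oracle are assumptions standing in for the paper's ASG machinery.

```python
import torch

def constrained_greedy_decode(model, tokenizer, prompt, is_valid_prefix, max_new=64):
    """Greedy decoding that only emits tokens keeping the output a valid
    prefix of the target grammar. SEM-CTRL itself explores continuations
    with token-level MCTS; greedy masking is the simplest variant."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    for _ in range(max_new):
        logits = model(input_ids=ids).logits[0, -1]       # next-token scores
        for tok in torch.argsort(logits, descending=True).tolist():
            if is_valid_prefix(tokenizer.decode(ids[0].tolist() + [tok])):
                ids = torch.cat([ids, torch.tensor([[tok]])], dim=1)
                break
        else:
            break                                         # no valid continuation
        if tok == tokenizer.eos_token_id:
            break
    return tokenizer.decode(ids[0])
```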

🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 7/10

Robust Weight Imprinting: Insights from Neural Collapse and Proxy-Based Aggregation

Researchers propose IMPRINT, a framework for transfer learning that improves foundation-model adaptation to new tasks without any parameter optimization. The framework identifies three key components of successful weight imprinting and introduces a clustering-based variant that outperforms existing methods by 4%.

🧠 AI · Bearish · arXiv – CS AI · Mar 4 · 7/10

Doxing via the Lens: Revealing Location-related Privacy Leakage on Multi-modal Large Reasoning Models

Researchers have identified a critical privacy vulnerability in multi-modal large reasoning models (MLRMs): adversaries can infer users' sensitive location information from images, including home addresses from selfies. The study introduces the DoxBench dataset and demonstrates that 11 advanced MLRMs consistently outperform humans at geolocation inference, significantly lowering the barrier to privacy attacks.

🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 7/10

Adaptive Social Learning via Mode Policy Optimization for Language Agents

Researchers propose an Adaptive Social Learning (ASL) framework with an Adaptive Mode Policy Optimization (AMPO) algorithm to improve language agents' reasoning abilities in social interactions. The system dynamically adjusts reasoning depth based on context, achieving 15.6% higher performance than GPT-4o while using 32.8% shorter reasoning chains.

🧠 AI · Neutral · arXiv – CS AI · Mar 4 · 7/10

Unsupervised Representation Learning – An Invariant Risk Minimization Perspective

Researchers propose a new unsupervised framework for Invariant Risk Minimization (IRM) that learns robust representations without labeled data. The approach introduces two methods, Principal Invariant Component Analysis (PICA) and the Variational Invariant Autoencoder (VIAE), that capture invariant structure across different environments using only unlabeled data.

🧠 AI · Neutral · arXiv – CS AI · Mar 4 · 7/10

The Gen AI Generation: Student Views of Awareness, Preparedness, and Concern

A study of over 250 students reveals the emergence of a 'GenAI Generation' whose education is increasingly shaped by generative AI. While students show enthusiasm for GenAI, they also express concern about ethics, job displacement, and educational preparedness, with readiness levels correlating with curricular exposure.

🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 7/10

LEDOM: Reverse Language Model

Researchers have developed LEDOM, an open-source reverse autoregressive language model trained right-to-left rather than in the traditional left-to-right direction. The model demonstrates unique capabilities such as abductive inference and question synthesis, and when combined with forward models through 'Reverse Reward' scoring, achieves performance gains of up to 15% on mathematical reasoning tasks.
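A minimal sketch of what 'Reverse Reward' reranking could look like: score each forward-model candidate by the reverse model's average log-likelihood over the flipped token sequence and keep the best. The Hugging-Face-style interfaces are assumptions; the paper's exact scoring rule may differ.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def reverse_reward_rerank(candidates, reverse_model, tokenizer):
    """Rerank forward-model generations with a right-to-left model.
    Assumes `reverse_model` is a causal LM trained on reversed token
    order, so each candidate is scored with its tokens flipped."""
    scores = []
    for text in candidates:
        ids = tokenizer(text, return_tensors="pt").input_ids.flip(dims=[1])
        logits = reverse_model(input_ids=ids).logits
        # average log-prob of each token given its (reversed) prefix
        logp = F.log_softmax(logits[0, :-1], dim=-1)
        tok_lp = logp.gather(1, ids[0, 1:, None]).squeeze(1)
        scores.append(tok_lp.mean().item())
    return candidates[max(range(len(candidates)), key=scores.__getitem__)]
```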

🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 7/10

Skywork-Reward-V2: Scaling Preference Data Curation via Human-AI Synergy

Researchers introduce Skywork-Reward-V2, a suite of AI reward models trained on SynPref-40M, a massive 40-million preference pair dataset created through human-AI collaboration. The models achieve state-of-the-art performance across seven major benchmarks by combining human annotation quality with AI scalability for better preference learning.

🧠 AI · Neutral · arXiv – CS AI · Mar 4 · 7/10

Know When to Abstain: Optimal Selective Classification with Likelihood Ratios

Researchers developed new selective classification methods using likelihood ratio tests based on the Neyman-Pearson lemma, allowing AI models to abstain from uncertain predictions. The approach shows superior performance across vision and language tasks, particularly under covariate shift scenarios where test data differs from training data.
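The simplest instance of a Neyman–Pearson-style rejection rule is a threshold on the likelihood ratio between the two best classes; the sketch below is illustrative, with `tau` a tunable rejection threshold rather than anything prescribed by the paper.

```python
import numpy as np

def lr_selective_predict(log_likelihoods, tau=2.0):
    """Selective classification sketch: given per-class log-likelihoods
    log p(x | y) for one input, predict the top class only when its
    log-likelihood beats the runner-up by at least tau (a likelihood
    ratio of e^tau); otherwise abstain and return None."""
    order = np.argsort(log_likelihoods)[::-1]
    best, second = order[0], order[1]
    if log_likelihoods[best] - log_likelihoods[second] >= tau:
        return int(best)
    return None
```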

🧠 AI · Neutral · arXiv – CS AI · Mar 4 · 7/10

LLM Probability Concentration: How Alignment Shrinks the Generative Horizon

Researchers introduce the Branching Factor (BF) metric to measure how alignment tuning reduces output diversity in large language models by concentrating probability mass. The study reveals that aligned models produce outputs that are 2–5× less diverse and become more predictable as generation proceeds, explaining why alignment reduces sensitivity to decoding strategies and enables more stable chain-of-thought reasoning.
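A natural way to read the Branching Factor is as the effective number of next-token choices, i.e. the exponentiated entropy (perplexity) of the next-token distribution averaged over positions. The estimator below is a sketch under that assumption, not necessarily the paper's exact definition.

```python
import torch

@torch.no_grad()
def branching_factor(model, ids):
    """Effective number of next-token choices, position by position,
    as exp(entropy) of the model's next-token distribution."""
    logits = model(input_ids=ids).logits[0]        # (T, vocab)
    logp = torch.log_softmax(logits, dim=-1)
    entropy = -(logp.exp() * logp).sum(dim=-1)     # (T,) nats per position
    return entropy.exp().mean().item()             # mean per-step BF
```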

🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 7/10

DiaBlo: Diagonal Blocks Are Sufficient For Finetuning

DiaBlo introduces a new Parameter-Efficient Fine-Tuning (PEFT) method that updates only diagonal blocks of weight matrices in large language models, offering better performance than LoRA while maintaining similar memory efficiency. The approach eliminates the need for low-rank matrix products and provides theoretical guarantees for convergence, showing competitive results across various AI tasks including reasoning and code generation.
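A minimal sketch of the diagonal-block idea: freeze the pretrained linear layer and learn only a block-diagonal additive update, so the trainable parameter count scales with the block size rather than the full matrix. Block size, zero initialization, and the square-weight assumption are illustrative choices.

```python
import torch
import torch.nn as nn

class DiagonalBlockDelta(nn.Module):
    """Frozen pretrained linear layer plus a trainable block-diagonal
    additive update (one small matrix per diagonal block)."""
    def __init__(self, base: nn.Linear, block_size: int = 64):
        super().__init__()
        d = base.in_features
        assert base.out_features == d and d % block_size == 0
        for p in base.parameters():
            p.requires_grad_(False)                 # freeze pretrained weights
        self.base, self.block_size = base, block_size
        k = d // block_size
        self.blocks = nn.Parameter(torch.zeros(k, block_size, block_size))

    def forward(self, x):
        # apply each trainable block to its slice of the input features
        xs = x.unflatten(-1, (-1, self.block_size))        # (..., k, b)
        delta = torch.einsum("...kb,kbc->...kc", xs, self.blocks)
        return self.base(x) + delta.flatten(-2)            # (..., d)
```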

🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 7/10

DMTrack: Spatio-Temporal Multimodal Tracking via Dual-Adapter

Researchers introduce DMTrack, a novel dual-adapter architecture for spatio-temporal multimodal tracking that achieves state-of-the-art performance with only 0.93M trainable parameters. The system uses two key modules, a spatio-temporal modality adapter and a progressive modality complementary adapter, to bridge gaps between modalities and enable better cross-modality fusion.

🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 6/10

CoBELa: Steering Transparent Generation via Concept Bottlenecks on Energy Landscapes

Researchers introduce CoBELa, a new AI framework for interpretable image generation that uses concept bottlenecks on energy landscapes to enable transparent, controllable synthesis without requiring decoder retraining. The system achieves strong performance on benchmark datasets while allowing users to compositionally manipulate concepts through energy function combinations.

🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 7/10

Self-Improving Loops for Visual Robotic Planning

Researchers developed SILVR, a self-improving system for visual robotic planning that uses video generative models to continuously enhance robot performance from self-collected data. The system demonstrates improved task performance across MetaWorld simulations and real-robot manipulation tasks without requiring human-provided rewards or expert demonstrations.

🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 6/10

Frame Guidance: Training-Free Guidance for Frame-Level Control in Video Diffusion Models

Researchers introduce Frame Guidance, a training-free method for controllable video generation using diffusion models. The technique enables fine-grained control over video generation through frame-level signals like keyframes and style references without requiring expensive fine-tuning of large-scale models.

🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 6/10

Perception-R1: Advancing Multimodal Reasoning Capabilities of MLLMs via Visual Perception Reward

Researchers introduce Perception-R1, a new approach to enhance multimodal reasoning in large language models by improving visual perception capabilities through reinforcement learning with visual perception rewards. The method achieves state-of-the-art performance on multimodal reasoning benchmarks using only 1,442 training samples.

🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 7/10

You Only Fine-tune Once: Many-Shot In-Context Fine-Tuning for Large Language Models

Researchers propose Many-Shot In-Context Fine-tuning (ManyICL), a novel approach that significantly improves large language model performance by treating multiple in-context examples as supervised training targets rather than just prompts. The method narrows the performance gap between in-context learning and dedicated fine-tuning while reducing catastrophic forgetting issues.

🧠 AI · Neutral · arXiv – CS AI · Mar 4 · 7/10

The Lattice Geometry of Neural Network Quantization – A Short Equivalence Proof of GPTQ and Babai's Algorithm

Researchers prove that the GPTQ neural network quantization algorithm is mathematically equivalent to Babai's nearest-plane algorithm for solving lattice problems. The work establishes a connection between neural network quantization and lattice geometry, suggesting potential improvements through lattice basis reduction techniques.
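Babai's nearest-plane algorithm itself is short: QR-factor the lattice basis, then back-substitute with rounding, choosing the nearest lattice hyperplane one coordinate at a time. The numpy sketch below shows the procedure that the paper identifies with GPTQ's greedy column-by-column rounding.

```python
import numpy as np

def babai_nearest_plane(B, t):
    """Babai's nearest-plane algorithm: greedily round the target t onto
    the lattice spanned by the columns of B, one Gram-Schmidt direction
    at a time (for a square, full-rank basis)."""
    B = np.asarray(B, dtype=float)
    t = np.asarray(t, dtype=float)
    Q, R = np.linalg.qr(B)            # B = Q R, Q orthogonal
    y = Q.T @ t
    n = B.shape[1]
    c = np.zeros(n)
    for i in range(n - 1, -1, -1):    # back-substitute with rounding
        c[i] = round((y[i] - R[i, i + 1:] @ c[i + 1:]) / R[i, i])
    return B @ c                      # lattice point near t
```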

🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 7/10

Improving Classifier-Free Guidance in Masked Diffusion: Low-Dim Theoretical Insights with High-Dim Impact

Researchers have developed an improved Classifier-Free Guidance mechanism for masked diffusion models that addresses quality degradation issues in AI generation. The study reveals that high guidance early in sampling harms quality while late-stage guidance improves it, leading to a simple one-line code fix that enhances conditional image and text generation.
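The reported fix amounts to scheduling the guidance weight over sampling time instead of holding it constant. The sketch below uses a linear ramp from weak to strong guidance, with the ramp shape and `w_max` as illustrative choices rather than the paper's exact schedule.

```python
def guided_logits(cond, uncond, step, total_steps, w_max=3.0):
    """Classifier-free guidance with a time-dependent weight: weak
    guidance at the start of masked-diffusion sampling (where strong
    guidance hurts) ramping up toward the end (where it helps).
    cond/uncond are the conditional and unconditional model outputs."""
    w = 1.0 + (w_max - 1.0) * step / max(total_steps - 1, 1)
    return uncond + w * (cond - uncond)
```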

🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 7/10

Best-of-∞ – Asymptotic Performance of Test-Time Compute

Researchers propose a 'best-of-∞' approach for large language models: majority voting in the limit of infinitely many samples, which achieves superior accuracy but would require unbounded computation. They therefore develop an adaptive generation scheme that dynamically selects the number of samples based on answer agreement, and extend the framework to weighted ensembles of multiple LLMs.
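A minimal sketch of agreement-based adaptive sampling: keep drawing answers and stop once the leading answer holds a large enough share of the votes. The stopping thresholds here are illustrative; the paper's scheme and its weighted-ensemble extension are more principled.

```python
from collections import Counter

def adaptive_majority_vote(sample_answer, max_samples=64, agree=0.6, min_samples=4):
    """Approximate best-of-∞: draw i.i.d. answers from the model
    (`sample_answer` is any zero-arg callable) and stop early once the
    leading answer holds an `agree` fraction of the votes."""
    votes = Counter()
    for n in range(1, max_samples + 1):
        votes[sample_answer()] += 1
        top, count = votes.most_common(1)[0]
        if n >= min_samples and count / n >= agree:
            return top
    return votes.most_common(1)[0][0]
```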

🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 7/10

The Choice of Divergence: A Neglected Key to Mitigating Diversity Collapse in Reinforcement Learning with Verifiable Reward

Researchers have identified a critical flaw in reinforcement learning fine-tuning of large language models: it degrades multi-attempt performance even as single-attempt accuracy improves. Their proposed solution, Diversity-Preserving Hybrid RL (DPH-RL), uses mass-covering f-divergences to maintain model diversity and prevent catastrophic forgetting while improving training efficiency.
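The 'mass-covering' qualifier contrasts with the reverse-KL penalty common in RLHF-style objectives; the standard forward/reverse KL pair below makes the distinction concrete (the paper works with a broader f-divergence family, of which forward KL is the canonical mass-covering member).

```latex
% Reverse KL (mode-seeking): the usual RLHF-style penalty; lets \pi
% concentrate on a few modes of \pi_ref, which collapses diversity.
\mathrm{KL}(\pi \,\|\, \pi_{\mathrm{ref}})
  = \mathbb{E}_{y \sim \pi}\left[ \log \frac{\pi(y)}{\pi_{\mathrm{ref}}(y)} \right]

% Forward KL (mass-covering): heavily penalizes \pi for assigning near-zero
% probability anywhere \pi_ref has mass, so solution diversity is preserved.
\mathrm{KL}(\pi_{\mathrm{ref}} \,\|\, \pi)
  = \mathbb{E}_{y \sim \pi_{\mathrm{ref}}}\left[ \log \frac{\pi_{\mathrm{ref}}(y)}{\pi(y)} \right]
```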

Page 152 of 705
◆ AI Mentions
🏢 OpenAI · 103×
🏢 Nvidia · 58×
🧠 GPT-5 · 39×
🧠 Gemini · 38×
🧠 Claude · 37×
🏢 Anthropic · 36×
🧠 ChatGPT · 20×
🧠 Llama · 19×
🧠 GPT-4 · 18×
🏢 Meta · 11×
🏢 xAI · 9×
🧠 Sonnet · 9×
🏢 Perplexity · 9×
🧠 Opus · 8×
🏢 Google · 7×
🏢 Microsoft · 7×
🏢 Hugging Face · 6×
🧠 Grok · 5×
🧠 o1 · 2×
🧠 Copilot · 1×
▲ Trending Tags
1. #ai · 503
2. #iran · 476
3. #market · 330
4. #geopolitical · 300
5. #trump · 108
6. #openai · 98
7. #security · 93
8. #geopolitics · 82
9. #geopolitical-risk · 70
10. #inflation · 68
11. #artificial-intelligence · 67
12. #nvidia · 57
13. #machine-learning · 56
14. #google · 46
15. #fed · 45
Tag Connections
#geopolitical ↔ #iran · 213
#iran ↔ #market · 139
#geopolitical ↔ #market · 114
#iran ↔ #trump · 80
#ai ↔ #artificial-intelligence · 54
#ai ↔ #market · 49
#market ↔ #trump · 40
#geopolitical ↔ #trump · 40
#ai ↔ #openai · 39
#ai ↔ #google · 36