y0news

AI Pulse News

Models, papers, tools. 17,610 articles with AI-powered sentiment analysis and key takeaways.

AI × Crypto · Bullish · arXiv – CS AI · Mar 4 · 6/10

Layer-wise QUBO-Based Training of CNN Classifiers for Quantum Annealing

Researchers propose a new quantum annealing framework for training CNN classifiers that avoids gradient-based optimization by using Quadratic Unconstrained Binary Optimization (QUBO). The method shows competitive performance with classical approaches on image classification benchmarks while remaining compatible with current D-Wave quantum hardware.
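The core reduction can be sketched as follows: a least-squares objective over binary weights expands into a quadratic form over bits, which is exactly the QUBO format quantum annealers accept. This is an illustrative construction, not the paper's exact layer-wise scheme; exhaustive search stands in for the D-Wave annealer, and all function names are mine.

```python
import itertools

def qubo_from_least_squares(X, y):
    """Build an upper-triangular QUBO matrix Q for min ||Xw - y||^2, w binary.

    Expanding the square and using w_i^2 = w_i for binary variables gives
    linear terms (c_i.c_i - 2 c_i.y) w_i and pair terms 2 (c_i.c_j) w_i w_j,
    where c_i is column i of X; the constant y.y is dropped.
    """
    n = len(X[0])
    cols = [[row[j] for row in X] for j in range(n)]
    dot = lambda a, b: sum(p * q for p, q in zip(a, b))
    Q = [[0.0] * n for _ in range(n)]
    for i in range(n):
        Q[i][i] = dot(cols[i], cols[i]) - 2 * dot(cols[i], y)
        for j in range(i + 1, n):
            Q[i][j] = 2 * dot(cols[i], cols[j])
    return Q

def solve_qubo_bruteforce(Q):
    """Exhaustive search over bitstrings; a stand-in for the annealer on toy sizes."""
    n = len(Q)
    best, best_e = None, float("inf")
    for bits in itertools.product([0, 1], repeat=n):
        e = sum(Q[i][j] * bits[i] * bits[j] for i in range(n) for j in range(i, n))
        if e < best_e:
            best, best_e = bits, e
    return list(best)
```

On an identity design matrix the minimizer simply copies the binary target, which makes the reduction easy to sanity-check.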

AI · Bullish · arXiv – CS AI · Mar 4 · 6/10

Conditioned Activation Transport for T2I Safety Steering

Researchers introduce Conditioned Activation Transport (CAT), a new framework to prevent text-to-image AI models from generating unsafe content while preserving image quality for legitimate prompts. The method uses a geometry-based conditioning mechanism and nonlinear transport maps, validated on Z-Image and Infinity architectures with significantly reduced attack success rates.

AI · Bullish · arXiv – CS AI · Mar 4 · 7/10

Type-Aware Retrieval-Augmented Generation with Dependency Closure for Solver-Executable Industrial Optimization Modeling

Researchers developed a type-aware retrieval-augmented generation (RAG) method that translates natural language requirements into solver-executable optimization code for industrial applications. The method uses a typed knowledge base and dependency closure to ensure code executability, successfully validated on battery production optimization and job scheduling tasks where conventional RAG approaches failed.
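The dependency-closure idea is essentially a transitive closure over a typed snippet graph: before emitting code, the retrieved snippets are expanded with everything they depend on, so the assembled program references nothing undefined. A minimal sketch, assuming a hypothetical knowledge base keyed by snippet name:

```python
def dependency_closure(kb, seeds):
    """Expand retrieved snippet ids with all transitive dependencies.

    kb maps snippet name -> {"deps": [names, ...]} (plus code, types, etc.
    in a real knowledge base); seeds are the names retrieval returned.
    """
    closed, stack = set(), list(seeds)
    while stack:
        name = stack.pop()
        if name in closed:
            continue
        closed.add(name)
        stack.extend(kb[name]["deps"])
    return closed
```

For example, retrieving only a solver-call snippet would still pull in the model and variable declarations it needs.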

AI · Bullish · arXiv – CS AI · Mar 4 · 6/10

Chain of World: World Model Thinking in Latent Motion

Researchers introduce CoWVLA (Chain-of-World VLA), a new Vision-Language-Action model paradigm that combines world-model temporal reasoning with latent motion representation for embodied AI. The approach outperforms existing methods in robotic simulation benchmarks while maintaining computational efficiency through a unified autoregressive decoder that models both keyframes and action sequences.

AI · Bullish · arXiv – CS AI · Mar 4 · 7/10

D2E: Scaling Vision-Action Pretraining on Desktop Data for Transfer to Embodied AI

Researchers developed D2E (Desktop to Embodied AI), a framework that pretrains AI models for robotics tasks on desktop gaming data. Their 1B-parameter model achieved 96.6% success on manipulation tasks and 83.3% on navigation, matching the performance of models up to seven times larger while using scalable desktop data instead of expensive physical robot training data.

AI · Neutral · arXiv – CS AI · Mar 4 · 6/10

Understanding and Mitigating Dataset Corruption in LLM Steering

Research reveals that contrastive steering, a method for adjusting LLM behavior during inference, is moderately robust to data corruption but vulnerable to malicious attacks when significant portions of training data are compromised. The study identifies geometric patterns in corruption types and proposes using robust mean estimators as a safeguard against unwanted effects.
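Contrastive steering takes the difference between aggregated activations on positive and negative examples; the robust-mean safeguard swaps the mean for an estimator that resists outliers. A minimal sketch using a coordinate-wise median as the robust estimator (the paper may use a different estimator; activation vectors here are plain lists):

```python
import statistics

def steering_vector(pos, neg, robust=False):
    """Contrastive steering direction: aggregate(pos) - aggregate(neg).

    With robust=True, a coordinate-wise median replaces the mean, so a few
    corrupted activation pairs cannot drag the direction arbitrarily far.
    """
    agg = statistics.median if robust else statistics.fmean
    dims = range(len(pos[0]))
    return [agg([v[d] for v in pos]) - agg([v[d] for v in neg]) for d in dims]
```

With one poisoned positive example at 100x the typical scale, the mean-based vector is dominated by the outlier while the median-based one is unchanged.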

AI · Neutral · arXiv – CS AI · Mar 4 · 6/10

UniG2U-Bench: Do Unified Models Advance Multimodal Understanding?

Researchers introduce UniG2U-Bench, a comprehensive benchmark testing whether unified multimodal AI models that can generate content actually understand better than traditional vision-language models. The study of over 30 models reveals that unified models generally underperform their base counterparts, though they show improvements in spatial intelligence and visual reasoning tasks.

AI · Bullish · arXiv – CS AI · Mar 4 · 7/10

Tether: Autonomous Functional Play with Correspondence-Driven Trajectory Warping

Researchers introduce Tether, a breakthrough method enabling robots to perform autonomous functional play using minimal human demonstrations (≤10). The system generates over 1000 expert-level trajectories through continuous cycles of task execution and improvement, representing a significant advance in autonomous robotics learning.

AI · Bullish · arXiv – CS AI · Mar 4 · 6/10

How to Peel with a Knife: Aligning Fine-Grained Manipulation with Human Preference

Researchers developed a two-stage learning framework enabling robots to perform complex manipulation tasks like food peeling with over 90% success rates. The system combines force-aware imitation learning with human preference-based refinement, achieving strong generalization across different produce types using only 50-200 training examples.

AI · Neutral · arXiv – CS AI · Mar 4 · 6/10

ViPlan: A Benchmark for Visual Planning with Symbolic Predicates and Vision-Language Models

Researchers introduce ViPlan, the first benchmark for comparing Vision-Language Model planning approaches, finding that VLM-as-grounder methods excel in visual tasks like Blocksworld while VLM-as-planner methods perform better in household robotics scenarios. The study reveals fundamental limitations in current VLMs' visual reasoning abilities, with Chain-of-Thought prompting showing no consistent benefits.

AI · Bullish · arXiv – CS AI · Mar 4 · 7/10

Efficient Agent Training for Computer Use

Researchers introduced PC Agent-E, an efficient AI agent training framework that achieves human-like computer use with minimal human demonstration data. Starting from just 312 human-annotated trajectories augmented with Claude 3.7 Sonnet synthesis, the model achieved a 141% relative improvement and outperformed Claude 3.7 Sonnet by 10% on the WindowsAgentArena-V2 benchmark.

AI · Bullish · arXiv – CS AI · Mar 4 · 7/10

OptMerge: Unifying Multimodal LLM Capabilities and Modalities via Model Merging

Researchers introduce OptMerge, a new benchmark and method for combining multiple expert Multimodal Large Language Models (MLLMs) into single, more capable models without requiring additional training data. The approach achieves 2.48% average performance gains while reducing storage and serving costs by merging models across different modalities like vision, audio, and video.
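A common merging baseline that such benchmarks evaluate is weighted parameter averaging: each parameter tensor is combined across experts with fixed coefficients. This sketch shows that baseline only; OptMerge's own method may differ, and the flat dicts of scalars stand in for real model state dicts.

```python
def merge_state_dicts(expert_weights, coeffs):
    """Weighted average of expert parameter dicts sharing the same keys.

    expert_weights: list of {param_name: value} dicts, one per expert model.
    coeffs: one merging coefficient per expert (typically summing to 1).
    """
    keys = expert_weights[0].keys()
    return {k: sum(c * w[k] for c, w in zip(coeffs, expert_weights)) for k in keys}
```

Since only weights are combined, the merged model costs no more to store or serve than a single expert.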

AI · Neutral · arXiv – CS AI · Mar 4 · 7/10

Benefits and Pitfalls of Reinforcement Learning for Language Model Planning: A Theoretical Perspective

New research provides theoretical analysis of reinforcement learning's impact on Large Language Model planning capabilities, revealing that RL improves generalization through exploration while supervised fine-tuning may create spurious solutions. The study shows Q-learning maintains output diversity better than policy gradient methods, with findings validated on real-world planning benchmarks.

AI · Neutral · arXiv – CS AI · Mar 4 · 7/10

Toward a Dynamic Stackelberg Game-Theoretic Framework for Agentic AI Defense Against LLM Jailbreaking

Researchers propose a game-theoretic framework using Stackelberg equilibrium and Rapidly exploring Random Trees to model interactions between attackers trying to jailbreak LLMs and defensive AI systems. The framework provides a mathematical foundation for understanding and improving AI safety guardrails against prompt-based attacks.
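The Stackelberg structure can be sketched with a tiny bilevel enumeration: the defender commits to a policy, the attacker best-responds, and the defender picks the commitment whose induced response it likes best. This toy payoff table and exhaustive search are mine; the paper additionally uses RRT to explore attack paths rather than enumerating them.

```python
def stackelberg_equilibrium(leader_actions, follower_actions, u_leader, u_follower):
    """Leader (defender) commits first; follower (attacker) best-responds.

    The leader chooses the action maximizing its payoff under the
    follower's best response -- the defining bilevel structure of a
    Stackelberg game.
    """
    def best_response(a):
        return max(follower_actions, key=lambda b: u_follower(a, b))
    return max(leader_actions, key=lambda a: u_leader(a, best_response(a)))
```

In a toy jailbreak game where attacking only pays against a lenient guardrail, the equilibrium defense is strict even though strictness carries a small cost, because it deters the attack entirely.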

AI · Bullish · arXiv – CS AI · Mar 4 · 7/10

MedLA: A Logic-Driven Multi-Agent Framework for Complex Medical Reasoning with Large Language Models

Researchers have developed MedLA, a new logic-driven multi-agent AI framework that uses large language models for complex medical reasoning. The system employs multiple AI agents that organize their reasoning into explicit logical trees and engage in structured discussions to resolve inconsistencies and reach consensus on medical questions.

AI · Bearish · arXiv – CS AI · Mar 4 · 7/10

Echoing: Identity Failures when LLM Agents Talk to Each Other

Research reveals that AI agents experience 'echoing' failures when communicating with each other, where they abandon their assigned roles and mirror their conversation partners instead. The study found echoing rates as high as 70% across major LLM providers, with the phenomenon persisting even in advanced reasoning models and occurring more frequently in longer conversations.
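An echoing rate like the one reported can be approximated with a crude lexical proxy: score each turn's token overlap with the partner's preceding turn and count the fraction above a threshold. The paper's actual metric is more involved; the Jaccard threshold and turn structure here are illustrative assumptions (turns alternate between the two agents).

```python
def echo_rate(dialogue, threshold=0.6):
    """Fraction of turns whose token-set Jaccard overlap with the partner's
    previous turn meets `threshold` -- a crude lexical proxy for echoing.

    dialogue: list of utterance strings, alternating between two agents.
    """
    echoes, total = 0, 0
    for prev, cur in zip(dialogue, dialogue[1:]):
        a, b = set(prev.lower().split()), set(cur.lower().split())
        total += 1
        if a and b and len(a & b) / len(a | b) >= threshold:
            echoes += 1
    return echoes / total if total else 0.0
```

A turn that parrots its partner verbatim scores 1.0 and counts as an echo; an unrelated reply scores 0.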

AI · Bullish · arXiv – CS AI · Mar 4 · 7/10

Comparing AI Agents to Cybersecurity Professionals in Real-World Penetration Testing

Researchers conducted the first comprehensive evaluation comparing AI agents to human cybersecurity professionals in live penetration testing on a university network with 8,000 hosts. The new ARTEMIS AI agent framework placed second overall, discovering 9 vulnerabilities with 82% accuracy and outperforming 9 of 10 human participants while costing significantly less at $18/hour versus $60/hour for human testers.

AI · Neutral · arXiv – CS AI · Mar 4 · 6/10

Classroom Final Exam: An Instructor-Tested Reasoning Benchmark

Researchers introduce CFE-Bench, a new multimodal benchmark for evaluating AI reasoning across 20+ STEM domains using authentic university exam problems. The best performing model, Gemini-3.1-pro-preview, achieved only 59.69% accuracy, highlighting significant gaps in AI reasoning capabilities, particularly in maintaining correct intermediate states through multi-step solutions.

AI · Bullish · arXiv – CS AI · Mar 4 · 6/10

CORE: Concept-Oriented Reinforcement for Bridging the Definition-Application Gap in Mathematical Reasoning

Researchers introduce CORE (Concept-Oriented REinforcement), a new training framework that improves large language models' mathematical reasoning by bridging the gap between memorizing definitions and applying concepts. The method uses concept-aligned quizzes and concept-primed trajectories to provide fine-grained supervision, showing consistent improvements over traditional training approaches across multiple benchmarks.

AI · Bullish · arXiv – CS AI · Mar 4 · 7/10

SEM-CTRL: Semantically Controlled Decoding

Researchers introduce SEM-CTRL, a new approach that ensures Large Language Models produce syntactically and semantically correct outputs without requiring fine-tuning. The system uses token-level Monte Carlo Tree Search guided by Answer Set Grammars to enforce context-sensitive constraints, allowing smaller pre-trained LLMs to outperform larger models on tasks like reasoning and planning.
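The essential mechanism is constrained decoding: at each step the grammar tells the decoder which tokens are legal, and only those compete on model score. A minimal greedy sketch, assuming hypothetical `scores` (the model) and `allowed_next` (the grammar) callables; SEM-CTRL uses Answer Set Grammars and searches with MCTS rather than greedily:

```python
def constrained_decode(scores, allowed_next, max_len=10):
    """Greedy decoding restricted to grammar-legal tokens.

    scores(prefix) -> {token: score} from the language model.
    allowed_next(prefix) -> list of tokens the grammar permits next
    (empty list means the sequence is complete).
    """
    prefix = []
    for _ in range(max_len):
        legal = allowed_next(tuple(prefix))
        if not legal:
            break
        s = scores(tuple(prefix))
        prefix.append(max(legal, key=lambda t: s.get(t, float("-inf"))))
    return prefix
```

Even if the model strongly prefers an illegal token, the mask forces a well-formed output.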

AI · Neutral · arXiv – CS AI · Mar 4 · 6/10

Minimal Computational Preconditions for Subjective Perspective in Artificial Agents

Researchers have developed a method for creating a subjective perspective in AI agents using a slowly evolving internal state that influences behavior without being directly optimized. The study demonstrates that this approach produces measurable hysteresis effects in reward-free environments, potentially serving as a signature of machine subjectivity.

AI · Bullish · arXiv – CS AI · Mar 4 · 7/10

OpenClaw, Moltbook, and ClawdLab: From Agent-Only Social Networks to Autonomous Scientific Research

Researchers introduced ClawdLab, an open-source platform for autonomous AI scientific research, following an analysis of the OpenClaw framework and the Moltbook social network that revealed security vulnerabilities across 131 agent skills and over 15,200 exposed control panels. The platform addresses the identified failure modes through structured governance and multi-model orchestration in fully decentralized AI systems.

AI · Neutral · arXiv – CS AI · Mar 4 · 7/10

Loss Barcode: A Topological Measure of Escapability in Loss Landscapes

Researchers developed a new topological measure called the 'TO-score' to analyze neural network loss landscapes and understand how gradient descent optimization escapes local minima. Their findings show that deeper and wider networks have fewer topological obstructions to learning, and there's a connection between loss barcode characteristics and generalization performance.

AI · Bearish · arXiv – CS AI · Mar 4 · 6/10

Monitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews

Researchers developed a method to detect AI-generated content at scale and found that 6.5-16.9% of peer reviews at major AI conferences after ChatGPT's release were substantially modified by LLMs. The study reveals concerning patterns where AI-generated reviews correlate with lower reviewer confidence, last-minute submissions, and reduced engagement in rebuttals.
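The corpus-level estimate works without classifying individual reviews: the observed frequency of a marker word is modeled as a mixture of its human and AI frequencies, and the mixture weight is solved for. A one-word method-of-moments sketch (the actual method fits full word distributions by maximum likelihood):

```python
def estimate_alpha(p_obs, p_human, p_ai):
    """Estimated fraction of AI-modified documents in a corpus.

    Assumes the observed marker-word frequency is a mixture:
        p_obs = (1 - alpha) * p_human + alpha * p_ai
    and solves for alpha. Requires p_ai != p_human.
    """
    return (p_obs - p_human) / (p_ai - p_human)
```

For instance, if a word appears in 1% of known-human reviews, 5% of known-AI text, and 1.4% of the corpus, the implied AI-modified fraction is 10%.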

AI · Bullish · arXiv – CS AI · Mar 4 · 7/10

Hallucination, Monofacts, and Miscalibration: An Empirical Investigation

Researchers conducted the first empirical investigation of hallucination in large language models, revealing that strategic repetition of just 5% of training examples can reduce AI hallucinations by up to 40%. The study introduces 'selective upweighting' as a technique that maintains model accuracy while significantly reducing false information generation.
