y0news

AI Pulse News

Models, papers, tools. 17,716 articles with AI-powered sentiment analysis and key takeaways.

AI · Bearish · arXiv – CS AI · Feb 27 · 7/10

Obscure but Effective: Classical Chinese Jailbreak Prompt Optimization via Bio-Inspired Search

Researchers developed CC-BOS, a framework that uses classical Chinese text to conduct more effective jailbreak attacks on Large Language Models. The method exploits the conciseness and obscurity of classical Chinese to bypass safety constraints, using bio-inspired optimization techniques to automatically generate adversarial prompts.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Enhancing CVRP Solver through LLM-driven Automatic Heuristic Design

Researchers developed AILS-AHD, a novel approach using Large Language Models to solve the Capacitated Vehicle Routing Problem (CVRP) more efficiently. The LLM-driven method achieved new best-known solutions for 8 out of 10 instances in large-scale benchmarks, demonstrating superior performance over existing state-of-the-art solvers.

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10

Vibe Researching as Wolf Coming: Can AI Agents with Skills Replace or Augment Social Scientists?

A research paper introduces the concept of 'vibe researching' where AI agents can autonomously execute entire research pipelines from idea to submission using specialized skills. The study analyzes how AI agents excel at speed and methodological tasks but struggle with theoretical originality and tacit knowledge, creating a cognitive rather than sequential delegation boundary in research workflows.

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10

Epistemic Filtering and Collective Hallucination: A Jury Theorem for Confidence-Calibrated Agents

Researchers propose a new framework for collective decision-making where AI agents can abstain from voting when uncertain, extending the Condorcet Jury Theorem to confidence-gated settings. The study shows this selective participation approach can improve group accuracy and potentially reduce hallucinations in large language model systems.
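The effect of confidence-gated abstention on majority accuracy can be shown with a toy simulation. The accuracy model below (correctness probability rising linearly with confidence) is invented for illustration and is not the paper's formal setting:

```python
import random

def majority_accuracy(n_agents, trials, threshold, rng):
    """Toy confidence-gated majority vote: agents whose confidence falls
    below the threshold abstain; the rest vote and the majority decides."""
    wins = 0
    for _ in range(trials):
        yes = no = 0
        for _ in range(n_agents):
            confidence = rng.random()
            # assumption: accuracy rises with confidence (0.3 .. 0.9 correct)
            p_correct = 0.3 + 0.6 * confidence
            if confidence < threshold:
                continue  # abstain: this agent casts no vote
            if rng.random() < p_correct:
                yes += 1  # vote for the true option
            else:
                no += 1
        wins += yes > no
    return wins / trials

rng = random.Random(0)
acc_all = majority_accuracy(25, 2000, threshold=0.0, rng=rng)    # everyone votes
acc_gated = majority_accuracy(25, 2000, threshold=0.7, rng=rng)  # uncertain agents abstain
```

Under this toy model, letting low-confidence agents abstain raises group accuracy, which is the qualitative claim of the confidence-gated jury theorem.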

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Towards Autonomous Memory Agents

Researchers introduce U-Mem, an autonomous memory agent system that actively acquires and validates knowledge for large language models. The system uses cost-aware knowledge extraction and semantic Thompson sampling to improve performance, showing significant gains on benchmarks like HotpotQA and AIME25.
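The summary mentions Thompson sampling; a minimal Beta-Bernoulli sketch shows how such a sampler would let a memory agent concentrate queries on the most useful knowledge source. The source payoffs below are invented, and U-Mem's "semantic" variant is certainly more involved:

```python
import random

class ThompsonSampler:
    """Beta-Bernoulli Thompson sampling over candidate actions
    (here: which knowledge source a memory agent queries next)."""
    def __init__(self, n_arms):
        self.alpha = [1.0] * n_arms  # Beta prior: successes + 1
        self.beta = [1.0] * n_arms   # Beta prior: failures + 1

    def select(self):
        # Sample a plausible payoff for each arm, pick the best sample.
        samples = [random.betavariate(a, b)
                   for a, b in zip(self.alpha, self.beta)]
        return max(range(len(samples)), key=samples.__getitem__)

    def update(self, arm, reward):
        if reward:
            self.alpha[arm] += 1
        else:
            self.beta[arm] += 1

random.seed(0)
true_usefulness = [0.2, 0.5, 0.8]  # hidden payoff of each source (invented)
ts = ThompsonSampler(3)
pulls = [0, 0, 0]
for _ in range(500):
    arm = ts.select()
    pulls[arm] += 1
    ts.update(arm, random.random() < true_usefulness[arm])
```

After a few hundred rounds the sampler spends most of its budget on the genuinely most useful source while still occasionally probing the others.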

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

ArchAgent: Agentic AI-driven Computer Architecture Discovery

ArchAgent, an AI-driven system built on AlphaEvolve, has achieved breakthrough results in automated computer architecture discovery by designing state-of-the-art cache replacement policies. The system achieved 5.3% performance improvements in just 2 days and 0.9% improvements in 18 days, arriving at solutions 3-5x faster than human designers.

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10

VeRO: An Evaluation Harness for Agents to Optimize Agents

Researchers introduced VeRO (Versioning, Rewards, and Observations), a new evaluation framework for testing AI coding agents that can optimize other AI agents through iterative improvement cycles. The system provides reproducible benchmarks and structured execution traces to systematically measure how well coding agents can improve target agents' performance.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

The Trinity of Consistency as a Defining Principle for General World Models

Researchers propose a 'Trinity of Consistency' framework for developing General World Models in AI, consisting of Modal, Spatial, and Temporal consistency principles. They introduce CoW-Bench, a new benchmark for evaluating video generation models and unified multimodal models, aiming to establish a principled pathway toward AGI-capable world simulation systems.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Mirroring the Mind: Distilling Human-Like Metacognitive Strategies into Large Language Models

Researchers propose Metacognitive Behavioral Tuning (MBT), a new framework that addresses structural fragility in Large Reasoning Models by injecting human-like self-regulatory control into AI thought processes. The approach reduces reasoning collapse and improves accuracy while consuming fewer computational tokens across multi-hop question-answering benchmarks.

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10

A Decision-Theoretic Formalisation of Steganography With Applications to LLM Monitoring

Researchers have developed a new decision-theoretic framework to detect steganographic capabilities in large language models, which could help identify when AI systems are hiding information to evade oversight. The method introduces 'generalized V-information' and a 'steganographic gap' measure to quantify hidden communication without requiring reference distributions.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Molmo2: Open Weights and Data for Vision-Language Models with Video Understanding and Grounding

Molmo2 is a new open-source family of vision-language models that achieves state-of-the-art performance among open models, particularly excelling in video understanding and pixel-level grounding tasks. The research introduces 7 new video datasets and 2 multi-image datasets collected without using proprietary VLMs, along with an 8B parameter model that outperforms existing open-weight models and even some proprietary models on specific tasks.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Agent Behavioral Contracts: Formal Specification and Runtime Enforcement for Reliable Autonomous AI Agents

Researchers introduce Agent Behavioral Contracts (ABC), a formal framework for specifying and enforcing reliable behavior in autonomous AI agents. The system addresses critical issues of drift and governance failures in AI deployments by implementing runtime-enforceable contracts that achieve 88-100% compliance rates and significantly improve violation detection.
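A runtime-enforceable behavioral contract can be approximated in a few lines with a precondition/postcondition decorator; the ABC framework's actual specification language is surely richer than this sketch, and the `transfer` example is invented:

```python
def contract(pre=None, post=None):
    """Illustrative runtime contract: check a precondition on the inputs and
    a postcondition on the output, raising on violation instead of letting
    the agent's behavior drift silently."""
    def wrap(fn):
        def inner(*args, **kwargs):
            if pre and not pre(*args, **kwargs):
                raise ValueError(f"precondition violated for {fn.__name__}")
            out = fn(*args, **kwargs)
            if post and not post(out):
                raise ValueError(f"postcondition violated for {fn.__name__}")
            return out
        return inner
    return wrap

@contract(pre=lambda amount: 0 < amount <= 100,
          post=lambda receipt: receipt.get("status") == "ok")
def transfer(amount):
    # stand-in for an agent tool call
    return {"status": "ok", "amount": amount}
```

A call within the contract (`transfer(50)`) passes through; an out-of-bounds call (`transfer(500)`) is stopped at the boundary, which is the "violation detection" role a monitor plays at runtime.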

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

A Confidence-Variance Theory for Pseudo-Label Selection in Semi-Supervised Learning

Researchers introduce a Confidence-Variance (CoVar) theory framework that improves pseudo-label selection in semi-supervised learning by combining maximum confidence with residual-class variance. The method addresses overconfidence issues in deep networks and demonstrates consistent improvements across multiple datasets including PASCAL VOC, Cityscapes, CIFAR-10, and Mini-ImageNet.
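A rough sketch of a confidence-plus-residual-variance selection rule, with invented thresholds (the paper's actual CoVar criterion may combine the two signals differently):

```python
def residual_variance(probs):
    """Variance of the non-top class probabilities."""
    rest = sorted(probs, reverse=True)[1:]  # drop the predicted class
    mean = sum(rest) / len(rest)
    return sum((p - mean) ** 2 for p in rest) / len(rest)

def select_pseudo_labels(prob_rows, conf_min=0.7, var_max=0.002):
    """Keep a sample only if the top class is confident AND the leftover
    probability mass is spread evenly (no single strong rival class)."""
    return [i for i, probs in enumerate(prob_rows)
            if max(probs) >= conf_min and residual_variance(probs) <= var_max]

rows = [
    [0.90, 0.05, 0.05],  # confident, no rival     -> keep
    [0.72, 0.26, 0.02],  # confident, strong rival -> drop
    [0.50, 0.30, 0.20],  # not confident           -> drop
]
kept = select_pseudo_labels(rows)
```

The second row is where pure confidence thresholding fails: it clears 0.7, but the concentrated rival mass signals an overconfident prediction, and the variance term filters it out.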

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Versor: A Geometric Sequence Architecture

Researchers introduce Versor, a novel sequence architecture using Conformal Geometric Algebra that significantly outperforms Transformers with 200x fewer parameters and better interpretability. The architecture achieves superior performance on various tasks including N-body dynamics, topological reasoning, and standard benchmarks while offering linear temporal complexity and 100x speedup improvements.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

VQ-Style: Disentangling Style and Content in Motion with Residual Quantized Representations

Researchers have developed VQ-Style, a new AI method that uses Residual Vector Quantized Variational Autoencoders to separate style from content in human motion data. The technique enables effective motion style transfer without requiring fine-tuning for new styles, with applications in animation, gaming, and digital content creation.
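Residual vector quantization, the representation at the core of such models, can be sketched independently of the motion domain: each codebook stage quantizes the residual left over by the previous stages. The tiny random codebooks below are toys, not trained ones:

```python
import random

def nearest(codebook, v):
    """Index of the codeword closest to v (squared Euclidean distance)."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(codebook[i], v)))

def rvq_encode(v, codebooks):
    """Residual VQ: stage k quantizes what stages 1..k-1 missed."""
    residual = list(v)
    codes = []
    for cb in codebooks:
        idx = nearest(cb, residual)
        codes.append(idx)
        residual = [r - c for r, c in zip(residual, cb[idx])]
    return codes, residual

def rvq_decode(codes, codebooks):
    """Reconstruction = sum of the chosen codewords across stages."""
    out = [0.0] * len(codebooks[0][0])
    for idx, cb in zip(codes, codebooks):
        out = [o + c for o, c in zip(out, cb[idx])]
    return out

random.seed(1)
dim, k, stages = 4, 8, 3
codebooks = [[[random.gauss(0, 1) for _ in range(dim)] for _ in range(k)]
             for _ in range(stages)]
v = [0.5, -1.2, 0.3, 0.8]
codes, residual = rvq_encode(v, codebooks)
recon = rvq_decode(codes, codebooks)
```

By construction the reconstruction error equals the final residual exactly; stacking stages is what lets the coarse codes carry one factor (e.g. content) while finer residual codes can carry another (e.g. style).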

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Rethinking Cross-Modal Fine-Tuning: Optimizing the Interaction between Feature Alignment and Target Fitting

Researchers developed a theoretical framework to optimize cross-modal fine-tuning of pre-trained AI models, addressing the challenge of aligning new feature modalities with existing representation spaces. The approach introduces a novel concept of feature-label distortion and demonstrates improved performance over state-of-the-art methods across benchmark datasets.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Learning beyond Teacher: Generalized On-Policy Distillation with Reward Extrapolation

Researchers propose Generalized On-Policy Distillation (G-OPD), a new AI training framework that improves upon standard on-policy distillation by introducing flexible reference models and reward scaling factors. The method, particularly ExOPD with reward extrapolation, enables smaller student models to surpass their teacher's performance in math reasoning and code generation tasks.

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10

Delving into Adversarial Transferability on Image Classification: Review, Benchmark, and Evaluation

Researchers have conducted a comprehensive review of adversarial transferability in image classification, identifying gaps in standardized evaluation frameworks for transfer-based attacks. They propose a benchmark framework and categorize existing attacks into six distinct types to address biased assessments in current research.

AI × Crypto · Bullish · arXiv – CS AI · Feb 27 · 7/10

IMMACULATE: A Practical LLM Auditing Framework via Verifiable Computation

Researchers introduce IMMACULATE, a framework that audits commercial large language model API services to detect fraud like model substitution and token overbilling without requiring access to internal systems. The system uses verifiable computation to audit a small fraction of requests, achieving strong detection guarantees with less than 1% throughput overhead.
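The "audit a small fraction of requests" idea rests on simple sampling arithmetic. Under the toy assumptions that audits hit requests independently and fraud touches a fixed fraction of traffic (this is not IMMACULATE's actual guarantee, which uses verifiable computation):

```python
def detection_probability(n_requests, fraud_fraction, audit_rate):
    """Chance that at least one independent audit lands on a fraudulent
    request, when fraud affects a fixed fraction of all traffic."""
    fraudulent = int(n_requests * fraud_fraction)
    return 1.0 - (1.0 - audit_rate) ** fraudulent

# Auditing 1% of requests against a provider cheating on 5% of 100k calls:
p = detection_probability(100_000, fraud_fraction=0.05, audit_rate=0.01)
```

Even a 1% audit rate makes sustained fraud essentially certain to be caught at scale, which is why low-overhead spot-checking can still carry strong detection guarantees.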

AI · Bearish · arXiv – CS AI · Feb 27 · 7/10

Large-scale online deanonymization with LLMs

Researchers demonstrate that large language models can successfully deanonymize pseudonymous users across online platforms at scale, achieving up to 68% recall at 90% precision. The study shows LLMs can match users between platforms like Hacker News and LinkedIn, or across Reddit communities, using only unstructured text data.

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10

On the Equivalence of Random Network Distillation, Deep Ensembles, and Bayesian Inference

Researchers establish theoretical connections between Random Network Distillation (RND), deep ensembles, and Bayesian inference for uncertainty quantification in deep learning models. The study proves that RND's uncertainty signals are equivalent to deep ensemble predictive variance and can mirror Bayesian posterior distributions, providing a unified theoretical framework for efficient uncertainty quantification methods.
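Random Network Distillation itself is easy to sketch: a trainable predictor chases a frozen, randomly initialized target network, and the prediction error serves as the uncertainty signal the paper analyzes. The tiny hand-rolled tanh networks below are purely illustrative:

```python
import math
import random

class RND:
    """Random Network Distillation: a trainable predictor is fit to a frozen
    random target; prediction error is large on inputs it has not seen."""
    def __init__(self, dim, feat, seed=0):
        rng = random.Random(seed)
        self.target = [[rng.gauss(0, 1) for _ in range(dim)]
                       for _ in range(feat)]          # frozen random net
        self.pred = [[0.0] * dim for _ in range(feat)]  # trained online

    def _f(self, W, x):
        return [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W]

    def error(self, x):
        t, p = self._f(self.target, x), self._f(self.pred, x)
        return sum((a - b) ** 2 for a, b in zip(t, p))

    def train(self, x, lr=0.05, steps=50):
        for _ in range(steps):
            t = self._f(self.target, x)
            p = self._f(self.pred, x)
            for j, row in enumerate(self.pred):
                g = 2 * (p[j] - t[j]) * (1 - p[j] ** 2)  # d(error)/d(logit_j)
                for i in range(len(row)):
                    row[i] -= lr * g * x[i]

rnd = RND(dim=3, feat=8)
seen = [1.0, 0.0, 0.5]
novel = [-1.0, 1.0, -0.5]
err_before = rnd.error(seen)
for _ in range(20):
    rnd.train(seen)
err_seen, err_novel = rnd.error(seen), rnd.error(novel)
```

Training drives the error near zero on the familiar input while leaving it high on the novel one; the paper's result is that this gap behaves like deep-ensemble predictive variance.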

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Decision MetaMamba: Enhancing Selective SSM in Offline RL with Heterogeneous Sequence Mixing

Researchers propose Decision MetaMamba (DMM), a new AI model architecture that improves offline reinforcement learning by addressing information loss issues in Mamba-based models. The solution uses a dense layer-based sequence mixer and modified positional structure to achieve state-of-the-art performance with fewer parameters.

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10

Why Pass@k Optimization Can Degrade Pass@1: Prompt Interference in LLM Post-training

Researchers identify a critical trade-off in AI model training where optimizing for Pass@k metrics (multiple attempts) degrades Pass@1 performance (single attempt). The study reveals this occurs due to gradient conflicts when the training process reweights toward low-success prompts, creating interference that hurts single-shot performance.
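For context, Pass@k is usually computed with the unbiased estimator introduced with HumanEval: given n generations of which c pass, pass@k = 1 - C(n-c, k)/C(n, k). The example prompt numbers below are invented:

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k estimator: probability that at least one of k samples
    drawn (without replacement) from n generations, c of them correct, passes."""
    if n - c < k:
        return 1.0  # any k-subset must contain a correct generation
    return 1.0 - comb(n - c, k) / comb(n, k)

# A hard prompt with 3 correct generations out of 20:
p1 = pass_at_k(20, 3, 1)  # single attempt
p8 = pass_at_k(20, 3, 8)  # eight attempts
```

Here p1 = 0.15 while p8 is above 0.8: low-pass@1 prompts contribute disproportionately to pass@k, which is exactly why reweighting training toward them can interfere with single-attempt performance.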

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

AngelSlim: A more accessible, comprehensive, and efficient toolkit for large model compression

Tencent Hunyuan team introduces AngelSlim, a comprehensive toolkit for large model compression featuring quantization, speculative decoding, and pruning techniques. The toolkit includes the first industrially viable 2-bit large model (HY-1.8B-int2) and achieves 1.8x to 2.0x throughput gains while maintaining output quality.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Hierarchical LLM-Based Multi-Agent Framework with Prompt Optimization for Multi-Robot Task Planning

Researchers developed a hierarchical multi-agent LLM framework that significantly improves multi-robot task planning by combining natural language processing with classical PDDL planners. The system uses prompt optimization and meta-learning to achieve success rates of up to 95% on compound tasks, outperforming previous state-of-the-art methods by substantial margins.

© 2026 y0.exchange