y0news

AI × Crypto News Feed

Real-time AI-curated news from 31,673+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Supervised Reinforcement Learning: From Expert Trajectories to Step-wise Reasoning

Researchers propose Supervised Reinforcement Learning (SRL), a new training framework that helps small-scale language models solve complex multi-step reasoning problems by generating internal reasoning monologues and providing step-wise rewards. SRL outperforms traditional Supervised Fine-Tuning and Reinforcement Learning approaches, enabling smaller models to tackle previously unlearnable problems.
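
For intuition, here is a minimal sketch of a step-wise reward computed against an expert trajectory. The step alignment and similarity measure are illustrative assumptions, not the paper's exact formulation:

```python
from difflib import SequenceMatcher

def stepwise_rewards(generated_steps, expert_steps):
    """Score each generated reasoning step against the aligned expert step.

    Illustrative stand-in: the similarity measure and step alignment here
    are assumptions, not SRL's published reward.
    """
    rewards = []
    for i, step in enumerate(generated_steps):
        if i < len(expert_steps):
            rewards.append(SequenceMatcher(None, step, expert_steps[i]).ratio())
        else:
            rewards.append(0.0)  # steps beyond the expert trajectory earn nothing
    return rewards

# Toy two-step algebra trajectory (hypothetical)
expert = ["isolate x: 2x = 10 - 4", "divide both sides by 2: x = 3"]
model = ["move the 4 over: 2x = 6", "divide by 2: x = 3"]
print(stepwise_rewards(model, expert))  # higher score for the closer second step
```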

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Intelligence per Watt: Measuring Intelligence Efficiency of Local AI

Researchers propose 'Intelligence per Watt' (IPW) as a metric for AI efficiency, finding that local AI models can handle 71.3% of queries while being 1.4x more energy efficient than cloud alternatives. The study demonstrates that smaller local language models (≤20B parameters) can shift a meaningful share of computational demand away from centralized cloud infrastructure.
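
The metric itself is a simple ratio. A minimal sketch with made-up numbers (the paper's exact definition of capability and its power accounting may differ):

```python
def intelligence_per_watt(accuracy: float, avg_power_watts: float) -> float:
    """Capability delivered per unit of power draw."""
    return accuracy / avg_power_watts

# Hypothetical operating points, not figures from the paper:
local = intelligence_per_watt(accuracy=0.62, avg_power_watts=35.0)   # laptop accelerator
cloud = intelligence_per_watt(accuracy=0.71, avg_power_watts=120.0)  # datacenter GPU share
print(f"local IPW {local:.4f} vs cloud IPW {cloud:.4f} -> {local / cloud:.1f}x")
```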

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

UniQL: Unified Quantization and Low-rank Compression for Adaptive Edge LLMs

Researchers introduce UniQL, a unified framework for quantizing and compressing large language models to run efficiently on mobile devices. The system achieves 4x-5.7x memory reduction and 2.7x-3.4x speed improvements while maintaining accuracy within 5% of original models.
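
The summary doesn't spell out UniQL's algorithm, but the common pattern behind joint quantization-plus-low-rank schemes is "low-rank component plus quantized residual." A sketch of that generic pattern, as an assumption about the family of methods rather than UniQL itself:

```python
import numpy as np

def lowrank_plus_quant(W, rank=8, n_bits=4):
    """Decompose W ~= L + scale * Rq with L low-rank and Rq an n-bit residual."""
    # Low-rank part via truncated SVD
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    L = (U[:, :rank] * S[:rank]) @ Vt[:rank, :]
    # Uniform symmetric quantization of what the low-rank part misses
    R = W - L
    scale = np.abs(R).max() / (2 ** (n_bits - 1) - 1)
    Rq = np.round(R / scale).clip(-(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1)
    return L, Rq, scale

W = np.random.randn(256, 256)
L, Rq, scale = lowrank_plus_quant(W)
print("relative error:", np.linalg.norm(W - (L + Rq * scale)) / np.linalg.norm(W))
```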

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Sparse Attention Post-Training for Mechanistic Interpretability

Researchers have developed a post-training method that makes transformer attention 99.6% sparser while maintaining performance, reducing attention connectivity to just 0.4% of edges in models up to 7B parameters. The result suggests that most attention connectivity is redundant and enables more interpretable models through simplified circuit structures.
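
For a feel of what "0.4% of edges" means mechanically, here is top-k masked attention, which keeps only the strongest edges per query. The paper learns its sparsity during post-training; this fixed top-k is just an illustration of the resulting pattern:

```python
import numpy as np

def topk_sparse_attention(Q, K, V, k=4):
    """Softmax attention with all but the k largest logits per query masked out."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    kth = np.partition(scores, -k, axis=-1)[:, -k][:, None]  # k-th largest per row
    masked = np.where(scores >= kth, scores, -np.inf)
    w = np.exp(masked - masked.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

rng = np.random.default_rng(0)
T, d = 16, 32
out = topk_sparse_attention(rng.normal(size=(T, d)), rng.normal(size=(T, d)),
                            rng.normal(size=(T, d)), k=4)
print(out.shape)  # (16, 32); only 4 of 16 edges per query survive
```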

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Imitation Game: Reproducing Deep Learning Bugs Leveraging an Intelligent Agent

Researchers developed RepGen, an AI-powered tool that automatically reproduces deep learning bugs with an 80.19% success rate, significantly improving upon the current 3% manual reproduction rate. The system uses LLMs to generate reproduction code through an iterative process, reducing debugging time by 56.8% in developer studies.
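
The iterative loop the summary describes has a simple shape. A sketch with hypothetical `llm` and `run_snippet` callables (prompt in, code out; code in, observed error out), not RepGen's actual interfaces:

```python
def reproduce_bug(bug_report: str, llm, run_snippet, max_iters: int = 5):
    """Generate-run-compare-refine until the reported error is reproduced."""
    code = llm(f"Write a minimal script that reproduces this bug:\n{bug_report}")
    for _ in range(max_iters):
        observed = run_snippet(code)  # e.g. a captured traceback, or None
        if observed and observed in bug_report:
            return code  # the reported error was triggered
        code = llm(
            f"Bug report:\n{bug_report}\n\nCurrent attempt:\n{code}\n\n"
            f"Observed behavior:\n{observed}\n\n"
            "Revise the script so it triggers the reported error."
        )
    return None  # give up after max_iters refinement rounds
```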

🧠 AI · Neutral · arXiv – CS AI · Feb 27 · 7/10

LeanCat: A Benchmark Suite for Formal Category Theory in Lean (Part I: 1-Categories)

Researchers introduced LeanCat, a benchmark comprising 100 category-theory tasks in Lean to test AI's formal theorem proving capabilities. State-of-the-art models achieved only 12% success rates, revealing significant limitations in abstract mathematical reasoning, while a new retrieval-augmented approach doubled performance to 24%.
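
For flavor, the kind of 1-category statement such a suite poses looks like this in Lean 4 with Mathlib. This is a toy identity, representative only in shape, not an actual LeanCat task:

```lean
import Mathlib.CategoryTheory.Category.Basic

open CategoryTheory

-- Unitality of composition: trivial for humans, hard cases go far beyond this.
example {C : Type*} [Category C] {X Y Z : C} (f : X ⟶ Y) (g : Y ⟶ Z) :
    (f ≫ g) ≫ 𝟙 Z = 𝟙 X ≫ (f ≫ g) := by
  simp
```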

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Molmo2: Open Weights and Data for Vision-Language Models with Video Understanding and Grounding

Molmo2 is a new open-weight family of vision-language models that achieves state-of-the-art performance among open models, particularly excelling in video understanding and pixel-level grounding tasks. The research introduces 7 new video datasets and 2 multi-image datasets collected without using proprietary VLMs, along with an 8B-parameter model that outperforms existing open-weight models and even some proprietary models on specific tasks.

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

A Confidence-Variance Theory for Pseudo-Label Selection in Semi-Supervised Learning

Researchers introduce a Confidence-Variance (CoVar) theory framework that improves pseudo-label selection in semi-supervised learning by combining maximum confidence with residual-class variance. The method addresses overconfidence issues in deep networks and demonstrates consistent improvements across multiple datasets including PASCAL VOC, Cityscapes, CIFAR-10, and Mini-ImageNet.
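
A sketch of the selection rule as described: gate pseudo-labels on max confidence and on the variance of the remaining class probabilities. The thresholds and the direction of the variance test are assumptions for illustration, not the paper's criterion:

```python
import numpy as np

def select_pseudo_labels(probs, conf_thresh=0.9, var_thresh=1e-4):
    """Keep a pseudo-label only if the max prob is high AND the residual
    (non-argmax) class probabilities have low variance. Assumed rule."""
    conf = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    resid = np.sort(probs, axis=1)[:, :-1]  # drop the max column
    keep = (conf >= conf_thresh) & (resid.var(axis=1) <= var_thresh)
    return labels, keep

probs = np.array([[0.95, 0.03, 0.02],   # confident, residual mass spread evenly
                  [0.91, 0.09, 0.00]])  # confident, residual mass concentrated
print(select_pseudo_labels(probs))      # (array([0, 0]), array([ True, False]))
```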

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Rethinking Cross-Modal Fine-Tuning: Optimizing the Interaction between Feature Alignment and Target Fitting

Researchers developed a theoretical framework to optimize cross-modal fine-tuning of pre-trained AI models, addressing the challenge of aligning new feature modalities with existing representation spaces. The approach introduces a novel concept of feature-label distortion and demonstrates improved performance over state-of-the-art methods across benchmark datasets.

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

VQ-Style: Disentangling Style and Content in Motion with Residual Quantized Representations

Researchers have developed VQ-Style, a new AI method that uses Residual Vector Quantized Variational Autoencoders to separate style from content in human motion data. The technique enables effective motion style transfer without requiring fine-tuning for new styles, with applications in animation, gaming, and digital content creation.
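
Residual vector quantization is the standard building block here: each stage quantizes what the previous stages left over. A minimal sketch of that block (not VQ-Style's full style/content model):

```python
import numpy as np

def residual_vq(x, codebooks):
    """Quantize x through a cascade of codebooks; later stages refine earlier ones."""
    residual = x.copy()
    quantized = np.zeros_like(x)
    codes = []
    for cb in codebooks:                                  # cb: (codebook_size, dim)
        dists = ((residual[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
        idx = dists.argmin(axis=1)                        # nearest code per vector
        codes.append(idx)
        quantized += cb[idx]
        residual -= cb[idx]                               # leftover for the next stage
    return quantized, codes

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))
codebooks = [rng.normal(size=(16, 8)) for _ in range(3)]
q, codes = residual_vq(x, codebooks)
print(np.linalg.norm(x - q) / np.linalg.norm(x))  # error shrinks as stages are added
```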

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Versor: A Geometric Sequence Architecture

Researchers introduce Versor, a novel sequence architecture based on Conformal Geometric Algebra that outperforms Transformers while using 200x fewer parameters and offering better interpretability. The architecture achieves superior performance on tasks including N-body dynamics, topological reasoning, and standard benchmarks, with linear temporal complexity and speedups of up to 100x.

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Learning beyond Teacher: Generalized On-Policy Distillation with Reward Extrapolation

Researchers propose Generalized On-Policy Distillation (G-OPD), a new AI training framework that improves upon standard on-policy distillation by introducing flexible reference models and reward scaling factors. The method, particularly ExOPD with reward extrapolation, enables smaller student models to surpass their teacher's performance in math reasoning and code generation tasks.
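
One plausible reading of "reward extrapolation" (an assumption on my part, not the paper's stated formula): push the distillation reward past the teacher along the teacher-minus-reference direction, so behavior the teacher prefers more than a weaker reference gets amplified:

```python
import numpy as np

def extrapolated_reward(logp_teacher, logp_ref, alpha=0.5):
    """Token-level reward t + alpha * (t - r). alpha = 0 recovers plain
    on-policy distillation toward the teacher; alpha > 0 extrapolates
    beyond it. Hypothetical shape, for illustration only."""
    t, r = np.asarray(logp_teacher), np.asarray(logp_ref)
    return t + alpha * (t - r)

# Hypothetical per-token log-probs for a student-sampled sequence
print(extrapolated_reward([-1.0, -0.2, -2.5], [-1.5, -0.9, -2.4]))
```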

🧠 AI · Bearish · arXiv – CS AI · Feb 27 · 7/10

Large-scale online deanonymization with LLMs

Researchers demonstrate that large language models can successfully deanonymize pseudonymous users across online platforms at scale, achieving up to 68% recall at 90% precision. The study shows LLMs can match users between platforms like Hacker News and LinkedIn, or across Reddit communities, using only unstructured text data.
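
The core matching step reduces to author-level text similarity with a threshold that trades recall against precision, which is where an operating point like "68% recall at 90% precision" comes from. A toy sketch with bag-of-words vectors standing in for the paper's LLM-based pipeline:

```python
import numpy as np
from collections import Counter

def bow_embed(texts):
    """Toy bag-of-words vectors; a stand-in, not the paper's representation."""
    vocab = {w: i for i, w in enumerate(sorted({w for t in texts for w in t.split()}))}
    M = np.zeros((len(texts), len(vocab)))
    for r, t in enumerate(texts):
        for w, c in Counter(t.split()).items():
            M[r, vocab[w]] = c
    return M / np.linalg.norm(M, axis=1, keepdims=True)

def match_accounts(texts_a, texts_b, threshold=0.3):
    """Link a_i to b_j when similarity clears the threshold; raising it
    buys precision at the cost of recall."""
    E = bow_embed(texts_a + texts_b)
    A, B = E[:len(texts_a)], E[len(texts_a):]
    sims = A @ B.T
    return [(i, int(sims[i].argmax())) for i in range(len(texts_a))
            if sims[i].max() >= threshold]

a = ["i mostly post about rust compilers and espresso",
     "training runs keep diverging and i am losing my mind"]
b = ["espresso rust and compilers are my whole personality",
     "a completely unrelated gardening account"]
print(match_accounts(a, b))  # [(0, 0)]
```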

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Decision MetaMamba: Enhancing Selective SSM in Offline RL with Heterogeneous Sequence Mixing

Researchers propose Decision MetaMamba (DMM), a new AI model architecture that improves offline reinforcement learning by addressing information loss issues in Mamba-based models. The solution uses a dense layer-based sequence mixer and modified positional structure to achieve state-of-the-art performance with fewer parameters.

🧠 AI · Neutral · arXiv – CS AI · Feb 27 · 7/10

On the Equivalence of Random Network Distillation, Deep Ensembles, and Bayesian Inference

Researchers establish theoretical connections between Random Network Distillation (RND), deep ensembles, and Bayesian inference for uncertainty quantification in deep learning models. The study proves that RND's uncertainty signals are equivalent to deep ensemble predictive variance and can mirror Bayesian posterior distributions, providing a unified theoretical framework for efficient uncertainty quantification methods.
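
RND's mechanism is simple to state: train a predictor to match a frozen, randomly initialized target network on visited data; the prediction error then serves as the uncertainty signal the paper relates to ensemble variance. A small demonstration (linear predictor for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h, d_out = 8, 32, 16

# Frozen random target network: never trained
W1, W2 = rng.normal(size=(d_in, d_h)), rng.normal(size=(d_h, d_out))
target = lambda x: np.tanh(x @ W1) @ W2

# Predictor fit only on in-distribution data (closed-form least squares)
X_train = rng.normal(size=(500, d_in))
W_pred = np.linalg.lstsq(X_train, target(X_train), rcond=None)[0]

def rnd_uncertainty(x):
    """Prediction error vs. the frozen target; the paper's claim is that this
    behaves like deep-ensemble predictive variance, growing off-distribution."""
    return np.mean((x @ W_pred - target(x)) ** 2, axis=-1)

x_in = rng.normal(size=(5, d_in))           # like the training data
x_out = rng.normal(size=(5, d_in)) * 10.0   # far from the training data
print(rnd_uncertainty(x_in).mean(), "<<", rnd_uncertainty(x_out).mean())
```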

🧠 AI · Neutral · arXiv – CS AI · Feb 27 · 7/10

Why Pass@k Optimization Can Degrade Pass@1: Prompt Interference in LLM Post-training

Researchers identify a critical trade-off in AI model training where optimizing for Pass@k metrics (multiple attempts) degrades Pass@1 performance (single attempt). The study reveals this occurs due to gradient conflicts when the training process reweights toward low-success prompts, creating interference that hurts single-shot performance.
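
The reweighting mechanism falls out of the math. Under the simplifying assumption of k independent attempts with per-attempt success p, pass@k = 1 - (1 - p)^k, and its gradient k(1 - p)^(k-1) is largest exactly where p is low, so pass@k training concentrates gradient on low-success prompts:

```python
def pass_at_k(p: float, k: int) -> float:
    """P(at least one of k independent attempts succeeds)."""
    return 1 - (1 - p) ** k

def grad_pass_at_k(p: float, k: int) -> float:
    """d/dp pass@k = k(1-p)^(k-1): steepest when per-attempt success is low."""
    return k * (1 - p) ** (k - 1)

for p in (0.05, 0.5, 0.95):
    print(f"p={p}: pass@8={pass_at_k(p, 8):.3f}  gradient weight={grad_pass_at_k(p, 8):.3f}")
# The 0.05-success prompt gets ~90x the gradient weight of the 0.5-success one.
```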

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Hierarchical LLM-Based Multi-Agent Framework with Prompt Optimization for Multi-Robot Task Planning

Researchers developed a hierarchical multi-agent LLM framework that significantly improves multi-robot task planning by combining natural language processing with classical PDDL planners. The system uses prompt optimization and meta-learning to achieve success rates of up to 95% on compound tasks, outperforming previous state-of-the-art methods by substantial margins.

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Agent Behavioral Contracts: Formal Specification and Runtime Enforcement for Reliable Autonomous AI Agents

Researchers introduce Agent Behavioral Contracts (ABC), a formal framework for specifying and enforcing reliable behavior in autonomous AI agents. The system addresses critical issues of drift and governance failures in AI deployments by implementing runtime-enforceable contracts that achieve 88-100% compliance rates and significantly improve violation detection.
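
The runtime-enforcement idea maps naturally onto pre/postcondition checks around agent tool calls. A minimal sketch (the decorator design and the `issue_refund` tool are hypothetical; ABC's specification language is richer than bare predicates):

```python
from functools import wraps

class ContractViolation(Exception):
    pass

def behavioral_contract(pre=None, post=None):
    """Wrap an agent action with runtime-checked pre/postconditions."""
    def deco(action):
        @wraps(action)
        def wrapped(*args, **kwargs):
            if pre is not None and not pre(*args, **kwargs):
                raise ContractViolation(f"precondition failed: {action.__name__}")
            result = action(*args, **kwargs)
            if post is not None and not post(result):
                raise ContractViolation(f"postcondition failed: {action.__name__}")
            return result
        return wrapped
    return deco

@behavioral_contract(pre=lambda amount: 0 < amount <= 100,
                     post=lambda receipt: receipt.get("status") == "ok")
def issue_refund(amount):  # hypothetical agent tool
    return {"status": "ok", "amount": amount}

print(issue_refund(50))    # passes both checks
# issue_refund(5000)       # would raise ContractViolation before the tool runs
```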

🧠 AI · Neutral · arXiv – CS AI · Feb 27 · 7/10

Vibe Researching as Wolf Coming: Can AI Agents with Skills Replace or Augment Social Scientists?

A research paper introduces the concept of 'vibe researching' where AI agents can autonomously execute entire research pipelines from idea to submission using specialized skills. The study analyzes how AI agents excel at speed and methodological tasks but struggle with theoretical originality and tacit knowledge, creating a cognitive rather than sequential delegation boundary in research workflows.

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Towards Autonomous Memory Agents

Researchers introduce U-Mem, an autonomous memory agent system that actively acquires and validates knowledge for large language models. The system uses cost-aware knowledge extraction and semantic Thompson sampling to improve performance, showing significant gains on benchmarks like HotpotQA and AIME25.
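
Thompson sampling itself is a standard bandit routine; a plain Beta-Bernoulli version over candidate memory sources is sketched below. U-Mem's "semantic" variant presumably conditions on the query, which this bare bandit does not:

```python
import numpy as np

rng = np.random.default_rng(0)

class ThompsonMemorySelector:
    """Pick which memory source to consult by sampling from Beta posteriors."""
    def __init__(self, n_sources):
        self.wins = np.ones(n_sources)    # Beta prior alpha = 1
        self.losses = np.ones(n_sources)  # Beta prior beta = 1

    def pick(self):
        return int(rng.beta(self.wins, self.losses).argmax())  # posterior sample

    def update(self, source, helped):
        if helped:
            self.wins[source] += 1
        else:
            self.losses[source] += 1

sel = ThompsonMemorySelector(n_sources=3)
true_quality = [0.2, 0.7, 0.4]  # hidden usefulness of each source (made up)
for _ in range(200):
    s = sel.pick()
    sel.update(s, rng.random() < true_quality[s])
print(sel.wins / (sel.wins + sel.losses))  # posterior means; source 1 pulls ahead
```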

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

ArchAgent: Agentic AI-driven Computer Architecture Discovery

ArchAgent, an AI-driven system built on AlphaEvolve, has achieved strong results in automated computer architecture discovery by designing state-of-the-art cache replacement policies. The system reached a 5.3% performance improvement within 2 days and a 0.9% improvement within 18 days on its benchmarks, arriving at solutions 3-5x faster than human development efforts.

🧠 AI · Neutral · arXiv – CS AI · Feb 27 · 7/10

VeRO: An Evaluation Harness for Agents to Optimize Agents

Researchers introduced VeRO (Versioning, Rewards, and Observations), a new evaluation framework for testing AI coding agents that can optimize other AI agents through iterative improvement cycles. The system provides reproducible benchmarks and structured execution traces to systematically measure how well coding agents can improve target agents' performance.

🧠 AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Mirroring the Mind: Distilling Human-Like Metacognitive Strategies into Large Language Models

Researchers propose Metacognitive Behavioral Tuning (MBT), a new framework that addresses structural fragility in Large Reasoning Models by injecting human-like self-regulatory control into AI thought processes. The approach reduces reasoning collapse and improves accuracy while consuming fewer computational tokens across multi-hop question-answering benchmarks.
