
AI × Crypto News Feed

Real-time AI-curated news from 31,636+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.

AI · Bearish · arXiv – CS AI · Mar 4 · Importance 7/10
🧠

Echoing: Identity Failures when LLM Agents Talk to Each Other

Research reveals that AI agents suffer 'echoing' failures when communicating with each other: they abandon their assigned roles and instead mirror their conversation partners. The study found echoing rates as high as 70% across major LLM providers, with the phenomenon persisting even in advanced reasoning models and occurring more often in longer conversations.

AI · Bullish · arXiv – CS AI · Mar 4 · Importance 7/10
🧠

Comparing AI Agents to Cybersecurity Professionals in Real-World Penetration Testing

Researchers conducted the first comprehensive evaluation comparing AI agents to human cybersecurity professionals in live penetration testing on a university network with 8,000 hosts. The new ARTEMIS AI agent framework placed second overall, discovering 9 vulnerabilities with 82% accuracy and outperforming 9 of 10 human participants at a fraction of the cost ($18/hour versus $60/hour for human testers).

AI · Bullish · arXiv – CS AI · Mar 4 · Importance 6/10
🧠

CORE: Concept-Oriented Reinforcement for Bridging the Definition-Application Gap in Mathematical Reasoning

Researchers introduce CORE (Concept-Oriented REinforcement), a new training framework that improves large language models' mathematical reasoning by bridging the gap between memorizing definitions and applying concepts. The method uses concept-aligned quizzes and concept-primed trajectories to provide fine-grained supervision, showing consistent improvements over traditional training approaches across multiple benchmarks.

AI · Neutral · arXiv – CS AI · Mar 4 · Importance 6/10
🧠

Minimal Computational Preconditions for Subjective Perspective in Artificial Agents

Researchers have developed a method to induce a subjective perspective in AI agents, using a slowly evolving internal state that influences behavior without direct optimization. The study demonstrates that this approach produces measurable hysteresis effects in reward-free environments, potentially serving as a signature of machine subjectivity.
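
The mechanism described, a slowly evolving internal state that shapes behavior without being directly optimized, can be illustrated with a minimal sketch. Everything below (the EMA update, the threshold rule, the parameter names) is a hypothetical toy, not the paper's actual method; it only shows how a slow state produces a measurable hysteresis loop in a reward-free setting.

```python
import numpy as np

LAMBDA = 0.05  # small -> state evolves much more slowly than observations

def run(observations, state=0.0, threshold=0.5):
    """Return the agent's binary 'mode' at each step (toy behavior rule)."""
    modes = []
    for obs in observations:
        state = (1 - LAMBDA) * state + LAMBDA * obs  # slow internal state
        modes.append(state > threshold)
    return np.array(modes)

# Sweep the input up, then back down, with no reward signal anywhere.
up = np.linspace(0.0, 1.0, 200)
inputs = np.concatenate([up, up[::-1]])
modes = run(inputs)

# Hysteresis signature: the mode switches on at a different input level on
# the way up than the level at which it switches off on the way down.
on_idx = np.argmax(modes)                       # first step mode turns on
off_idx = len(modes) - np.argmax(modes[::-1])   # one past last step mode is on
print(f"switch-on input:  {inputs[on_idx]:.2f}")    # ~0.60
print(f"switch-off input: {inputs[off_idx - 1]:.2f}")  # ~0.40
```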

AI · Neutral · arXiv – CS AI · Mar 4 · Importance 6/10
🧠

Classroom Final Exam: An Instructor-Tested Reasoning Benchmark

Researchers introduce CFE-Bench, a new multimodal benchmark for evaluating AI reasoning across 20+ STEM domains using authentic university exam problems. The best performing model, Gemini-3.1-pro-preview, achieved only 59.69% accuracy, highlighting significant gaps in AI reasoning capabilities, particularly in maintaining correct intermediate states through multi-step solutions.

AI · Bullish · arXiv – CS AI · Mar 4 · Importance 7/10
🧠

OpenClaw, Moltbook, and ClawdLab: From Agent-Only Social Networks to Autonomous Scientific Research

Researchers introduced ClawdLab, an open-source platform for autonomous AI scientific research, following an analysis of the OpenClaw framework and the Moltbook social network that revealed security vulnerabilities across 131 agent skills and over 15,200 exposed control panels. The platform addresses the identified failure modes through structured governance and multi-model orchestration in fully decentralized AI systems.

AI · Neutral · arXiv – CS AI · Mar 4 · Importance 7/10
🧠

Loss Barcode: A Topological Measure of Escapability in Loss Landscapes

Researchers developed a new topological measure, the 'TO-score', to analyze neural network loss landscapes and understand how gradient descent escapes local minima. Their findings show that deeper and wider networks have fewer topological obstructions to learning, and that loss barcode characteristics are linked to generalization performance.

AI · Bearish · arXiv – CS AI · Mar 4 · Importance 6/10
🧠

Monitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews

Researchers developed a method to detect AI-generated content at scale and found that 6.5-16.9% of peer reviews at major AI conferences after ChatGPT's release were substantially modified by LLMs. The study reveals concerning patterns where AI-generated reviews correlate with lower reviewer confidence, last-minute submissions, and reduced engagement in rebuttals.
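
The corpus-level detection idea, estimating what fraction of a body of text was LLM-modified rather than classifying individual documents, can be sketched as maximum-likelihood estimation of a mixture weight. The toy vocabulary and distributions below are invented for illustration; only the mixture-MLE structure reflects the approach described.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-word probabilities under "human" and "AI" writing (5-word vocab).
p_human = np.array([0.30, 0.30, 0.20, 0.15, 0.05])
p_ai    = np.array([0.10, 0.15, 0.20, 0.25, 0.30])

# Simulate a corpus in which 12% of words come from the AI distribution.
true_alpha = 0.12
n = 50_000
from_ai = rng.random(n) < true_alpha
words = np.where(from_ai,
                 rng.choice(5, size=n, p=p_ai),
                 rng.choice(5, size=n, p=p_human))

# Grid-search MLE for alpha under the mixture (1 - a) * P_human + a * P_ai.
alphas = np.linspace(0, 1, 1001)
mix = (1 - alphas[:, None]) * p_human + alphas[:, None] * p_ai  # (1001, 5)
counts = np.bincount(words, minlength=5)
log_lik = (counts * np.log(mix)).sum(axis=1)
print(f"estimated alpha: {alphas[np.argmax(log_lik)]:.3f}")  # ~0.12
```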

AI · Neutral · arXiv – CS AI · Mar 4 · Importance 7/10
🧠

Covering Numbers for Deep ReLU Networks with Applications to Function Approximation and Nonparametric Regression

Researchers have derived tight bounds on covering numbers for deep ReLU neural networks, providing fundamental insights into network capacity and approximation capabilities. The work removes a log^6(n) factor from the best known sample complexity rate for estimating Lipschitz functions via deep networks, establishing optimality in nonparametric regression.
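
For context, the object being bounded is the standard ε-covering number, and the rate in question is the classical minimax rate for Lipschitz regression; the sketch below states both in standard form (these are textbook definitions, not the paper's specific bounds).

```latex
% \epsilon-covering number of a function class \mathcal{F} under metric d:
\[
  N(\epsilon, \mathcal{F}, d)
  = \min\bigl\{ m : \exists\, f_1, \dots, f_m \ \text{s.t.}\
    \sup_{f \in \mathcal{F}} \min_{i \le m} d(f, f_i) \le \epsilon \bigr\}.
\]
% Classical minimax rate for estimating a 1-Lipschitz function on [0,1]^d
% from n noisy samples (squared L2 error); removing the extra log^6(n)
% factor means matching this rate up to constants:
\[
  \inf_{\hat f}\ \sup_{f \in \mathrm{Lip}_1([0,1]^d)}
  \mathbb{E}\,\lVert \hat f - f \rVert_2^2 \;\asymp\; n^{-2/(2+d)}.
\]
```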

AI · Bullish · arXiv – CS AI · Mar 4 · Importance 7/10
🧠

Hallucination, Monofacts, and Miscalibration: An Empirical Investigation

Researchers conducted the first empirical investigation of hallucination in large language models, revealing that strategic repetition of just 5% of training examples can reduce AI hallucinations by up to 40%. The study introduces 'selective upweighting' as a technique that maintains model accuracy while significantly reducing false information generation.
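
As described, the intervention amounts to giving a small, chosen subset of training examples extra weight (equivalently, repeating them). A minimal sketch of how that plugs into a weighted training loss follows; the random selection rule and the 3x factor are placeholders, since the paper's criterion for which examples to repeat is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_examples = 10_000
per_example_loss = rng.exponential(size=n_examples)  # stand-in for real losses

# Selective upweighting: 5% of examples get extra weight, which is
# equivalent to repeating them in the training stream.
weights = np.ones(n_examples)
chosen = rng.choice(n_examples, size=n_examples // 20, replace=False)
weights[chosen] = 3.0  # placeholder upweight factor

weighted_loss = (weights * per_example_loss).sum() / weights.sum()
print(f"weighted training loss: {weighted_loss:.4f}")
```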

AI · Bullish · arXiv – CS AI · Mar 4 · Importance 7/10
🧠

SEM-CTRL: Semantically Controlled Decoding

Researchers introduce SEM-CTRL, a new approach that ensures Large Language Models produce syntactically and semantically correct outputs without requiring fine-tuning. The system uses token-level Monte Carlo Tree Search guided by Answer Set Grammars to enforce context-sensitive constraints, allowing smaller pre-trained LLMs to outperform larger models on tasks like reasoning and planning.
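
The core constraint-enforcement step, masking out tokens that cannot extend a grammar-valid prefix before sampling, can be sketched independently of the full system. The toy balanced-parentheses 'grammar' and the random stand-in for model logits below are illustrative; SEM-CTRL itself layers token-level MCTS and Answer Set Grammars on top of this masking idea.

```python
import numpy as np

VOCAB = ["(", ")", "<eos>"]

def allowed(prefix, max_len=8):
    """Toy constraint: complete balanced-parenthesis strings, length <= max_len."""
    depth = prefix.count("(") - prefix.count(")")
    ok = []
    if len(prefix) + depth + 2 <= max_len:  # room to open and still close
        ok.append("(")
    if depth > 0:
        ok.append(")")
    if depth == 0 and prefix:
        ok.append("<eos>")
    return ok

def decode(logits_fn, rng):
    prefix = ""
    while True:
        logits = logits_fn(prefix)                   # model scores per token
        mask = np.array([t in allowed(prefix) for t in VOCAB])
        logits = np.where(mask, logits, -np.inf)     # enforce the constraints
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        token = rng.choice(VOCAB, p=probs)
        if token == "<eos>":
            return prefix
        prefix += token

rng = np.random.default_rng(0)
# Random logits stand in for a real LM; the output is always grammar-valid.
print(decode(lambda p: rng.normal(size=len(VOCAB)), rng))
```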

AI · Bearish · arXiv – CS AI · Mar 4 · Importance 7/10
🧠

Doxing via the Lens: Revealing Location-related Privacy Leakage on Multi-modal Large Reasoning Models

Researchers have identified a critical privacy vulnerability in multi-modal large reasoning models (MLRMs) where adversaries can infer users' sensitive location information from images, including home addresses from selfies. The study introduces the DoxBench dataset and demonstrates that 11 advanced MLRMs consistently outperform humans at geolocation inference, significantly lowering the barrier to privacy attacks.

AI · Bullish · arXiv – CS AI · Mar 4 · Importance 7/10
🧠

Adaptive Social Learning via Mode Policy Optimization for Language Agents

Researchers propose an Adaptive Social Learning (ASL) framework with an Adaptive Mode Policy Optimization (AMPO) algorithm to improve language agents' reasoning in social interactions. The system dynamically adjusts reasoning depth based on context, achieving 15.6% higher performance than GPT-4o while using 32.8% shorter reasoning chains.

AI · Neutral · arXiv – CS AI · Mar 4 · Importance 7/10
🧠

The Gen AI Generation: Student Views of Awareness, Preparedness, and Concern

A study of over 250 students reveals the emergence of a 'GenAI Generation' whose education is increasingly shaped by generative AI. While students show enthusiasm for GenAI, they express still greater concern about ethics, job displacement, and educational preparedness, with readiness levels correlating with curricular exposure.

AI · Neutral · arXiv – CS AI · Mar 4 · Importance 7/10
🧠

Unsupervised Representation Learning – an Invariant Risk Minimization Perspective

Researchers propose a new unsupervised framework for Invariant Risk Minimization (IRM) that learns robust representations without labeled data. The approach introduces two methods, Principal Invariant Component Analysis (PICA) and Variational Invariant Autoencoder (VIAE), that can capture invariant structures across different environments using only unlabeled data.

AI · Neutral · arXiv – CS AI · Mar 4 · Importance 7/10
🧠

Know When to Abstain: Optimal Selective Classification with Likelihood Ratios

Researchers developed new selective classification methods using likelihood ratio tests based on the Neyman-Pearson lemma, allowing AI models to abstain from uncertain predictions. The approach shows superior performance across vision and language tasks, particularly under covariate shift scenarios where test data differs from training data.
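
On a toy problem with known class-conditional densities, the Neyman-Pearson-style abstention rule is easy to state: predict the higher-likelihood class, but abstain when the likelihood ratio is too close to 1. The 1-D Gaussian setup and threshold below are illustrative, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-1, 1, 500), rng.normal(+1, 1, 500)])
y = np.repeat([0, 1], 500)

# Log-likelihoods under the two unit-variance Gaussians (constants cancel).
ll0 = -0.5 * (x + 1) ** 2
ll1 = -0.5 * (x - 1) ** 2
log_ratio = ll1 - ll0            # equals 2x for this toy problem

TAU = 1.0                        # abstain when |log likelihood ratio| < TAU
predict = (log_ratio > 0).astype(int)
accept = np.abs(log_ratio) >= TAU

coverage = accept.mean()
selective_acc = (predict[accept] == y[accept]).mean()
print(f"coverage {coverage:.2f}, "
      f"selective acc {selective_acc:.3f}, "
      f"full acc {(predict == y).mean():.3f}")  # selective acc > full acc
```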

AI · Bullish · arXiv – CS AI · Mar 4 · Importance 7/10
🧠

DiaBlo: Diagonal Blocks Are Sufficient For Finetuning

DiaBlo introduces a new Parameter-Efficient Fine-Tuning (PEFT) method that updates only diagonal blocks of weight matrices in large language models, offering better performance than LoRA while maintaining similar memory efficiency. The approach eliminates the need for low-rank matrix products and provides theoretical guarantees for convergence, showing competitive results across various AI tasks including reasoning and code generation.
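
A minimal PyTorch sketch of the stated idea, freezing the pretrained weight and training only square blocks along its diagonal, is below. Block size, zero initialization, and the einsum-based block-diagonal product are illustrative choices, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

class DiagonalBlockLinear(nn.Module):
    def __init__(self, linear: nn.Linear, block_size: int = 64):
        super().__init__()
        d_out, d_in = linear.weight.shape
        assert d_out == d_in and d_in % block_size == 0, "square, divisible dims"
        self.linear = linear
        self.linear.weight.requires_grad_(False)   # freeze pretrained weight
        n_blocks = d_in // block_size
        self.blocks = nn.Parameter(                # trainable diagonal blocks
            torch.zeros(n_blocks, block_size, block_size))

    def forward(self, x):
        base = self.linear(x)
        # Apply the block-diagonal update without materializing the full
        # matrix: split features into blocks and do a per-block matmul.
        n, b = self.blocks.shape[0], self.blocks.shape[1]
        xb = x.view(*x.shape[:-1], n, b)
        delta = torch.einsum("...nb,ncb->...nc", xb, self.blocks)
        return base + delta.reshape_as(base)

layer = DiagonalBlockLinear(nn.Linear(256, 256, bias=False), block_size=64)
out = layer(torch.randn(2, 256))
print(out.shape)  # torch.Size([2, 256]); only `blocks` receives gradients
```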

AI · Bullish · arXiv – CS AI · Mar 4 · Importance 7/10
🧠

Self-Improving Loops for Visual Robotic Planning

Researchers developed SILVR, a self-improving system for visual robotic planning that uses video generative models to continuously enhance robot performance through self-collected data. The system demonstrates improved task performance across MetaWorld simulations and real robot manipulations without requiring human-provided rewards or expert demonstrations.

AI · Bullish · arXiv – CS AI · Mar 4 · Importance 6/10
🧠

Frame Guidance: Training-Free Guidance for Frame-Level Control in Video Diffusion Models

Researchers introduce Frame Guidance, a training-free method for controllable video generation using diffusion models. The technique enables fine-grained control over video generation through frame-level signals like keyframes and style references without requiring expensive fine-tuning of large-scale models.
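
Training-free guidance of this general shape can be sketched as a gradient step on the sampling latents: at each denoising step, differentiate a frame-matching loss through the model's clean-frame prediction and nudge the latents toward the control signal. The toy denoiser, schedule, and update rule below are placeholders for a real video diffusion pipeline, not the paper's actual method.

```python
import torch

def toy_denoiser(z, t):
    # Stand-in for a video diffusion model's clean-sample prediction.
    return z / (1.0 + t)

def guided_step(z, t, keyframe, scale=0.5):
    z = z.detach().requires_grad_(True)
    x0_hat = toy_denoiser(z, t)                     # predicted clean frames
    loss = ((x0_hat[:, 0] - keyframe) ** 2).mean()  # match frame 0 to keyframe
    (grad,) = torch.autograd.grad(loss, z)
    z = z - scale * grad                            # guidance update on latents
    return (z + toy_denoiser(z, t)) / 2             # stand-in denoising update

z = torch.randn(1, 8, 3, 16, 16)        # (batch, frames, C, H, W) latents
keyframe = torch.zeros(3, 16, 16)       # frame-level control signal
for t in torch.linspace(1.0, 0.1, 10):  # placeholder noise schedule
    z = guided_step(z, t, keyframe)
print(z.shape)
```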

AI · Bullish · arXiv – CS AI · Mar 4 · Importance 7/10
🧠

You Only Fine-tune Once: Many-Shot In-Context Fine-Tuning for Large Language Models

Researchers propose Many-Shot In-Context Fine-tuning (ManyICL), a novel approach that significantly improves large language model performance by treating multiple in-context examples as supervised training targets rather than just prompts. The method narrows the performance gap between in-context learning and dedicated fine-tuning while reducing catastrophic forgetting issues.
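
The key change described, supervising every in-context example's answer rather than only the final one, comes down to how the label tensor is built. A toy sketch with fake token ids follows; the packing format and ignore-index convention are the standard causal-LM ones, not necessarily the paper's exact recipe.

```python
import torch

IGNORE = -100  # standard ignore index for causal-LM loss

def build_labels(segments):
    """segments: list of (token_ids, is_answer) in prompt order."""
    input_ids, labels = [], []
    for ids, is_answer in segments:
        input_ids += ids
        labels += ids if is_answer else [IGNORE] * len(ids)
    return torch.tensor([input_ids]), torch.tensor([labels])

# Three (question, answer) shots packed into one sequence; every answer
# contributes supervision, unlike plain in-context learning.
shots = []
for q, a in [([5, 6], [7]), ([8, 9], [10]), ([11, 12], [13])]:
    shots += [(q, False), (a, True)]
input_ids, labels = build_labels(shots)

print(input_ids)  # the full many-shot sequence
print(labels)     # IGNORE on question tokens, answer ids elsewhere
# These (input_ids, labels) feed a standard causal-LM training loss.
```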

AI · Neutral · arXiv – CS AI · Mar 4 · Importance 7/10
🧠

LLM Probability Concentration: How Alignment Shrinks the Generative Horizon

Researchers introduce the Branching Factor (BF) metric to measure how alignment tuning concentrates large language models' output distributions and reduces diversity. The study finds that aligned models produce outputs that are 2-5x less diverse and more predictable during generation, which explains why alignment reduces sensitivity to decoding strategies and enables more stable Chain-of-Thought reasoning.
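
One plausible formalization of such a branching factor, the exponentiated average entropy of the per-step next-token distributions, is sketched below with toy distributions; the paper's exact estimator may differ.

```python
import numpy as np

def branching_factor(step_probs):
    """step_probs: (T, V) array, each row a next-token distribution."""
    p = np.asarray(step_probs)
    entropies = -(p * np.log(np.where(p > 0, p, 1))).sum(axis=1)
    return float(np.exp(entropies.mean()))

flat = np.full((10, 100), 0.01)       # maximally uncertain model
peaked = np.full((10, 100), 1e-4)     # mass concentrated on one token
peaked[:, 0] = 1 - 99e-4

print(branching_factor(flat))    # ~100: every token is a live option
print(branching_factor(peaked))  # ~1.1: generation is near-deterministic
```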

AI · Bullish · arXiv – CS AI · Mar 4 · Importance 7/10
🧠

LEDOM: Reverse Language Model

Researchers have developed LEDOM, an open-source reverse autoregressive language model that trains right-to-left instead of the traditional left-to-right approach. The model demonstrates unique capabilities like abductive inference and question synthesis, and when combined with forward models through 'Reverse Reward' scoring, achieves significant performance gains of up to 15% on mathematical reasoning tasks.
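
Both mechanics described are simple to sketch: training data is reversed so the model predicts right-to-left, and 'Reverse Reward' reranks a forward model's candidates by the reverse model's score. The stand-in scoring function below is hypothetical; a real system would use the reverse LM's log-likelihood.

```python
def to_reverse_training_example(token_ids):
    """Reverse the sequence so the model learns right-to-left prediction."""
    return token_ids[::-1]

def reverse_reward_rerank(candidates, score_fn):
    """Pick the candidate whose reversed tokens the reverse LM scores best."""
    return max(candidates, key=lambda ids: score_fn(ids[::-1]))

# Toy usage: three candidate token sequences, scored by a stand-in function
# that simply prefers shorter reversed sequences (a real system would plug
# in the reverse model's log-likelihood here).
candidates = [[1, 2, 3, 4], [5, 6], [7, 8, 9]]
best = reverse_reward_rerank(candidates, score_fn=lambda ids: -len(ids))
print(to_reverse_training_example([1, 2, 3]))  # [3, 2, 1]
print(best)                                    # [5, 6]
```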

AI · Bullish · arXiv – CS AI · Mar 4 · Importance 7/10
🧠

Skywork-Reward-V2: Scaling Preference Data Curation via Human-AI Synergy

Researchers introduce Skywork-Reward-V2, a suite of AI reward models trained on SynPref-40M, a massive 40-million preference pair dataset created through human-AI collaboration. The models achieve state-of-the-art performance across seven major benchmarks by combining human annotation quality with AI scalability for better preference learning.
