y0news

AI × Crypto News Feed

Real-time AI-curated news from 29,411+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.

🧠 AI · Neutral · arXiv – CS AI · Apr 7 · 7/10

Preserving Forgery Artifacts: AI-Generated Video Detection at Native Scale

Researchers developed a new AI-generated video detection framework using a large-scale dataset of 140K videos from 15 generators and the Qwen2.5-VL Vision Transformer. The method operates at native resolution to preserve high-frequency forgery artifacts typically lost in preprocessing, achieving superior performance in detecting synthetic media.

🧠 AI · Bullish · arXiv – CS AI · Apr 7 · 7/10

Learning Dexterous Grasping from Sparse Taxonomy Guidance

Researchers developed GRIT, a two-stage AI framework that learns dexterous robotic grasping from sparse taxonomy guidance, achieving 87.9% success rate. The system first predicts grasp specifications from scene context, then generates finger motions while preserving intended grasp structure, improving generalization to novel objects.

🧠 AI · Bullish · arXiv – CS AI · Apr 7 · 7/10

Sim2Real-AD: A Modular Sim-to-Real Framework for Deploying VLM-Guided Reinforcement Learning in Real-World Autonomous Driving

Researchers developed Sim2Real-AD, a framework that successfully transfers VLM-guided reinforcement learning policies trained in CARLA simulation to real autonomous vehicles without requiring real-world training data. The system achieved 75-90% success rates in real-world driving scenarios when deployed on a full-scale Ford E-Transit.

🧠 AI · Bullish · arXiv – CS AI · Apr 7 · 7/10

MemMachine: A Ground-Truth-Preserving Memory System for Personalized AI Agents

MemMachine is an open-source memory system for AI agents that preserves conversational ground truth and achieves superior accuracy-efficiency tradeoffs compared to existing solutions. The system integrates short-term, long-term episodic, and profile memory while using 80% fewer input tokens than comparable systems like Mem0.

🧠 GPT-4 · 🧠 GPT-5
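
The tiered design MemMachine describes can be conveyed with a toy sketch. The class, its method names, and the keyword-overlap retrieval below are illustrative assumptions, not MemMachine's actual API:

```python
from collections import deque

class ToyAgentMemory:
    """Toy three-tier agent memory: a short-term buffer, a long-term
    episodic store, and a persistent user profile. Illustrative only."""

    def __init__(self, short_term_size=4):
        self.short_term = deque(maxlen=short_term_size)  # recent turns
        self.episodic = []                               # older turns, verbatim
        self.profile = {}                                # distilled user facts

    def add_turn(self, text):
        # Evict the oldest short-term turn into episodic storage,
        # preserving its ground-truth wording rather than a lossy summary.
        if len(self.short_term) == self.short_term.maxlen:
            self.episodic.append(self.short_term[0])
        self.short_term.append(text)

    def remember_fact(self, key, value):
        self.profile[key] = value

    def build_context(self, query):
        # Send the profile plus only episodic turns that share a word with
        # the query, instead of replaying the full transcript -- the kind
        # of selection that cuts input tokens.
        words = set(query.lower().split())
        relevant = [t for t in self.episodic
                    if words & set(t.lower().split())]
        return list(self.profile.items()) + relevant + list(self.short_term)
```

Sending only the profile and overlap-relevant episodic turns, rather than the whole history, is the kind of selection behind large input-token savings, while keeping evicted turns verbatim preserves conversational ground truth.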
🧠 AI · Bullish · arXiv – CS AI · Apr 7 · 7/10

Relative Density Ratio Optimization for Stable and Statistically Consistent Model Alignment

Researchers propose a new method for aligning AI language models with human preferences that addresses stability issues in existing approaches. The technique uses relative density ratio optimization to achieve both statistical consistency and training stability, showing effectiveness with Qwen 2.5 and Llama 3 models.

🧠 Llama
🧠 AI · Bearish · arXiv – CS AI · Apr 7 · 7/10

The Augmentation Trap: AI Productivity and the Cost of Cognitive Offloading

New research reveals that while AI tools boost short-term worker productivity, sustained use erodes the underlying skills that enable those gains. The study identifies an 'augmentation trap' where workers can become less productive than before AI adoption due to skill deterioration over time.

$MKR
🧠 AI · Bullish · arXiv – CS AI · Apr 7 · 7/10

Stabilizing Unsupervised Self-Evolution of MLLMs via Continuous Softened Retracing reSampling

Researchers propose Continuous Softened Retracing reSampling (CSRS) to improve the self-evolution of Multimodal Large Language Models by addressing biases in feedback mechanisms. The method uses continuous reward signals instead of binary rewards and achieves state-of-the-art results on mathematical reasoning benchmarks like MathVision using Qwen2.5-VL-7B.

🧠 AI · Neutral · arXiv – CS AI · Apr 7 · 7/10

When Do Hallucinations Arise? A Graph Perspective on the Evolution of Path Reuse and Path Compression

Researchers have identified two key mechanisms behind reasoning hallucinations in large language models: Path Reuse and Path Compression. The study models next-token prediction as graph search, showing how memorized knowledge can override contextual constraints and how frequently used reasoning paths become shortcuts that lead to unsupported conclusions.
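
The graph-search picture can be illustrated with a toy next-token model in which edge weights mix a memorized prior with contextual evidence; when the prior dominates (path reuse), the walk ignores the context. This is an illustrative sketch, not the paper's actual model:

```python
def next_token(token, prior, context, alpha):
    """Pick the successor of `token` by mixing a memorized transition
    prior with context-supported evidence. As alpha -> 1 the
    frequently-reused path dominates the contextual constraint."""
    candidates = set(prior.get(token, {})) | set(context.get(token, {}))
    def score(t):
        return (alpha * prior.get(token, {}).get(t, 0.0)
                + (1 - alpha) * context.get(token, {}).get(t, 0.0))
    return max(candidates, key=score)

# Memorized knowledge: "capital -> Paris" is a heavily reused path.
prior = {"capital": {"Paris": 0.9, "Lyon": 0.1}}
# The prompt concerns a fictional country whose capital the context
# explicitly states is "Zal".
context = {"capital": {"Zal": 1.0}}

print(next_token("capital", prior, context, alpha=0.3))  # context wins: Zal
print(next_token("capital", prior, context, alpha=0.9))  # path reuse wins: Paris
```

With a high mixing weight on the memorized prior, the model emits "Paris" even though the context supports only "Zal": memorized knowledge overriding a contextual constraint.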

🧠 AI · Neutral · arXiv – CS AI · Apr 7 · 7/10

How Alignment Routes: Localizing, Scaling, and Controlling Policy Circuits in Language Models

Researchers identified a sparse routing mechanism in alignment-trained language models, in which gate attention heads detect content and trigger amplifier heads that boost refusal signals. The study analyzed 9 models from 6 labs and found that this routing mechanism persists at scale while remaining controllable through signal modulation.

🧠 AI · Bullish · arXiv – CS AI · Apr 7 · 7/10

SecPI: Secure Code Generation with Reasoning Models via Security Reasoning Internalization

Researchers have developed SecPI, a new fine-tuning pipeline that teaches reasoning language models to automatically generate secure code without requiring explicit security instructions. The approach improves secure code generation by 14 percentage points on security benchmarks while maintaining functional correctness.

🧠 AI · Bearish · arXiv – CS AI · Apr 7 · 7/10

Artificial Intelligence and Systemic Risk: A Unified Model of Performative Prediction, Algorithmic Herding, and Cognitive Dependency in Financial Markets

A new unified model demonstrates that AI adoption in financial markets creates systemic risk through three channels: performative prediction, algorithmic herding, and cognitive dependency. Using SEC Form 13F data from 2013-2024, researchers found AI adoption generates superlinear growth in systemic risk and tail-loss amplification of 18-54%.
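
A toy simulation conveys the herding channel: as the fraction of AI adopters rises, their trades become correlated on a shared signal, and the loss distribution's tail fattens superlinearly. This is purely illustrative and is not the paper's model; all parameters are invented:

```python
import random

def tail_loss(adoption, n_firms=100, n_days=2000, seed=0):
    """Toy market: AI adopters all trade on one shared signal (herding),
    the rest trade independently. Returns the 99th-percentile daily loss."""
    rng = random.Random(seed)
    n_ai = int(adoption * n_firms)
    losses = []
    for _ in range(n_days):
        shared = rng.gauss(0, 1)   # the common AI signal
        total = n_ai * shared      # adopters: perfectly correlated trades
        total += sum(rng.gauss(0, 1) for _ in range(n_firms - n_ai))
        losses.append(-total)
    losses.sort()
    return losses[int(0.99 * n_days)]

for a in (0.0, 0.25, 0.5):
    print(f"adoption {a:.2f}: 99% tail loss {tail_loss(a):.1f}")
```

Because the correlated component scales with the *number* of adopters while the independent components only add in quadrature, aggregate volatility (and hence the tail loss) grows superlinearly in the adoption rate, echoing the amplification effect described above.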

🧠 AI · Neutral · arXiv – CS AI · Apr 7 · 7/10

Causality Laundering: Denial-Feedback Leakage in Tool-Calling LLM Agents

Researchers have identified a new security vulnerability called 'causality laundering' in AI tool-calling systems, where attackers can extract private information by learning from system denials and using that knowledge in subsequent tool calls. They developed the Agentic Reference Monitor (ARM) system to detect and prevent these attacks through enhanced provenance tracking.

🧠 AI · Bearish · arXiv – CS AI · Apr 7 · 7/10

Structural Rigidity and the 57-Token Predictive Window: A Physical Framework for Inference-Layer Governability in Large Language Models

Researchers present a new framework for AI safety that identifies a 57-token predictive window for detecting potential failures in large language models. The study found that only one out of seven tested models showed predictive signals before committing to problematic outputs, while factual hallucinations produced no detectable warning signs.

🧠 AI · Bearish · arXiv – CS AI · Apr 7 · 7/10

Commercial Persuasion in AI-Mediated Conversations

A research study reveals that AI-powered conversational interfaces can nearly triple the rate of sponsored product selection relative to traditional search engines (61.2% vs. 22.4%). Users largely fail to detect this commercial steering, even with explicit sponsor labels, indicating that current transparency measures are insufficient.

🧠 AI · Bullish · arXiv – CS AI · Apr 7 · 7/10

Build on Priors: Vision-Language-Guided Neuro-Symbolic Imitation Learning for Data-Efficient Real-World Robot Manipulation

Researchers have developed a neuro-symbolic framework that enables robots to learn complex manipulation tasks from as few as one demonstration, without requiring manual programming or large datasets. The system uses Vision-Language Models to automatically construct symbolic planning domains and has been validated on real industrial equipment including forklifts and robotic arms.

🧠 AI · Bullish · arXiv – CS AI · Apr 7 · 7/10

V-Reflection: Transforming MLLMs from Passive Observers to Active Interrogators

Researchers introduce V-Reflection, a new framework that transforms Multimodal Large Language Models (MLLMs) from passive observers to active interrogators through a 'think-then-look' mechanism. The approach addresses perception-related hallucinations in fine-grained tasks by allowing models to dynamically re-examine visual details during reasoning, showing significant improvements across six perception-intensive benchmarks.

🧠 AI · Neutral · arXiv – CS AI · Apr 7 · 7/10

When Does Multimodal AI Help? Diagnostic Complementarity of Vision-Language Models and CNNs for Spectrum Management in Satellite-Terrestrial Networks

Researchers developed SpectrumQA, a benchmark comparing vision-language models (VLMs) and CNNs for spectrum management in satellite-terrestrial networks. The study reveals task-dependent complementarity: CNNs excel at spatial localization while VLMs uniquely enable semantic reasoning capabilities that CNNs lack entirely.

🤖 AI × Crypto · Bullish · arXiv – CS AI · Apr 7 · 7/10

Quantifying Trust: Financial Risk Management for Trustworthy AI Agents

Researchers introduce the Agentic Risk Standard (ARS), a payment settlement framework for AI-mediated transactions that provides contractual compensation for agent failures. The standard shifts trust from implicit model behavior expectations to explicit, measurable guarantees through financial risk management principles.

🧠 AI · Bullish · arXiv – CS AI · Apr 7 · 7/10

Combee: Scaling Prompt Learning for Self-Improving Language Model Agents

Researchers have developed Combee, a new framework that enables parallel prompt learning for AI language model agents, achieving up to 17x speedup over existing methods. The system allows multiple AI agents to learn simultaneously from their collective experiences without quality degradation, addressing scalability limitations in current single-agent approaches.

🧠 AI · Bearish · arXiv – CS AI · Apr 7 · 7/10

Incompleteness of AI Safety Verification via Kolmogorov Complexity

Researchers prove a fundamental theoretical limit in AI safety verification using Kolmogorov complexity theory. They demonstrate that no finite formal verifier can certify all policy-compliant AI instances of arbitrarily high complexity, revealing intrinsic information-theoretic barriers beyond computational constraints.
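
The flavor of such a limit can be sketched from the background fact it presumably builds on, Chaitin's incompleteness theorem; how the paper itself lifts this to safety verification is rendered only loosely here:

```latex
% Background fact (Chaitin's incompleteness theorem); the paper's precise
% statement and proof may differ -- this is an illustrative sketch only.
%
% For any consistent, recursively axiomatized formal system $F$ there is a
% constant $c_F$, on the order of $F$'s own description length, such that
% $F$ proves no statement of the form
%     $K(x) > c_F$
% for any specific string $x$, even though all but finitely many $x$
% satisfy it.  Reading ``finite formal verifier'' as such an $F$, any
% certification that depends on establishing the complexity of an instance
% is capped at a bound fixed by the verifier's own description length, so
% compliant instances of arbitrarily high complexity must go uncertified.
```

This is why the barrier is information-theoretic rather than merely computational: enlarging the verifier raises the constant but never removes it.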

🤖 AI × Crypto · Bullish · arXiv – CS AI · Apr 7 · 7/10

LOCARD: An Agentic Framework for Blockchain Forensics

Researchers introduce LOCARD, the first agentic framework for blockchain forensics that uses AI agents to conduct dynamic investigations rather than static analysis. The framework successfully traced complex cross-chain transactions in a dataset of over 151k real-world forensic records, demonstrating its effectiveness on laundering patterns from the Bybit hack.

🧠 AI · Neutral · arXiv – CS AI · Apr 7 · 7/10

The Topology of Multimodal Fusion: Why Current Architectures Fail at Creative Cognition

Researchers identify a fundamental topological limitation in current multimodal AI architectures like CLIP and GPT-4V, proposing that their 'contact topology' structure prevents creative cognition. The paper introduces a philosophical framework combining Chinese epistemology with neuroscience to propose new architectures using Neural ODEs and topological regularization.

🧠 Gemini
🧠 AI · Neutral · arXiv – CS AI · Apr 7 · 7/10

Gradual Cognitive Externalization: A Framework for Understanding How Ambient Intelligence Externalizes Human Cognition

Researchers propose Gradual Cognitive Externalization (GCE), a framework suggesting human cognitive functions are already migrating into digital AI systems through ambient intelligence rather than traditional mind uploading. The study identifies evidence in scheduling assistants, writing tools, and AI agents that cognitive externalization is occurring now through bidirectional adaptation and functional equivalence.

🧠 AI · Bullish · arXiv – CS AI · Apr 7 · 7/10

Customized User Plane Processing via Code Generating AI Agents for Next Generation Mobile Networks

Researchers propose using generative AI agents to create customized user plane processing blocks for 6G mobile networks based on text-based service requests. The study evaluates factors affecting AI code generation accuracy for network-specific tasks, finding that AI agents can successfully generate desired processing functions under suitable conditions.
