y0news

AI × Crypto News Feed

Real-time AI-curated news from 31,116+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.

Crypto · Bearish · CryptoPotato · Mar 9 · 7/10
⛓️

Surging Oil Prices and Inflation Data Will Rattle Crypto Markets This Week

This week's economic calendar features critical oil price movements and key U.S. inflation data releases that are expected to create volatility in cryptocurrency markets. The confluence of energy market pressures and inflation metrics will likely drive significant price action across digital assets.

Crypto · Neutral · Decrypt – AI · Mar 9 · 7/10
⛓️

Post-Quantum Shift Could Force Crypto Exchanges to Rethink Wallet Security

New research addresses potential security vulnerabilities that quantum computing could pose to cryptocurrency exchange wallet systems. The research focuses on maintaining exchanges' ability to generate deposit addresses without exposing private keys in a post-quantum cryptography environment.

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10
🧠

Understanding and Improving Hyperbolic Deep Reinforcement Learning

Researchers have developed Hyper++, a new hyperbolic deep reinforcement learning agent that solves optimization challenges in hyperbolic geometry-based RL. The system outperforms previous approaches by 30% in training speed and demonstrates superior performance on benchmark tasks through improved gradient stability and feature regularization.

AI · Neutral · arXiv – CS AI · Mar 9 · 7/10
🧠

LLMTM: Benchmarking and Optimizing LLMs for Temporal Motif Analysis in Dynamic Graphs

Researchers introduced LLMTM, a comprehensive benchmark to evaluate Large Language Models' performance on temporal motif analysis in dynamic graphs. The study tested nine different LLMs and developed a structure-aware dispatcher that balances accuracy with cost-effectiveness for graph analysis tasks.

🧠 GPT-4
AI · Bullish · arXiv – CS AI · Mar 9 · 7/10
🧠

TADPO: Reinforcement Learning Goes Off-road

Researchers introduced TADPO, a novel reinforcement learning approach that extends PPO for autonomous off-road driving. The system achieved successful zero-shot sim-to-real transfer on a full-scale off-road vehicle, marking the first RL-based policy deployment on such a platform.

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10
🧠

COLD-Steer: Steering Large Language Models via In-Context One-step Learning Dynamics

Researchers introduce COLD-Steer, a training-free framework that enables efficient control of large language model behavior at inference time using just a few examples. The method approximates gradient descent effects without parameter updates, achieving 95% steering effectiveness while using 50 times fewer samples than existing approaches.
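The summary describes inference-time control without parameter updates; COLD-Steer's actual one-step learning-dynamics approximation is not detailed here, but the simpler, related idea of difference-of-means activation steering from a few examples can be sketched as follows (all names and shapes are hypothetical):

```python
import numpy as np

def steering_vector(pos_acts, neg_acts):
    """Difference-of-means steering direction built from a handful of
    example hidden-state activations (positive = desired behavior,
    negative = undesired). No gradient step or weight update is needed."""
    return np.mean(pos_acts, axis=0) - np.mean(neg_acts, axis=0)

def steer(hidden, direction, alpha=1.0):
    """Nudge a hidden state along the steering direction at inference time;
    alpha trades steering strength against output fidelity."""
    return hidden + alpha * direction

# Toy example: 2-dimensional "activations" from two examples per class.
pos = np.array([[1.0, 0.0], [1.0, 2.0]])
neg = np.array([[0.0, 0.0], [0.0, 2.0]])
d = steering_vector(pos, neg)   # points toward the desired behavior
h = steer(np.zeros(2), d, alpha=2.0)
```

This is a sketch of generic activation steering, not the paper's method: COLD-Steer reportedly approximates the effect of a gradient-descent step from in-context examples rather than averaging activations.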

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10
🧠

SAHOO: Safeguarded Alignment for High-Order Optimization Objectives in Recursive Self-Improvement

Researchers introduce SAHOO, a framework to prevent alignment drift in AI systems that recursively self-improve by monitoring goal changes, preserving constraints, and quantifying regression risks. The system achieved 18.3% improvement in code generation and 16.8% in reasoning tasks while maintaining safety constraints across 189 test scenarios.

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10
🧠

Physical Simulator In-the-Loop Video Generation

Researchers introduce PSIVG, a framework that integrates physical simulators into AI video generation to ensure generated videos obey real-world physics like gravity and collision. The system reconstructs 4D scenes from template videos and uses physical simulations to guide video generators toward more realistic motion while maintaining visual quality.

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10
🧠

Stem: Rethinking Causal Information Flow in Sparse Attention

Researchers propose Stem, a new sparse attention mechanism for Large Language Models that reduces computational complexity while maintaining accuracy. The method uses position-dependent token selection and output-aware metrics to optimize information flow in causal attention, achieving faster pre-filling with better performance.
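Stem's exact token-selection rule is not given in the summary, but the general mechanism it builds on, causal attention where each query attends only to its top-k highest-scoring earlier positions, can be sketched as follows (a generic sparse-attention illustration, not the paper's algorithm):

```python
import numpy as np

def sparse_causal_attention(q, k, v, top_k=4):
    """Causal attention restricted, per query position, to the top_k
    highest-scoring earlier (or current) tokens. Reducing the attended
    set from all prior tokens to top_k is what cuts pre-fill cost."""
    T, d = q.shape
    scores = q @ k.T / np.sqrt(d)
    scores[np.triu(np.ones((T, T), dtype=bool), k=1)] = -np.inf  # mask future
    out = np.zeros_like(v)
    for i in range(T):
        kk = min(top_k, i + 1)                      # can't exceed visible tokens
        idx = np.argpartition(scores[i, : i + 1], -kk)[-kk:]
        w = np.exp(scores[i, idx] - scores[i, idx].max())
        w /= w.sum()                                # softmax over selected tokens
        out[i] = w @ v[idx]
    return out
```

Stem reportedly makes this selection position-dependent and scores candidates with output-aware metrics rather than raw attention logits; those details are assumptions beyond this sketch.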

AI · Neutral · arXiv – CS AI · Mar 9 · 7/10
🧠

Agentic retrieval-augmented reasoning reshapes collective reliability under model variability in radiology question answering

Researchers evaluated 34 large language models on radiology questions, finding that agentic retrieval-augmented reasoning systems improve consensus and reliability across different AI models. The study shows these systems reduce decision variability between models and increase robust correctness, though 72% of incorrect outputs still carried moderate to high clinical severity.

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10
🧠

AI End-to-End Radiation Treatment Planning Under One Second

Researchers developed AIRT, an AI-powered radiation therapy planning system that generates complete prostate cancer treatment plans in under one second using deep learning. The system processes CT scans and anatomical data to produce clinically-viable radiation treatment plans 100x faster than current methods, demonstrating non-inferiority to existing commercial solutions.

🏢 Nvidia
AI · Neutral · arXiv – CS AI · Mar 9 · 7/10
🧠

Reasoning Models Struggle to Control their Chains of Thought

Researchers found that AI reasoning models struggle to control their chain-of-thought (CoT) outputs, with Claude Sonnet 4.5 able to control its CoT only 2.7% of the time versus 61.9% for final outputs. This limitation suggests CoT monitoring remains viable for detecting AI misbehavior, though the underlying mechanisms are poorly understood.

🧠 Claude · 🧠 Sonnet
AI · Neutral · arXiv – CS AI · Mar 9 · 7/10
🧠

Real-Time AI Service Economy: A Framework for Agentic Computing Across the Continuum

Researchers propose a framework for decentralized resource allocation in real-time AI services across device-edge-cloud infrastructure. The study shows that dependency graph topology determines whether price-based allocation can work at scale, with hierarchical structures enabling stable pricing while complex dependencies cause instability.

AI · Bearish · arXiv – CS AI · Mar 9 · 7/10
🧠

Depth Charge: Jailbreak Large Language Models from Deep Safety Attention Heads

Researchers have developed SAHA (Safety Attention Head Attack), a new jailbreak framework that exploits vulnerabilities in deeper attention layers of open-source large language models. The method improves attack success rates by 14% over existing techniques by targeting insufficiently aligned attention heads rather than surface-level prompts.

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10
🧠

SPARC: Concept-Aligned Sparse Autoencoders for Cross-Model and Cross-Modal Interpretability

Researchers introduced SPARC, a framework that creates unified latent spaces across different AI models and modalities, enabling direct comparison of how various architectures represent identical concepts. The method achieves 0.80 Jaccard similarity on Open Images, tripling alignment compared to previous methods, and enables practical applications like text-guided spatial localization in vision-only models.
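The 0.80 figure is a Jaccard similarity, the standard set-overlap score used here to measure how well concept sets align across models. For reference, it is computed as intersection over union:

```python
def jaccard(a, b):
    """Jaccard similarity |A ∩ B| / |A ∪ B| between two collections,
    ranging from 0.0 (disjoint) to 1.0 (identical sets)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # convention: two empty sets are identical
    return len(a & b) / len(a | b)
```

For example, concept sets {cat, dog, car} and {dog, car, tree} share 2 of 4 distinct concepts, giving 0.5; SPARC's reported 0.80 therefore indicates substantially overlapping concept activations across models.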

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10
🧠

Localizing and Correcting Errors for LLM-based Planners

Researchers developed Localized In-Context Learning (L-ICL), a technique that significantly improves large language model performance on symbolic planning tasks by targeting specific constraint violations with minimal corrections. The method achieves 89% valid plan generation compared to 59% for best baselines, representing a major advancement in LLM reasoning capabilities.

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10
🧠

DataChef: Cooking Up Optimal Data Recipes for LLM Adaptation via Reinforcement Learning

Researchers introduce DataChef-32B, an AI system that uses reinforcement learning to automatically generate optimal data processing recipes for training large language models. The system eliminates the need for manual data curation by automatically designing complete data pipelines, achieving performance comparable to human experts across six benchmark tasks.

AI × Crypto · Bullish · arXiv – CS AI · Mar 9 · 7/10
🤖

Proof-of-Guardrail in AI Agents and What (Not) to Trust from It

Researchers propose a 'proof-of-guardrail' system that uses cryptographic proofs and Trusted Execution Environments to verify AI agent safety measures. The system lets users cryptographically verify that an AI response was generated only after specific open-source safety guardrails were executed, addressing concerns about falsely advertised safety measures.

AI · Bearish · arXiv – CS AI · Mar 9 · 7/10
🧠

Knowing without Acting: The Disentangled Geometry of Safety Mechanisms in Large Language Models

Researchers propose the Disentangled Safety Hypothesis (DSH), revealing that AI safety mechanisms in large language models operate along two separate axes: recognition ('knowing') and execution ('acting'). They demonstrate how this separation can be exploited through the Refusal Erasure Attack to bypass safety controls, while comparing architectural differences between Llama3.1 and Qwen2.5.

🧠 Llama
AI · Bullish · arXiv – CS AI · Mar 9 · 7/10
🧠

LUMINA: LLM-Guided GPU Architecture Exploration via Bottleneck Analysis

LUMINA is a new LLM-driven framework for GPU architecture exploration that uses AI to optimize GPU designs for modern AI workloads like LLM inference. The system achieved 17.5x higher efficiency than traditional methods and identified 6 designs superior to NVIDIA's A100 GPU using only 20 exploration steps.

AI · Neutral · arXiv – CS AI · Mar 9 · 7/10
🧠

Experiences Build Characters: The Linguistic Origins and Functional Impact of LLM Personality

Researchers developed a method called "Personality Engineering" to create AI models with diverse personality traits through continued pre-training on domain-specific texts. The study found that AI performance peaks in two types: "Expressive Generalists" and "Suppressed Specialists," with reduced social traits actually improving complex reasoning abilities.

AI · Neutral · arXiv – CS AI · Mar 9 · 7/10
🧠

Aligning Compound AI Systems via System-level DPO

Researchers introduce SysDPO, a framework that extends Direct Preference Optimization to align compound AI systems comprising multiple interacting components like LLMs, foundation models, and external tools. The approach addresses challenges in optimizing complex AI systems by modeling them as Directed Acyclic Graphs and enabling system-level alignment through two variants: SysDPO-Direct and SysDPO-Sampling.
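SysDPO extends the standard Direct Preference Optimization objective; the base DPO loss on a single preference pair, which SysDPO presumably applies with log-probabilities factorized over the system's DAG of components, looks like this (a sketch of vanilla DPO, not the paper's system-level variant):

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss for one preference pair: logp_w / logp_l are the policy's
    log-probs of the preferred / dispreferred response, ref_* the frozen
    reference model's. Minimizing -log(sigmoid(margin)) pushes the policy
    to prefer the winner relative to the reference, scaled by beta."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

At initialization (policy equals reference) the margin is 0 and the loss is log 2; the loss drops as the policy raises the preferred response's probability. How SysDPO-Direct and SysDPO-Sampling distribute this signal across components is not specified in the summary.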

Page 251 of 1245