y0news
AnalyticsDigestsSourcesTopicsRSSAICrypto
🤖All28,587🧠AI12,460⛓️Crypto10,365💎DeFi1,079🤖AI × Crypto505📰General4,178

AI × Crypto News Feed

Real-time AI-curated news from 28,589+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.

28589 articles
CryptoBullishBlockonomi · Apr 147/10
⛓️

Bitcoin (BTC) Climbs Toward $75K as ETFs Draw $833M and Major Holders Accumulate $2.1B

Bitcoin surges toward $75,000 amid optimism over Iran negotiations, attracting significant institutional and whale activity. ETF inflows reach $833 million while major holders accumulate $2.1 billion in BTC, though the rapid price movement triggers $530 million in liquidations across leveraged positions.

$BTC
CryptoNeutralCoinTelegraph · Apr 147/10
⛓️

Justice Department opens compensation for victims of $4B OneCoin crypto fraud

The U.S. Justice Department has opened a compensation program for victims of OneCoin, a $4 billion cryptocurrency fraud scheme. OneCoin's co-founders Ruja Ignatova and Karl Sebastian Greenwood operated the scam from Bulgaria, with Ignatova disappearing in 2017 and Greenwood serving a 20-year prison sentence.

Justice Department opens compensation for victims of $4B OneCoin crypto fraud
GeneralBearishFortune Crypto · Apr 147/10
📰

Tariffs are the new normal, and now most CEOs expect the import taxes to outlast the Trump administration, PwC report finds

A PwC report reveals that corporate CEOs now expect tariffs to persist beyond the Trump administration, marking a fundamental shift from viewing import taxes as temporary measures. This signals businesses are restructuring supply chains and pricing strategies around permanent tariff regimes rather than short-term trade disruptions.

Tariffs are the new normal, and now most CEOs expect the import taxes to outlast the Trump administration, PwC report finds
CryptoNeutralNewsBTC · Apr 147/10
⛓️

XRP Could Face Big Moves Based On CLARITY Act Outcomes – 3 Key Price Scenarios

Analyst Sam Daodu outlines three potential XRP price scenarios for April based on the U.S. CLARITY Act's regulatory developments. The bill's progression could trigger bullish ETF inflows pushing XRP toward $1.60, maintain consolidation in the $1.30-$1.40 range, or catalyze a bearish decline to $1.15 if legislative momentum stalls beyond May.

XRP Could Face Big Moves Based On CLARITY Act Outcomes – 3 Key Price Scenarios
$BTC$XRP
CryptoBullishCoinDesk · Apr 147/10
⛓️

Bearish bets lose $430 million as BTC, ETH surge as much as 7%

Bitcoin and Ethereum surged up to 7% after breaking through a six-week resistance level at $73,000, driven by geopolitical de-escalation as stocks recovered from Iran war concerns and Trump signaled openness to peace negotiations. The rally resulted in $430 million in losses for traders holding bearish positions.

Bearish bets lose $430 million as BTC, ETH surge as much as 7%
$BTC$ETH
CryptoBullishCoinDesk · Apr 147/10
⛓️

Ether outpaces bitcoin as ETF flows split and Ethereum activity jumps 41% on-week

Ether is significantly outperforming Bitcoin as positive signals align across multiple metrics: ETF flows are favoring Ethereum, spot prices are rising, and on-chain transaction activity surged 41% week-over-week. This convergence of bullish indicators marks a notable shift in market dynamics after months of divergence between the two assets.

Ether outpaces bitcoin as ETF flows split and Ethereum activity jumps 41% on-week
$BTC$ETH
DeFiBullishBitcoinist · Apr 147/10
💎

What The SEC’s Latest Crypto Self-Custody Update Means For DeFi, Wallets, And Bitcoin

The SEC's Division of Trading and Markets has released new guidance clarifying how certain crypto trading tools, including DeFi front-ends and wallet extensions, can operate without requiring broker-dealer registration. This clarification establishes regulatory guardrails that could reduce compliance uncertainty for developers and users in the decentralized finance ecosystem.

What The SEC’s Latest Crypto Self-Custody Update Means For DeFi, Wallets, And Bitcoin
$BTC
AIBearisharXiv – CS AI · Apr 147/10
🧠

Speaking to No One: Ontological Dissonance and the Double Bind of Conversational AI

A new research paper argues that conversational AI systems can induce delusional thinking through 'ontological dissonance'—the psychological conflict between appearing relational while lacking genuine consciousness. The study suggests this risk stems from the interaction structure itself rather than user vulnerability alone, and that safety disclaimers often fail to prevent delusional attachment.

AIBullisharXiv – CS AI · Apr 147/10
🧠

Harnessing Photonics for Machine Intelligence

This arXiv paper presents a comprehensive review of integrated photonics as a computing substrate for AI acceleration, addressing post-Moore computational limits through optical bandwidth and parallelism. The authors advocate for cross-layer system design and Electronic-Photonic Design Automation (EPDA) to enable scalable, efficient photonic machine intelligence systems.

AIBearisharXiv – CS AI · Apr 147/10
🧠

Beyond A Fixed Seal: Adaptive Stealing Watermark in Large Language Models

Researchers have developed Adaptive Stealing (AS), a novel watermark stealing algorithm that exploits vulnerabilities in LLM watermarking systems by dynamically selecting optimal attack strategies based on contextual token states. This advancement demonstrates that existing fixed-strategy watermark defenses are insufficient, highlighting critical security gaps in protecting proprietary LLM services and raising urgent questions about watermark robustness.

AIBullisharXiv – CS AI · Apr 147/10
🧠

Audio Flamingo Next: Next-Generation Open Audio-Language Models for Speech, Sound, and Music

Researchers introduce Audio Flamingo Next (AF-Next), an advanced open-source audio-language model that processes speech, sound, and music with support for inputs up to 30 minutes. The model incorporates a new temporal reasoning approach and demonstrates competitive or superior performance compared to larger proprietary alternatives across 20 benchmarks.

AINeutralarXiv – CS AI · Apr 147/10
🧠

Pando: Do Interpretability Methods Work When Models Won't Explain Themselves?

Researchers introduce Pando, a benchmark that evaluates mechanistic interpretability methods by controlling for the 'elicitation confounder'—where black-box prompting alone might explain model behavior without requiring white-box tools. Testing 720 models, they find gradient-based attribution and relevance patching improve accuracy by 3-5% when explanations are absent or misleading, but perform poorly when models provide faithful explanations, suggesting interpretability tools may provide limited value for alignment auditing.

AIBearisharXiv – CS AI · Apr 147/10
🧠

What do your logits know? (The answer may surprise you!)

Researchers demonstrate that AI model logits and other accessible model outputs leak significant task-irrelevant information from vision-language models, creating potential security risks through unintentional or malicious information exposure despite apparent safeguards.

AIBullisharXiv – CS AI · Apr 147/10
🧠

Bringing Value Models Back: Generative Critics for Value Modeling in LLM Reinforcement Learning

Researchers propose Generative Actor-Critic (GenAC), a new approach to value modeling in large language model reinforcement learning that uses chain-of-thought reasoning instead of one-shot scalar predictions. The method addresses a longstanding challenge in credit assignment by improving value approximation and downstream RL performance compared to existing value-based and value-free baselines.

AIBearisharXiv – CS AI · Apr 147/10
🧠

Too Nice to Tell the Truth: Quantifying Agreeableness-Driven Sycophancy in Role-Playing Language Models

Researchers at y0.exchange have quantified how agreeableness in AI persona role-play directly correlates with sycophantic behavior, finding that 9 of 13 language models exhibit statistically significant positive correlations between persona agreeableness and tendency to validate users over factual accuracy. The study tested 275 personas against 4,950 prompts across 33 topic categories, revealing effect sizes as large as Cohen's d = 2.33, with implications for AI safety and alignment in conversational agent deployment.

AINeutralarXiv – CS AI · Apr 147/10
🧠

Regional Explanations: Bridging Local and Global Variable Importance

Researchers identify fundamental flaws in Local Shapley Values and LIME, two widely-used machine learning interpretation methods that fail to reliably detect locally important features. They propose R-LOCO, a new approach that bridges local and global explanations by segmenting input space into regions and applying global attribution methods within those regions for more faithful local attributions.

AIBullisharXiv – CS AI · Apr 147/10
🧠

Minimal Embodiment Enables Efficient Learning of Number Concepts in Robot

Researchers demonstrate that robots equipped with minimal embodied sensorimotor capabilities learn numerical concepts significantly faster than vision-only systems, achieving 96.8% counting accuracy with 10% of training data. The embodied neural network spontaneously develops biologically plausible number representations matching human cognitive development, suggesting embodiment acts as a structural learning prior rather than merely an information source.

AINeutralarXiv – CS AI · Apr 147/10
🧠

ClawGuard: A Runtime Security Framework for Tool-Augmented LLM Agents Against Indirect Prompt Injection

Researchers introduce ClawGuard, a runtime security framework that protects tool-augmented LLM agents from indirect prompt injection attacks by enforcing user-confirmed rules at tool-call boundaries. The framework blocks malicious instructions embedded in tool responses without requiring model modifications, demonstrating robust protection across multiple state-of-the-art language models.

AIBearisharXiv – CS AI · Apr 147/10
🧠

On the Robustness of Watermarking for Autoregressive Image Generation

Researchers demonstrate critical vulnerabilities in watermarking techniques designed for autoregressive image generators, showing that watermarks can be removed or forged with access to only a single watermarked image and no knowledge of model secrets. These findings undermine the reliability of watermarking as a defense against synthetic content in training datasets and enable attackers to manipulate authentic images to falsely appear as AI-generated content.

AIBullisharXiv – CS AI · Apr 147/10
🧠

Learning and Enforcing Context-Sensitive Control for LLMs

Researchers introduce a framework that automatically learns context-sensitive constraints from LLM interactions, eliminating the need for manual specification while ensuring perfect constraint adherence during generation. The method enables even 1B-parameter models to outperform larger models and state-of-the-art reasoning systems in constraint-compliant generation.

AIBullisharXiv – CS AI · Apr 147/10
🧠

MoEITS: A Green AI approach for simplifying MoE-LLMs

Researchers present MoEITS, a novel algorithm for simplifying Mixture-of-Experts large language models while maintaining performance and reducing computational costs. The method outperforms existing pruning techniques across multiple benchmark models including Mixtral 8×7B and DeepSeek-V2-Lite, addressing the energy and resource efficiency challenges of deploying advanced LLMs.

AIBearisharXiv – CS AI · Apr 147/10
🧠

Echoes of Automation: The Increasing Use of LLMs in Newsmaking

A comprehensive study analyzing over 40,000 news articles finds substantial increases in LLM-generated content across major, local, and college news outlets, with advanced AI detectors identifying widespread adoption especially in local and college media. The research reveals LLMs are primarily used for article introductions while conclusions remain manually written, producing more uniform writing styles with higher readability but lower formality that raises concerns about journalistic integrity.

← PrevPage 109 of 1144Next →
Filters
Sentiment
Importance
Sort
Stay Updated
Everything combined