y0news
🧠 AI

11,264 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.

AI · Bearish · Blockonomi · Apr 10 · 7/10

Shenzhen AI Firm Reveals $92M Purchase of Restricted Nvidia (NVDA) Servers

Chinese AI firm Sharetronic disclosed a $92 million purchase of restricted Nvidia H100/H200 servers, raising questions about export control enforcement. The revelation coincides with US charges against Super Micro Computer's co-founder for allegedly smuggling advanced chips to China.

🏢 Nvidia
AI · Bearish · Blockonomi · Apr 10 · 7/10

Adobe (ADBE) Stock Plunges to 52-Week Low Amid AI Disruption Fears

Adobe (ADBE) has fallen to a 52-week low around $230 as the company faces mounting pressure from AI competition and market concerns about disruption to its core business. Despite beating Q1 expectations, Citi downgraded its price target to $253, and the announcement of a CEO transition has compounded investor anxiety about the company's strategic direction.

AI · Bullish · Blockonomi · Apr 10 · 7/10

Micron (MU) Stock Soars 123% in Six Months: Why Wall Street Remains Optimistic

Micron's stock has surged 123% over six months driven by exceptional AI-related memory chip demand, with HBM (high-bandwidth memory) products sold out through 2026 and revenue climbing 196%. Despite these stellar fundamentals, the stock trades at a modest 5-6x forward price-to-earnings ratio, suggesting Wall Street sees significant upside remaining.

AI · Bearish · The Verge – AI · Apr 10 · 7/10

Fear and loathing at OpenAI

The New Yorker published an investigative piece examining Sam Altman's leadership at OpenAI, questioning his suitability to control transformative AI technology following his brief removal and reinstatement as CEO. The article explores the organizational instability and leadership concerns surrounding one of the world's most influential AI companies.

🏢 OpenAI
AI · Bearish · The Register – AI · Apr 10 · 7/10

Suits won't quit AI spending, even if they can't prove it's working

Enterprise executives continue increasing AI spending despite difficulty measuring concrete returns on investment, driven by competitive pressure and fear of falling behind. This trend reveals a disconnect between AI's promised transformative potential and demonstrable business outcomes, raising questions about sustainable spending patterns in the sector.

AI · Bearish · Blockonomi · Apr 10 · 7/10

ServiceNow (NOW) Stock Plunges Nearly 8% Amid Geopolitical Chaos and AI Disruption Concerns

ServiceNow stock declined 7.86% on Friday, driven by Middle East geopolitical tensions and competitive pressure from Anthropic's new AI agents platform. The decline extends ServiceNow's year-to-date losses to 38.3%, signaling investor concerns about both macroeconomic uncertainty and AI-driven market disruption in enterprise software.

🏢 Anthropic
AI · Bearish · Wired – AI · Apr 10 · 7/10

Meta’s New AI Asked for My Raw Health Data—and Gave Me Terrible Advice

Meta's Muse Spark AI model requests access to users' raw health data including lab results, raising significant privacy concerns while demonstrating poor medical judgment. The system exemplifies how large language models lack the expertise to provide reliable healthcare guidance despite their persuasive presentation.

AI · Bullish · Blockonomi · Apr 10 · 7/10

Taiwan Semiconductor (TSM) Stock Soars on 45% Revenue Surge Fueled by AI Boom

Taiwan Semiconductor Manufacturing Company (TSMC) reported a 45% year-over-year revenue increase in March to $13.07 billion, with Q1 revenue of NT$1.134 trillion exceeding analyst estimates. The surge is primarily driven by accelerating demand for AI chips, positioning TSMC as a critical beneficiary of the AI infrastructure boom.

AI · Bearish · Blockonomi · Apr 10 · 7/10

Why Did Federal Officials Urgently Summon Banking CEOs Over Anthropic’s Mythos AI?

U.S. Treasury and Federal Reserve officials convened urgent meetings with major banking CEOs regarding Anthropic's Mythos AI system, which possesses the capability to identify and exploit vulnerabilities in critical financial infrastructure. The high-level engagement signals government concern about AI-driven cybersecurity risks to the banking sector.

🏢 Anthropic
AI · Neutral · CoinTelegraph · Apr 10 · 7/10

Elon Musk’s xAI sues Colorado arguing its AI rules restrict speech

Elon Musk's xAI has filed a lawsuit against Colorado, arguing that the state's AI regulations violate free speech protections by forcing developers to align their models with state-mandated political perspectives. xAI contends that such restrictions would compromise Grok's ability to pursue truth-seeking functionality and operate without ideological constraints.

🏢 xAI🧠 Grok
AI · Bullish · arXiv – CS AI · Apr 10 · 7/10

AI-Driven Research for Databases

Researchers propose AI-Driven Research for Systems (ADRS), a framework using large language models to automate database optimization by generating and evaluating hundreds of candidate solutions. By co-evolving evaluators with solutions, the team demonstrates discovery of novel algorithms achieving up to 6.8x latency improvements over existing baselines in buffer management, query rewriting, and index selection tasks.
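The generate-and-evaluate loop at the core of ADRS can be sketched as follows. This is a toy illustration under assumptions, not the paper's implementation: `propose_candidate` stands in for an LLM call that emits a candidate buffer-management policy, and a cache miss rate stands in for measured latency.

```python
# Minimal generate-and-evaluate sketch in the ADRS style.
import random

def propose_candidate(rng):
    """Stand-in for LLM generation: a random eviction-scoring policy."""
    return {"recency_weight": rng.random(), "frequency_weight": rng.random()}

def evaluate(candidate, trace):
    """Toy evaluator: miss rate of the candidate policy on an access trace.

    Lower is better.  A real ADRS evaluator would replay actual
    workloads against generated code.
    """
    hits = 0
    cache, capacity = {}, 4
    for step, page in enumerate(trace):
        if page in cache:
            hits += 1
        elif len(cache) >= capacity:
            # Evict the page with the lowest combined recency/frequency score.
            victim = min(cache, key=lambda p: (
                candidate["recency_weight"] * cache[p]["last"] +
                candidate["frequency_weight"] * cache[p]["count"]))
            del cache[victim]
        entry = cache.setdefault(page, {"last": 0, "count": 0})
        entry["last"] = step
        entry["count"] += 1
    return 1.0 - hits / len(trace)

rng = random.Random(0)
trace = [rng.randrange(8) for _ in range(200)]
candidates = [propose_candidate(rng) for _ in range(100)]
best = min(candidates, key=lambda c: evaluate(c, trace))
```

The paper's co-evolution of evaluators with solutions is omitted; here the evaluator is fixed and only candidates vary.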

AI · Bullish · arXiv – CS AI · Apr 10 · 7/10

Efficient Quantization of Mixture-of-Experts with Theoretical Generalization Guarantees

Researchers propose an expert-wise mixed-precision quantization strategy for Mixture-of-Experts models that assigns bit-widths based on router gradient changes and neuron variance. The method achieves higher accuracy than existing approaches while reducing inference memory overhead on large-scale models like Switch Transformer and Mixtral with minimal computational overhead.
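The idea of expert-wise mixed precision can be sketched roughly as below. The paper's router-gradient criterion is not reproduced; this assumed simplification allocates the larger bit budget to experts with higher weight variance and applies plain uniform symmetric quantization.

```python
# Sketch: per-expert bit-width assignment plus uniform quantization.
import random
import statistics

def assign_bits(expert_weights, budgets=(4, 8)):
    """Give the low-bit budget to low-variance experts, high-bit to the rest."""
    variances = [statistics.pvariance(w) for w in expert_weights]
    median = statistics.median(variances)
    return [budgets[1] if v > median else budgets[0] for v in variances]

def quantize(weights, bits):
    """Uniform symmetric quantization to the given bit-width."""
    levels = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / levels or 1.0
    return [round(w / scale) * scale for w in weights]

rng = random.Random(0)
# Four toy experts: two low-variance, two high-variance.
experts = [[rng.gauss(0, sigma) for _ in range(64)]
           for sigma in (0.1, 0.1, 1.0, 1.0)]
bits = assign_bits(experts)
quantized = [quantize(w, b) for w, b in zip(experts, bits)]
```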

AI · Bearish · arXiv – CS AI · Apr 10 · 7/10

Invisible to Humans, Triggered by Agents: Stealthy Jailbreak Attacks on Mobile Vision-Language Agents

Researchers have discovered a new attack vulnerability in mobile vision-language agents where malicious prompts remain invisible to human users but are triggered during autonomous agent interactions. Using an optimization method called HG-IDA*, attackers can achieve 82.5% planning and 75.0% execution hijack rates on GPT-4o by exploiting the lack of touch signals during agent operations, exposing a critical security gap in deployed mobile AI systems.

🧠 GPT-4o
AI · Bullish · arXiv – CS AI · Apr 10 · 7/10

Scientific Knowledge-driven Decoding Constraints Improving the Reliability of LLMs

Researchers propose SciDC, a method that constrains large language model outputs using subject-specific scientific rules to reduce hallucinations and improve reliability. The approach demonstrates 12% average accuracy improvements across domain tasks including drug formulation, clinical diagnosis, and chemical synthesis planning.
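Rule-constrained decoding in the spirit of SciDC can be sketched as follows, under assumptions: a production system would compile its constraints from curated scientific knowledge, whereas here a single hand-written rule masks disallowed tokens before each greedy decoding step.

```python
# Sketch: mask tokens that violate a domain rule, then decode greedily.
def constrained_decode(steps, rules, context=()):
    """Pick the highest-scoring token allowed by every rule at each step."""
    out = list(context)
    for scores in steps:  # scores: {token: logit} for one decoding step
        allowed = {t: s for t, s in scores.items()
                   if all(rule(out, t) for rule in rules)}
        out.append(max(allowed, key=allowed.get))
    return out

# Toy rule for a synthesis plan: "heat" must not directly follow "add_acid".
def no_heat_after_acid(out, tok):
    return not (out and out[-1] == "add_acid" and tok == "heat")

steps = [{"add_acid": 0.9, "stir": 0.1},
         {"heat": 0.8, "stir": 0.2}]
plan = constrained_decode(steps, [no_heat_after_acid])
# plan == ["add_acid", "stir"]: "heat" was masked despite its higher score
```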

AI · Bullish · arXiv – CS AI · Apr 10 · 7/10

Distributed Interpretability and Control for Large Language Models

Researchers have developed a scalable system for interpreting and controlling large language models distributed across multiple GPUs, achieving up to 7x memory reduction and 41x throughput improvements. The method enables real-time behavioral steering of frontier LLMs like LLaMA and Qwen without fine-tuning, with results released as open-source tooling.
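The steering mechanism assumed here is the standard one from the interpretability literature: add a scaled "behavioral direction" vector to a layer's hidden state at inference time. The distributed-systems machinery that is the paper's actual contribution is omitted from this sketch.

```python
# Sketch: activation steering by vector addition on a hidden state.
def steer(hidden_state, direction, strength=1.0):
    """Shift a hidden-state vector along a behavioral direction."""
    return [h + strength * d for h, d in zip(hidden_state, direction)]

def project(hidden_state, direction):
    """Dot product: how strongly the state expresses the direction."""
    return sum(h * d for h, d in zip(hidden_state, direction))

refusal_dir = [0.0, 1.0, 0.0]   # hypothetical learned direction
h = [0.3, -0.2, 0.5]            # hypothetical hidden state
steered = steer(h, refusal_dir, strength=2.0)
# The steered state expresses the direction more strongly than before.
```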

AI · Bullish · arXiv – CS AI · Apr 10 · 7/10

Inference-Time Code Selection via Symbolic Equivalence Partitioning

Researchers propose Symbolic Equivalence Partitioning, a novel inference-time selection method for code generation that uses symbolic execution and SMT constraints to identify correct solutions without expensive external verifiers. The approach improves accuracy on HumanEval+ by 10.3% and on LiveCodeBench by 17.1% at N=10 without requiring additional LLM inference.
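The selection principle can be sketched as below, with one loud simplification: the paper partitions candidates by symbolic execution and SMT constraints, while this toy version partitions by concrete input/output behavior and returns a representative of the largest partition (majority semantics).

```python
# Sketch: partition candidate programs by behavior, keep the majority class.
from collections import defaultdict

def select_by_partition(candidates, probe_inputs):
    partitions = defaultdict(list)
    for fn in candidates:
        signature = tuple(fn(x) for x in probe_inputs)
        partitions[signature].append(fn)
    largest = max(partitions.values(), key=len)
    return largest[0]

# Three LLM-style candidates for abs(); two agree, one is buggy.
candidates = [lambda x: x if x >= 0 else -x,
              lambda x: (x * x) ** 0.5,
              lambda x: x]                    # buggy: wrong for negatives
best = select_by_partition(candidates, [-2, 0, 3])
```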

AI · Bullish · arXiv – CS AI · Apr 10 · 7/10

Weakly Supervised Distillation of Hallucination Signals into Transformer Representations

Researchers developed a weak supervision framework to detect hallucinations in large language models by distilling grounding signals into transformer representations during training. Using substring matching, sentence embeddings, and LLM judges, they created a 15,000-sample dataset and trained five probing classifiers that achieve hallucination detection from internal activations alone at inference time, eliminating the need for external verification systems.
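A probing classifier of the kind described can be sketched as follows. Everything here is synthetic and assumed: a tiny perceptron stands in for the paper's probes, and the fake "activations" encode the hallucination label in one noisy dimension.

```python
# Sketch: train a perceptron probe on (synthetic) internal activations.
import random

def train_probe(samples, labels, epochs=20, lr=0.1):
    """Perceptron probe: a weight per activation dimension plus a bias."""
    dim = len(samples[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def probe(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

rng = random.Random(0)
# Label 1 = "hallucinated": the second activation dimension runs hot.
labels = [rng.randrange(2) for _ in range(200)]
samples = [[rng.gauss(0, 1), rng.gauss(3 if y else -3, 1)] for y in labels]
w, b = train_probe(samples, labels)
accuracy = sum(probe(w, b, x) == y for x, y in zip(samples, labels)) / len(labels)
```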

AI · Bullish · arXiv – CS AI · Apr 10 · 7/10

Can VLMs Unlock Semantic Anomaly Detection? A Framework for Structured Reasoning

Researchers introduce SAVANT, a model-agnostic framework that improves Vision Language Models' ability to detect semantic anomalies in autonomous driving scenarios by 18.5% through structured reasoning instead of ad hoc prompting. The team used this approach to label 10,000 real-world images and fine-tuned an open-source 7B model achieving 90.8% recall, demonstrating practical deployment feasibility without proprietary model dependency.

AI · Bullish · arXiv – CS AI · Apr 10 · 7/10

MoBiE: Efficient Inference of Mixture of Binary Experts under Post-Training Quantization

Researchers introduce MoBiE, a novel binarization framework designed specifically for Mixture-of-Experts large language models that achieves significant efficiency gains through weight compression while maintaining model performance. The method addresses unique challenges in quantizing MoE architectures and demonstrates over 2× inference speedup with substantial perplexity reductions on benchmark models.
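Weight binarization itself can be sketched with the classic BinaryConnect-style recipe, which this example assumes: keep only the signs plus a per-tensor mean-magnitude scale. MoBiE's MoE-specific machinery is not reproduced here.

```python
# Sketch: sign binarization with a mean-magnitude scale factor.
def binarize(weights):
    """Return (scale, signs) so that scale * sign approximates the weights."""
    scale = sum(abs(w) for w in weights) / len(weights)
    signs = [1 if w >= 0 else -1 for w in weights]
    return scale, signs

def dequantize(scale, signs):
    return [scale * s for s in signs]

w = [0.5, -0.3, 0.8, -0.6]
scale, signs = binarize(w)
approx = dequantize(scale, signs)
```

Each weight is stored as one bit plus a shared scale, which is where the memory savings behind the reported speedup come from.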

Page 20 of 451