y0news
🧠 AI

11,450 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.

AI · Bullish · arXiv – CS AI · Mar 27 · 7/10

Cross-Model Disagreement as a Label-Free Correctness Signal

Researchers introduce cross-model disagreement as a training-free method to detect when AI language models make confident errors without requiring ground truth labels. The approach uses Cross-Model Perplexity and Cross-Model Entropy to measure how surprised a second verifier model is when reading another model's answers, significantly outperforming existing uncertainty-based methods across multiple benchmarks.
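The paper's exact formulation isn't given in the summary, but the standard definitions of perplexity and entropy suggest a sketch like the following. Function names and the toy scores are illustrative assumptions, not taken from the paper: the idea is simply that a verifier model that assigns low probability to another model's answer is "surprised" by it.

```python
import math

def cross_model_perplexity(verifier_logprobs):
    """Perplexity of a verifier model over another model's answer tokens.

    `verifier_logprobs` holds the verifier's log-probability (natural log)
    for each token of the generator's answer. High perplexity means the
    verifier is surprised by the answer, flagging a likely confident error.
    """
    avg_nll = -sum(verifier_logprobs) / len(verifier_logprobs)
    return math.exp(avg_nll)

def cross_model_entropy(verifier_dists):
    """Mean entropy (in nats) of the verifier's next-token distributions
    at each position of the generator's answer."""
    entropies = [-sum(p * math.log(p) for p in dist if p > 0)
                 for dist in verifier_dists]
    return sum(entropies) / len(entropies)

# Toy illustration: the verifier finds answer A far more plausible than B,
# so B would be flagged as a possible confident error.
answer_a = [-0.1, -0.2, -0.1]   # verifier log-probs for a fluent answer
answer_b = [-2.0, -3.5, -2.5]   # verifier log-probs for a suspect answer
assert cross_model_perplexity(answer_a) < cross_model_perplexity(answer_b)
```

The appeal of the approach, as the summary describes it, is that neither quantity needs ground-truth labels: both are computed purely from a second model's probabilities over the first model's output.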

AI · Bearish · arXiv – CS AI · Mar 27 · 7/10

PIDP-Attack: Combining Prompt Injection with Database Poisoning Attacks on Retrieval-Augmented Generation Systems

Researchers have developed PIDP-Attack, a new cybersecurity threat that combines prompt injection with database poisoning to manipulate AI responses in Retrieval-Augmented Generation (RAG) systems. The attack method demonstrated 4-16% higher success rates than existing techniques across multiple benchmark datasets and eight different large language models.

AI · Bullish · arXiv – CS AI · Mar 27 · 7/10

Training the Knowledge Base through Evidence Distillation and Write-Back Enrichment

Researchers introduce WriteBack-RAG, a framework that treats knowledge bases in retrieval-augmented generation systems as trainable components rather than static databases. The method distills relevant information from documents into compact knowledge units, improving RAG performance across multiple benchmarks by an average of 2.14%.

AI · Bullish · arXiv – CS AI · Mar 27 · 7/10

Sketch2Simulation: Automating Flowsheet Generation via Multi Agent Large Language Models

Researchers developed an end-to-end multi-agent AI system that automatically converts hand-drawn process engineering diagrams into executable simulation models for Aspen HYSYS software. The framework achieved high accuracy with connection consistency above 0.93 and stream consistency above 0.96 across four chemical engineering case studies of varying complexity.

AI · Bullish · arXiv – CS AI · Mar 27 · 7/10

The Future of AI-Driven Software Engineering

A paradigm shift is occurring in software engineering as AI systems like LLMs increasingly boost development productivity. The paper presents a vision for growing symbiotic partnerships between human developers and AI, identifying key research challenges the software engineering community must address.

AI · Neutral · arXiv – CS AI · Mar 27 · 7/10

ARC-AGI-3: A New Challenge for Frontier Agentic Intelligence

Researchers introduce ARC-AGI-3, a new benchmark for testing agentic AI systems that focuses on fluid adaptive intelligence without relying on language or external knowledge. While humans can solve 100% of the benchmark's abstract reasoning tasks, current frontier AI systems score below 1% as of March 2026.

AI · Neutral · arXiv – CS AI · Mar 27 · 7/10

When Is Collective Intelligence a Lottery? Multi-Agent Scaling Laws for Memetic Drift in LLMs

Researchers introduce the Quantized Simplex Gossip (QSG) model to explain how multi-agent LLM systems reach consensus through 'memetic drift', where arbitrary choices compound into collective agreement. The study reveals scaling laws for when collective intelligence operates like a lottery versus amplifying weak biases, providing a framework for understanding AI system behavior in consequential decision-making.

AI · Bullish · arXiv – CS AI · Mar 27 · 7/10

SWAA: Sliding Window Attention Adaptation for Efficient and Quality Preserving Long Context Processing

Researchers propose SWAA (Sliding Window Attention Adaptation), a toolkit that enables efficient long-context processing in large language models by adapting full attention models to sliding window attention without expensive retraining. The solution achieves 30-100% speedups for long context inference while maintaining acceptable performance quality through four core strategies that address training-inference mismatches.
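SWAA's four adaptation strategies aren't spelled out in the summary; the sketch below only illustrates the attention pattern being adapted to. A causal sliding-window mask restricts each query token to its most recent `window` keys, which is where the speedup comes from (cost drops from O(n²) to O(n·window)):

```python
def sliding_window_mask(seq_len, window):
    """Causal sliding-window attention mask: position i may attend to
    positions j with i - window < j <= i. True = attend."""
    return [[0 <= i - j < window for j in range(seq_len)]
            for i in range(seq_len)]

# Each query attends to at most `window` keys, so for long sequences the
# per-query cost is constant instead of growing with sequence length.
mask = sliding_window_mask(seq_len=6, window=3)
attended = [sum(row) for row in mask]  # keys visible to each query
```

Adapting a model trained with full attention to run under such a mask at inference time is exactly the training-inference mismatch the paper's strategies reportedly address.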

AI · Bearish · arXiv – CS AI · Mar 27 · 7/10

Impact of AI Search Summaries on Website Traffic: Evidence from Google AI Overviews and Wikipedia

A causal analysis of 161,382 matched articles found that Google's AI Overviews feature reduces Wikipedia traffic by approximately 15%. The impact varies by content type, with Culture articles experiencing larger traffic declines than STEM topics, suggesting AI summaries substitute for clicks when brief answers satisfy user queries.

🏢 Google
AI · Bearish · arXiv – CS AI · Mar 27 · 7/10

Epistemic Bias Injection: Biasing LLMs via Selective Context Retrieval

Researchers have identified a new attack vector called Epistemic Bias Injection (EBI) that manipulates AI language models by injecting factually correct but biased content into retrieval-augmented generation databases. The attack steers model outputs toward specific viewpoints while evading traditional detection methods, though a new defense mechanism called BiasDef shows promise in mitigating these threats.

AI · Bullish · arXiv – CS AI · Mar 27 · 7/10

Ming-Flash-Omni: A Sparse, Unified Architecture for Multimodal Perception and Generation

Ming-Flash-Omni is a new 100-billion-parameter multimodal AI model with a Mixture-of-Experts architecture that activates only 6.1 billion parameters per token. The model demonstrates unified capabilities across vision, speech, and language tasks, achieving performance comparable to Gemini 2.5 Pro on vision-language benchmarks.
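Ming-Flash-Omni's router details aren't in the summary, but the 100B-total / 6.1B-active figures are characteristic of top-k Mixture-of-Experts routing, where each token runs through only a few experts. A generic sketch (the function name, scores, and k are illustrative assumptions):

```python
def active_experts(router_scores, k=2):
    """Pick the top-k experts for one token. Only these experts run,
    so roughly k/num_experts of the expert parameters are active."""
    ranked = sorted(range(len(router_scores)),
                    key=lambda i: router_scores[i], reverse=True)
    return sorted(ranked[:k])

# Toy routing: 8 experts, 2 active per token -> 1/4 of expert params used.
scores = [0.1, 0.9, 0.3, 0.05, 0.7, 0.2, 0.4, 0.15]
print(active_experts(scores, k=2))  # -> [1, 4]
```

This is how a sparse model keeps inference cost closer to a ~6B dense model while retaining 100B parameters of total capacity.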

🧠 Gemini
AI · Bullish · arXiv – CS AI · Mar 27 · 7/10

DRIFT: Dynamic Rule-Based Defense with Injection Isolation for Securing LLM Agents

Researchers introduce DRIFT, a new security framework designed to protect AI agents from prompt injection attacks through dynamic rule enforcement and memory isolation. The system uses a three-component approach with a Secure Planner, Dynamic Validator, and Injection Isolator to maintain security while preserving functionality across diverse AI models.

AI · Neutral · arXiv – CS AI · Mar 27 · 7/10

Beyond Content Safety: Real-Time Monitoring for Reasoning Vulnerabilities in Large Language Models

Researchers have identified a new category of AI safety called 'reasoning safety' that focuses on protecting the logical consistency and integrity of LLM reasoning processes. They developed a real-time monitoring system that can detect unsafe reasoning behaviors with over 84% accuracy, addressing vulnerabilities beyond traditional content safety measures.

AI · Neutral · arXiv – CS AI · Mar 27 · 7/10

DiffuGuard: How Intrinsic Safety is Lost and Found in Diffusion Large Language Models

Researchers identified critical security vulnerabilities in Diffusion Large Language Models (dLLMs) that differ from traditional autoregressive LLMs, stemming from their iterative generation process. They developed DiffuGuard, a training-free defense framework that reduces jailbreak attack success rates from 47.9% to 14.7% while maintaining model performance.

AI · Bearish · arXiv – CS AI · Mar 27 · 7/10

LLMs know their vulnerabilities: Uncover Safety Gaps through Natural Distribution Shifts

Researchers have identified a new vulnerability in large language models called 'natural distribution shifts' where seemingly benign prompts can bypass safety mechanisms to reveal harmful content. They developed ActorBreaker, a novel attack method that uses multi-turn prompts to gradually expose unsafe content, and proposed expanding safety training to address this vulnerability.

AI · Neutral · arXiv – CS AI · Mar 27 · 7/10

Sparse Visual Thought Circuits in Vision-Language Models

Research reveals that sparse autoencoder (SAE) features in vision-language models often fail to compose modularly for reasoning tasks. The study finds that combining task-selective feature sets frequently causes output drift and accuracy degradation, challenging assumptions used in AI model steering methods.

AI · Bearish · arXiv – CS AI · Mar 27 · 7/10

The LLM Bottleneck: Why Open-Source Vision LLMs Struggle with Hierarchical Visual Recognition

Research reveals that open-source large language models (LLMs) lack hierarchical knowledge of visual taxonomies, creating a bottleneck for vision LLMs in hierarchical visual recognition tasks. The study used one million visual question answering tasks across six taxonomies to demonstrate this limitation, finding that even fine-tuning cannot overcome the underlying LLM knowledge gaps.

AI · Bullish · arXiv – CS AI · Mar 27 · 7/10

LLM4AD: Large Language Models for Autonomous Driving -- Concept, Review, Benchmark, Experiments, and Future Trends

Researchers have published a comprehensive review of Large Language Models for Autonomous Driving (LLM4AD), introducing new benchmarks and conducting real-world experiments on autonomous vehicle platforms. The paper explores how LLMs can enhance perception, decision-making, and motion control in self-driving cars, while identifying key challenges including latency, security, and safety concerns.

AI · Bullish · Fortune Crypto · Mar 27 · 7/10

Exclusive: Anthropic acknowledges testing new AI model representing ‘step change’ in capabilities, after accidental data leak reveals its existence

Anthropic accidentally revealed through a publicly accessible draft blog post that it is testing a new AI model called 'Mythos' which represents a significant advancement in capabilities beyond their current offerings. The company has acknowledged the testing after the accidental data leak exposed the previously undisclosed model's existence.

🏢 Anthropic
AI · Bearish · Fortune Crypto · Mar 27 · 7/10

Exclusive: Anthropic left details of an unreleased model, exclusive CEO retreat, sitting in an unsecured data trove in a significant security lapse

Anthropic experienced a significant security breach where sensitive information including details of unreleased AI models, unpublished blog drafts, and exclusive CEO retreat information was left accessible through an unsecured content management system. This represents a major data security lapse for one of the leading AI companies.

🏢 Anthropic
AI · Bullish · TechCrunch – AI · Mar 27 · 7/10

Anthropic wins injunction against Trump administration over Defense Department saga

A federal judge has ruled in favor of AI company Anthropic, ordering the Trump administration to rescind recent restrictions placed on the company related to Defense Department dealings. The injunction represents a legal victory for Anthropic against government regulatory action.

🏢 Anthropic
AI · Bullish · The Verge – AI · Mar 27 · 7/10

Judge sides with Anthropic to temporarily block the Pentagon’s ban

A federal judge granted Anthropic a preliminary injunction against the Pentagon's blacklisting, which had designated the company a supply chain risk for acting in a 'hostile manner through the press.' The injunction temporarily blocks the ban while the lawsuit proceeds, with the judge citing potential First Amendment violations.

🏢 Anthropic
AI · Bullish · MIT News – AI · Mar 26 · 7/10

MIT engineers design proteins by their motion, not just their shape

MIT engineers have developed an AI model that generates novel proteins based on their vibrational motion and dynamics rather than just static structure. This breakthrough approach opens new possibilities for creating dynamic biomaterials and adaptive therapeutics that leverage protein movement.

AI · Bearish · Decrypt · Mar 26 · 7/10

Is AGI Here? Not Even Close, New AI Benchmark Suggests

A new AI benchmark called ARC-AGI-3 was released the same week Jensen Huang claimed AGI was achieved, showing dramatically poor performance from leading AI models. While humans scored 100% on the benchmark, advanced models like Gemini and GPT scored less than 0.4%, suggesting artificial general intelligence remains far from reality.

🧠 GPT-5🧠 Gemini
Page 31 of 458