y0news
🧠 AI

11,680 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.
AI · Bearish · arXiv – CS AI · Mar 4 · 7/10 · 3

ZeroDayBench: Evaluating LLM Agents on Unseen Zero-Day Vulnerabilities for Cyberdefense

Researchers introduced ZeroDayBench, a new benchmark testing LLM agents' ability to find and patch 22 critical vulnerabilities in open-source code. Testing on frontier models GPT-5.2, Claude Sonnet 4.5, and Grok 4.1 revealed that current LLMs cannot yet autonomously solve cybersecurity tasks, highlighting limitations in AI-powered code security.

AI · Bearish · arXiv – CS AI · Mar 4 · 7/10 · 2

Silent Sabotage During Fine-Tuning: Few-Shot Rationale Poisoning of Compact Medical LLMs

Researchers uncovered a stealthy poisoning attack that targets medical AI language models during fine-tuning, degrading performance on specific medical topics while evading detection. The attack injects poisoned rationales into the training data and proves more effective than traditional backdoor attacks or catastrophic-forgetting-based approaches.

AI · Bullish · arXiv – CS AI · Mar 4 · 7/10 · 3

Social-JEPA: Emergent Geometric Isomorphism

Researchers developed Social-JEPA, showing that separate AI agents learning from different viewpoints of the same environment develop internal representations that are mathematically aligned through approximate linear isometry. This enables models trained on one agent to work on another without retraining, suggesting a path toward interoperable decentralized AI vision systems.
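If two agents' representations really differ only by an approximate linear isometry, the map between them can be recovered with orthogonal Procrustes. A minimal sketch with synthetic embeddings standing in for Social-JEPA outputs (every array below is a placeholder, not the paper's data):

```python
# Fit the best orthogonal map between two agents' embeddings of the same
# scenes; a small residual indicates an approximate linear isometry.
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(0)
n, d = 500, 64

Z_a = rng.normal(size=(n, d))                  # agent A's representations

# Simulate agent B as a rotated, slightly noisy view of the same scenes.
Q_true, _ = np.linalg.qr(rng.normal(size=(d, d)))
Z_b = Z_a @ Q_true + 0.01 * rng.normal(size=(n, d))

# R minimizes ||Z_a @ R - Z_b||_F over orthogonal matrices.
R, _ = orthogonal_procrustes(Z_a, Z_b)

residual = np.linalg.norm(Z_a @ R - Z_b) / np.linalg.norm(Z_b)
print(f"relative alignment residual: {residual:.4f}")  # near 0 => isometric
```

A near-zero residual is what would allow a head trained on agent A to be reused on agent B after applying R, which is the interoperability claim.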

AI · Bullish · arXiv – CS AI · Mar 4 · 6/10 · 4

Universal Conceptual Structure in Neural Translation: Probing NLLB-200's Multilingual Geometry

Researchers analyzed Meta's NLLB-200 neural machine translation model across 135 languages, finding that it has implicitly learned universal conceptual structures and language genealogical relationships. The study reveals the model creates language-neutral conceptual representations similar to how multilingual brains organize information, with semantic relationships preserved across diverse languages.

AI · Bullish · arXiv – CS AI · Mar 4 · 6/10 · 4

OCR or Not? Rethinking Document Information Extraction in the MLLMs Era with Real-World Large-Scale Datasets

A large-scale benchmarking study finds that powerful Multimodal Large Language Models (MLLMs) can extract information from business documents using image-only input, potentially eliminating the need for traditional OCR preprocessing. The research demonstrates that well-designed prompts and instructions can further enhance MLLM performance in document processing tasks.
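Skipping OCR concretely means sending the raw page image plus a schema-constrained instruction straight to the model. A hedged sketch of such a request using the common OpenAI-style multimodal message format; the model name and schema fields are illustrative, not taken from the paper:

```python
# Build an image-only extraction request: no OCR pass, just the page image
# and a prompt pinning the output to a fixed JSON schema.
import base64
import json

def build_request(image_path: str) -> dict:
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    schema = {"invoice_number": "string", "issue_date": "YYYY-MM-DD",
              "total_amount": "number", "currency": "ISO 4217 code"}
    prompt = (
        "Extract the following fields from the document image. "
        f"Return JSON matching this schema exactly: {json.dumps(schema)}. "
        "Use null for fields that are not present; do not guess."
    )
    return {
        "model": "any-multimodal-llm",   # placeholder model name
        "messages": [{"role": "user", "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ]}],
    }
```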

AI · Neutral · arXiv – CS AI · Mar 4 · 6/10 · 3

Engineering Reasoning and Instruction (ERI) Benchmark: A Large Taxonomy-driven Dataset for Foundation Models and Agents

Researchers released the ERI benchmark, a comprehensive dataset spanning 9 engineering fields and 55 subdomains to evaluate large language models' engineering capabilities. The benchmark tested 7 LLMs across 57,750 records, revealing a clear three-tier performance structure with frontier models like GPT-5 and Claude Sonnet 4 significantly outperforming mid-tier and smaller models.

AI · Neutral · arXiv – CS AI · Mar 4 · 7/10 · 5

Federated Inference: Toward Privacy-Preserving Collaborative and Incentivized Model Serving

Researchers introduce Federated Inference (FI), a new collaborative paradigm where independently trained AI models can work together at inference time without sharing data or model parameters. The study identifies key requirements including privacy preservation and performance gains, while highlighting system-level challenges that differ from traditional federated learning approaches.
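The core mechanic is easy to sketch: each party exposes only an inference endpoint that returns class probabilities for a query, and a coordinator fuses them without ever seeing data or parameters. The weighted mixture below is one illustrative aggregation rule, not the paper's protocol:

```python
# Inference-time collaboration: models stay private; only output
# probabilities cross the boundary.
import numpy as np

def federated_predict(queries, parties, weights=None):
    """parties: callables mapping a query batch to (n, n_classes) probs."""
    probs = np.stack([p(queries) for p in parties])       # (k, n, c)
    w = np.ones(len(parties)) if weights is None else np.asarray(weights)
    w = w / w.sum()
    fused = np.einsum("k,knc->nc", w, probs)              # weighted mixture
    return fused.argmax(axis=1), fused

# Toy parties standing in for independently trained, privately held models.
rng = np.random.default_rng(1)
party_a = lambda q: rng.dirichlet(np.ones(3), size=len(q))
party_b = lambda q: rng.dirichlet(np.ones(3), size=len(q))
labels, fused = federated_predict(np.zeros((4, 8)), [party_a, party_b])
print(labels)
```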

AI · Bullish · arXiv – CS AI · Mar 4 · 6/10 · 3

Concept Heterogeneity-aware Representation Steering

Researchers introduce CHaRS (Concept Heterogeneity-aware Representation Steering), a new method for controlling large language model behavior that uses optimal transport theory to create context-dependent steering rather than global directions. The approach models representations as Gaussian mixture models and derives input-dependent steering maps, showing improved behavioral control over existing methods.
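A simplified sketch of the mechanism: fit a Gaussian mixture over hidden states, keep one steering vector per component, and blend them by each input's posterior responsibilities, so the applied shift depends on context. The paper derives its maps via optimal transport; the responsibility-weighted blend below is a stand-in for that derivation, and all arrays are synthetic:

```python
# Context-dependent steering from a GMM over hidden states.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
H = rng.normal(size=(2000, 32))          # stand-in hidden states

gmm = GaussianMixture(n_components=4, random_state=0).fit(H)

# One steering direction per concept mode (placeholders; in practice these
# would be estimated, e.g., from contrastive pairs of model runs).
steer = rng.normal(size=(4, 32))

def steer_hidden(h, alpha=1.0):
    resp = gmm.predict_proba(h)          # (n, 4) input-dependent weights
    return h + alpha * resp @ steer      # blended, context-dependent shift

print(steer_hidden(H[:3]).shape)
```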

AI · Neutral · arXiv – CS AI · Mar 4 · 6/10 · 4

CUDABench: Benchmarking LLMs for Text-to-CUDA Generation

Researchers introduce CUDABench, a comprehensive benchmark for evaluating Large Language Models' ability to generate CUDA code from text descriptions. The benchmark reveals significant challenges: generated kernels often compile successfully yet fail functional correctness tests, reflecting gaps in domain-specific knowledge and poor GPU hardware utilization.
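That compile-versus-correctness split is easy to make concrete: a generated kernel can pass nvcc yet still compute the wrong answer. A sketch of the two checks, where the paths and reference function are illustrative rather than CUDABench's actual harness:

```python
# Two-stage evaluation: does it compile, and does it compute the right thing?
import os
import subprocess
import tempfile

import numpy as np

def compiles(cuda_source: str) -> bool:
    with tempfile.TemporaryDirectory() as d:
        src = os.path.join(d, "kernel.cu")
        with open(src, "w") as f:
            f.write(cuda_source)
        r = subprocess.run(["nvcc", "-c", src, "-o", os.path.join(d, "k.o")],
                           capture_output=True)
        return r.returncode == 0

def functionally_correct(run_kernel, reference, inputs, tol=1e-5) -> bool:
    return bool(np.allclose(run_kernel(inputs), reference(inputs), atol=tol))
```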

AI · Bullish · arXiv – CS AI · Mar 4 · 6/10 · 3

Reducing Belief Deviation in Reinforcement Learning for Active Reasoning

Researchers introduce T³, a new method that improves large language model (LLM) agents' reasoning by tracking and correcting 'belief deviation', the drift that occurs when an agent loses an accurate picture of the problem state. The technique achieved up to 30-point performance gains and a 34% reduction in token cost across challenging tasks.
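A simplified monitor captures the flavor of the idea: keep a checkpoint, measure how far the agent's belief over problem states has drifted from a reference belief, and roll back when the KL divergence exceeds a bound. The real T³ trains this behavior with reinforcement learning; everything below, including the drifting toy belief, is an illustrative stand-in:

```python
# Track belief deviation step by step and truncate drifted branches.
import numpy as np
from scipy.stats import entropy   # entropy(p, q) = KL(p || q)

def run_with_belief_tracking(step_fn, belief_fn, ref_belief_fn,
                             state, max_steps=20, kl_max=0.5):
    checkpoint = state
    for _ in range(max_steps):
        state = step_fn(state)                    # one reasoning step
        dev = entropy(belief_fn(state), ref_belief_fn(state))
        if dev > kl_max:
            state = checkpoint                    # roll back drifted branch
        else:
            checkpoint = state                    # belief still accurate
    return state

# Toy usage: the agent's belief drifts away from a flat reference over time.
softmax = lambda v: np.exp(v) / np.exp(v).sum()
print(run_with_belief_tracking(
    step_fn=lambda s: s + 1,
    belief_fn=lambda s: softmax(np.array([1.0 + 0.3 * s, 1.0])),
    ref_belief_fn=lambda s: softmax(np.array([1.0, 1.0])),
    state=0))    # prints the last state whose belief stayed within the bound
```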

AI · Bullish · arXiv – CS AI · Mar 4 · 6/10 · 3

PRISM: Exploring Heterogeneous Pretrained EEG Foundation Model Transfer to Clinical Differential Diagnosis

Researchers introduce PRISM, an EEG foundation model demonstrating that diverse pretraining data yields better clinical performance than narrow-source datasets. The study shows that geographically diverse EEG data outperforms larger but homogeneous datasets on medical diagnosis tasks, including 12.3% higher accuracy in distinguishing epilepsy from similar conditions.

AI · Bullish · arXiv – CS AI · Mar 4 · 6/10 · 3

MEBM-Speech: Multi-scale Enhanced BrainMagic for Robust MEG Speech Detection

Researchers propose MEBM-Speech, a neural decoder that detects speech activity from brain signals using magnetoencephalography (MEG). The system achieved 89.3% F1 score on benchmark tests and could advance brain-computer interfaces for cognitive neuroscience and clinical applications.

AI · Bullish · arXiv – CS AI · Mar 4 · 7/10 · 2

Physics-Informed Neural Networks with Architectural Physics Embedding for Large-Scale Wave Field Reconstruction

Researchers developed Physics-Embedded PINNs (PE-PINN), which achieve 10x faster convergence than standard physics-informed neural networks and orders-of-magnitude memory reductions compared to traditional methods for large-scale wave field reconstruction. The approach enables high-fidelity wave-field modeling for wireless communications, sensing, and room acoustics applications.
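The "physics in the architecture" principle can be sketched in a few lines: represent the field in a basis whose every element already satisfies the governing equation, so only sparse measurements need fitting and no PDE-residual loss is required. Below, plane waves of fixed wavenumber satisfy the 2-D Helmholtz equation by construction; PE-PINN's actual embedding is more general than this linear toy:

```python
# Fit a wave field from sparse samples using a basis that satisfies the
# Helmholtz equation exactly (plane waves with |k| = k0).
import numpy as np

k0 = 2 * np.pi / 0.1                        # wavenumber for a 0.1 m wavelength
angles = np.linspace(0, 2 * np.pi, 64, endpoint=False)
K = k0 * np.stack([np.cos(angles), np.sin(angles)], axis=1)   # (64, 2)

def basis(X):                               # X: (n, 2) points -> (n, 64)
    return np.exp(1j * X @ K.T)

rng = np.random.default_rng(0)
X_obs = rng.uniform(0, 1, size=(200, 2))    # sparse sensor positions
a_true = rng.normal(size=64) + 1j * rng.normal(size=64)
u_obs = basis(X_obs) @ a_true               # synthetic measurements

a_fit, *_ = np.linalg.lstsq(basis(X_obs), u_obs, rcond=None)
X_grid = np.stack(np.meshgrid(np.linspace(0, 1, 50),
                              np.linspace(0, 1, 50)), -1).reshape(-1, 2)
u_rec = basis(X_grid) @ a_fit               # full-field reconstruction
print(u_rec.shape)
```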

AI · Bullish · arXiv – CS AI · Mar 4 · 6/10 · 4

Talking with Verifiers: Automatic Specification Generation for Neural Network Verification

Researchers have developed a framework that allows neural network verification tools to accept natural language specifications instead of low-level technical constraints. The system automatically translates human-readable requirements into formal verification queries, significantly expanding the practical applicability of neural network verification across diverse domains.
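The translation target is the interesting part: a human-readable requirement lowered to the box-bounded query shape that verifiers consume, in the spirit of VNN-LIB. A sketch with the natural-language parsing step (an LLM call in such frameworks) stubbed out; all names here are hypothetical:

```python
# Lower a plain-English robustness requirement into a formal query object.
from dataclasses import dataclass

@dataclass
class RobustnessQuery:
    input_lo: list[float]     # per-dimension lower bounds
    input_hi: list[float]     # per-dimension upper bounds
    expected_class: int       # argmax of the network output must equal this

def lower_spec(nl_spec: str, x: list[float], eps: float, label: int):
    """E.g. 'the prediction must not change under perturbations up to eps'."""
    return RobustnessQuery(
        input_lo=[v - eps for v in x],
        input_hi=[v + eps for v in x],
        expected_class=label,
    )

q = lower_spec("prediction stable under small noise", [0.2, 0.7, 0.1],
               eps=0.03, label=1)
print(q)
```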

AI · Neutral · arXiv – CS AI · Mar 4 · 7/10 · 3

Structured vs. Unstructured Pruning: An Exponential Gap

Research reveals an exponential gap between structured and unstructured neural network pruning methods. While unstructured weight pruning can approximate target functions with O(d log(1/ε)) neurons, structured neuron pruning requires Ω(d/ε) neurons, demonstrating fundamental limitations of structured approaches.
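The two regimes are mechanically simple to contrast, even though the toy below only illustrates the operations and is no substitute for the paper's separation proof: unstructured pruning zeroes individual weights anywhere, while structured pruning removes whole neurons at the same parameter budget:

```python
# Compare output drift under weight-level vs. neuron-level pruning
# at an equal parameter budget (a toy proxy, not the theorem).
import numpy as np

rng = np.random.default_rng(0)
d, width = 16, 256
W1, W2 = rng.normal(size=(width, d)), rng.normal(size=(1, width))
X = rng.normal(size=(1000, d))
relu = lambda z: np.maximum(z, 0.0)
f = lambda A, B: relu(X @ A.T) @ B.T

budget = W1.size // 4                       # keep 25% of first-layer weights

# Unstructured: keep the largest-magnitude individual weights anywhere.
flat = np.abs(W1).ravel()
mask_u = (np.abs(W1) >= np.sort(flat)[-budget])
err_u = np.abs(f(W1 * mask_u, W2) - f(W1, W2)).mean()

# Structured: keep whole neurons (rows) with the largest norms, same budget.
rows = np.argsort(np.linalg.norm(W1, axis=1))[-(budget // d):]
mask_s = np.zeros_like(W1)
mask_s[rows] = 1.0
err_s = np.abs(f(W1 * mask_s, W2) - f(W1, W2)).mean()

print(f"unstructured err {err_u:.3f} vs structured err {err_s:.3f}")
```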

AI · Neutral · arXiv – CS AI · Mar 4 · 7/10 · 2

LLM Probability Concentration: How Alignment Shrinks the Generative Horizon

Researchers introduce the Branching Factor (BF) metric to measure how alignment tuning reduces output diversity in large language models by concentrating probability distributions. The study reveals that aligned models generate 2-5x less diverse outputs and become more predictable during generation, explaining why alignment reduces sensitivity to decoding strategies and enables more stable Chain-of-Thought reasoning.
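One natural formalization of such a metric is the exponential of the mean next-token entropy, i.e., the effective number of choices per step; treating that as the paper's exact definition is an assumption. A toy comparison on synthetic distributions:

```python
# Branching factor as exp(mean next-token entropy): fewer effective
# choices per step means a more concentrated, predictable model.
import numpy as np

def branching_factor(step_probs):
    ents = [-(p * np.log(p + 1e-12)).sum() for p in step_probs]
    return float(np.exp(np.mean(ents)))

rng = np.random.default_rng(0)
diffuse = [rng.dirichlet(np.ones(50) * 5.0) for _ in range(100)]  # base-like
peaked = [rng.dirichlet(np.ones(50) * 0.1) for _ in range(100)]   # aligned-like

print(f"diffuse distributions: BF ~ {branching_factor(diffuse):.1f}")
print(f"peaked distributions:  BF ~ {branching_factor(peaked):.1f}")
```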

AI · Bullish · arXiv – CS AI · Mar 4 · 7/10 · 3

Self-Improving Loops for Visual Robotic Planning

Researchers developed SILVR, a self-improving system for visual robotic planning that uses video generative models to continuously enhance robot performance through self-collected data. The system demonstrates improved task performance across MetaWorld simulations and real robot manipulations without requiring human-provided rewards or expert demonstrations.

AI · Bearish · arXiv – CS AI · Mar 4 · 7/10 · 2

The Geometry of Learning Under AI Delegation

Researchers developed a mathematical model showing how AI delegation can create stable low-skill equilibria where humans become persistently reliant on AI systems. The study reveals that while AI assistance improves short-term performance, it can lead to long-term skill degradation through reduced practice and negative feedback loops.
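The feedback loop can be illustrated with a toy dynamical system: skill grows only through solo practice and erodes under delegation, while the probability of delegating rises with the AI's advantage over current skill. All constants below are hypothetical, chosen only to exhibit the low-skill attractor:

```python
# Skill dynamics under delegation: even high initial skill decays toward
# a low-skill equilibrium once delegating becomes the dominant choice.
import numpy as np

def simulate(s0, ai_level=0.9, learn=0.05, decay=0.03, steps=500):
    s = s0
    for _ in range(steps):
        p_delegate = 1 / (1 + np.exp(-8 * (ai_level - s)))  # logistic choice
        practice = 1 - p_delegate                           # only solo work trains
        s = np.clip(s + learn * practice * (1 - s)
                      - decay * p_delegate * s, 0.0, 1.0)
    return s

for s0 in (0.2, 0.5, 0.95):
    print(f"initial skill {s0:.2f} -> long-run skill {simulate(s0):.2f}")
```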

AI · Bearish · arXiv – CS AI · Mar 4 · 6/10 · 2

Scores Know Bob's Voice: Speaker Impersonation Attack

Researchers developed a new AI attack method that can fool speaker recognition systems with 10x fewer attempts than previous approaches. The technique uses feature-aligned inversion to optimize attacks in latent space, achieving up to 91.65% success rate with only 50 queries.

AI · Neutral · arXiv – CS AI · Mar 4 · 7/10 · 3

Forecasting as Rendering: A 2D Gaussian Splatting Framework for Time Series Forecasting

Researchers introduce TimeGS, a novel time series forecasting framework that reimagines prediction as 2D generative rendering using Gaussian splatting techniques. The approach addresses key limitations in existing methods by treating future sequences as continuous latent surfaces and enforcing temporal continuity across periodic boundaries.
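A one-dimensional analogue conveys the framing: render the series as a sum of Gaussian primitives laid over periodic time, fit their weights, and sample the resulting continuous surface past the training horizon. The actual TimeGS splats in 2-D over a time-value canvas; this is only the skeleton of "forecasting as rendering":

```python
# Forecast by rendering: fit Gaussian "splats" over periodic time, then
# sample the continuous surface at future timestamps.
import numpy as np

T, period = 200, 50
t = np.arange(T)
rng = np.random.default_rng(0)
y = np.sin(2 * np.pi * t / period) + 0.1 * rng.normal(size=T)

centers = np.linspace(0, period, 25, endpoint=False)   # splat centers
width = 2.0

def render_basis(times):
    phase = times[:, None] % period                    # periodic continuity
    d = np.abs(phase - centers)
    d = np.minimum(d, period - d)                      # circular distance
    return np.exp(-0.5 * (d / width) ** 2)

w, *_ = np.linalg.lstsq(render_basis(t), y, rcond=None)
forecast = render_basis(np.arange(T, T + period)) @ w  # render the future
print(forecast[:5])
```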
