11,680 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.
AIBearish · arXiv – CS AI · Mar 47/103
🧠Researchers introduced ZeroDayBench, a new benchmark testing LLM agents' ability to find and patch 22 critical vulnerabilities in open-source code. Testing on frontier models GPT-5.2, Claude Sonnet 4.5, and Grok 4.1 revealed that current LLMs cannot yet autonomously solve cybersecurity tasks, highlighting limitations in AI-powered code security.
AIBearish · arXiv – CS AI · Mar 47/102
🧠Researchers discovered a new stealth poisoning attack method targeting medical AI language models during fine-tuning that degrades performance on specific medical topics without detection. The attack injects poisoned rationales into training data, proving more effective than traditional backdoor attacks or catastrophic forgetting methods.
AIBullish · arXiv – CS AI · Mar 47/103
🧠Researchers developed Social-JEPA, showing that separate AI agents learning from different viewpoints of the same environment develop internal representations that are mathematically aligned through approximate linear isometry. This enables models trained on one agent to work on another without retraining, suggesting a path toward interoperable decentralized AI vision systems.
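The alignment claim above can be illustrated with a toy orthogonal Procrustes fit: if two agents' embeddings really are related by an approximate linear isometry, an SVD recovers the map. The dimensions, sample count, and noise level below are invented for illustration and are not the paper's method or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two agents embed the same 200 scenes from different
# viewpoints; agent B's embeddings are simulated as an unknown rotation of
# agent A's plus noise, mimicking an approximate linear isometry.
d = 16
emb_a = rng.normal(size=(200, d))
true_q, _ = np.linalg.qr(rng.normal(size=(d, d)))  # unknown orthogonal map
emb_b = emb_a @ true_q + 0.01 * rng.normal(size=(200, d))

# Orthogonal Procrustes: the best isometry aligning A to B is U @ Vt,
# where U, S, Vt is the SVD of A^T B.
u, _, vt = np.linalg.svd(emb_a.T @ emb_b)
q_hat = u @ vt

# A readout trained on agent A's features can now consume agent B's
# features through q_hat without retraining.
alignment_error = np.linalg.norm(emb_a @ q_hat - emb_b) / np.linalg.norm(emb_b)
```

If the isometry assumption held only approximately, `alignment_error` would measure exactly how much is lost by transferring without retraining.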
AIBullish · arXiv – CS AI · Mar 46/104
🧠Researchers analyzed Meta's NLLB-200 neural machine translation model across 135 languages, finding that it has implicitly learned universal conceptual structures and language genealogical relationships. The study reveals the model creates language-neutral conceptual representations similar to how multilingual brains organize information, with semantic relationships preserved across diverse languages.
AINeutral · arXiv – CS AI · Mar 47/104
🧠Researchers introduce GraphSSR, a new framework that improves zero-shot graph learning by combining Large Language Models with adaptive subgraph denoising. The system addresses structural noise issues in existing methods through a dynamic 'Sample-Select-Reason' pipeline and reinforcement learning training.
AIBullish · arXiv – CS AI · Mar 46/104
🧠A large-scale benchmarking study finds that powerful Multimodal Large Language Models (MLLMs) can extract information from business documents using image-only input, potentially eliminating the need for traditional OCR preprocessing. The research demonstrates that well-designed prompts and instructions can further enhance MLLM performance in document processing tasks.
AIBullish · arXiv – CS AI · Mar 46/102
🧠Researchers identified a critical problem in Large Audio-Language Models (LALMs) where audio perception deteriorates during extended reasoning processes. They developed the MPAR² framework using reinforcement learning, which improved perception performance from 31.74% to 63.51% and achieved 74.59% accuracy on the MMAU benchmark.
AINeutral · arXiv – CS AI · Mar 46/103
🧠Researchers released the ERI benchmark, a comprehensive dataset spanning 9 engineering fields and 55 subdomains to evaluate large language models' engineering capabilities. The benchmark tested 7 LLMs across 57,750 records, revealing a clear three-tier performance structure with frontier models like GPT-5 and Claude Sonnet 4 significantly outperforming mid-tier and smaller models.
AINeutral · arXiv – CS AI · Mar 47/105
🧠Researchers introduce Federated Inference (FI), a new collaborative paradigm where independently trained AI models can work together at inference time without sharing data or model parameters. The study identifies key requirements including privacy preservation and performance gains, while highlighting system-level challenges that differ from traditional federated learning approaches.
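One minimal way to realize inference-time collaboration without sharing data or weights is output-level fusion: each party exposes only its prediction for a given input. The sketch below averages class probabilities from stand-in models; it is an assumed baseline, not the paper's protocol.

```python
import numpy as np

def federated_predict(prob_fns, x):
    """Combine independently trained models at inference time.
    Each prob_fn maps an input to a class-probability vector; only these
    output distributions cross the boundary -- no data or weights are shared.
    """
    probs = np.mean([f(x) for f in prob_fns], axis=0)
    return int(np.argmax(probs)), probs

# Toy stand-ins for two independently trained binary classifiers (hypothetical).
model_a = lambda x: np.array([0.6, 0.4]) if x > 0 else np.array([0.2, 0.8])
model_b = lambda x: np.array([0.7, 0.3]) if x > 0 else np.array([0.4, 0.6])

label, probs = federated_predict([model_a, model_b], x=1.0)
```

The system-level challenges the study highlights (who hosts the fusion step, how stragglers are handled, how much each party's output leaks) live outside this simple averaging rule.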
AIBullish · arXiv – CS AI · Mar 46/103
🧠Researchers introduce CHaRS (Concept Heterogeneity-aware Representation Steering), a new method for controlling large language model behavior that uses optimal transport theory to create context-dependent steering rather than global directions. The approach models representations as Gaussian mixture models and derives input-dependent steering maps, showing improved behavioral control over existing methods.
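A stripped-down version of input-dependent steering with a Gaussian mixture might look like the following; the means, per-component offsets, and unit covariances are all hand-picked for illustration, and this is not CHaRS's optimal-transport derivation.

```python
import numpy as np

# Hypothetical 2-component Gaussian mixture over hidden states (d = 4),
# standing in for concept clusters in the representation space.
means = np.array([[ 2.0, 0.0, 0.0, 0.0],
                  [-2.0, 0.0, 0.0, 0.0]])
deltas = np.array([[0.0, 1.0, 0.0, 0.0],   # per-component steering offsets
                   [0.0, 0.0, 1.0, 0.0]])
weights = np.array([0.5, 0.5])

def gaussian_pdf(h, mu):
    # Unnormalized density with identity covariance; normalizers cancel
    # when responsibilities are renormalized below.
    return np.exp(-0.5 * np.sum((h - mu) ** 2))

def steer(h):
    """Input-dependent steering: blend per-component offsets by the
    responsibility each mixture component takes for h, instead of adding
    one global direction to every input."""
    resp = weights * np.array([gaussian_pdf(h, m) for m in means])
    resp /= resp.sum()
    return h + resp @ deltas

h_near_first = np.array([2.0, 0.0, 0.0, 0.0])
steered = steer(h_near_first)
```

An input near the first concept cluster receives almost purely that cluster's offset, which is the behavior a single global steering vector cannot express.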
AINeutral · arXiv – CS AI · Mar 46/104
🧠Researchers introduce CUDABench, a comprehensive benchmark for evaluating Large Language Models' ability to generate CUDA code from text descriptions. The benchmark reveals significant challenges: models achieve high compilation success rates but low functional correctness, lack domain-specific knowledge, and make poor use of GPU hardware.
AIBullish · arXiv – CS AI · Mar 46/103
🧠Researchers introduce T³, a new method to improve large language model (LLM) agents' reasoning abilities by tracking and correcting 'belief deviation' - when AI agents lose accurate understanding of problem states. The technique achieved up to 30-point performance gains and 34% token cost reduction across challenging tasks.
AINeutral · arXiv – CS AI · Mar 46/103
🧠Researchers have developed SEAL, a reference framework for measuring carbon emissions from Large Language Model inference at the prompt level. The framework addresses the growing sustainability concerns as LLM inference emissions are rapidly surpassing training emissions due to massive usage volumes.
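Prompt-level carbon accounting of the kind SEAL formalizes reduces, at its simplest, to tokens × energy-per-token × grid carbon intensity. The constants below are illustrative assumptions, not SEAL's measured values.

```python
# Minimal sketch of prompt-level carbon accounting.
# Both constants are invented placeholders, not measurements from the paper.

JOULES_PER_TOKEN = 0.4        # assumed GPU energy per processed token
GRID_G_CO2_PER_KWH = 400.0    # assumed grid carbon intensity

def prompt_emissions_g(prompt_tokens, output_tokens):
    """Grams of CO2 attributed to one inference call."""
    energy_j = JOULES_PER_TOKEN * (prompt_tokens + output_tokens)
    energy_kwh = energy_j / 3.6e6          # joules -> kilowatt-hours
    return energy_kwh * GRID_G_CO2_PER_KWH

grams = prompt_emissions_g(prompt_tokens=500, output_tokens=500)
```

Per-call figures like this look tiny, which is exactly why inference overtakes training only once usage volumes reach the billions of prompts the summary alludes to.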
AIBullish · arXiv – CS AI · Mar 46/103
🧠Researchers introduce PRISM, an EEG foundation model that demonstrates how diverse pretraining data leads to better clinical performance than narrow-source datasets. The study shows that geographically diverse EEG data outperforms larger but homogeneous datasets in medical diagnosis tasks, particularly achieving 12.3% better accuracy in distinguishing epilepsy from similar conditions.
AIBullish · arXiv – CS AI · Mar 46/103
🧠Researchers propose MEBM-Speech, a neural decoder that detects speech activity from brain signals using magnetoencephalography (MEG). The system achieved 89.3% F1 score on benchmark tests and could advance brain-computer interfaces for cognitive neuroscience and clinical applications.
AIBullish · arXiv – CS AI · Mar 47/104
🧠Researchers present a new mathematical framework for training AI reward models using Likert scale preferences instead of simple binary comparisons. The approach uses ordinal regression to better capture nuanced human feedback, outperforming existing methods across chat, reasoning, and safety benchmarks.
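The ordinal-regression idea can be sketched with a cumulative-link (ordered-logit) likelihood, where Likert levels are separated by learned thresholds on a scalar reward score. The thresholds and scores below are invented, and this is a generic ordinal loss rather than the paper's exact objective.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ordinal_nll(score, label, thresholds):
    """Negative log-likelihood of a Likert label under a cumulative-link
    model: P(y <= k) = sigmoid(theta_k - score).
    `thresholds` must be increasing; labels run 0..K with K = len(thresholds).
    """
    cum = np.concatenate(([0.0], sigmoid(thresholds - score), [1.0]))
    p = cum[label + 1] - cum[label]
    return -np.log(p)

thresholds = np.array([-1.0, 0.0, 1.0])   # 4 Likert levels (0..3), assumed
# A high reward score should make the top Likert label cheap and the
# bottom label expensive:
loss_top = ordinal_nll(score=3.0, label=3, thresholds=thresholds)
loss_bottom = ordinal_nll(score=3.0, label=0, thresholds=thresholds)
```

Unlike a binary Bradley-Terry comparison, this loss distinguishes "slightly better" from "much better", which is the nuance the summary credits for the benchmark gains.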
AIBullish · arXiv – CS AI · Mar 47/102
🧠Researchers developed Physics-Embedded PINNs (PE-PINN) that achieve 10x faster convergence than standard physics-informed neural networks and orders of magnitude memory reduction compared to traditional methods for large-scale wave field reconstruction. The breakthrough enables high-fidelity electromagnetic wave modeling for wireless communications, sensing, and room acoustics applications.
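The physics-informed part of any wave-equation PINN is a residual term the network is trained to drive to zero. The sketch below evaluates that residual with finite differences on an analytic travelling wave; the wave speed and test points are arbitrary, and this illustrates the generic PINN residual rather than PE-PINN's embedding.

```python
import numpy as np

c = 1.0   # wave speed (assumed)

def wave_residual(u, x, t, h=1e-3):
    """Physics residual u_tt - c^2 * u_xx that a physics-informed network
    minimizes, evaluated here with central finite differences."""
    u_tt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h**2
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
    return u_tt - c**2 * u_xx

# A travelling wave solves the 1D wave equation exactly, so its residual
# vanishes; a non-solution leaves a large residual the loss would penalize.
good = lambda x, t: np.sin(x - c * t)
bad = lambda x, t: np.sin(x) * t**2

r_good = wave_residual(good, x=0.3, t=0.2)
r_bad = wave_residual(bad, x=0.3, t=0.2)
```

In an actual PINN the derivatives come from automatic differentiation of the network, and the claimed speedup concerns how the physics is embedded, not the residual definition itself.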
AINeutral · arXiv – CS AI · Mar 47/102
🧠Research identifies a critical bottleneck in Vision-Language-Action (VLA) models for edge AI, where up to 75% of latency comes from memory-bound action generation phases. The study analyzes performance on Nvidia edge hardware and projects requirements for scaling to 100B parameter models in robotics applications.
AIBullish · arXiv – CS AI · Mar 46/104
🧠Researchers have developed a framework that allows neural network verification tools to accept natural language specifications instead of low-level technical constraints. The system automatically translates human-readable requirements into formal verification queries, significantly expanding the practical applicability of neural network verification across diverse domains.
AINeutral · arXiv – CS AI · Mar 47/103
🧠Research reveals an exponential gap between structured and unstructured neural network pruning methods. While unstructured weight pruning can approximate target functions with O(d log(1/ε)) neurons, structured neuron pruning requires Ω(d/ε) neurons, demonstrating fundamental limitations of structured approaches.
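Dropping constants, the two bounds can be compared numerically to see the gap widen as ε shrinks; the dimension and ε grid below are arbitrary choices for illustration.

```python
import math

d = 100                      # input dimension (illustrative)
ratios = []
for eps in (1e-1, 1e-2, 1e-3):
    unstructured = d * math.log(1 / eps)   # O(d log(1/eps)), constants dropped
    structured = d / eps                   # Omega(d / eps), constants dropped
    ratios.append(structured / unstructured)
```

The ratio 1/(ε log(1/ε)) blows up as ε → 0, which is the "exponential gap" in approximation cost between pruning individual weights and pruning whole neurons.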
AINeutral · arXiv – CS AI · Mar 47/102
🧠Researchers introduce the Branching Factor (BF) metric to measure how alignment tuning reduces output diversity in large language models by concentrating probability distributions. The study reveals that aligned models generate 2-5x less diverse outputs and become more predictable during generation, explaining why alignment reduces sensitivity to decoding strategies and enables more stable Chain-of-Thought reasoning.
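One natural way to operationalize a branching factor is the exponential of the next-token entropy, i.e. the effective number of continuations the model is choosing among; whether this matches the paper's exact definition is an assumption, and the distributions below are illustrative, not measured from any model.

```python
import numpy as np

def branching_factor(probs):
    """Effective number of continuations: exp of the Shannon entropy
    of a next-token distribution."""
    probs = np.asarray(probs, dtype=float)
    probs = probs[probs > 0]                  # ignore zero-mass tokens
    return float(np.exp(-np.sum(probs * np.log(probs))))

# A base model spreading mass over 8 tokens vs. an aligned model
# concentrating almost all mass on one (illustrative distributions).
base = np.full(8, 1 / 8)
aligned = np.array([0.93] + [0.01] * 7)

bf_base = branching_factor(base)       # uniform over 8 -> BF of 8
bf_aligned = branching_factor(aligned)
```

A BF collapsing from 8 toward 1 is the concentration effect the study links to reduced decoding-strategy sensitivity and more stable Chain-of-Thought traces.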
AIBullish · arXiv – CS AI · Mar 47/103
🧠Researchers developed SILVR, a self-improving system for visual robotic planning that uses video generative models to continuously enhance robot performance through self-collected data. The system demonstrates improved task performance across MetaWorld simulations and real robot manipulations without requiring human-provided rewards or expert demonstrations.
AIBearish · arXiv – CS AI · Mar 47/102
🧠Researchers developed a mathematical model showing how AI delegation can create stable low-skill equilibria where humans become persistently reliant on AI systems. The study reveals that while AI assistance improves short-term performance, it can lead to long-term skill degradation through reduced practice and negative feedback loops.
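The feedback loop can be caricatured as a one-variable dynamical system in which practice builds skill and disuse decays it; every constant below is invented, and this is a toy rendering of the mechanism rather than the paper's model.

```python
def skill_trajectory(delegation, steps=200, learn=0.05, decay=0.02, s0=0.5):
    """Toy skill dynamics: practice time (1 - delegation) builds skill
    toward mastery, while unused skill decays each step.
    All parameters are illustrative assumptions."""
    s = s0
    for _ in range(steps):
        s = s + learn * (1 - delegation) * (1 - s) - decay * s
    return s

low = skill_trajectory(delegation=0.1)    # mostly practicing
high = skill_trajectory(delegation=0.9)   # mostly delegating to AI
```

Even in this caricature, heavy delegation settles into a persistently lower skill equilibrium than light delegation, matching the stable low-skill equilibria the summary describes.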
AIBearish · arXiv – CS AI · Mar 46/102
🧠Researchers developed a new AI attack method that can fool speaker recognition systems with 10x fewer attempts than previous approaches. The technique uses feature-aligned inversion to optimize attacks in latent space, achieving up to 91.65% success rate with only 50 queries.
AINeutral · arXiv – CS AI · Mar 47/103
🧠Researchers introduce TimeGS, a novel time series forecasting framework that reimagines prediction as 2D generative rendering using Gaussian splatting techniques. The approach addresses key limitations in existing methods by treating future sequences as continuous latent surfaces and enforcing temporal continuity across periodic boundaries.