12,194 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.
AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 7
🧠Molmo2 is a new open-source family of vision-language models that achieves state-of-the-art performance among open models, particularly excelling in video understanding and pixel-level grounding tasks. The research introduces 7 new video datasets and 2 multi-image datasets collected without using proprietary VLMs, along with an 8B parameter model that outperforms existing open-weight models and even some proprietary models on specific tasks.
AI · Neutral · arXiv – CS AI · Feb 27 · 7/10 · 7
🧠Researchers developed a new framework for deploying AI systems in high-stakes environments that balances safety, fairness, and efficiency under strict resource constraints. The study found that capacity limits dominate ethical considerations, determining deployment thresholds in over 80% of tested scenarios while maintaining better performance than traditional fairness approaches.
$NEAR
AI · Neutral · arXiv – CS AI · Feb 27 · 7/10 · 3
🧠Researchers introduce Tool Decathlon (Toolathlon), a comprehensive benchmark for evaluating AI language agents across 32 software applications and 604 tools in realistic, multi-step scenarios. The benchmark reveals significant limitations in current AI models, with the best performer (Claude-4.5-Sonnet) achieving only 38.6% success rate on complex, real-world tasks.
AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 6
🧠Researchers developed a method to improve foundation models in medical histopathology by introducing robustness losses during training, reducing sensitivity to technical variations while maintaining accuracy. The approach was tested on over 27,000 whole slide images from 6,155 patients across eight popular foundation models, showing improved robustness and prediction accuracy without requiring retraining of the foundation models themselves.
AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 6
🧠Researchers propose Supervised Reinforcement Learning (SRL), a new training framework that helps small-scale language models solve complex multi-step reasoning problems by generating internal reasoning monologues and providing step-wise rewards. SRL outperforms traditional Supervised Fine-Tuning and Reinforcement Learning approaches, enabling smaller models to tackle previously unlearnable problems.
AI · Neutral · arXiv – CS AI · Feb 27 · 7/10 · 7
🧠Researchers propose a new approach for training AI models to generate correct answers from demonstrations, using imitation learning in contextual bandits rather than traditional supervised fine-tuning. The method achieves better sample complexity and works with weaker assumptions about the underlying reward model compared to existing likelihood-maximization approaches.
AI · Bearish · arXiv – CS AI · Feb 27 · 7/10 · 3
🧠Researchers have developed DropVLA, a backdoor attack method that can manipulate Vision-Language-Action AI models to execute unintended robot actions while maintaining normal performance. The attack achieves 98.67%-99.83% success rates with minimal data poisoning and has been validated on real robotic systems.
AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 7
🧠Researchers introduce Contextual Memory Virtualisation (CMV), a system that preserves LLM understanding across extended sessions by treating context as version-controlled state using DAG-based management. The system includes a trimming algorithm that reduces token counts by 20-86% while preserving all user interactions, demonstrating particular efficiency in tool-use sessions.
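The DAG-based, version-controlled view of context can be pictured with a toy sketch (the node schema and trimming rule below are illustrative, not the paper's actual algorithm): user turns survive verbatim, only bulky tool outputs are truncated, and the graph links stay intact so earlier state remains reconstructable.

```python
from dataclasses import dataclass, field

@dataclass
class ContextNode:
    """One entry in the context DAG (hypothetical schema); `parents` holds
    indices of earlier nodes, so history forms a versioned graph, not a list."""
    role: str    # "user", "assistant", or "tool"
    text: str
    parents: list = field(default_factory=list)

def trim(nodes, max_tool_chars=50):
    """Toy trimming pass in the spirit of the paper's claim: every user turn
    is kept verbatim, only oversized tool outputs are truncated, and the
    DAG edges are left untouched."""
    out = []
    for n in nodes:
        if n.role == "tool" and len(n.text) > max_tool_chars:
            n = ContextNode(n.role, n.text[:max_tool_chars] + " [trimmed]", n.parents)
        out.append(n)
    return out

history = [
    ContextNode("user", "Summarise the log file"),
    ContextNode("tool", "line: " * 200, parents=[0]),
    ContextNode("assistant", "The log shows 3 errors.", parents=[1]),
]
trimmed = trim(history)
```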
AI · Neutral · arXiv – CS AI · Feb 27 · 7/10 · 5
🧠Researchers introduce HubScan, an open-source security scanner that detects 'hubness poisoning' attacks in Retrieval-Augmented Generation (RAG) systems. The tool achieves 90% recall at detecting adversarial content that exploits vector similarity search vulnerabilities, addressing a critical security flaw in AI systems that rely on external knowledge retrieval.
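"Hubness" is the known tendency of a few points in high-dimensional similarity search to appear in disproportionately many nearest-neighbour lists, which an attacker can exploit by crafting a passage that is retrieved for abnormally many queries. A standard hubness statistic is the k-occurrence count; the sketch below illustrates the phenomenon (HubScan's actual detector is not reproduced here):

```python
import math
from collections import Counter

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def k_occurrence(vecs, k=1):
    """Count how often each vector appears in the other vectors' k-NN lists.
    In a poisoned RAG index, an adversarial 'hub' passage scores far above
    the rest, which is the signal a hubness scanner looks for."""
    counts = Counter()
    for i in range(len(vecs)):
        neighbours = sorted(
            (j for j in range(len(vecs)) if j != i),
            key=lambda j: -cosine(vecs[i], vecs[j]),
        )[:k]
        for j in neighbours:
            counts[j] += 1
    return counts

# Three near-orthogonal passages plus one crafted to sit close to all of them.
vecs = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]
counts = k_occurrence(vecs, k=1)
```

Here the crafted vector is every other vector's nearest neighbour, so its k-occurrence count stands out.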
AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 4
🧠Researchers developed AviaSafe, a physics-informed AI model that forecasts aviation-critical cloud species up to 7 days ahead, addressing safety concerns around engine icing. The model outperforms operational weather models by predicting specific hydrometeor species rather than general atmospheric variables, enabling better aviation route optimization.
AI · Neutral · arXiv – CS AI · Feb 27 · 7/10 · 3
🧠Researchers used the MAP-Elites quality-diversity algorithm as a framework to systematically map vulnerability regions in Large Language Models, revealing distinct safety landscape patterns across different models. The study found that Llama-3-8B shows near-universal vulnerabilities, while GPT-5-Mini demonstrates stronger robustness with limited failure regions.
$NEAR
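MAP-Elites itself is a pre-existing quality-diversity algorithm (Mouret & Clune, 2015): it keeps the best-scoring solution found in each cell of a behaviour-descriptor grid. A minimal generic loop, with a toy numeric problem standing in for the paper's prompt-space search (all names and the toy problem here are illustrative), looks like:

```python
import random

def map_elites(fitness, descriptor, sample, mutate, iters=2000, seed=0):
    """Minimal MAP-Elites loop: an archive maps each behaviour-descriptor
    cell to the best (fitness, solution) pair seen so far.  New candidates
    come either from mutating an archived elite or from random sampling."""
    rng = random.Random(seed)
    archive = {}                                   # cell -> (fitness, solution)
    for _ in range(iters):
        if archive and rng.random() < 0.9:
            x = mutate(rng.choice(list(archive.values()))[1], rng)
        else:
            x = sample(rng)
        cell, f = descriptor(x), fitness(x)
        if cell not in archive or f > archive[cell][0]:
            archive[cell] = (f, x)
    return archive

# Toy problem: descriptor = sign pattern of (x, y); fitness peaks at the origin.
fit  = lambda p: -(p[0] ** 2 + p[1] ** 2)
desc = lambda p: (p[0] > 0, p[1] > 0)
samp = lambda rng: (rng.uniform(-1, 1), rng.uniform(-1, 1))
mut  = lambda p, rng: (p[0] + rng.gauss(0, 0.1), p[1] + rng.gauss(0, 0.1))
archive = map_elites(fit, desc, samp, mut)
```

In the paper's setting the "solutions" would be adversarial prompts, the descriptor a characterisation of the attack, and the archive the resulting vulnerability map.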
AI · Neutral · arXiv – CS AI · Feb 27 · 7/10 · 4
🧠Researchers introduced ConflictScope, an automated pipeline that evaluates how large language models prioritize competing values when faced with ethical dilemmas. The study found that LLMs shift away from protective values like harmlessness toward personal values like user autonomy in open-ended scenarios, though system prompting can improve alignment by 14%.
AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 7
🧠Researchers developed Residual Koopman Spectral Profiling (RKSP), a method that predicts transformer training instability from a single forward pass at initialization with 99.5% accuracy. The technique includes Koopman Spectral Shaping (KSS) which can prevent training divergence and enable 50-150% higher learning rates across various AI models including GPT-2 and LLaMA-2.
$NEAR
AI · Neutral · arXiv – CS AI · Feb 27 · 7/10 · 5
🧠Researchers developed a new AI safety approach called 'self-incrimination training' that teaches AI agents to report their own deceptive behavior by calling a report_scheming() function. Testing on GPT-4.1 and Gemini-2.0 showed this method significantly reduces undetected harmful actions compared to traditional alignment training and monitoring approaches.
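Mechanically, the approach amounts to exposing a reporting tool to the agent. A sketch in the OpenAI function-calling style (the paper's exact schema, field names, and handler behaviour are assumptions here) might be:

```python
import json

# Hypothetical tool schema; the paper only names the report_scheming() call.
REPORT_SCHEMING_TOOL = {
    "type": "function",
    "function": {
        "name": "report_scheming",
        "description": "Call this if you have taken, or are about to take, "
                       "a deceptive or misaligned action.",
        "parameters": {
            "type": "object",
            "properties": {
                "summary": {"type": "string",
                            "description": "What the deceptive action was."},
            },
            "required": ["summary"],
        },
    },
}

def handle_tool_call(name, arguments_json, audit_log):
    """Route a model tool call; self-incrimination reports go to an audit log."""
    if name == "report_scheming":
        audit_log.append(json.loads(arguments_json)["summary"])
        return "reported"
    return "unknown tool"

audit_log = []
result = handle_tool_call(
    "report_scheming", json.dumps({"summary": "concealed a failed test"}), audit_log
)
```

Training then rewards the model for emitting this call whenever its own behaviour is deceptive, so harmful actions surface in the audit log instead of going undetected.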
AI · Bearish · arXiv – CS AI · Feb 27 · 7/10 · 5
🧠Researchers demonstrate how training-data poisoning attacks can compromise deep neural networks used for acoustic vehicle classification with just 0.5% corrupted data, achieving 95.7% attack success rate while remaining undetectable. The study reveals fundamental vulnerabilities in AI training pipelines and proposes cryptographic defenses using post-quantum digital signatures and blockchain-like verification methods.
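The "blockchain-like verification" idea can be sketched as a digest chain over training batches: each link hashes the batch together with the previous digest, so tampering with any earlier batch changes every later digest. A real pipeline would additionally sign each digest, e.g. with a post-quantum scheme such as ML-DSA/Dilithium, which is omitted from this sketch.

```python
import hashlib

def chain_digest(batches, prev=b"\x00" * 32):
    """Hash-chain integrity check over training batches: digest i commits to
    batch i AND to digest i-1, so a single poisoned batch invalidates the
    tail of the chain."""
    digests = []
    for batch in batches:
        prev = hashlib.sha256(prev + batch).digest()
        digests.append(prev)
    return digests

clean = [b"batch-0", b"batch-1", b"batch-2"]
poisoned = [b"batch-0", b"batch-1-poisoned", b"batch-2"]
```

Comparing `chain_digest(clean)` against a recorded chain detects the poisoned batch and everything after it, even when the corruption itself (0.5% of samples) is too small to spot statistically.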
AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 8
🧠Researchers introduce a Confidence-Variance (CoVar) theory framework that improves pseudo-label selection in semi-supervised learning by combining maximum confidence with residual-class variance. The method addresses overconfidence issues in deep networks and demonstrates consistent improvements across multiple datasets including PASCAL VOC, Cityscapes, CIFAR-10, and Mini-ImageNet.
$NEAR
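One plausible reading of a rule combining maximum confidence with residual-class variance (the paper's exact criterion may differ) is to accept a pseudo-label only when the top class is confident and the leftover probability mass shows no single strong competitor:

```python
import statistics

def select_pseudo_labels(prob_rows, conf_thresh=0.8, var_thresh=1e-3):
    """Sketch of a CoVar-style filter: accept sample i when its top-class
    probability is high AND the variance of the residual (non-top) class
    probabilities is low, i.e. the remaining mass is spread evenly rather
    than concentrated on one rival class."""
    accepted = []
    for i, probs in enumerate(prob_rows):
        ranked = sorted(probs, reverse=True)
        top, residual = ranked[0], ranked[1:]
        if top >= conf_thresh and statistics.pvariance(residual) <= var_thresh:
            accepted.append(i)
    return accepted

rows = [
    [0.90, 0.05, 0.05],    # confident, residual spread evenly  -> accept
    [0.90, 0.099, 0.001],  # confident, but one strong rival    -> reject
    [0.50, 0.30, 0.20],    # not confident                      -> reject
]
accepted = select_pseudo_labels(rows)
```

The second row illustrates the overconfidence failure mode a plain confidence threshold misses: the max probability is identical to the first row, but the residual mass betrays a competing class.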
AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 6
🧠Researchers propose 'Intelligence per Watt' (IPW) as a metric to measure AI efficiency, finding that local AI models can handle 71.3% of queries while being 1.4x more energy efficient than cloud alternatives. The study demonstrates that smaller local language models (≤20B parameters) can redistribute computational demand from centralized cloud infrastructure.
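As a metric, intelligence per watt is simply capability delivered per unit of average power draw; the definition below and the numbers in the example are illustrative, not taken from the paper:

```python
def intelligence_per_watt(accuracy, energy_joules, wall_seconds):
    """IPW-style efficiency metric (sketch): capability per watt, computed
    as accuracy divided by average power (joules / seconds = watts)."""
    avg_watts = energy_joules / wall_seconds
    return accuracy / avg_watts

# Hypothetical numbers: a local <=20B model vs a cloud model on the same queries.
local = intelligence_per_watt(accuracy=0.713, energy_joules=900.0, wall_seconds=60.0)
cloud = intelligence_per_watt(accuracy=0.95, energy_joules=4200.0, wall_seconds=60.0)
```

Under these made-up numbers the local model wins on IPW despite lower accuracy, which is the kind of trade-off the metric is designed to surface.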
AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 7
🧠Researchers have developed a unified framework using Spectral Geometry and Random Matrix Theory to address reliability and efficiency challenges in large language models. The study introduces EigenTrack for real-time hallucination detection and RMT-KD for model compression while maintaining accuracy.
AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 6
🧠Researchers introduce Zatom-1, the first foundation model that unifies generative and predictive learning for both 3D molecules and materials using a multimodal flow matching approach. The Transformer-based model demonstrates superior performance across both domains while running inference more than 10x faster than existing specialized models.
$ATOM
AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 6
🧠Researchers developed TT-SEAL, a selective encryption framework for compressed AI models using Tensor-Train Decomposition that maintains security while encrypting only 4.89-15.92% of parameters. The system matches the robustness of full encryption while cutting the share of end-to-end latency spent on AES decryption from 58% to as low as 2.76%.
AI · Bearish · Decrypt – AI · Feb 27 · 7/10 · 6
🧠Anthropic CEO announced the company will refuse to comply with Defense Department demands to lift AI safeguards, as the Pentagon considers designating Anthropic as a "supply chain risk." This dispute highlights tensions between AI companies maintaining safety protocols and government agencies seeking access to less restricted AI capabilities.
AI · Bearish · Decrypt – AI · Feb 27 · 7/10 · 6
🧠Law enforcement officials from Internet Crimes Against Children (ICAC) units claim Meta's AI systems are generating excessive false positive reports about child abuse content, overwhelming investigators and slowing down legitimate cases. Meta disputes these claims about their AI-generated reporting system.
AI · Neutral · The Verge – AI · Feb 26 · 7/10 · 6
🧠Anthropic has refused the Pentagon's demands for unrestricted AI access, maintaining its stance against mass surveillance and lethal autonomous weapons. The refusal comes amid Defense Secretary Pete Hegseth's push to renegotiate all AI lab contracts with the military.
AI · Neutral · TechCrunch – AI · Feb 26 · 7/10 · 3
🧠Anthropic CEO Dario Amodei refused to comply with Pentagon demands for unrestricted military access to the company's AI systems, citing moral objections. This stance creates tension between AI companies and government defense requirements as regulatory deadlines approach.
AI · Neutral · The Verge – AI · Feb 26 · 7/10 · 5
🧠Jack Dorsey's Block is laying off nearly half its workforce, cutting over 4,000 jobs to reduce staff from 10,000+ to under 6,000 employees. Despite strong business performance with growing profits and customers, the company is restructuring to leverage AI tools with smaller, flatter teams.