y0news
🧠 AI

12,194 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.
AI · Bullish · arXiv – CS AI · Feb 27 · Importance 7/10

Molmo2: Open Weights and Data for Vision-Language Models with Video Understanding and Grounding

Molmo2 is a new open-weight family of vision-language models that achieves state-of-the-art performance among open models, particularly excelling in video understanding and pixel-level grounding tasks. The research introduces 7 new video datasets and 2 multi-image datasets collected without using proprietary VLMs, along with an 8B-parameter model that outperforms existing open-weight models, and even some proprietary models, on specific tasks.

AI · Neutral · arXiv – CS AI · Feb 27 · Importance 7/10

Operationalizing Fairness: Post-Hoc Threshold Optimization Under Hard Resource Limits

Researchers developed a new framework for deploying AI systems in high-stakes environments that balances safety, fairness, and efficiency under strict resource constraints. The study found that capacity limits dominate ethical considerations, determining deployment thresholds in over 80% of tested scenarios while maintaining better performance than traditional fairness approaches.
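
The headline finding, that capacity caps dominate, is easy to illustrate. Below is a minimal sketch of capacity-capped threshold selection only; the paper's full fairness optimization is not reproduced, and the function name is hypothetical:

```python
import numpy as np

def capacity_capped_threshold(scores, capacity):
    """Pick the lowest score threshold that admits at most `capacity` items.

    A minimal sketch: sort scores descending and cut at the capacity limit,
    which is how a hard resource cap can end up dominating any
    fairness-motivated threshold choice.
    """
    order = np.sort(scores)[::-1]
    if capacity >= len(scores):
        return -np.inf          # everyone fits; the cap is not binding
    return order[capacity - 1]  # admit the top-`capacity` scores (ties permitting)

# Toy example: 1,000 risk scores but only 100 treatment slots.
rng = np.random.default_rng(0)
scores = rng.random(1000)
tau = capacity_capped_threshold(scores, capacity=100)
print(f"threshold={tau:.3f}, selected={np.sum(scores >= tau)}")
```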

AI · Neutral · arXiv – CS AI · Feb 27 · Importance 7/10

The Tool Decathlon: Benchmarking Language Agents for Diverse, Realistic, and Long-Horizon Task Execution

Researchers introduce Tool Decathlon (Toolathlon), a comprehensive benchmark for evaluating AI language agents across 32 software applications and 604 tools in realistic, multi-step scenarios. The benchmark reveals significant limitations in current AI models, with the best performer (Claude-4.5-Sonnet) achieving only a 38.6% success rate on complex, real-world tasks.

AI · Bullish · arXiv – CS AI · Feb 27 · Importance 7/10

Enabling clinical use of foundation models in histopathology

Researchers developed a method to improve foundation models in medical histopathology by introducing robustness losses during training, reducing sensitivity to technical variations while maintaining accuracy. The approach was tested on over 27,000 whole slide images from 6,155 patients across eight popular foundation models, showing improved robustness and prediction accuracy without requiring retraining of the foundation models themselves.
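
A minimal sketch of one plausible robustness loss of this kind, penalizing embedding drift under simulated technical variation; `embed` and `perturb` are hypothetical callables, not the paper's actual losses:

```python
import numpy as np

def consistency_loss(embed, patch, perturb):
    """Penalize embedding drift between a patch and a technically perturbed
    copy (e.g. simulated stain or scanner variation).

    A sketch only: the paper's actual robustness losses and training setup
    are not reproduced here.
    """
    z_a, z_b = embed(patch), embed(perturb(patch))
    return float(np.mean((z_a - z_b) ** 2))  # small loss = stable embedding
```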

AI · Bullish · arXiv – CS AI · Feb 27 · Importance 7/10

Supervised Reinforcement Learning: From Expert Trajectories to Step-wise Reasoning

Researchers propose Supervised Reinforcement Learning (SRL), a new training framework that helps small-scale language models solve complex multi-step reasoning problems by generating internal reasoning monologues and providing step-wise rewards. SRL outperforms traditional Supervised Fine-Tuning and Reinforcement Learning approaches, enabling smaller models to tackle previously unlearnable problems.
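
A minimal sketch of step-wise reward assignment against an expert trajectory, using plain string similarity as a stand-in for whatever scoring SRL actually uses:

```python
import difflib

def stepwise_rewards(model_steps, expert_steps):
    """Score each generated step against the aligned expert step.

    A sketch of step-wise supervision only: the real SRL recipe (internal
    reasoning monologues, its exact similarity metric, and the RL update)
    is not reproduced here.
    """
    rewards = []
    for gen, ref in zip(model_steps, expert_steps):
        sim = difflib.SequenceMatcher(None, gen, ref).ratio()  # 0..1 overlap
        rewards.append(sim)
    return rewards

print(stepwise_rewards(["x = 3", "y = x + 1"], ["x = 3", "y = x + 2"]))
```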

AI · Neutral · arXiv – CS AI · Feb 27 · Importance 7/10

Learning to Answer from Correct Demonstrations

Researchers propose a new approach for training AI models to generate correct answers from demonstrations, using imitation learning in contextual bandits rather than traditional supervised fine-tuning. The method achieves better sample complexity than existing likelihood-maximization approaches and works under weaker assumptions about the underlying reward model.

AI · Bearish · arXiv – CS AI · Feb 27 · Importance 7/10

DropVLA: An Action-Level Backdoor Attack on Vision-Language-Action Models

Researchers have developed DropVLA, a backdoor attack method that can manipulate Vision-Language-Action AI models to execute unintended robot actions while maintaining normal performance. The attack achieves 98.67%-99.83% success rates with minimal data poisoning and has been validated on real robotic systems.

AI · Bullish · arXiv – CS AI · Feb 27 · Importance 7/10

Contextual Memory Virtualisation: DAG-Based State Management and Structurally Lossless Trimming for LLM Agents

Researchers introduce Contextual Memory Virtualisation (CMV), a system that preserves LLM understanding across extended sessions by treating context as version-controlled state using DAG-based management. The system includes a trimming algorithm that reduces token counts by 20-86% while preserving all user interactions, demonstrating particular efficiency in tool-use sessions.
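
A minimal sketch of the data model such a system implies, with DAG-linked context nodes and a trim pass that stubs out bulky tool output while keeping every user turn; CMV's real algorithm is more involved:

```python
from dataclasses import dataclass, field

@dataclass
class ContextNode:
    """One versioned context state; parent links form the DAG."""
    role: str                  # "user", "assistant", or "tool"
    text: str
    parents: list = field(default_factory=list)

def trim(nodes):
    """Sketch of a structurally lossless-style trim: keep every user turn,
    replace bulky tool outputs with stubs, and keep parent links intact."""
    out = []
    for n in nodes:
        if n.role == "tool" and len(n.text) > 200:
            n = ContextNode(n.role, f"[trimmed {len(n.text)} chars]", n.parents)
        out.append(n)
    return out
```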

AI · Neutral · arXiv – CS AI · Feb 27 · Importance 7/10

HubScan: Detecting Hubness Poisoning in Retrieval-Augmented Generation Systems

Researchers introduce HubScan, an open-source security scanner that detects 'hubness poisoning' attacks in Retrieval-Augmented Generation (RAG) systems. The tool achieves 90% recall in detecting adversarial content that exploits vector similarity search vulnerabilities, addressing a critical security flaw in AI systems that rely on external knowledge retrieval.
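
Hubness is measurable with a few lines of linear algebra. A generic sketch of k-occurrence counting (not HubScan's actual detector), where vectors that appear in an abnormal share of top-k lists are the suspects:

```python
import numpy as np

def hub_scores(db, queries, k=10):
    """Count how often each DB vector lands in a query's top-k (k-occurrence).

    Hubness poisoning plants vectors that become near-universal neighbors;
    abnormally high counts are the kind of signal a scanner looks for.
    """
    db_n = db / np.linalg.norm(db, axis=1, keepdims=True)
    q_n = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    sims = q_n @ db_n.T                         # cosine similarity, queries x db
    topk = np.argsort(-sims, axis=1)[:, :k]     # top-k DB indices per query
    return np.bincount(topk.ravel(), minlength=len(db))

# Vectors whose count sits many standard deviations above the mean are flagged.
```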

AI · Bullish · arXiv – CS AI · Feb 27 · Importance 7/10

AviaSafe: A Physics-Informed Data-Driven Model for Aviation Safety-Critical Cloud Forecasts

Researchers developed AviaSafe, a physics-informed AI model that forecasts aviation-critical cloud species up to 7 days ahead, addressing safety concerns around engine icing. The model outperforms operational weather models by predicting specific hydrometeor species rather than general atmospheric variables, enabling better aviation route optimization.

AI · Neutral · arXiv – CS AI · Feb 27 · Importance 7/10

Manifold of Failure: Behavioral Attraction Basins in Language Models

Researchers used the MAP-Elites quality-diversity algorithm to systematically map vulnerability regions in large language models, revealing distinct safety-landscape patterns across different models. The study found that Llama-3-8B shows near-universal vulnerabilities, while GPT-5-Mini demonstrates stronger robustness with limited failure regions.
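
MAP-Elites itself is a pre-existing quality-diversity algorithm (Mouret & Clune, 2015). A minimal sketch of how such a loop maps a failure landscape, with `mutate`, `behavior`, and `fitness` as hypothetical callables (e.g. fitness = how strongly a prompt elicits unsafe output):

```python
import random

def map_elites(init_prompts, mutate, behavior, fitness, iters=1000):
    """Generic MAP-Elites loop, sketched for prompt search: keep the
    best-scoring individual ("elite") per behavior-descriptor cell."""
    archive = {}                              # behavior cell -> (score, prompt)
    pool = list(init_prompts)
    for _ in range(iters):
        parent = random.choice(pool)
        child = mutate(parent)
        cell, score = behavior(child), fitness(child)
        if cell not in archive or score > archive[cell][0]:
            archive[cell] = (score, child)    # new elite for this cell
            pool.append(child)
    return archive                            # the mapped failure landscape
```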

AI · Neutral · arXiv – CS AI · Feb 27 · Importance 7/10

Generative Value Conflicts Reveal LLM Priorities

Researchers introduced ConflictScope, an automated pipeline that evaluates how large language models prioritize competing values when faced with ethical dilemmas. The study found that LLMs shift away from protective values like harmlessness toward personal values like user autonomy in open-ended scenarios, though system prompting can improve alignment by 14%.

AI · Bullish · arXiv – CS AI · Feb 27 · Importance 7/10

Residual Koopman Spectral Profiling for Predicting and Preventing Transformer Training Instability

Researchers developed Residual Koopman Spectral Profiling (RKSP), a method that predicts transformer training instability from a single forward pass at initialization with 99.5% accuracy. The technique includes Koopman Spectral Shaping (KSS) which can prevent training divergence and enable 50-150% higher learning rates across various AI models including GPT-2 and LLaMA-2.
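
The summary doesn't spell out the profiling procedure, so here is a minimal DMD-style sketch of the Koopman-flavored idea it builds on, assuming paired residual-stream states `X` and `Y`; RKSP's actual profiling and the KSS intervention are not reproduced:

```python
import numpy as np

def spectral_radius_of_residual_map(X, Y):
    """Fit a linear operator A with Y ≈ A X (least squares, DMD-style) on
    paired residual-stream states and return its spectral radius.
    Eigenvalues with |λ| > 1 indicate a map that amplifies perturbations.
    """
    A = Y @ np.linalg.pinv(X)                # least-squares linear fit
    return np.max(np.abs(np.linalg.eigvals(A)))

rng = np.random.default_rng(0)
X = rng.standard_normal((16, 256))           # states at layer l (dim x samples)
Y = 0.9 * X + 0.05 * rng.standard_normal((16, 256))   # states at layer l+1
print(spectral_radius_of_residual_map(X, Y))          # < 1 here: contractive
```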

AI · Neutral · arXiv – CS AI · Feb 27 · Importance 7/10

Training Agents to Self-Report Misbehavior

Researchers developed a new AI safety approach called 'self-incrimination training' that teaches AI agents to report their own deceptive behavior by calling a report_scheming() function. Testing on GPT-4.1 and Gemini-2.0 showed this method significantly reduces undetected harmful actions compared to traditional alignment training and monitoring approaches.
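
The report_scheming() name comes from the summary above; below is a minimal sketch of how such a self-report hook could be exposed as a tool, using a generic JSON tool schema rather than the authors' actual training setup:

```python
# Generic tool definition; only the function name is taken from the paper.
REPORT_TOOL = {
    "name": "report_scheming",
    "description": (
        "Call this if any part of your own plan or output is deceptive, "
        "withholds relevant information, or pursues a hidden goal."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "explanation": {
                "type": "string",
                "description": "What the misbehavior was and where it occurred.",
            }
        },
        "required": ["explanation"],
    },
}

def handle_tool_call(name, args, log):
    """Route a self-incrimination call to a monitoring log (sketch)."""
    if name == "report_scheming":
        log.append(args["explanation"])   # surface it to human overseers
        return "Report recorded."
```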

AI · Bearish · arXiv – CS AI · Feb 27 · Importance 7/10

Poisoned Acoustics

Researchers demonstrate how training-data poisoning attacks can compromise deep neural networks used for acoustic vehicle classification with just 0.5% corrupted data, achieving a 95.7% attack success rate while remaining undetectable. The study reveals fundamental vulnerabilities in AI training pipelines and proposes cryptographic defenses using post-quantum digital signatures and blockchain-like verification methods.
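
A minimal sketch of the 'blockchain-like' verification idea, chaining SHA-256 digests over training samples so later tampering is detectable; the paper's post-quantum signatures would sign these digests and are not shown:

```python
import hashlib

def chain_digests(samples):
    """Build a hash chain over training samples: each digest commits to the
    sample and to the previous digest, so any tampering or insertion breaks
    every downstream digest."""
    digests, prev = [], b""
    for s in samples:
        h = hashlib.sha256(prev + s).digest()
        digests.append(h)
        prev = h
    return digests

def verify(samples, digests):
    """Recompute the chain and compare before training on the data."""
    return chain_digests(samples) == digests

d = chain_digests([b"clip_000.wav", b"clip_001.wav"])
print(verify([b"clip_000.wav", b"clip_001.wav"], d))   # True until tampered
```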

AI · Bullish · arXiv – CS AI · Feb 27 · Importance 7/10

A Confidence-Variance Theory for Pseudo-Label Selection in Semi-Supervised Learning

Researchers introduce a Confidence-Variance (CoVar) theory framework that improves pseudo-label selection in semi-supervised learning by combining maximum confidence with residual-class variance. The method addresses overconfidence issues in deep networks and demonstrates consistent improvements across multiple datasets including PASCAL VOC, Cityscapes, CIFAR-10, and Mini-ImageNet.
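
How the two signals combine is not specified in the summary; here is a minimal sketch assuming a conjunctive rule, high top-1 confidence plus low variance among the residual (non-top) classes:

```python
import numpy as np

def covar_select(probs, conf_thresh=0.95, var_thresh=1e-3):
    """Select pseudo-labels by max confidence AND residual-class variance.

    `probs` is an (N, C) softmax output. The exact CoVar combination rule is
    assumed here: accept only samples that are confidently top-1 and have no
    strong runner-up among the remaining classes.
    """
    top1 = probs.max(axis=1)
    residual = np.sort(probs, axis=1)[:, :-1]    # drop the top class
    res_var = residual.var(axis=1)
    keep = (top1 >= conf_thresh) & (res_var <= var_thresh)
    return keep, probs.argmax(axis=1)
```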

AI · Bullish · arXiv – CS AI · Feb 27 · Importance 7/10

Intelligence per Watt: Measuring Intelligence Efficiency of Local AI

Researchers propose 'Intelligence per Watt' (IPW) as a metric to measure AI efficiency, finding that local AI models can handle 71.3% of queries while being 1.4x more energy efficient than cloud alternatives. The study demonstrates that smaller local language models (≤20B parameters) can redistribute computational demand from centralized cloud infrastructure.
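
The metric itself is simple arithmetic. A toy sketch, assuming IPW is defined as accuracy divided by average power draw (the paper's exact numerator and denominator may differ), with made-up numbers:

```python
def intelligence_per_watt(accuracy, energy_joules, seconds):
    """IPW sketch: capability per unit of average power (watts = J/s)."""
    watts = energy_joules / seconds
    return accuracy / watts

# Toy numbers only: a local ~20B model vs a cloud call on the same query set.
local = intelligence_per_watt(accuracy=0.713, energy_joules=900, seconds=60)
cloud = intelligence_per_watt(accuracy=0.760, energy_joules=1350, seconds=60)
print(f"local={local:.4f}, cloud={cloud:.4f}, ratio={local / cloud:.2f}x")
```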

AI · Bullish · arXiv – CS AI · Feb 27 · Importance 7/10

Zatom-1: A Multimodal Flow Foundation Model for 3D Molecules and Materials

Researchers introduce Zatom-1, the first foundation model that unifies generative and predictive learning for both 3D molecules and materials using a multimodal flow matching approach. The Transformer-based model demonstrates superior performance across both domains while significantly reducing inference time by over 10x compared to existing specialized models.

AI · Bullish · arXiv – CS AI · Feb 27 · Importance 7/10

TT-SEAL: TTD-Aware Selective Encryption for Adversarially-Robust and Low-Latency Edge AI

Researchers developed TT-SEAL, a selective encryption framework for compressed AI models using Tensor-Train Decomposition that maintains security while encrypting only 4.89-15.92% of parameters. The system achieves the same robustness as full encryption while reducing AES decryption overhead in end-to-end latency from 58% to as low as 2.76%.
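
A minimal sketch of selective encryption over Tensor-Train cores with AES-GCM (via the `cryptography` package); which cores to select, the part TT-SEAL actually optimizes for robustness per encrypted parameter, is assumed as given here:

```python
import os
import numpy as np
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_selected_cores(tt_cores, indices, key):
    """Encrypt only the chosen Tensor-Train cores; leave the rest in plaintext,
    so decryption overhead scales with the encrypted fraction."""
    aes = AESGCM(key)
    out = []
    for i, core in enumerate(tt_cores):
        if i in indices:
            nonce = os.urandom(12)
            ct = aes.encrypt(nonce, core.tobytes(), None)
            out.append(("enc", nonce, ct, core.shape, core.dtype))
        else:
            out.append(("plain", core))
    return out

key = AESGCM.generate_key(bit_length=128)
cores = [np.random.rand(4, 8, 4).astype(np.float32) for _ in range(6)]
protected = encrypt_selected_cores(cores, indices={0, 3}, key=key)  # 2 of 6 cores
```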

AI · Bearish · Decrypt – AI · Feb 27 · Importance 7/10

Anthropic Won’t Lift AI Safeguards Amid Ongoing Pentagon Dispute: CEO

Anthropic CEO Dario Amodei announced the company will refuse to comply with Defense Department demands to lift AI safeguards, as the Pentagon considers designating Anthropic a "supply chain risk." The dispute highlights tension between AI companies maintaining safety protocols and government agencies seeking access to less-restricted AI capabilities.

AI · Neutral · TechCrunch – AI · Feb 26 · Importance 7/10

Anthropic CEO stands firm as Pentagon deadline looms

Anthropic CEO Dario Amodei refused to comply with Pentagon demands for unrestricted military access to the company's AI systems, citing moral objections. This stance creates tension between AI companies and government defense requirements as regulatory deadlines approach.

AI · Neutral · The Verge – AI · Feb 26 · Importance 7/10

Jack Dorsey’s Block cuts nearly half of its staff in AI gamble

Jack Dorsey's Block is laying off nearly half its workforce, cutting over 4,000 jobs to reduce staff from 10,000+ to under 6,000 employees. Despite strong business performance with growing profits and customers, the company is restructuring to leverage AI tools with smaller, flatter teams.
