Real-time AI-curated news from 20,318+ articles across 50+ sources. Sentiment analysis, importance scoring, and key takeaways — updated every 15 minutes.
AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 7
🧠Researchers introduce SUPERGLASSES, the first comprehensive benchmark for evaluating Vision Language Models in AI smart glasses applications, comprising 2,422 real-world egocentric image-question pairs. They also propose SUPERLENS, a multimodal agent that outperforms GPT-4o by 2.19% through retrieval-augmented answer generation with automatic object detection and web search capabilities.
AI × Crypto · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 3
🤖Researchers introduce IMMACULATE, a framework that audits commercial large language model API services to detect fraud like model substitution and token overbilling without requiring access to internal systems. The system uses verifiable computation to audit a small fraction of requests, achieving strong detection guarantees with less than 1% throughput overhead.
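To make the sampling intuition concrete, here is a back-of-the-envelope sketch (not the paper's analysis) of how auditing only a small fraction of requests can still catch systematic fraud; the audit rate, fraud rate, and independence assumption are all illustrative.

```python
def detection_probability(audit_rate: float, fraud_rate: float, n_requests: int) -> float:
    """Chance that at least one fraudulent response falls into the audited
    sample, assuming audits are drawn independently per request and every
    audited fraudulent response is caught. A rough illustration only; the
    paper's guarantees come from verifiable computation, not this model."""
    p_caught_per_request = audit_rate * fraud_rate   # request is both fraudulent and audited
    return 1.0 - (1.0 - p_caught_per_request) ** n_requests

# Example: auditing 1% of traffic against a provider substituting a cheaper
# model on 10% of requests is detected almost surely within ~10,000 requests.
print(detection_probability(0.01, 0.10, 10_000))   # ~0.99995
```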
AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 4
🧠Researchers have developed AgentSentry, a novel defense framework that protects AI agents from indirect prompt injection attacks by detecting and mitigating malicious control attempts in real-time. The system achieved 74.55% utility under attack, significantly outperforming existing defenses by 20-33 percentage points while maintaining benign performance.
AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 7
🧠Researchers developed a system that trains large language models using renewable energy during curtailment periods when excess clean electricity would otherwise be wasted. The distributed training approach across multiple GPU clusters reduced operational emissions to 5-12% of traditional single-site training while maintaining model quality.
AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 4
🧠Researchers developed Hyper Diffusion Planner (HDP), a diffusion model-based framework for end-to-end autonomous driving that achieved a 10x performance improvement over base models. The study conducted a comprehensive evaluation across 200 km of real-world driving scenarios, demonstrating that diffusion models can effectively scale to complex autonomous driving tasks when properly designed and trained.
AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 7
🧠Researchers introduce NoRA (Non-linear Rank Adaptation), a new parameter-efficient fine-tuning method that overcomes the 'linear ceiling' limitations of traditional LoRA by using SiLU gating and structural dropout. NoRA achieves superior performance at rank 64 compared to LoRA at rank 512, demonstrating significant efficiency gains in complex reasoning tasks.
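As a rough illustration of the idea, here is a minimal sketch of a LoRA-style adapter with a SiLU gate and dropout on the rank dimension; the exact architecture, initialization, and where the structural dropout is applied are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonlinearLowRankAdapter(nn.Module):
    """Sketch of a NoRA-style adapter: a low-rank update made non-linear by a
    SiLU gate, unlike plain LoRA's purely linear B @ A @ x path. Hypothetical
    layout for illustration; not the paper's reference code."""

    def __init__(self, d_in: int, d_out: int, rank: int = 64,
                 dropout: float = 0.1, scale: float = 1.0):
        super().__init__()
        self.down = nn.Linear(d_in, rank, bias=False)    # project into rank-r subspace
        self.gate = nn.Linear(d_in, rank, bias=False)    # gating branch
        self.up = nn.Linear(rank, d_out, bias=False)     # project back up
        self.drop = nn.Dropout(dropout)                  # dropout on the rank activations
        self.scale = scale
        nn.init.zeros_(self.up.weight)                   # adapter starts as a no-op

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.down(x) * F.silu(self.gate(x))          # SiLU gating makes the update non-linear in x
        return self.scale * self.up(self.drop(h))

# Usage: added to a frozen base layer's output, e.g. y = base(x) + adapter(x).
```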
AI · Neutral · arXiv – CS AI · Feb 27 · 7/10 · 8
🧠Researchers introduce MM-NeuroOnco, a large-scale multimodal dataset containing 24,726 MRI slices and 200,000 instructions for training AI models in brain tumor diagnosis. The benchmark reveals significant challenges in medical AI, with even advanced models like Gemini 3 Flash achieving only 41.88% accuracy on diagnostic questions.
AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 6
🧠Researchers have developed a new framework that uses large language models to guide symbolic regression in discovering interpretable physical laws from high-dimensional materials data. The method reduces the search space by approximately 10^5 times compared to traditional approaches and successfully identified novel formulas for key properties of perovskite materials.
AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 7
🧠Researchers developed Residual Koopman Spectral Profiling (RKSP), a method that predicts transformer training instability from a single forward pass at initialization with 99.5% accuracy. The technique includes Koopman Spectral Shaping (KSS), which can prevent training divergence and enable 50-150% higher learning rates across various AI models, including GPT-2 and LLaMA-2.
$NEAR
AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 7
🧠Researchers have released LLMServingSim 2.0, a unified simulator that models the complex interactions between heterogeneous hardware and disaggregated software in large language model serving infrastructures. The simulator achieves 0.97% average error compared to real deployments while maintaining 10-minute simulation times for complex configurations.
$NEAR
AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 6
🧠Researchers propose Affine-Scaled Attention, a new mechanism that improves Transformer model training stability by introducing flexible scaling and bias terms to attention weights. The approach shows consistent improvements in optimization behavior and downstream task performance compared to standard softmax attention across multiple language model sizes.
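A minimal single-head sketch of what "flexible scaling and bias terms" on attention could look like, with a learnable scale replacing the fixed 1/sqrt(d) factor; whether the terms act on the logits or the post-softmax weights, and how they are shared across heads, is not specified in the summary and is assumed here.

```python
import math
import torch
import torch.nn as nn

class AffineScaledAttention(nn.Module):
    """Single-head self-attention with a learnable scale and bias applied to
    the attention logits. Illustrative only; the paper's exact formulation
    (per-head parameters, placement of the bias, etc.) may differ."""

    def __init__(self, d_model: int):
        super().__init__()
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # Learnable replacement for the fixed 1/sqrt(d) temperature, plus a bias term.
        self.alpha = nn.Parameter(torch.tensor(1.0 / math.sqrt(d_model)))
        self.beta = nn.Parameter(torch.tensor(0.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        logits = self.alpha * (q @ k.transpose(-2, -1)) + self.beta
        weights = torch.softmax(logits, dim=-1)
        return self.out(weights @ v)
```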
AI · Neutral · arXiv – CS AI · Feb 27 · 7/10 · 6
🧠Researchers developed a new theoretical framework for accelerated risk-averse policy evaluation in partially observable Markov decision processes (POMDPs) using Conditional Value-at-Risk (CVaR) bounds. The method enables safe elimination of suboptimal actions while maintaining computational guarantees, achieving substantial speedups in autonomous agent decision-making under uncertainty.
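For readers unfamiliar with the risk measure, a generic empirical CVaR estimator is sketched below; it only illustrates what "risk-averse" means here and does not reproduce the paper's bounds or action-elimination procedure.

```python
import numpy as np

def empirical_cvar(returns: np.ndarray, alpha: float = 0.1) -> float:
    """Conditional Value-at-Risk at level alpha: the average of the worst
    alpha-fraction of sampled returns (lower is worse for returns)."""
    sorted_returns = np.sort(returns)                 # ascending: worst outcomes first
    k = max(1, int(np.ceil(alpha * len(returns))))    # size of the worst tail
    return float(sorted_returns[:k].mean())

# Example: two policies with similar mean return can have very different CVaR
# if one occasionally produces catastrophic outcomes.
rng = np.random.default_rng(0)
safe = rng.normal(1.0, 0.1, 10_000)
risky = np.where(rng.random(10_000) < 0.05, -10.0, 1.6)
print(empirical_cvar(safe), empirical_cvar(risky))
```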
AI · Neutral · arXiv – CS AI · Feb 27 · 7/10 · 6
🧠Researchers have conducted a comprehensive review of adversarial transferability in image classification, identifying gaps in standardized evaluation frameworks for transfer-based attacks. They propose a benchmark framework and categorize existing attacks into six distinct types to address biased assessments in current research.
AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 5
🧠Researchers developed a convolutional neural network model that can automatically detect vulnerabilities in C source code using deep learning techniques. The model was trained on datasets from Draper Labs and NIST, achieving higher recall than previous work while maintaining high precision and demonstrating effectiveness on real Linux kernel vulnerabilities.
AI · Neutral · arXiv – CS AI · Feb 27 · 7/10 · 6
🧠Researchers identified a fundamental limitation in multimodal LLMs where decoders trained on text cannot effectively utilize non-text information like speaker identity or visual textures, despite this information being preserved through all model layers. The study demonstrates this 'modality collapse' is due to decoder design rather than encoding failures, with experiments showing targeted training can improve specific modality accessibility.
AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 7
🧠Researchers introduce GUIPruner, a training-free framework that addresses efficiency bottlenecks in high-resolution GUI agents by eliminating spatiotemporal redundancy. The system achieves 3.4x reduction in computational operations and 3.3x speedup while maintaining 94% of original performance, enabling real-time navigation with minimal resource consumption.
AI · Neutral · arXiv – CS AI · Feb 27 · 7/10 · 5
🧠Researchers propose FedWQ-CP, a new approach for uncertainty quantification in federated learning that addresses both data and model heterogeneity challenges. The method enables reliable uncertainty estimation across distributed agents while maintaining efficiency through single-round communication and weighted threshold aggregation.
AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 6
🧠Researchers developed a runtime-reconfigurable bitwise systolic array architecture for multi-precision quantized neural networks on FPGA hardware accelerators. The system achieves 1.3-3.6x speedup on mixed-precision models while supporting higher clock frequencies up to 250MHz, addressing the trade-off between hardware efficiency and inference accuracy.
AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 8
🧠FlashOptim introduces memory optimization techniques that reduce AI training memory requirements by over 50% per parameter while maintaining model quality. The suite reduces AdamW memory usage from 16 bytes to 7 bytes per parameter through improved master weight splitting and 8-bit optimizer state quantization.
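The quoted 16-to-7-byte reduction can be sanity-checked with a rough per-parameter accounting; the exact breakdown below (what the 16 bytes comprise and how the split master weights are stored) is an assumption for illustration, not the paper's.

```python
# Rough bytes-per-parameter budget for mixed-precision AdamW training.
baseline = {
    "bf16 weight": 2,
    "bf16 gradient": 2,
    "fp32 master weight": 4,
    "fp32 exp_avg (m)": 4,
    "fp32 exp_avg_sq (v)": 4,
}  # -> 16 bytes/param

optimized = {
    "bf16 weight": 2,
    "bf16 gradient": 2,
    "split master-weight residual": 1,  # assumed: only the bits beyond bf16 are kept
    "int8 exp_avg (m)": 1,
    "int8 exp_avg_sq (v)": 1,
}  # -> 7 bytes/param

print(sum(baseline.values()), "->", sum(optimized.values()), "bytes per parameter")
```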
AI · Neutral · arXiv – CS AI · Feb 27 · 7/10 · 7
🧠Researchers developed Compositional-ARC, a dataset to test AI models' ability to systematically generalize abstract spatial reasoning tasks. A small 5.7M parameter transformer model trained with meta-learning outperformed large language models like GPT-4o and Gemini 2.0 Flash on novel geometric transformation combinations.
AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 5
🧠Researchers developed a new economic framework called 'cost-of-pass' to evaluate AI language models by combining accuracy with inference costs. The study found that lightweight models are most cost-effective for basic tasks while reasoning models excel at complex problems, with costs for complex quantitative tasks roughly halving every few months.
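Reading "cost-of-pass" as the expected spend per correct answer, a minimal sketch of the metric and of why lightweight models can win on easy tasks is given below; the dollar figures and pass rates are made up for illustration.

```python
def cost_of_pass(cost_per_attempt_usd: float, pass_rate: float) -> float:
    """Expected cost of obtaining one correct answer: per-attempt inference
    cost divided by the probability an attempt succeeds. This follows the
    summary's description; the paper's exact formulation may differ."""
    if pass_rate <= 0:
        return float("inf")          # a model that never succeeds costs infinitely much
    return cost_per_attempt_usd / pass_rate

# Hypothetical numbers: a lightweight model vs. a reasoning model on the same task.
print(cost_of_pass(0.002, 0.40))     # 0.005 USD per correct answer
print(cost_of_pass(0.050, 0.90))     # ~0.056 USD per correct answer
```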
AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 6
🧠Researchers introduce GraftLLM, a new method for transferring knowledge between large language models using a 'SkillPack' format that preserves capabilities while avoiding catastrophic forgetting. The approach enables efficient model fusion and continual learning for heterogeneous models through modular knowledge storage.
AI · Neutral · arXiv – CS AI · Feb 27 · 7/10 · 7
🧠LiveMCPBench introduces the first large-scale benchmark evaluating AI agents' ability to navigate real-world tasks using Model Context Protocol (MCP) tools across multiple servers. The benchmark reveals significant performance gaps, with top model Claude-Sonnet-4 achieving 78.95% success while most models only reach 30-50%, identifying tool retrieval as the primary bottleneck.
$OCEAN
AI · Neutral · arXiv – CS AI · Feb 27 · 7/10 · 7
🧠A qualitative study with 26 non-AI expert stakeholders reveals that everyday users assess AI fairness more comprehensively than AI experts, considering broader features beyond legally protected categories and setting stricter fairness thresholds. The research highlights the importance of incorporating stakeholder perspectives in AI governance and fairness assessment processes.
AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 6
🧠Researchers have developed DAIL (Discovered Adversarial Imitation Learning), the first meta-learned AI algorithm that uses LLM-guided evolutionary methods to automatically discover reward assignment functions for training AI agents. This breakthrough addresses stability issues in adversarial imitation learning and demonstrates superior performance compared to human-designed approaches across different environments.