y0news
🧠 AI

12,951 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 7

YCDa: YCbCr Decoupled Attention for Real-time Realistic Camouflaged Object Detection

Researchers propose YCDa, a new AI strategy for real-time camouflaged object detection that mimics human vision by separating color and brightness information. The method achieves 112% improvement in detection accuracy and can be easily integrated into existing AI detection systems with minimal computational overhead.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 8

FAST-DIPS: Adjoint-Free Analytic Steps and Hard-Constrained Likelihood Correction for Diffusion-Prior Inverse Problems

Researchers propose FAST-DIPS, a new training-free diffusion prior method for solving inverse problems that achieves up to 19.5x speedup while maintaining competitive image quality metrics. The method replaces computationally expensive inner optimization loops with closed-form projections and analytic step sizes, significantly reducing the number of required denoiser evaluations.
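
The core idea of replacing an inner optimization loop with a closed-form step can be illustrated on a toy linear inverse problem (a generic sketch, not FAST-DIPS itself; the operator `A` and the helper name are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 8))    # forward operator (underdetermined)
x_true = rng.standard_normal(8)
y = A @ x_true                     # measurements

def project_data_consistent(x, A, y):
    """Closed-form projection onto {x : A @ x = y} via the pseudoinverse,
    replacing an iterative data-consistency loop with a single step."""
    return x + np.linalg.pinv(A) @ (y - A @ x)

x = rng.standard_normal(8)         # stand-in for a denoiser output
x_proj = project_data_consistent(x, A, y)
print(np.allclose(A @ x_proj, y))  # True: constraint satisfied in one step
```

Each such projection costs one matrix solve instead of many gradient iterations, which is where the reduced denoiser-evaluation count comes from in this toy setting.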

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 4

Towards Principled Dataset Distillation: A Spectral Distribution Perspective

Researchers propose Class-Aware Spectral Distribution Matching (CSDM), a new dataset distillation method that addresses performance issues on imbalanced datasets. The technique achieves 14% improvement over existing methods on CIFAR-10-LT with enhanced stability on long-tailed data distributions.
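Matching spectral distributions per class can be sketched as comparing eigenvalue spectra of per-class feature covariances (a toy distance, not the paper's CSDM objective; all names here are illustrative):

```python
import numpy as np

def spectral_gap(real_feats, syn_feats):
    """Toy distance between eigenvalue spectra of feature covariances."""
    def spectrum(F):
        C = np.cov(F, rowvar=False)
        return np.sort(np.linalg.eigvalsh(C))[::-1]
    return float(np.abs(spectrum(real_feats) - spectrum(syn_feats)).sum())

rng = np.random.default_rng(1)
# per class: many real samples vs. a few distilled ones, same feature dim
real = {c: rng.standard_normal((200, 16)) for c in range(3)}
syn = {c: rng.standard_normal((20, 16)) for c in range(3)}
# class-aware: one spectral term per class, so rare classes are not drowned out
loss = sum(spectral_gap(real[c], syn[c]) for c in real)
print(loss > 0)
```

Summing one term per class, rather than one term over the pooled dataset, is what keeps tail classes from being dominated on long-tailed data.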

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 7

DeLo: Dual Decomposed Low-Rank Experts Collaboration for Continual Missing Modality Learning

Researchers propose DeLo, a new framework using dual-decomposed low-rank expert architecture to help Large Multimodal Models adapt to real-world scenarios with incomplete data. The system addresses continual missing modality learning by preventing interference between different data types and tasks through specialized routing and memory mechanisms.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 7

Pri4R: Learning World Dynamics for Vision-Language-Action Models with Privileged 4D Representation

Researchers introduce Pri4R, a new approach that enhances Vision-Language-Action (VLA) models by incorporating 4D spatiotemporal understanding during training. The method adds a lightweight point-track head that predicts 3D trajectories, improving physical-world understanding while leaving the original architecture, and hence inference cost, unchanged.

AI · Bearish · arXiv – CS AI · Mar 3 · 7/10 · 8

Extracting Training Dialogue Data from Large Language Model based Task Bots

Researchers have identified significant privacy risks in Large Language Model-based Task-Oriented Dialogue Systems, demonstrating that these AI systems can memorize and leak sensitive training data including phone numbers and complete dialogue exchanges. The study proposes new attack methods that can extract thousands of training dialogue states with over 70% precision in best-case scenarios.

$RNDR
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 6

Towards Privacy-Preserving LLM Inference via Collaborative Obfuscation (Technical Report)

Researchers have developed AloePri, the first privacy-preserving LLM inference method designed for industrial applications. The system uses collaborative obfuscation to protect input/output data while maintaining 96.5-100% accuracy and resisting state-of-the-art attacks, successfully tested on a 671B parameter model.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 7

LFPO: Likelihood-Free Policy Optimization for Masked Diffusion Models

Researchers propose Likelihood-Free Policy Optimization (LFPO), a new framework for improving Diffusion Large Language Models by bypassing likelihood computation issues that plague existing methods. LFPO uses geometric velocity rectification to optimize denoising logits directly, achieving better performance on code and reasoning tasks while reducing inference time by 20%.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 8

GAC: Stabilizing Asynchronous RL Training for LLMs via Gradient Alignment Control

Researchers propose GAC (Gradient Alignment Control), a new method to stabilize asynchronous reinforcement learning training for large language models. The technique addresses training instability issues that arise when scaling RL to modern AI workloads by regulating gradient alignment and preventing overshooting.
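Gradient-alignment gating of this kind can be sketched in a few lines (a generic illustration, not the paper's GAC algorithm; `align_gradient` and its threshold are assumptions):

```python
import numpy as np

def align_gradient(g_stale, g_ref, min_cos=0.0):
    """Gate a delayed gradient by its cosine alignment with a fresh
    reference gradient; misaligned gradients are zeroed, preventing
    updates that overshoot in a stale direction."""
    cos = g_stale @ g_ref / (np.linalg.norm(g_stale) * np.linalg.norm(g_ref) + 1e-8)
    return g_stale if cos >= min_cos else np.zeros_like(g_stale)

g_ref = np.array([1.0, 0.0])
print(align_gradient(np.array([0.9, 0.1]), g_ref))   # kept: [0.9 0.1]
print(align_gradient(np.array([-1.0, 0.0]), g_ref))  # zeroed: [0. 0.]
```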

$NEAR
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 6

The Sentience Readiness Index: Measuring National Preparedness for the Possibility of Artificial Sentience

Researchers have created the Sentience Readiness Index (SRI) to measure how prepared 31 countries are for the possibility of AI achieving consciousness. No nation scored above 'Partially Prepared,' with the UK leading at 49/100, revealing significant gaps in institutional, professional, and cultural infrastructure needed to handle potentially sentient AI systems.

AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 8

PhotoBench: Beyond Visual Matching Towards Personalized Intent-Driven Photo Retrieval

Researchers introduce PhotoBench, the first benchmark for personalized photo retrieval using authentic personal albums rather than web images. The study reveals critical limitations in current AI systems, including modality gaps in unified embedding models and poor tool orchestration in agentic systems.

AI · Neutral · arXiv – CS AI · Mar 3 · 6/10 · 5

A SUPERB-Style Benchmark of Self-Supervised Speech Models for Audio Deepfake Detection

Researchers introduced Spoof-SUPERB, a new benchmark for evaluating self-supervised learning models' ability to detect audio deepfakes. The study tested 20 SSL models and found that large-scale discriminative models like XLS-R and WavLM Large consistently outperformed others, especially under acoustic degradations.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 7

Mean-Flow based One-Step Vision-Language-Action

Researchers developed a Mean-Flow based One-Step Vision-Language-Action (VLA) approach that dramatically improves robotic manipulation efficiency by eliminating iterative sampling requirements. The new method achieves 8.7x faster generation than SmolVLA and 83.9x faster than Diffusion Policy in real-world robotic experiments.
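The speedup comes from collapsing many integration steps into a single jump along a mean velocity. A toy flow toward a fixed target shows the idea (illustrative only, not the paper's model; the closed-form mean velocity holds only for this linear field):

```python
import numpy as np

target = np.array([2.0, -1.0])
v = lambda x: target - x                     # instantaneous velocity field

# Iterative sampler: many small Euler steps (diffusion-policy style)
x = np.zeros(2)
n = 1000
for _ in range(n):
    x = x + v(x) / n

# One-step sampler: the *mean* velocity over [0, 1] jumps there directly
x0 = np.zeros(2)
u_mean = (target - x0) * (1 - np.exp(-1.0))  # closed form for this toy field
x_one = x0 + u_mean

print(np.allclose(x, x_one, atol=1e-2))      # True: same endpoint, 1 step vs 1000
```

A mean-flow model learns that averaged velocity directly, so one network evaluation replaces the whole integration loop.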

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 10

Inference-Time Safety For Code LLMs Via Retrieval-Augmented Revision

Researchers developed a new inference-time safety mechanism for code-generating AI models that uses retrieval-augmented generation to identify and fix security vulnerabilities in real-time. The approach leverages Stack Overflow discussions to guide AI code revision without requiring model retraining, improving security while maintaining interpretability.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 8

DualSentinel: A Lightweight Framework for Detecting Targeted Attacks in Black-box LLM via Dual Entropy Lull Pattern

Researchers introduce DualSentinel, a lightweight framework for detecting targeted attacks on Large Language Models by identifying 'Entropy Lull' patterns: periods of abnormally low token-probability entropy that indicate when an LLM is being coercively controlled. The system uses dual-check verification to accurately detect backdoor and prompt injection attacks with near-zero false positives while maintaining minimal computational overhead.
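
The detection signal can be sketched as a sliding-window check on token-probability entropy (a simplified illustration, not DualSentinel's dual-check pipeline; the window size and threshold are arbitrary):

```python
import numpy as np

def entropy_lull_windows(token_probs, window=4, threshold=0.5):
    """Flag windows where mean token entropy drops abnormally low,
    a pattern that may indicate the model is being coercively steered."""
    H = np.array([-(p * np.log(p + 1e-12)).sum() for p in token_probs])
    return [i for i in range(len(H) - window + 1)
            if H[i:i + window].mean() < threshold]

vocab = 8
normal = np.full(vocab, 1 / vocab)   # high entropy (~2.08 nats)
forced = np.zeros(vocab)             # near-deterministic, near-zero entropy
forced[0] = 0.99
forced[1:] = 0.01 / (vocab - 1)
stream = [normal] * 6 + [forced] * 4  # entropy lull at the tail
print(entropy_lull_windows(stream))   # -> [6]
```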

$NEAR
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 9

From Verbatim to Gist: Distilling Pyramidal Multimodal Memory via Semantic Information Bottleneck for Long-Horizon Video Agents

Researchers have developed MM-Mem, a new pyramidal multimodal memory architecture that enables AI systems to better understand long-horizon videos by mimicking human cognitive memory processes. The system addresses current limitations in multimodal large language models by creating a hierarchical memory structure that progressively distills detailed visual information into high-level semantic understanding.

AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 8

Align and Filter: Improving Performance in Asynchronous On-Policy RL

Researchers propose a new method called total Variation-based Advantage aligned Constrained policy Optimization to address policy lag issues in distributed reinforcement learning systems. The approach aims to improve performance when scaling on-policy learning algorithms by mitigating the mismatch between behavior and learning policies during high-frequency updates.
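One simple way to mitigate policy lag is to discard samples whose behavior-to-learner importance ratio has drifted too far from 1 (a generic sketch, not the paper's constrained objective; `filter_stale_samples` and the deviation bound are assumptions):

```python
import numpy as np

def filter_stale_samples(logp_behavior, logp_learner, max_ratio_dev=0.2):
    """Keep only samples whose learner/behavior policy ratio stays near 1,
    discarding rollouts from a lagged policy that would bias the update."""
    ratio = np.exp(logp_learner - logp_behavior)
    return np.abs(ratio - 1.0) <= max_ratio_dev

logp_b = np.log(np.array([0.50, 0.50, 0.50]))
logp_l = np.log(np.array([0.55, 0.90, 0.45]))  # middle sample is badly lagged
print(filter_stale_samples(logp_b, logp_l))    # [ True False  True]
```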

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 7

Causal Neural Probabilistic Circuits

Researchers propose Causal Neural Probabilistic Circuits (CNPC), a new AI model that enhances interpretable machine learning by incorporating causal dependencies between concepts. The model allows domain experts to make corrections that properly propagate through causal relationships, achieving higher accuracy than baseline models across benchmark datasets.

AI · Bearish · arXiv – CS AI · Mar 3 · 6/10 · 7

PanCanBench: A Comprehensive Benchmark for Evaluating Large Language Models in Pancreatic Oncology

Researchers created PanCanBench, a comprehensive benchmark evaluating 22 large language models on pancreatic cancer-related patient questions, revealing significant variations in clinical accuracy and high hallucination rates. The study found that even top-performing models like GPT-4o and Gemini-2.5 Pro had hallucination rates of 6%, while newer reasoning-optimized models didn't consistently improve factual accuracy.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 9

Provable and Practical In-Context Policy Optimization for Self-Improvement

Researchers introduce In-Context Policy Optimization (ICPO), a new method that allows AI models to improve their responses during inference through multi-round self-reflection without parameter updates. The practical ME-ICPO algorithm demonstrates competitive performance on mathematical reasoning tasks while maintaining affordable inference costs.
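The generate-critique-regenerate loop can be sketched with hypothetical `model` and `critic` callables (not the paper's ME-ICPO algorithm; the stubs below exist only to make the loop runnable):

```python
def icpo_refine(prompt, model, critic, rounds=3):
    """Multi-round self-reflection at inference time: generate, critique,
    then regenerate with the critique in context. No parameter updates."""
    answer = model(prompt)
    for _ in range(rounds):
        feedback = critic(prompt, answer)
        if feedback is None:  # critic is satisfied: stop early
            break
        answer = model(f"{prompt}\nPrevious answer: {answer}\nCritique: {feedback}")
    return answer

# Toy stand-ins: the "model" improves once it sees a critique in context
model = lambda p: "17" if "Critique" in p else "16"
critic = lambda p, a: None if a == "17" else "check the arithmetic"
print(icpo_refine("What is 8 + 9?", model, critic))  # -> 17
```

Inference cost grows linearly with the number of reflection rounds, which is why capping `rounds` (and stopping early) keeps the method affordable.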

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 6

MetaState: Persistent Working Memory for Discrete Diffusion Language Models

Researchers introduce MetaState, a recurrent augmentation for discrete diffusion language models (dLLMs) that adds persistent working memory to improve text generation quality. The system addresses the 'Information Island' problem where intermediate representations are discarded between denoising steps, achieving improved accuracy on LLaDA-8B and Dream-7B models with minimal parameter overhead.
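Carrying a persistent state across denoising steps can be sketched as a simple recurrence (a toy illustration, not MetaState's architecture; the denoiser and update rule are stand-ins):

```python
import numpy as np

def denoise_with_memory(x, steps, denoiser, update_state):
    """Thread a persistent state vector through every denoising step, so
    information computed early is not discarded between steps."""
    state = np.zeros_like(x)
    for _ in range(steps):
        x = denoiser(x, state)          # each step conditions on the memory
        state = update_state(state, x)  # the memory absorbs the new step
    return x

# Toy stand-ins: the memory is an exponential moving average of outputs
denoiser = lambda x, s: 0.5 * x + 0.1 * s
update_state = lambda s, x: 0.9 * s + 0.1 * x
out = denoise_with_memory(np.ones(4), 5, denoiser, update_state)
print(out.shape)  # (4,)
```

Without the `state` argument, each step would see only the current iterate, which is exactly the 'Information Island' problem the summary describes.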
