y0news
🧠 AI

12,931 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.

AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 6

Non-verbal Real-time Human-AI Interaction in Constrained Robotic Environments

Researchers developed the first real-time framework for natural non-verbal human-AI interaction using body language, achieving 100 FPS on NVIDIA hardware. The study found that while AI models can mimic human motion, measurable differences persist between human and AI-generated body language, with temporal coherence being more important than visual fidelity.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 3

Hyperparameter Trajectory Inference with Conditional Lagrangian Optimal Transport

Researchers introduce Hyperparameter Trajectory Inference (HTI), a method to predict how neural networks behave with different hyperparameter settings without expensive retraining. The approach uses conditional Lagrangian optimal transport to create surrogate models that approximate neural network outputs across various hyperparameter configurations.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 3

Explanation-Guided Adversarial Training for Robust and Interpretable Models

Researchers propose Explanation-Guided Adversarial Training (EGAT), a framework that combines adversarial training with explainable AI to create more robust and interpretable deep neural networks. The method achieves 37% improvement in adversarial accuracy while producing semantically meaningful explanations with only 16% increase in training time.
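The combination EGAT describes, adversarial training plus an explanation-stability term, can be illustrated with a toy numpy sketch. This is not the paper's method: the model is a bare logistic classifier, the FGSM step and the weight-stability penalty (standing in for explanation consistency, since a linear model's saliency map is just its weight vector) are illustrative assumptions.

```python
# Toy sketch of explanation-guided adversarial training (illustrative only):
# train a logistic model on FGSM-perturbed inputs, with a stability penalty
# standing in for the explanation-consistency term.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = np.array([1.5, -2.0, 0.5, 0.0, 1.0])
y = (X @ w_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

w = np.zeros(5)
eps, lr, lam = 0.1, 0.5, 0.01
for _ in range(300):
    p = sigmoid(X @ w)
    # FGSM-style perturbation: the input gradient of logistic loss is (p - y) * w
    X_adv = X + eps * np.sign((p - y)[:, None] * w[None, :])
    p_adv = sigmoid(X_adv @ w)
    g = X_adv.T @ (p_adv - y) / len(y)   # cross-entropy gradient on adversarial batch
    # for a linear model the saliency map *is* w, so a weight-stability
    # penalty is a degenerate stand-in for explanation consistency
    w -= lr * (g + lam * w)

acc = ((sigmoid(X @ w) > 0.5) == y).mean()
print(f"clean accuracy: {acc:.2f}")
```

The point of the sketch is only the training loop's shape: each update sees the perturbed batch, and the extra regularizer ties robustness to explanation stability.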

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 4

Towards Principled Dataset Distillation: A Spectral Distribution Perspective

Researchers propose Class-Aware Spectral Distribution Matching (CSDM), a new dataset distillation method that addresses performance issues on imbalanced datasets. The technique achieves 14% improvement over existing methods on CIFAR-10-LT with enhanced stability on long-tailed data distributions.
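The "spectral distribution matching" flavor of objective can be shown in closed form on a toy class: keep a small synthetic set's directions, but impose the real class's normalized singular-value spectrum. This is a hand-rolled illustration of the general idea, not CSDM's actual loss.

```python
# Illustrative per-class spectral matching for dataset distillation:
# rescale a tiny synthetic set so its feature-covariance spectrum
# matches the real class's (closed form, names hypothetical).
import numpy as np

rng = np.random.default_rng(3)
# "real" features for one class, with an anisotropic spectrum
real = rng.normal(size=(500, 8)) * np.linspace(3.0, 0.5, 8)
target = np.linalg.svd(real - real.mean(0), compute_uv=False) / np.sqrt(len(real) - 1)

syn = rng.normal(size=(20, 8))  # tiny distilled set for the same class
U, s, Vt = np.linalg.svd(syn - syn.mean(0), full_matrices=False)
# keep the synthetic set's directions, impose the real (normalized) spectrum
syn = syn.mean(0) + (U * (target * np.sqrt(len(syn) - 1))) @ Vt

new = np.linalg.svd(syn - syn.mean(0), compute_uv=False) / np.sqrt(len(syn) - 1)
print(np.allclose(new, target))  # True
```

Doing this per class, rather than on the pooled dataset, is what makes the matching "class-aware" and is why such methods behave better on long-tailed class distributions.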

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 5

Shape-Interpretable Visual Self-Modeling Enables Geometry-Aware Continuum Robot Control

Researchers developed a shape-interpretable visual self-modeling framework for continuum robots that enables geometry-aware control using Bezier-curve representations and neural ordinary differential equations. The system achieves accurate shape-position regulation with shape errors within 1.56% and end-effector errors within 2% while enabling obstacle avoidance and environmental awareness.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 5

DynaMoE: Dynamic Token-Level Expert Activation with Layer-Wise Adaptive Capacity for Mixture-of-Experts Neural Networks

Researchers introduce DynaMoE, a new Mixture-of-Experts framework that dynamically activates experts based on input complexity and uses adaptive capacity allocation across network layers. The system achieves superior parameter efficiency compared to static baselines and demonstrates that optimal expert scheduling strategies vary by task type and model scale.
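The dynamic token-level activation idea can be sketched in a few lines: instead of a fixed top-k, let each token's router entropy decide how many experts fire. Everything here (the entropy proxy for "complexity", the thresholds, the shapes) is an illustrative assumption, not DynaMoE's actual routing rule.

```python
# Toy sketch of dynamic token-level expert activation: high-entropy
# (ambiguous) routing activates more experts than confident routing.
import numpy as np

rng = np.random.default_rng(1)
n_tokens, d, n_experts = 4, 8, 4
tokens = rng.normal(size=(n_tokens, d))
router = rng.normal(size=(d, n_experts))
experts = rng.normal(size=(n_experts, d, d))  # one linear map per expert

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

outputs = []
for t in tokens:
    gates = softmax(t @ router)
    # complexity proxy: flat (high-entropy) routing -> activate more experts
    entropy = -(gates * np.log(gates + 1e-9)).sum()
    k = 1 if entropy < 1.0 else 2  # illustrative adaptive-capacity thresholds
    top = np.argsort(gates)[-k:]
    mix = sum(gates[i] * (t @ experts[i]) for i in top) / gates[top].sum()
    outputs.append(mix)
out = np.stack(outputs)
print(out.shape)  # (4, 8)
```

Making k a per-token, per-layer decision rather than a global constant is the source of the parameter-efficiency gains the summary describes.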

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 8

MVR: Multi-view Video Reward Shaping for Reinforcement Learning

Researchers introduce Multi-View Video Reward Shaping (MVR), a new reinforcement learning framework that uses multi-viewpoint video analysis and vision-language models to improve reward design for complex AI tasks. The system addresses limitations of single-image approaches by analyzing dynamic motions across multiple camera angles, showing improved performance on humanoid locomotion and manipulation tasks.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 9

Surgical Post-Training: Cutting Errors, Keeping Knowledge

Researchers introduce Surgical Post-Training (SPoT), a new method to improve Large Language Model reasoning while preventing catastrophic forgetting. SPoT achieved 6.2% accuracy improvement on Qwen3-8B using only 4k data pairs and 28 minutes of training, offering a more efficient alternative to traditional post-training approaches.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4

Modular Memory is the Key to Continual Learning Agents

Researchers propose combining In-Weight Learning (IWL) and In-Context Learning (ICL) through modular memory architectures to solve continual learning challenges in AI. The framework aims to enable AI agents to continuously adapt and accumulate knowledge without catastrophic forgetting, addressing key limitations of current foundation models.
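The IWL/ICL split can be made concrete with a toy agent: a frozen "in-weight" predictor handles familiar inputs, while new task behavior is written into a modular episodic memory instead of retraining, so old knowledge is never overwritten. All names and thresholds below are illustrative, not from the paper.

```python
# Toy sketch of modular memory for continual learning (illustrative):
# frozen in-weight skill + nearest-neighbor episodic memory for new tasks.
import numpy as np

rng = np.random.default_rng(5)
W_base = np.array([1.0, -1.0])        # "in-weight" knowledge from pretraining
memory_keys, memory_vals = [], []     # modular episodic memory

def predict(x):
    if memory_keys:
        d = np.linalg.norm(np.array(memory_keys) - x, axis=1)
        if d.min() < 0.5:             # close match -> answer from memory (ICL-like)
            return memory_vals[int(d.argmin())]
    return float(np.sign(W_base @ x)) # otherwise fall back to the in-weight skill

# a "new task" flips the label in one region; store it instead of retraining
memory_keys.append(np.array([2.0, 2.0]))
memory_vals.append(-1.0)

old = predict(np.array([1.0, -1.0]))  # old knowledge intact
new = predict(np.array([2.1, 1.9]))   # new behavior served from memory
print(old, new)  # 1.0 -1.0
```

Because the new behavior lives in memory rather than in W_base, nothing is forgotten, which is exactly the failure mode the framework targets.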

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 7

QIME: Constructing Interpretable Medical Text Embeddings via Ontology-Grounded Questions

Researchers have developed QIME, a new framework for creating interpretable medical text embeddings that uses ontology-grounded questions to represent biomedical text. Unlike black-box AI models, QIME provides clinically meaningful explanations while achieving performance close to traditional dense embeddings in medical text analysis tasks.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 8

Reasoning as Gradient: Scaling MLE Agents Beyond Tree Search

Researchers introduced GOME, an AI agent that uses gradient-based optimization instead of tree search for machine learning engineering tasks, achieving 35.1% success rate on MLE-Bench. The study shows gradient-based approaches outperform tree search as AI reasoning capabilities improve, suggesting this method will become more effective as LLMs advance.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 7

DeLo: Dual Decomposed Low-Rank Experts Collaboration for Continual Missing Modality Learning

Researchers propose DeLo, a new framework using dual-decomposed low-rank expert architecture to help Large Multimodal Models adapt to real-world scenarios with incomplete data. The system addresses continual missing modality learning by preventing interference between different data types and tasks through specialized routing and memory mechanisms.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 11

FreeGNN: Continual Source-Free Graph Neural Network Adaptation for Renewable Energy Forecasting

Researchers developed FreeGNN, a continual source-free graph neural network framework for renewable energy forecasting that adapts to new sites without requiring source data or target labels. The system uses a teacher-student strategy with memory replay and achieved strong performance across three real-world datasets including GEFCom2012, Solar PV, and Wind SCADA.
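The streaming-adaptation-with-replay part of this setup can be sketched simply: in forecasting, the true value arrives one step later, so each round yields a supervised pair that also feeds a replay buffer. This toy drops the graph structure and the EMA teacher entirely; it only shows the memory-replay loop, with all constants being assumptions.

```python
# Minimal sketch of continual adaptation with memory replay for a
# streaming forecaster at a new site (graph and teacher omitted).
import numpy as np
from collections import deque

rng = np.random.default_rng(2)
w = np.zeros(3)                      # "student" linear forecaster
buffer = deque(maxlen=100)           # memory replay of past (x, y) pairs
lr = 0.05
w_site = np.array([0.8, -0.3, 0.5])  # unknown dynamics of the new site

for step in range(400):
    x = rng.normal(size=3)
    y = w_site @ x + 0.05 * rng.normal()   # ground truth observed after the fact
    buffer.append((x, y))
    # update on the fresh pair plus a few remembered ones
    idx = rng.choice(len(buffer), size=min(4, len(buffer)), replace=False)
    for xb, yb in (buffer[i] for i in idx):
        w -= lr * (w @ xb - yb) * xb

err = np.abs(w - w_site).max()
print(f"max weight error after streaming adaptation: {err:.3f}")
```

Replaying old samples alongside each fresh one is what keeps the model stable when the incoming stream drifts, which is the continual-learning half of the framework.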

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 6

What Helps -- and What Hurts: Bidirectional Explanations for Vision Transformers

Researchers propose BiCAM, a new method for interpreting Vision Transformer (ViT) decisions that captures both positive and negative contributions to predictions. The approach improves explanation quality and enables adversarial example detection across multiple ViT variants without requiring model retraining.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 7

YCDa: YCbCr Decoupled Attention for Real-time Realistic Camouflaged Object Detection

Researchers propose YCDa, a new AI strategy for real-time camouflaged object detection that mimics human vision by separating color and brightness information. The method achieves 112% improvement in detection accuracy and can be easily integrated into existing AI detection systems with minimal computational overhead.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 3

Adaptive Confidence Regularization for Multimodal Failure Detection

Researchers propose Adaptive Confidence Regularization (ACR), a new framework for detecting failures in multimodal AI systems used in critical applications like autonomous vehicles and medical diagnostics. The approach uses confidence degradation detection and synthetic failure generation to improve reliability of AI predictions in high-stakes scenarios.
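The two ingredients named here, confidence degradation and synthetic failure generation, can be demonstrated with a toy classifier: corrupt inputs so they land near the decision boundary, then calibrate a confidence threshold on those synthetic failures. The model, corruption, and threshold rule are all illustrative assumptions, not ACR itself.

```python
# Toy sketch of confidence-based failure detection with synthetic failures.
import numpy as np

rng = np.random.default_rng(4)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

W = rng.normal(size=(5, 3))                 # toy 3-class linear classifier
clean = rng.normal(size=(100, 5)) * 2.0     # confident, in-distribution inputs
# synthetic failures: washed-out inputs that land near the decision boundary
corrupt = clean * 0.1 + rng.normal(size=clean.shape) * 0.1

conf_clean = softmax(clean @ W).max(axis=1)
conf_bad = softmax(corrupt @ W).max(axis=1)
# calibrate a flagging threshold using the synthetic failures
thresh = np.median(np.concatenate([conf_clean, conf_bad]))
flagged = (conf_bad < thresh).mean()
print(f"synthetic failures flagged: {flagged:.2f}")
```

Generating failures synthetically sidesteps the core difficulty in safety-critical domains: real failure examples are rare precisely where detecting them matters most.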

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 5

CHLU: The Causal Hamiltonian Learning Unit as a Symplectic Primitive for Deep Learning

Researchers propose the Causal Hamiltonian Learning Unit (CHLU), a physics-based deep learning primitive that addresses stability issues in temporal dynamics models. The CHLU uses symplectic integration and Hamiltonian structure to maintain infinite-horizon stability while preserving information, potentially solving the memory-stability trade-off in neural networks.

AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 8

A Practical Guide to Streaming Continual Learning

Researchers propose Streaming Continual Learning (SCL) as a unified paradigm that combines Continual Learning and Streaming Machine Learning approaches. SCL aims to enable AI systems to both rapidly adapt to new information and retain previously learned knowledge, addressing limitations of existing methods that excel at only one aspect.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 7

LFPO: Likelihood-Free Policy Optimization for Masked Diffusion Models

Researchers propose Likelihood-Free Policy Optimization (LFPO), a new framework for improving Diffusion Large Language Models by bypassing likelihood computation issues that plague existing methods. LFPO uses geometric velocity rectification to optimize denoising logits directly, achieving better performance on code and reasoning tasks while reducing inference time by 20%.

AI · Bearish · arXiv – CS AI · Mar 3 · 7/10 · 8

Extracting Training Dialogue Data from Large Language Model based Task Bots

Researchers have identified significant privacy risks in Large Language Model-based Task-Oriented Dialogue Systems, demonstrating that these AI systems can memorize and leak sensitive training data including phone numbers and complete dialogue exchanges. The study proposes new attack methods that can extract thousands of training dialogue states with over 70% precision in best-case scenarios.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 7

Pri4R: Learning World Dynamics for Vision-Language-Action Models with Privileged 4D Representation

Researchers introduce Pri4R, a new approach that enhances Vision-Language-Action (VLA) models by incorporating 4D spatiotemporal understanding during training. The method adds a lightweight point track head that predicts 3D trajectories, improving physical world understanding while maintaining the original architecture during inference with no computational overhead.

AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 6

The Sentience Readiness Index: Measuring National Preparedness for the Possibility of Artificial Sentience

Researchers have created the Sentience Readiness Index (SRI) to measure how prepared 31 countries are for the possibility of AI achieving consciousness. No nation scored above 'Partially Prepared,' with the UK leading at 49/100, revealing significant gaps in institutional, professional, and cultural infrastructure needed to handle potentially sentient AI systems.
