y0news
AI

12,927 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10

Probabilistic Retrofitting of Learned Simulators

Researchers developed a training-efficient method that converts pre-trained deterministic AI models for solving partial differential equations (PDEs) into probabilistic ones via Continuous Ranked Probability Score (CRPS) retrofitting. The approach achieves 20-54% improvements in accuracy metrics at minimal additional training cost compared with retraining from scratch.
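
The CRPS mentioned above is a proper scoring rule for probabilistic forecasts: it rewards ensembles that are both sharp and well-calibrated, and reduces to plain absolute error for a point forecast. An illustrative sketch (the energy-form estimator is standard; the function name and numbers are not from the paper):

```python
import numpy as np

def crps_ensemble(samples, obs):
    """Energy-form CRPS estimate: E|X - y| - 0.5 * E|X - X'| for X, X' ~ F."""
    samples = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(samples - obs))
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return float(term1 - term2)

# A sharp, well-centred ensemble beats a point guess that is off.
print(crps_ensemble([0.9, 1.0, 1.1], obs=1.0))  # ~0.022
print(crps_ensemble([2.0, 2.0, 2.0], obs=1.0))  # 1.0, i.e. plain absolute error
```

Retrofitting, as summarized, then amounts to fine-tuning the pre-trained solver's now-stochastic outputs against a score like this rather than a deterministic loss.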

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10

Explanation-Guided Adversarial Training for Robust and Interpretable Models

Researchers propose Explanation-Guided Adversarial Training (EGAT), a framework that combines adversarial training with explainable AI to create more robust and interpretable deep neural networks. The method achieves 37% improvement in adversarial accuracy while producing semantically meaningful explanations with only 16% increase in training time.
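
The explanation-guidance term of EGAT is not recoverable from the summary, but the adversarial-training half can be shown in miniature. A single FGSM-style perturbation on a toy logistic classifier (the model, names, and epsilon are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_example(x, y, w, b, eps):
    """One FGSM perturbation for a logistic classifier p = sigmoid(w.x + b).

    For binary cross-entropy, dLoss/dx = (p - y) * w, so the attack steps
    along the sign of that gradient to maximally increase the loss.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([1.0, 1.0]), 1.0
x_adv = fgsm_example(x, y, w, b, eps=0.1)
clean = sigmoid(np.dot(w, x) + b)
attacked = sigmoid(np.dot(w, x_adv) + b)
print(attacked < clean)  # True: confidence in the true label drops
```

Adversarial training then fits the model on clean and perturbed inputs together; EGAT additionally constrains the model's explanations during that process.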

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10

KDFlow: A User-Friendly and Efficient Knowledge Distillation Framework for Large Language Models

Researchers have developed KDFlow, a new framework for compressing large language models that achieves 1.44x to 6.36x faster training speeds compared to existing knowledge distillation methods. The framework uses a decoupled architecture that optimizes both training and inference efficiency while reducing communication costs through innovative data transfer techniques.
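
KDFlow's decoupled architecture is not described in enough detail here to reproduce, but the objective any such framework optimizes is the classic temperature-scaled distillation loss; a minimal sketch (function names and logits invented):

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in standard knowledge distillation."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    return float(T * T * np.sum(p_t * (np.log(p_t) - np.log(p_s))))

teacher = [4.0, 1.0, -1.0]
aligned = [3.9, 1.1, -0.9]
misaligned = [-1.0, 1.0, 4.0]
print(distillation_loss(aligned, teacher) < distillation_loss(misaligned, teacher))  # True
```

The engineering contribution claimed above sits around this loss: scheduling teacher inference and student training separately and cutting the cost of shipping soft targets between them.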

AI · Neutral · arXiv – CS AI · Mar 3 · 7/10

Diagnosing Generalization Failures from Representational Geometry Markers

Researchers propose a new approach to predict AI model failures by analyzing geometric properties of data representations rather than reverse-engineering internal mechanisms. They found that reduced manifold dimensionality and utility in training data consistently predict poor performance on out-of-distribution tasks across different architectures and datasets.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10

Agentic Code Reasoning

Researchers introduce 'semi-formal reasoning' for LLM agents to analyze code semantics without execution, showing significant accuracy improvements across multiple tasks. The methodology achieves 88-93% accuracy on patch verification and 87% on code question answering, potentially enabling practical applications in automated code review and static analysis.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10

Closed-Loop Action Chunks with Dynamic Corrections for Training-Free Diffusion Policy

Researchers have developed DCDP, a Dynamic Closed-Loop Diffusion Policy framework that significantly improves robotic manipulation in dynamic environments. The system achieves 19% better adaptability without retraining while requiring only 5% additional computational overhead through real-time action correction and environmental dynamics integration.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10

ALTER: Asymmetric LoRA for Token-Entropy-Guided Unlearning of LLMs

Researchers introduce ALTER, a new framework for efficiently "unlearning" specific knowledge from large language models while preserving their overall utility. The system uses asymmetric LoRA architecture to selectively forget targeted information with 95% effectiveness while maintaining over 90% model utility, significantly outperforming existing methods.
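
ALTER's token-entropy guidance and asymmetric design cannot be reconstructed from the summary alone, but the LoRA mechanism it builds on — training only a low-rank update B·A on top of a frozen weight — is standard and cheap to sketch (sizes, seed, and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 64, 64, 4                 # hypothetical sizes; r << d

W = rng.standard_normal((d_out, d_in))     # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection (zero init)

def lora_forward(x):
    """y = (W + B @ A) x — only A and B are updated during unlearning."""
    return W @ x + B @ (A @ x)

x = np.ones(d_in)
print(np.allclose(lora_forward(x), W @ x))  # True at init: B is zero, so no drift yet

full_params = W.size
lora_params = A.size + B.size
print(lora_params / full_params)  # 0.125: an 8x cut in trainable weights
```

The parameter savings are what make unlearning "efficient" in this setting: forgetting is pushed into the small adapters while W stays untouched.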

AI · Bearish · arXiv – CS AI · Mar 3 · 7/10

Real Money, Fake Models: Deceptive Model Claims in Shadow APIs

A systematic audit of 17 shadow APIs used in 187 academic papers reveals widespread deception, with performance divergence up to 47.21% and identity verification failures in 45.83% of tests. These third-party services claim to provide access to frontier LLMs like GPT-5 and Gemini-2.5 but deliver inconsistent outputs, undermining research validity and reproducibility.

AI · Neutral · arXiv – CS AI · Mar 3 · 7/10

Non-verbal Real-time Human-AI Interaction in Constrained Robotic Environments

Researchers developed the first real-time framework for natural non-verbal human-AI interaction using body language, achieving 100 FPS on NVIDIA hardware. The study found that while AI models can mimic human motion, measurable differences persist between human and AI-generated body language, with temporal coherence being more important than visual fidelity.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10

CHLU: The Causal Hamiltonian Learning Unit as a Symplectic Primitive for Deep Learning

Researchers propose the Causal Hamiltonian Learning Unit (CHLU), a physics-based deep learning primitive that addresses stability issues in temporal dynamics models. The CHLU uses symplectic integration and Hamiltonian structure to maintain infinite-horizon stability while preserving information, potentially solving the memory-stability trade-off in neural networks.
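
The stability claim rests on a well-known property of symplectic integrators, which is easy to demonstrate in miniature: on a harmonic oscillator, a semi-implicit (symplectic) Euler step keeps energy bounded over arbitrarily long horizons, while explicit Euler blows up. This toy is not the CHLU itself, just the property the unit is built on:

```python
import numpy as np

def simulate(steps=10000, dt=0.05, symplectic=True):
    """Unit-mass harmonic oscillator with H = (p^2 + q^2) / 2.

    Symplectic (semi-implicit) Euler updates q with the *new* p, which keeps
    the energy bounded forever; explicit Euler lets it grow without limit.
    """
    q, p = 1.0, 0.0
    for _ in range(steps):
        if symplectic:
            p -= dt * q        # dp/dt = -dH/dq
            q += dt * p        # dq/dt =  dH/dp, using the updated p
        else:
            q_new = q + dt * p
            p -= dt * q        # both updates use the old state
            q = q_new
    return 0.5 * (p * p + q * q)

print(round(simulate(symplectic=True), 3))  # stays near the initial energy 0.5
print(simulate(symplectic=False) > 10)      # True: explicit Euler's energy explodes
```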

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10

Modular Memory is the Key to Continual Learning Agents

Researchers propose combining In-Weight Learning (IWL) and In-Context Learning (ICL) through modular memory architectures to solve continual learning challenges in AI. The framework aims to enable AI agents to continuously adapt and accumulate knowledge without catastrophic forgetting, addressing key limitations of current foundation models.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10

Hyperparameter Trajectory Inference with Conditional Lagrangian Optimal Transport

Researchers introduce Hyperparameter Trajectory Inference (HTI), a method to predict how neural networks behave with different hyperparameter settings without expensive retraining. The approach uses conditional Lagrangian optimal transport to create surrogate models that approximate neural network outputs across various hyperparameter configurations.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10

FreeAct: Freeing Activations for LLM Quantization

Researchers propose FreeAct, a new quantization framework for Large Language Models that improves efficiency by using dynamic transformation matrices for different token types. The method achieves up to 5.3% performance improvement over existing approaches by addressing the memory and computational overhead challenges in LLMs.
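
FreeAct's learned dynamic transformation matrices are not specified in the summary; what can be sketched is the baseline problem such schemes refine — giving each token its own quantization scale so an outlier token does not destroy every other row's precision (illustrative, not the paper's method):

```python
import numpy as np

def quantize_per_token(acts, bits=8):
    """Symmetric per-token quantization of an (n_tokens, d) activation matrix.

    Each token row gets its own scale, so one outlier row cannot force a
    coarse scale onto all the others.
    """
    qmax = 2 ** (bits - 1) - 1
    scales = np.abs(acts).max(axis=1, keepdims=True) / qmax
    scales = np.where(scales == 0, 1.0, scales)
    q = np.clip(np.round(acts / scales), -qmax - 1, qmax)
    return q.astype(np.int8), scales

def dequantize(q, scales):
    return q.astype(float) * scales

rng = np.random.default_rng(0)
acts = rng.standard_normal((4, 16))
acts[0] *= 50.0                        # one outlier token
q, s = quantize_per_token(acts)
err = np.abs(dequantize(q, s) - acts).max(axis=1)
print(err[1:].max() < 0.05)  # True: non-outlier rows keep fine-grained precision
```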

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10

DynaMoE: Dynamic Token-Level Expert Activation with Layer-Wise Adaptive Capacity for Mixture-of-Experts Neural Networks

Researchers introduce DynaMoE, a new Mixture-of-Experts framework that dynamically activates experts based on input complexity and uses adaptive capacity allocation across network layers. The system achieves superior parameter efficiency compared to static baselines and demonstrates that optimal expert scheduling strategies vary by task type and model scale.
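
DynaMoE's contribution is making the number of active experts dynamic per token and per layer; the static top-k routing it generalizes looks like this (all shapes and names invented for illustration):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def moe_forward(x, gate_w, experts, k=2):
    """Route each token to its top-k experts, weighted by renormalised gates.

    x: (n_tokens, d); gate_w: (d, n_experts); experts: list of (d, d) matrices.
    """
    probs = softmax(x @ gate_w)               # (n_tokens, n_experts)
    top = np.argsort(-probs, axis=1)[:, :k]   # chosen experts per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        w = probs[t, top[t]]
        w = w / w.sum()                       # renormalise over selected experts
        for weight, e_idx in zip(w, top[t]):
            out[t] += weight * (x[t] @ experts[e_idx])
    return out, top

rng = np.random.default_rng(1)
d, n_exp = 8, 4
x = rng.standard_normal((5, d))
out, chosen = moe_forward(x, rng.standard_normal((d, n_exp)),
                          [rng.standard_normal((d, d)) for _ in range(n_exp)])
print(out.shape, chosen.shape)  # (5, 8) (5, 2)
```

A dynamic variant would replace the fixed k with a per-token value derived from, say, the gate distribution's entropy.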

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10

Towards Principled Dataset Distillation: A Spectral Distribution Perspective

Researchers propose Class-Aware Spectral Distribution Matching (CSDM), a new dataset distillation method that addresses performance issues on imbalanced datasets. The technique achieves 14% improvement over existing methods on CIFAR-10-LT with enhanced stability on long-tailed data distributions.
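
One plausible reading of "spectral distribution matching" is comparing the eigenvalue spectra of per-class feature covariances between real and distilled data. The toy loss below sketches that idea; it is an interpretation, not the paper's actual objective:

```python
import numpy as np

def class_spectrum(feats):
    """Sorted eigenvalue spectrum of one class's feature covariance matrix."""
    cov = np.cov(feats, rowvar=False)
    return np.sort(np.linalg.eigvalsh(cov))[::-1]

def spectral_match_loss(real_feats, syn_feats):
    """Distance between real and synthetic spectra (smaller = better match)."""
    return float(np.abs(class_spectrum(real_feats) - class_spectrum(syn_feats)).sum())

rng = np.random.default_rng(0)
aniso = np.array([3.0, 2.0, 1.0, 0.5, 0.2, 0.1])   # anisotropic variance profile
real = rng.standard_normal((500, 6)) * aniso
good = rng.standard_normal((500, 6)) * aniso        # same spectrum, new samples
bad = rng.standard_normal((500, 6))                 # isotropic: wrong spectrum
print(spectral_match_loss(real, good) < spectral_match_loss(real, bad))  # True
```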

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10

Shape-Interpretable Visual Self-Modeling Enables Geometry-Aware Continuum Robot Control

Researchers developed a shape-interpretable visual self-modeling framework for continuum robots that enables geometry-aware control using Bezier-curve representations and neural ordinary differential equations. The system achieves accurate shape-position regulation with shape errors within 1.56% and end-effector errors within 2% while enabling obstacle avoidance and environmental awareness.
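
The Bezier-curve representation is concrete: the robot's backbone shape is summarized by a handful of control points, and the full curve is recovered by evaluating the Bernstein polynomials. A cubic example (control points invented for illustration):

```python
import numpy as np

def bezier(control_pts, t):
    """Evaluate a cubic Bezier backbone curve at parameters t in [0, 1].

    control_pts: (4, 3) array; returns (len(t), 3) points along the curve.
    """
    p0, p1, p2, p3 = control_pts
    t = t[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Four control points bending a backbone in the x-z plane.
ctrl = np.array([[0, 0, 0], [0, 0, 1], [0.5, 0, 2], [1.5, 0, 2.5]], float)
pts = bezier(ctrl, np.linspace(0, 1, 50))
print(np.allclose(pts[0], ctrl[0]) and np.allclose(pts[-1], ctrl[3]))  # True
```

The curve interpolates its first and last control points, which is what makes the representation interpretable: the base and tip of the robot map directly onto them.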

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10

MVR: Multi-view Video Reward Shaping for Reinforcement Learning

Researchers introduce Multi-View Video Reward Shaping (MVR), a new reinforcement learning framework that uses multi-viewpoint video analysis and vision-language models to improve reward design for complex AI tasks. The system addresses limitations of single-image approaches by analyzing dynamic motions across multiple camera angles, showing improved performance on humanoid locomotion and manipulation tasks.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10

Reasoning as Gradient: Scaling MLE Agents Beyond Tree Search

Researchers introduced GOME, an AI agent that uses gradient-based optimization instead of tree search for machine learning engineering tasks, achieving 35.1% success rate on MLE-Bench. The study shows gradient-based approaches outperform tree search as AI reasoning capabilities improve, suggesting this method will become more effective as LLMs advance.

AI · Neutral · arXiv – CS AI · Mar 3 · 7/10

A Practical Guide to Streaming Continual Learning

Researchers propose Streaming Continual Learning (SCL) as a unified paradigm that combines Continual Learning and Streaming Machine Learning approaches. SCL aims to enable AI systems to both rapidly adapt to new information and retain previously learned knowledge, addressing limitations of existing methods that excel at only one aspect.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10

Surgical Post-Training: Cutting Errors, Keeping Knowledge

Researchers introduce Surgical Post-Training (SPoT), a new method to improve Large Language Model reasoning while preventing catastrophic forgetting. SPoT achieved 6.2% accuracy improvement on Qwen3-8B using only 4k data pairs and 28 minutes of training, offering a more efficient alternative to traditional post-training approaches.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10

TiledAttention: a CUDA Tile SDPA Kernel for PyTorch

TiledAttention is a new CUDA-based scaled dot-product attention kernel for PyTorch that enables easier modification of attention mechanisms for AI research. It provides a balance between performance and customizability, delivering significant speedups over standard attention implementations while remaining directly editable from Python.
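
The semantics any SDPA kernel — tiled or not — must reproduce is softmax(QK^T/√d)·V. A NumPy reference is shown below; the CUDA tiling itself, which streams K/V blocks through a running softmax instead of materialising the full score matrix, is not sketched here:

```python
import numpy as np

def sdpa(Q, K, V, scale=None):
    """Reference scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scale = scale or 1.0 / np.sqrt(d)
    scores = (Q @ K.T) * scale
    scores -= scores.max(axis=-1, keepdims=True)   # numerically stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((8, 16)) for _ in range(3))
out = sdpa(Q, K, V)
print(out.shape)  # (8, 16)
```

Because each output row is a convex combination of V's rows, this reference doubles as a correctness oracle when testing a custom kernel.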

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10

Co-Evolutionary Multi-Modal Alignment via Structured Adversarial Evolution

Researchers introduce CEMMA, a co-evolutionary framework for improving AI safety alignment in multimodal large language models. The system uses evolving adversarial attacks and adaptive defenses to create more robust AI systems that better resist jailbreak attempts while maintaining functionality.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10

QIME: Constructing Interpretable Medical Text Embeddings via Ontology-Grounded Questions

Researchers have developed QIME, a new framework for creating interpretable medical text embeddings that uses ontology-grounded questions to represent biomedical text. Unlike black-box AI models, QIME provides clinically meaningful explanations while achieving performance close to traditional dense embeddings in medical text analysis tasks.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10

YCDa: YCbCr Decoupled Attention for Real-time Realistic Camouflaged Object Detection

Researchers propose YCDa, a new AI strategy for real-time camouflaged object detection that mimics human vision by separating color and brightness information. The method achieves 112% improvement in detection accuracy and can be easily integrated into existing AI detection systems with minimal computational overhead.
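
The colour/brightness separation referenced above is the standard RGB→YCbCr transform; the BT.601 full-range coefficients are shown here (YCDa's decoupled attention on top of these channels is not sketched):

```python
import numpy as np

def rgb_to_ycbcr(img):
    """BT.601 full-range RGB -> YCbCr: Y carries brightness, Cb/Cr carry colour.

    Decoupling the two lets a detector reason about luminance structure and
    chroma contrast separately.
    """
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

gray = np.full((2, 2, 3), 100.0)         # neutral grey patch
out = rgb_to_ycbcr(gray)
print(np.allclose(out[..., 1:], 128.0))  # True: a grey patch carries no chroma
```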
