y0news
🧠 AI

13,260 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.

AI · Bullish · arXiv – CS AI · Mar 27/1014

Carré du champ flow matching: better quality-generalisation tradeoff in generative models

Researchers introduce Carré du champ flow matching (CDC-FM), a new generative modeling method that improves the quality-generalization tradeoff by using geometry-aware noise instead of standard isotropic noise. The method shows significant improvements in data-scarce scenarios and on non-uniformly sampled datasets, making it particularly relevant for AI applications in scientific domains.
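
The core move, replacing the isotropic Gaussian source of standard flow matching with noise shaped by the local data geometry, can be sketched in a few lines. Everything below (the k-nearest-neighbour covariance estimate, the shapes, the names) is our own illustrative assumption, not the CDC-FM implementation:

```python
# Sketch: flow matching with geometry-aware source noise. Standard conditional
# flow matching draws x0 ~ N(0, I); here x0 is shaped by the covariance of the
# data around the target point. An assumption-laden illustration, not the
# authors' code.
import numpy as np

def local_covariance(data, x, k=16):
    """Covariance of the k nearest neighbours of x: a crude geometry proxy."""
    d2 = ((data - x) ** 2).sum(axis=1)
    nbrs = data[np.argsort(d2)[:k]]
    return np.cov(nbrs, rowvar=False) + 1e-4 * np.eye(data.shape[1])

def geometry_aware_pair(data, rng):
    """Draw (x0, x1): x1 is a data point, x0 is anisotropic noise around it."""
    x1 = data[rng.integers(len(data))]
    L = np.linalg.cholesky(local_covariance(data, x1))
    return L @ rng.standard_normal(data.shape[1]), x1

def fm_training_targets(x0, x1, t):
    """Standard conditional flow-matching pair: x_t and velocity x1 - x0."""
    return (1.0 - t) * x0 + t * x1, x1 - x0

rng = np.random.default_rng(0)
data = rng.standard_normal((1000, 2)) @ np.array([[3.0, 0.0], [0.0, 0.3]])
x0, x1 = geometry_aware_pair(data, rng)
xt, v = fm_training_targets(x0, x1, t=0.5)  # regression targets for a velocity net
```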

AI · Bullish · arXiv – CS AI · Mar 27/1022

Scaling Generalist Data-Analytic Agents

Researchers introduce DataMind, a new training framework for building open-source data-analytic AI agents that can handle complex, multi-step data analysis tasks. The DataMind-14B model achieves state-of-the-art performance with an average score of 71.16%, outperforming models such as DeepSeek-V3.1 and GPT-5 on data analysis benchmarks.

AI · Bullish · arXiv – CS AI · Mar 27/1014

VoiceBridge: General Speech Restoration with One-step Latent Bridge Models

VoiceBridge is a new AI model that can restore high-quality 48kHz speech from various types of audio distortion in a single step. The model uses a latent bridge approach with an energy-preserving variational autoencoder and a transformer architecture to handle multiple speech restoration tasks simultaneously.

AI · Bullish · arXiv – CS AI · Mar 26/1019

BEV-VLM: Trajectory Planning via Unified BEV Abstraction

Researchers introduced BEV-VLM, a new autonomous driving trajectory planning system that combines Vision-Language Models with Bird's-Eye View maps from camera and LiDAR data. The approach achieved 53.1% better planning accuracy and complete collision avoidance compared to vision-only methods on the nuScenes dataset.

AI · Bullish · arXiv – CS AI · Mar 27/1016

Activation Function Design Sustains Plasticity in Continual Learning

Researchers demonstrate that activation function design is crucial for maintaining neural network plasticity in continual learning scenarios. They introduce two new activation functions, Smooth-Leaky and Randomized Smooth-Leaky, that help prevent models from losing their ability to adapt to new tasks over time.
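
The summary doesn't give the functional forms, so the sketch below is a hypothetical reading: 'Smooth-Leaky' as a leaky linear term blended with a smooth softplus knee, and 'Randomized Smooth-Leaky' as the same shape with per-unit random negative slopes. Both definitions are assumptions for illustration only:

```python
# Hypothetical renderings of the two activations; the paper's exact
# definitions are not given in this summary.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmoothLeaky(nn.Module):
    """Leaky linear term plus a smooth softplus knee (assumed form)."""
    def __init__(self, alpha=0.1, beta=5.0):
        super().__init__()
        self.alpha, self.beta = alpha, beta  # negative slope, knee sharpness

    def forward(self, x):
        # alpha * x keeps gradients alive for x < 0, sustaining plasticity
        return self.alpha * x + (1 - self.alpha) * F.softplus(x, beta=self.beta)

class RandomizedSmoothLeaky(nn.Module):
    """Same shape, but each unit gets a fixed random negative slope."""
    def __init__(self, width, alpha_range=(0.05, 0.3)):
        super().__init__()
        lo, hi = alpha_range
        self.register_buffer("alpha", torch.empty(width).uniform_(lo, hi))

    def forward(self, x):
        return self.alpha * x + (1 - self.alpha) * F.softplus(x)

y = SmoothLeaky()(torch.randn(8, 32))
z = RandomizedSmoothLeaky(width=32)(torch.randn(8, 32))
```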

AI · Bullish · arXiv – CS AI · Mar 27/1020

MobileLLM-R1: Exploring the Limits of Sub-Billion Language Model Reasoners with Open Training Recipes

Researchers developed MobileLLM-R1, a sub-billion-parameter AI model that demonstrates strong reasoning capabilities using only 2T tokens of high-quality data instead of massive 10T+ token datasets. The 950M-parameter model outperforms larger competitors on reasoning benchmarks while using only 11.7% of the training data of models such as Qwen3.

AI · Bullish · arXiv – CS AI · Mar 26/1015

Audio-Conditioned Diffusion LLMs for ASR and Deliberation Processing

Researchers developed Whisper-LLaDA, a diffusion-based large language model for automatic speech recognition that achieves 12.3% relative improvement over baseline models. The study demonstrates that audio-conditioned embeddings are crucial for accuracy improvements, while plain-text processing without acoustic features fails to enhance performance.

AI · Bullish · arXiv – CS AI · Mar 27/1012

MEGS²: Memory-Efficient Gaussian Splatting via Spherical Gaussians and Unified Pruning

Researchers introduce MEGS², a new memory-efficient framework for 3D Gaussian Splatting that reduces memory consumption by 50% for static rendering and 40% for real-time rendering. The savings come from replacing memory-intensive spherical harmonics with lightweight spherical Gaussian lobes and applying a unified pruning optimization, enabling 3D rendering on edge devices.
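
Spherical Gaussian lobes are a standard compact alternative to spherical harmonics: view-dependent colour becomes a small sum of exponential lobes rather than dozens of SH coefficients per Gaussian. A minimal evaluation sketch follows; the lobe count and parameter shapes are our assumptions, not those of MEGS²:

```python
# View-dependent RGB from K spherical Gaussian (SG) lobes. Each lobe stores
# 7 floats (axis 3, sharpness 1, RGB 3), versus 48 floats for degree-3
# spherical harmonics, which is where the memory saving comes from.
import numpy as np

def sg_color(view_dir, mu, lam, c):
    """
    view_dir: (3,)  unit viewing direction
    mu:       (K,3) unit lobe axes;  lam: (K,) sharpness;  c: (K,3) RGB amps
    Each lobe contributes c_k * exp(lam_k * (dot(v, mu_k) - 1)).
    """
    weights = np.exp(lam * (mu @ view_dir - 1.0))  # (K,)
    return weights @ c                             # (3,) RGB

rng = np.random.default_rng(0)
K = 3
mu = rng.standard_normal((K, 3))
mu /= np.linalg.norm(mu, axis=1, keepdims=True)
rgb = sg_color(np.array([0.0, 0.0, 1.0]), mu,
               lam=np.array([4.0, 8.0, 16.0]), c=rng.uniform(0, 1, (K, 3)))
```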

AI · Bullish · arXiv – CS AI · Mar 26/1016

Context and Diversity Matter: The Emergence of In-Context Learning in World Models

Researchers investigate in-context learning (ICL) in world models, identifying two core mechanisms, environment recognition and environment learning, that enable AI systems to adapt to new configurations. The study provides theoretical error bounds and empirical evidence showing that diverse environments and long context windows are crucial for developing self-adapting world models.

AI · Neutral · arXiv – CS AI · Mar 27/1013

Efficient Ensemble Conditional Independence Test Framework for Causal Discovery

Researchers introduce E-CIT (Ensemble Conditional Independence Test), a new framework that significantly reduces computational costs in causal discovery by partitioning data into subsets and aggregating results. The method achieves linear computational complexity while maintaining competitive performance, particularly on real-world datasets.
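
The partition-and-aggregate recipe is easy to sketch: run a standard conditional independence test on each block of the data, then combine the per-block p-values. The base test (partial correlation) and the aggregation rule (Fisher's method) below are our stand-ins; E-CIT's actual choices may differ:

```python
# Ensemble CI test sketch: test X ⟂ Y | Z on each data block, combine p-values.
# Partial correlation and Fisher's method are illustrative stand-ins.
import numpy as np
from scipy import stats

def partial_corr_pvalue(x, y, z):
    """CI test via partial correlation: regress out z, test the residuals."""
    Z = np.column_stack([np.ones(len(x)), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    r, _ = stats.pearsonr(rx, ry)
    k = z.shape[1] if z.ndim > 1 else 1
    zstat = np.sqrt(len(x) - k - 3) * np.arctanh(r)  # Fisher z-transform
    return 2 * stats.norm.sf(abs(zstat))

def ensemble_ci_test(x, y, z, n_blocks=10, seed=0):
    """Linear total cost: each block is tested once, then results are pooled."""
    idx = np.array_split(np.random.default_rng(seed).permutation(len(x)), n_blocks)
    pvals = [partial_corr_pvalue(x[i], y[i], z[i]) for i in idx]
    chi2 = -2 * np.sum(np.log(np.clip(pvals, 1e-300, 1.0)))
    return stats.chi2.sf(chi2, df=2 * len(pvals))  # Fisher-combined p-value

rng = np.random.default_rng(1)
z = rng.standard_normal((5000, 2))
x = z @ np.array([1.0, -0.5]) + 0.1 * rng.standard_normal(5000)
y = z @ np.array([0.3, 0.8]) + 0.1 * rng.standard_normal(5000)
print(ensemble_ci_test(x, y, z))  # high p-value: X ⟂ Y given Z
```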

AI · Bullish · arXiv – CS AI · Mar 27/1014

Less is More: Lean yet Powerful Vision-Language Model for Autonomous Driving

Researchers introduce Max-V1, a novel vision-language model framework that treats autonomous driving as a language problem, predicting trajectories from camera input. The model achieved over 30% performance improvement on the nuScenes dataset and demonstrates strong cross-vehicle adaptability.

AI · Neutral · arXiv – CS AI · Mar 27/1010

Veritas: Generalizable Deepfake Detection via Pattern-Aware Reasoning

Researchers introduce Veritas, a multi-modal large language model designed for deepfake detection that uses pattern-aware reasoning to mimic human forensic processes. The system addresses real-world challenges through the HydraFake dataset and achieves significant improvements in detecting unseen forgeries across different domains.

AI · Bullish · arXiv – CS AI · Mar 27/1022

Beyond Naïve Prompting: Strategies for Improved Context-aided Forecasting with LLMs

Researchers introduce a framework of four strategies to improve large language models' performance in context-aided forecasting, covering diagnostics, accuracy, and efficiency. The study reveals an 'Execution Gap', where models understand the context but fail to apply it in their reasoning, and reports 25-50% performance improvements alongside a cost-effective adaptive routing approach.

AI · Bullish · arXiv – CS AI · Mar 26/1015

OM2P: Offline Multi-Agent Mean-Flow Policy

Researchers propose OM2P, a new offline multi-agent reinforcement learning algorithm that achieves efficient one-step action sampling using mean-flow models. The approach delivers up to 3.8x reduction in GPU memory usage and 10.8x speed-up in training time compared to existing diffusion and flow-based models.
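
The one-step property is the interesting part: a mean-flow model predicts the average velocity over an interval, so the usual iterative ODE solve collapses into a single forward pass. A minimal sampler sketch, with a placeholder network and made-up dimensions (not OM2P's architecture):

```python
# One-step action sampling with a mean-flow policy. The net predicts the
# AVERAGE velocity u(z, obs, r, t) over [r, t], so noise -> action takes one
# forward pass instead of many denoising steps. Placeholder architecture.
import torch
import torch.nn as nn

class MeanFlowPolicy(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.act_dim = act_dim
        self.net = nn.Sequential(  # input: noisy action, obs, interval (r, t)
            nn.Linear(act_dim + obs_dim + 2, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, z, obs, r, t):
        rt = torch.stack([r, t], dim=-1).expand(z.shape[0], 2)
        return self.net(torch.cat([z, obs, rt], dim=-1))

@torch.no_grad()
def sample_action(policy, obs):
    """One step: a = z1 - u(z1, obs, r=0, t=1)."""
    z1 = torch.randn(obs.shape[0], policy.act_dim)
    return z1 - policy(z1, obs, torch.zeros(()), torch.ones(()))

policy = MeanFlowPolicy(obs_dim=17, act_dim=6)
action = sample_action(policy, torch.randn(4, 17))  # (4, 6) in one pass
```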

AI · Bullish · arXiv – CS AI · Mar 26/1014

Actor-Critic for Continuous Action Chunks: A Reinforcement Learning Framework for Long-Horizon Robotic Manipulation with Sparse Reward

Researchers introduced AC3 (Actor-Critic for Continuous Action Chunks), a new reinforcement learning framework that addresses challenges in long-horizon robotic manipulation tasks with sparse rewards. The system uses continuous action chunks with stabilization mechanisms and achieved superior performance on 25 benchmark tasks using minimal demonstrations.
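
The defining choice is that the actor emits a whole chunk of H continuous actions per decision and the critic scores the chunk, which shortens the effective horizon under sparse rewards. A bare-bones sketch with placeholder sizes (the paper's stabilization mechanisms are omitted):

```python
# Action-chunk actor-critic skeleton: one decision = H continuous actions.
# Sizes are placeholders; AC3's stabilization mechanisms are not shown.
import torch
import torch.nn as nn

class ChunkActor(nn.Module):
    def __init__(self, obs_dim, act_dim, horizon=8, hidden=256):
        super().__init__()
        self.horizon, self.act_dim = horizon, act_dim
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, horizon * act_dim), nn.Tanh(),  # bounded actions
        )

    def forward(self, obs):                       # obs: (B, obs_dim)
        return self.net(obs).view(-1, self.horizon, self.act_dim)

class ChunkCritic(nn.Module):
    """Q(s, a_chunk): TD targets then span H env steps at once."""
    def __init__(self, obs_dim, act_dim, horizon=8, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + horizon * act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, chunk):
        return self.net(torch.cat([obs, chunk.flatten(1)], dim=-1))

actor, critic = ChunkActor(24, 7), ChunkCritic(24, 7)
chunk = actor(torch.randn(2, 24))                 # (2, 8, 7): 8 actions at once
q = critic(torch.randn(2, 24), chunk)
```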

AI · Bullish · arXiv – CS AI · Mar 26/1011

Less is More: AMBER-AFNO – a New Benchmark for Lightweight 3D Medical Image Segmentation

Researchers developed AMBER-AFNO, a new lightweight architecture for 3D medical image segmentation that replaces traditional attention mechanisms with Adaptive Fourier Neural Operators. The model achieves state-of-the-art results on medical datasets while maintaining linear memory scaling and quasi-linear computational complexity.
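
An AFNO-style block replaces token-token attention with mixing in the frequency domain: FFT over the token grid, a learned per-mode transform, inverse FFT. The 3D sketch below drops the block-diagonal weights and soft-shrinkage of the real AFNO; it only illustrates why memory scales linearly with tokens:

```python
# Minimal AFNO-flavoured mixer for a 3D volume: attention is replaced by
# FFT -> learned complex channel mix -> inverse FFT. A simplification of
# AFNO (no block-diagonal weights, no sparsity-inducing shrinkage).
import torch
import torch.nn as nn

class SpectralMixer3D(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # one complex-valued channel mix shared across all retained modes
        self.w_re = nn.Parameter(0.02 * torch.randn(channels, channels))
        self.w_im = nn.Parameter(0.02 * torch.randn(channels, channels))

    def forward(self, x):                                # x: (B, C, D, H, W)
        f = torch.fft.rfftn(x, dim=(2, 3, 4)).movedim(1, -1)  # (..., C) complex
        mixed = torch.complex(f.real @ self.w_re - f.imag @ self.w_im,
                              f.real @ self.w_im + f.imag @ self.w_re)
        return torch.fft.irfftn(mixed.movedim(-1, 1), s=x.shape[2:], dim=(2, 3, 4))

out = SpectralMixer3D(channels=8)(torch.randn(1, 8, 16, 16, 16))
```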

AI · Neutral · arXiv – CS AI · Mar 27/1012

Representing local protein environments with atomistic foundation models

Researchers developed a novel method to represent local protein environments using atomistic foundation models (AFMs), creating embeddings that capture both structural and chemical features. The approach enables construction of data-driven priors for biomolecular environments and achieves state-of-the-art accuracy in physics-informed chemical shift prediction for NMR spectroscopy.

AI · Bullish · arXiv – CS AI · Mar 26/1014

From Generator to Embedder: Harnessing Innate Abilities of Multimodal LLMs via Building Zero-Shot Discriminative Embedding Model

Researchers propose a data-efficient framework to convert generative Multimodal Large Language Models into universal embedding models without extensive pre-training. The method uses hierarchical embedding prompts and Self-aware Hard Negative Sampling to achieve competitive performance on embedding benchmarks using minimal training data.

AI · Bullish · arXiv – CS AI · Mar 26/1017

LiteReality: Graphics-Ready 3D Scene Reconstruction from RGB-D Scans

Researchers have developed LiteReality, a novel pipeline that converts RGB-D scans of indoor environments into compact, realistic 3D virtual replicas suitable for AR/VR, gaming, robotics, and digital twins. The system features scene understanding, object retrieval, material painting, and physics integration to create graphics-ready environments that support object individuality and physically-based rendering.

AI · Bullish · arXiv – CS AI · Mar 27/1017

SceneTok: A Compressed, Diffusable Token Space for 3D Scenes

SceneTok introduces a novel 3D scene tokenizer that compresses view sets into permutation-invariant tokens, achieving 1-3 orders of magnitude better compression than existing methods while maintaining state-of-the-art reconstruction quality. The system enables efficient 3D scene generation in 5 seconds using a lightweight decoder that can render novel viewpoints.

AI · Neutral · arXiv – CS AI · Mar 27/1018

LumiMAS: A Comprehensive Framework for Real-Time Monitoring and Enhanced Observability in Multi-Agent Systems

Researchers have developed LumiMAS, a comprehensive framework for monitoring and detecting failures in multi-agent systems that incorporate large language models. The framework features three layers: monitoring and logging, anomaly detection, and anomaly explanation with root cause analysis, addressing the unique challenges of observing entire multi-agent systems rather than individual agents.

AI · Bullish · arXiv – CS AI · Mar 27/1018

Semantic Parallelism: Redefining Efficient MoE Inference via Model-Data Co-Scheduling

Researchers propose semantic parallelism, implemented in a framework called Sem-MoE, which significantly improves the efficiency of Mixture-of-Experts LLM inference by co-scheduling models and data across multiple devices. The system reduces inter-device communication overhead by collocating frequently used model components with their corresponding data, achieving higher throughput than existing solutions.
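
The collocation objective can be illustrated with a toy greedy placement: given a matrix of how often each device's shard of the workload routes tokens to each expert, put every expert where it is used most, under a per-device capacity. All names and the balancing rule here are our own; Sem-MoE's scheduler is certainly more involved:

```python
# Toy greedy expert placement for the collocation idea: experts go to the
# device whose data activates them most, so routed tokens mostly stay local.
import numpy as np

def collocate_experts(route_freq, n_devices, capacity):
    """route_freq[e, d]: how often device d's data routes to expert e."""
    placement = -np.ones(route_freq.shape[0], dtype=int)
    load = np.zeros(n_devices, dtype=int)
    for e in np.argsort(-route_freq.sum(axis=1)):   # heaviest experts first
        for d in np.argsort(-route_freq[e]):        # preferred device first
            if load[d] < capacity:
                placement[e], load[d] = d, load[d] + 1
                break
    return placement

rng = np.random.default_rng(0)
freq = rng.poisson(5.0, size=(16, 4))               # 16 experts, 4 devices
print(collocate_experts(freq, n_devices=4, capacity=4))
```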

Page 246 of 531