12,942 AI articles curated from 50+ sources with AI-powered sentiment analysis, importance scoring, and key takeaways.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 5
🧠Researchers introduce DynaMoE, a new Mixture-of-Experts framework that dynamically activates experts based on input complexity and uses adaptive capacity allocation across network layers. The system achieves superior parameter efficiency compared to static baselines and demonstrates that optimal expert scheduling strategies vary by task type and model scale.
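Dynamic expert activation of the kind DynaMoE describes can be illustrated as entropy-driven top-k routing: ambiguous inputs (a flat gate distribution) activate more experts than easy ones. This is a generic sketch, not the paper's algorithm; the function name, complexity proxy, and k bounds are all invented for the example.

```python
import math

def dynamic_topk_route(gate_logits, k_min=1, k_max=4):
    """Activate more experts for ambiguous inputs, fewer for easy ones.

    Uses the normalized entropy of the softmax gate as a complexity
    proxy: 0 = one confident expert, 1 = uniform over all experts.
    """
    m = max(gate_logits)
    exps = [math.exp(g - m) for g in gate_logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    entropy /= math.log(len(probs))  # normalize to [0, 1]
    k = k_min + round(entropy * (k_max - k_min))
    chosen = sorted(range(len(probs)), key=lambda i: -probs[i])[:k]
    norm = sum(probs[i] for i in chosen)
    return chosen, [probs[i] / norm for i in chosen]
```

A confident gate (one dominant logit) routes to a single expert, while a perfectly flat gate activates all `k_max` experts.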
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 8
🧠Researchers introduce Multi-View Video Reward Shaping (MVR), a new reinforcement learning framework that uses multi-viewpoint video analysis and vision-language models to improve reward design for complex AI tasks. The system addresses limitations of single-image approaches by analyzing dynamic motions across multiple camera angles, showing improved performance on humanoid locomotion and manipulation tasks.
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 8
🧠Researchers introduced GOME, an AI agent that uses gradient-based optimization instead of tree search for machine learning engineering tasks, achieving a 35.1% success rate on MLE-Bench. The study shows gradient-based approaches overtake tree search as model reasoning improves, suggesting the method will only grow more effective as LLMs advance.
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 6
🧠Researchers developed the first real-time framework for natural non-verbal human-AI interaction using body language, achieving 100 FPS on NVIDIA hardware. The study found that while AI models can mimic human motion, measurable differences persist between human and AI-generated body language, with temporal coherence being more important than visual fidelity.
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 7
🧠Researchers have developed QIME, a new framework for creating interpretable medical text embeddings that uses ontology-grounded questions to represent biomedical text. Unlike black-box AI models, QIME provides clinically meaningful explanations while achieving performance close to traditional dense embeddings in medical text analysis tasks.
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 9
🧠Researchers introduce Surgical Post-Training (SPoT), a new method to improve Large Language Model reasoning while preventing catastrophic forgetting. SPoT achieved a 6.2% accuracy improvement on Qwen3-8B using only 4k data pairs and 28 minutes of training, offering a more efficient alternative to traditional post-training approaches.
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 11
🧠Researchers developed FreeGNN, a continual source-free graph neural network framework for renewable energy forecasting that adapts to new sites without requiring source data or target labels. The system uses a teacher-student strategy with memory replay and achieved strong performance across three real-world datasets including GEFCom2012, Solar PV, and Wind SCADA.
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 8
🧠Researchers propose Streaming Continual Learning (SCL) as a unified paradigm that combines Continual Learning and Streaming Machine Learning approaches. SCL aims to enable AI systems to both rapidly adapt to new information and retain previously learned knowledge, addressing limitations of existing methods that excel at only one aspect.
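One standard building block for systems that must both adapt to a stream and rehearse old knowledge is a reservoir-sampled replay buffer, which keeps a uniform sample of everything seen so far in fixed memory. The sketch below is a generic illustration of that idea, not SCL's actual mechanism.

```python
import random

def reservoir_add(buffer, item, seen, capacity):
    """Keep a uniform random sample of an unbounded stream in O(capacity)
    memory; buffered items can be replayed during training to counter
    catastrophic forgetting.

    `seen` is how many stream items arrived before this one.
    """
    if len(buffer) < capacity:
        buffer.append(item)
    else:
        j = random.randrange(seen + 1)
        if j < capacity:
            buffer[j] = item
```

Each incoming item displaces a random buffered item with probability `capacity / (seen + 1)`, so every item ever seen is equally likely to be in the buffer.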
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 7
🧠Researchers propose DeLo, a new framework using dual-decomposed low-rank expert architecture to help Large Multimodal Models adapt to real-world scenarios with incomplete data. The system addresses continual missing modality learning by preventing interference between different data types and tasks through specialized routing and memory mechanisms.
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 8
🧠Researchers introduce SafeSci, a comprehensive framework for evaluating safety in large language models used for scientific applications. The framework includes a 0.25M-sample benchmark and a 1.5M-sample training dataset, revealing critical vulnerabilities in 24 advanced LLMs while demonstrating that fine-tuning can significantly improve safety alignment.
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 7
🧠Researchers propose YCDa, a new AI strategy for real-time camouflaged object detection that mimics human vision by separating color and brightness information. The method achieves a 112% improvement in detection accuracy and can be easily integrated into existing AI detection systems with minimal computational overhead.
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 8
🧠Researchers propose FAST-DIPS, a new training-free diffusion prior method for solving inverse problems that achieves up to 19.5x speedup while maintaining competitive image quality metrics. The method replaces computationally expensive inner optimization loops with closed-form projections and analytic step sizes, significantly reducing the number of required denoiser evaluations.
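The core idea — replace an inner optimization loop with an analytic data-consistency step between denoiser calls — can be shown on a toy inpainting problem, where the closed-form projection is simply overwriting observed pixels with the measurements. Everything here (function names, the shrinking "denoiser") is invented for illustration and is not FAST-DIPS itself.

```python
def project_measurements(x, y, mask):
    """Closed-form projection for inpainting: observed entries are
    overwritten with the measurements, so no inner gradient loop is needed."""
    return [yi if observed else xi for xi, yi, observed in zip(x, y, mask)]

def reconstruct(x_init, y, mask, denoise, steps=4):
    x = list(x_init)
    for _ in range(steps):
        x = denoise(x)                        # one denoiser evaluation per step
        x = project_measurements(x, y, mask)  # analytic step, zero iterations
    return x
```

With a toy denoiser that shrinks values toward zero, observed pixels stay pinned to the data while unobserved pixels follow the prior — the denoiser budget, not an inner solver, dominates the cost.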
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 8
🧠Researchers introduce SkeleGuide, a new AI framework that uses explicit skeletal reasoning to generate more realistic human images in existing scenes. The system addresses common issues like distorted limbs and unnatural poses by incorporating structural priors based on human skeletal structure.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 8
🧠Researchers introduce DualSentinel, a lightweight framework for detecting targeted attacks on Large Language Models by identifying 'Entropy Lull' patterns: stretches of abnormally low token-probability entropy that indicate when an LLM is being coercively controlled. The system uses dual-check verification to accurately detect backdoor and prompt injection attacks with near-zero false positives while maintaining minimal computational overhead.
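A minimal version of such a lull detector just tracks the entropy of each step's next-token distribution and flags a sustained low-entropy run. The window size and threshold below are hypothetical placeholders, not the paper's calibrated values.

```python
import math

def step_entropy(probs):
    """Shannon entropy (nats) of one decoding step's token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def entropy_lull(stepwise_probs, window=3, threshold=0.05):
    """Return True if `window` consecutive decoding steps all have
    next-token entropy below `threshold` nats — an abnormally 'certain'
    stretch of the kind associated with coerced generation."""
    run = 0
    for probs in stepwise_probs:
        run = run + 1 if step_entropy(probs) < threshold else 0
        if run >= window:
            return True
    return False
```

Normal sampling keeps entropy well above the threshold; a run of near-deterministic steps trips the detector.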
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 7
🧠Researchers propose Likelihood-Free Policy Optimization (LFPO), a new framework for improving Diffusion Large Language Models by bypassing likelihood computation issues that plague existing methods. LFPO uses geometric velocity rectification to optimize denoising logits directly, achieving better performance on code and reasoning tasks while reducing inference time by 20%.
AI · Bearish · arXiv – CS AI · Mar 3 · 7/10 · 8
🧠Researchers have identified significant privacy risks in Large Language Model-based Task-Oriented Dialogue Systems, demonstrating that these AI systems can memorize and leak sensitive training data including phone numbers and complete dialogue exchanges. The study proposes new attack methods that can extract thousands of training dialogue states with over 70% precision in best-case scenarios.
AI · Neutral · arXiv – CS AI · Mar 3 · 7/10 · 6
🧠Researchers have created the Sentience Readiness Index (SRI) to measure how prepared 31 countries are for the possibility of AI achieving consciousness. No nation scored above 'Partially Prepared,' with the UK leading at 49/100, revealing significant gaps in institutional, professional, and cultural infrastructure needed to handle potentially sentient AI systems.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 8
🧠Researchers propose GAC (Gradient Alignment Control), a new method to stabilize asynchronous reinforcement learning training for large language models. The technique addresses training instability issues that arise when scaling RL to modern AI workloads by regulating gradient alignment and preventing overshooting.
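In spirit, gradient-alignment control can be as simple as gating a stale asynchronous worker's gradient by its cosine similarity to the learner's current direction. The rule below is an invented illustration of that idea, not GAC's actual update.

```python
import math

def gate_stale_gradient(grad, ref_grad, min_cos=0.0):
    """Drop an asynchronous worker's gradient when it points away from
    the current reference gradient (cosine below `min_cos`) — one way to
    keep misaligned stale updates from destabilizing training."""
    dot = sum(g * r for g, r in zip(grad, ref_grad))
    ng = math.sqrt(sum(g * g for g in grad))
    nr = math.sqrt(sum(r * r for r in ref_grad))
    if ng == 0.0 or nr == 0.0 or dot / (ng * nr) < min_cos:
        return [0.0] * len(grad)  # reject the misaligned update
    return list(grad)
```

An aligned gradient passes through unchanged; an opposing one is zeroed rather than applied.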
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 6
🧠Researchers propose BiCAM, a new method for interpreting Vision Transformer (ViT) decisions that captures both positive and negative contributions to predictions. The approach improves explanation quality and enables adversarial example detection across multiple ViT variants without requiring model retraining.
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 6
🧠Researchers introduce 3R, a new RAG-based framework that optimizes prompts for text-to-video generation models without requiring model retraining. The system uses three key strategies to improve video quality: RAG-based modifier extraction, diffusion-based preference optimization, and temporal frame interpolation for better consistency.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 10
🧠Researchers developed a new inference-time safety mechanism for code-generating AI models that uses retrieval-augmented generation to identify and fix security vulnerabilities in real-time. The approach leverages Stack Overflow discussions to guide AI code revision without requiring model retraining, improving security while maintaining interpretability.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 7
🧠Researchers introduce Pri4R, a new approach that enhances Vision-Language-Action (VLA) models by incorporating 4D spatiotemporal understanding during training. The method adds a lightweight point track head that predicts 3D trajectories, improving physical world understanding while maintaining the original architecture during inference with no computational overhead.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 6
🧠Researchers have developed AloePri, the first privacy-preserving LLM inference method designed for industrial applications. The system uses collaborative obfuscation to protect input/output data while maintaining 96.5-100% accuracy and resisting state-of-the-art attacks, successfully tested on a 671B parameter model.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 9
🧠Researchers have developed MM-Mem, a new pyramidal multimodal memory architecture that enables AI systems to better understand long-horizon videos by mimicking human cognitive memory processes. The system addresses current limitations in multimodal large language models by creating a hierarchical memory structure that progressively distills detailed visual information into high-level semantic understanding.
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 7
🧠Researchers developed a Mean-Flow based One-Step Vision-Language-Action (VLA) approach that dramatically improves robotic manipulation efficiency by eliminating iterative sampling requirements. The new method achieves 8.7x faster generation than SmolVLA and 83.9x faster than Diffusion Policy in real-world robotic experiments.