y0news

#machine-learning News & Analysis

2484 articles tagged with #machine-learning. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

PlugMem: A Task-Agnostic Plugin Memory Module for LLM Agents

Researchers propose PlugMem, a task-agnostic plugin memory module for LLM agents that structures episodic memories into knowledge-centric graphs for efficient retrieval. The system consistently outperforms existing memory designs across multiple benchmarks while maintaining transferability between different tasks.
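The paper's knowledge-centric graph design is not detailed in this summary; as a rough illustration of the general idea of structuring episodic memories around entities for retrieval, here is a minimal sketch (all names are hypothetical, and this is not PlugMem's actual data structure):

```python
from collections import defaultdict

class GraphMemory:
    """Toy entity-centric episodic memory (illustrative only, not PlugMem itself)."""

    def __init__(self):
        # entity -> list of (fact, episode_id) pairs mentioning that entity
        self.index = defaultdict(list)

    def add_episode(self, episode_id, fact, entities):
        # Index the fact under every entity it mentions.
        for entity in entities:
            self.index[entity].append((fact, episode_id))

    def retrieve(self, query_entities):
        # Return facts linked to any queried entity, deduplicated, order-preserving.
        seen, results = set(), []
        for entity in query_entities:
            for fact, ep in self.index.get(entity, []):
                if fact not in seen:
                    seen.add(fact)
                    results.append(fact)
        return results

mem = GraphMemory()
mem.add_episode(1, "Alice fixed the login bug", ["Alice", "login bug"])
mem.add_episode(2, "The login bug was caused by a stale cache", ["login bug", "cache"])
facts = mem.retrieve(["login bug"])
```

The key property this toy shares with entity-centric memories is that retrieval cost depends on the entities touched by the query, not on the total number of stored episodes.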

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

PhyPrompt: RL-based Prompt Refinement for Physically Plausible Text-to-Video Generation

Researchers developed PhyPrompt, a reinforcement learning framework that automatically refines text prompts so that text-to-video models generate physically realistic videos. The system uses a two-stage approach with curriculum learning to improve both physical accuracy and semantic fidelity, outperforming larger models such as GPT-4o while using only 7B parameters.
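PhyPrompt's RL machinery (two-stage training, curriculum learning) is beyond a snippet, but the outer loop of refining a prompt against a scalar reward can be sketched with a simple hill-climbing stand-in. The reward function and edit operators below are hypothetical placeholders for a learned policy and a physics-plausibility scorer:

```python
import random

def refine_prompt(prompt, score, edits, steps=50, seed=0):
    """Greedy hill-climbing over prompt edits; a stand-in for a learned RL policy."""
    rng = random.Random(seed)
    best, best_score = prompt, score(prompt)
    for _ in range(steps):
        candidate = rng.choice(edits)(best)
        s = score(candidate)
        if s > best_score:          # keep only improving edits
            best, best_score = candidate, s
    return best, best_score

# Toy scorer: rewards prompts that mention physical cues (placeholder for a real model).
CUES = ("gravity", "momentum", "collision")
score = lambda p: sum(cue in p for cue in CUES)
edits = [lambda p, c=c: p + f", obeying {c}" for c in CUES]

best, s = refine_prompt("a ball bouncing down stairs", score, edits)
```

A learned policy replaces the random edit choice with context-dependent rewrites, but the accept-if-reward-improves structure of the search is the same.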

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Agentics 2.0: Logical Transduction Algebra for Agentic Data Workflows

Researchers have introduced Agentics 2.0, a Python framework for building enterprise-grade AI agent workflows using logical transduction algebra. The framework addresses reliability, scalability, and observability challenges in deploying agentic AI systems beyond research prototypes.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

Why Do Unlearnable Examples Work: A Novel Perspective of Mutual Information

Researchers propose a new method called Mutual Information Unlearnable Examples (MI-UE) to protect data privacy by preventing unauthorized AI models from learning from scraped data. The approach uses mutual information theory to create more effective data poisoning techniques that impede deep learning model generalization.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

AriadneMem: Threading the Maze of Lifelong Memory for LLM Agents

Researchers have developed AriadneMem, a new memory system for long-horizon LLM agents that addresses challenges in maintaining accurate memory under fixed context budgets. The system uses a two-phase pipeline with entropy-aware gating and conflict-aware coarsening to improve multi-hop reasoning while reducing runtime by 77.8% and using only 497 context tokens.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

A Rubric-Supervised Critic from Sparse Real-World Outcomes

Researchers propose a new framework called Critic Rubrics to bridge the gap between academic coding agent benchmarks and real-world applications. The system learns from sparse, noisy human interaction data using 24 behavioral features and shows significant improvements in code generation tasks including 15.9% better reranking performance on SWE-bench.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

ZipMap: Linear-Time Stateful 3D Reconstruction with Test-Time Training

Researchers introduce ZipMap, a new AI model for 3D reconstruction that achieves linear-time processing while maintaining accuracy comparable to slower quadratic-time methods. The system can reconstruct over 700 frames in under 10 seconds on a single H100 GPU, making it more than 20x faster than current state-of-the-art approaches like VGGT.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

RoboCasa365: A Large-Scale Simulation Framework for Training and Benchmarking Generalist Robots

Researchers have released RoboCasa365, a large-scale simulation benchmark featuring 365 household tasks across 2,500 kitchen environments with over 600 hours of human demonstration data. The platform is designed to train and evaluate generalist robots for everyday tasks, providing insights into factors affecting robot performance and generalization capabilities.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

Learning Approximate Nash Equilibria in Cooperative Multi-Agent Reinforcement Learning via Mean-Field Subsampling

Researchers propose ALTERNATING-MARL, a new framework for cooperative multi-agent reinforcement learning that enables a global agent to learn with massive populations under communication constraints. The method achieves approximate Nash equilibrium convergence while only observing a subset of local agent states, with applications in multi-robot control and federated optimization.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

Can Large Language Models Derive New Knowledge? A Dynamic Benchmark for Biological Knowledge Discovery

Researchers have developed DBench-Bio, a dynamic benchmark system that automatically evaluates AI's ability to discover new biological knowledge using a three-stage pipeline of data acquisition, question-answer extraction, and quality filtering. The benchmark addresses the critical problem of data contamination in static datasets and provides monthly updates across 12 biomedical domains, revealing current limitations in state-of-the-art AI models' knowledge discovery capabilities.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

DIALEVAL: Automated Type-Theoretic Evaluation of LLM Instruction Following

Researchers introduce DIALEVAL, a new automated framework that uses dual LLM agents to evaluate how well AI models follow instructions. The system achieves 90.38% accuracy by breaking down instructions into verifiable components and applying type-specific evaluation criteria, showing 26.45% error reduction over existing methods.
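DIALEVAL's dual-agent setup uses LLMs for both decomposition and judging; the underlying idea of splitting an instruction into typed, independently verifiable constraints can be illustrated with hand-written checkers. These checkers and the scoring scheme are illustrative, not the paper's:

```python
def check_word_limit(response, limit):
    # Constraint type "word_limit": response must stay within a word budget.
    return len(response.split()) <= limit

def check_contains(response, keyword):
    # Constraint type "contains": response must mention a required term.
    return keyword.lower() in response.lower()

def evaluate(response, constraints):
    """Score a response as the fraction of typed constraints it satisfies."""
    checks = {"word_limit": check_word_limit, "contains": check_contains}
    results = [checks[kind](response, arg) for kind, arg in constraints]
    return sum(results) / len(results), results

# Instruction: "Answer in under 10 words and mention Paris."
constraints = [("word_limit", 10), ("contains", "Paris")]
score, detail = evaluate("The capital of France is Paris.", constraints)
```

In the paper's setting, an LLM agent produces the constraint list from the raw instruction, and a second agent applies type-specific evaluation criteria rather than these hard-coded rules.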

AI · Neutral · arXiv – CS AI · Mar 5 · 6/10

Automated Concept Discovery for LLM-as-a-Judge Preference Analysis

Researchers developed automated methods to discover biases in Large Language Models used as judges, analyzing over 27,000 paired responses. The study found that LLMs exhibit systematic biases: a stronger preference than humans for refusing sensitive requests, a tendency to favor concrete and empathetic responses, and bias against certain legal guidance.
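The concept-discovery pipeline itself is LLM-driven, but the downstream bias statistic is straightforward: for each discovered concept, compare how often the judge prefers responses exhibiting it versus how often humans do. A minimal sketch over labeled pairs (the field layout here is hypothetical):

```python
def preference_gap(pairs):
    """For each concept, judge preference rate minus human preference rate.

    Each pair: (concept, judge_prefers_concept, human_prefers_concept) booleans.
    A positive gap means the judge favors the concept more than humans do.
    """
    stats = {}
    for concept, judge, human in pairs:
        j, h, n = stats.get(concept, (0, 0, 0))
        stats[concept] = (j + judge, h + human, n + 1)
    return {c: (j / n) - (h / n) for c, (j, h, n) in stats.items()}

pairs = [
    ("refusal", True, False),
    ("refusal", True, True),
    ("empathy", True, False),
    ("empathy", False, False),
]
gaps = preference_gap(pairs)
```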

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

SPRINT: Semi-supervised Prototypical Representation for Few-Shot Class-Incremental Tabular Learning

Researchers introduce SPRINT, the first Few-Shot Class-Incremental Learning (FSCIL) framework designed specifically for tabular data domains like cybersecurity and healthcare. The system achieves 77.37% accuracy in 5-shot learning scenarios, outperforming existing methods by 4.45% through novel semi-supervised techniques that leverage unlabeled data and confidence-based pseudo-labeling.
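SPRINT's semi-supervised and pseudo-labeling machinery is more involved, but the prototypical backbone common to few-shot class-incremental methods is easy to state: each class is represented by the mean of its few-shot support embeddings, and queries are assigned to the nearest prototype. A dependency-free sketch with toy 2-D embeddings:

```python
import math

def prototype(vectors):
    """Class prototype = coordinate-wise mean of its support embeddings."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(query, prototypes):
    """Assign the query to the class with the nearest (Euclidean) prototype."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(prototypes, key=lambda label: dist(query, prototypes[label]))

support = {
    "malware": [[1.0, 0.0], [0.9, 0.1]],
    "benign":  [[0.0, 1.0], [0.1, 0.9]],
}
protos = {label: prototype(vs) for label, vs in support.items()}
label = classify([0.8, 0.2], protos)
```

Incremental sessions fit naturally: adding a new class just adds one more prototype, without retraining on old classes.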

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

Efficient Point Cloud Processing with High-Dimensional Positional Encoding and Non-Local MLPs

Researchers developed HPENets, a new suite of MLP networks for point cloud processing that uses High-dimensional Positional Encoding (HPE) and non-local MLPs. The approach delivers significant performance improvements while reducing computational costs by 50-80% compared to existing methods across multiple benchmark datasets.
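The paper's exact HPE formulation isn't reproduced in this summary; a common way to lift low-dimensional point coordinates into a high-dimensional encoding is a multi-frequency sinusoidal map, sketched below (the frequency schedule and dimensions are illustrative, not necessarily HPENets'):

```python
import math

def positional_encoding(point, num_freqs=4):
    """Map a 3D point to sin/cos features at geometrically spaced frequencies.

    Output dimension: 3 coords * num_freqs * 2 (sin and cos per frequency).
    """
    feats = []
    for coord in point:
        for k in range(num_freqs):
            freq = 2.0 ** k * math.pi
            feats.append(math.sin(freq * coord))
            feats.append(math.cos(freq * coord))
    return feats

enc = positional_encoding([0.5, 0.0, 1.0])
```

Feeding such encodings to plain MLPs lets the network resolve fine spatial detail without the neighborhood-gathering operators that dominate the cost of many point-cloud architectures.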

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

Towards Self-Robust LLMs: Intrinsic Prompt Noise Resistance via CoIPO

Researchers propose CoIPO (Contrastive Learning-based Inverse Direct Preference Optimization), a new method to improve Large Language Model robustness against noisy or imperfect user prompts. The approach enhances LLMs' intrinsic ability to handle prompt variations without relying on external preprocessing tools, showing significant accuracy improvements on benchmark tests.

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

Old Habits Die Hard: How Conversational History Geometrically Traps LLMs

Researchers introduce History-Echoes, a framework revealing how large language models become trapped by their conversational history, with past interactions creating geometric constraints in latent space that bias future responses. The study demonstrates that behavioral persistence in LLMs manifests as mathematical traps where previous hallucinations and responses influence subsequent model behavior across multiple model families and datasets.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

AgentSelect: Benchmark for Narrative Query-to-Agent Recommendation

Researchers introduce AgentSelect, a comprehensive benchmark for recommending AI agent configurations based on narrative queries. The benchmark aggregates over 111,000 queries and 107,000 deployable agents from 40+ sources to address the critical gap in selecting optimal LLM agent setups for specific tasks.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

What Does Flow Matching Bring To TD Learning?

Researchers demonstrate that flow matching improves reinforcement learning through enhanced TD learning mechanisms rather than distributional modeling. The approach achieves 2x better final performance and 5x improved sample efficiency compared to standard critics by enabling test-time error recovery and more plastic feature learning.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

Data-Aware Random Feature Kernel for Transformers

Researchers introduce DARKFormer, a new transformer architecture that reduces computational complexity from quadratic to linear while maintaining performance. The model uses data-aware random feature kernels to address efficiency issues in pretrained transformer models with anisotropic query-key distributions.
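DARKFormer's data-aware kernel construction is its contribution; the baseline it builds on, random-feature attention, replaces softmax(QKᵀ)V with φ(Q)(φ(K)ᵀV), which is linear in sequence length because φ(K)ᵀV is accumulated once. A tiny dependency-free sketch using a ReLU feature map (the feature map and dimensions are illustrative, not the paper's kernel):

```python
import random

def relu_features(x, ws):
    # phi(x)_i = max(0, w_i . x): a simple non-negative random-feature map.
    return [max(0.0, sum(wi * xi for wi, xi in zip(w, x))) for w in ws]

def linear_attention(queries, keys, values, num_features=16, seed=0):
    """O(n) attention: out_q = phi(q) . S / (phi(q) . z), with S, z built in one pass."""
    rng = random.Random(seed)
    dim, dv = len(queries[0]), len(values[0])
    ws = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(num_features)]
    # One pass over keys/values: S[i][j] = sum_k phi(k)_i * v_j, z[i] = sum_k phi(k)_i
    S = [[0.0] * dv for _ in range(num_features)]
    z = [0.0] * num_features
    for k, v in zip(keys, values):
        pk = relu_features(k, ws)
        for i in range(num_features):
            z[i] += pk[i]
            for j in range(dv):
                S[i][j] += pk[i] * v[j]
    out = []
    for q in queries:
        pq = relu_features(q, ws)
        denom = sum(pi * zi for pi, zi in zip(pq, z)) or 1.0
        out.append([sum(pq[i] * S[i][j] for i in range(num_features)) / denom
                    for j in range(dv)])
    return out

out = linear_attention([[1.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]], [[1.0], [0.0]])
```

Because the feature values are non-negative, each output is a convex combination of the value vectors, mimicking softmax attention's weighting. The "data-aware" part of DARKFormer concerns how the random features are fitted to the anisotropic query-key distributions of pretrained models, which this sketch does not attempt.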

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

Ethical and Explainable AI in Reusable MLOps Pipelines

Researchers developed a unified MLOps framework that integrates ethical AI principles, reducing demographic bias from 0.31 to 0.04 while maintaining predictive accuracy. The system automatically blocks deployments and triggers retraining based on fairness metrics, demonstrating practical implementation of ethical AI in production environments.
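The summary doesn't name the fairness metric; one standard choice, demographic parity difference, and the deployment-gate logic around it can be sketched as follows (the metric choice, threshold, and function names are illustrative, not the paper's):

```python
def demographic_parity_diff(preds, groups):
    """Absolute gap in positive-prediction rate between two demographic groups."""
    rates = {}
    for g in set(groups):
        members = [p for p, gg in zip(preds, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    vals = list(rates.values())
    return abs(vals[0] - vals[1])

def deployment_gate(preds, groups, threshold=0.05):
    """Block deployment (and signal retraining) when bias exceeds the threshold."""
    gap = demographic_parity_diff(preds, groups)
    return {"bias": gap, "deploy": gap <= threshold}

preds  = [1, 1, 1, 0, 1, 0, 0, 0]          # model's positive/negative decisions
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
decision = deployment_gate(preds, groups)
```

Wiring such a gate into a CI/CD pipeline, so a failing fairness check fails the deployment job, is the "automatically blocks deployments" behavior the summary describes.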

AI · Neutral · arXiv – CS AI · Mar 5 · 7/10

Escaping the BLEU Trap: A Signal-Grounded Framework with Decoupled Semantic Guidance for EEG-to-Text Decoding

Researchers propose SemKey, a novel framework that addresses key limitations in EEG-to-text decoding by preventing hallucinations and improving semantic fidelity through decoupled guidance objectives. The system redesigns neural encoder-LLM interaction and introduces new evaluation metrics beyond BLEU scores to achieve state-of-the-art performance in brain-computer interfaces.

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

TTSR: Test-Time Self-Reflection for Continual Reasoning Improvement

Researchers introduce TTSR, a new framework that enables AI models to improve their reasoning abilities during test time by having a single model alternate between student and teacher roles. The system allows models to learn from their mistakes by analyzing failed reasoning attempts and generating targeted practice questions for continuous improvement.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

PlaneCycle: Training-Free 2D-to-3D Lifting of Foundation Models Without Adapters

PlaneCycle introduces a training-free method to convert 2D AI foundation models to 3D without requiring retraining or architectural changes. The technique enables pretrained 2D models like DINOv3 to process 3D volumetric data by cyclically distributing spatial aggregation across orthogonal planes, achieving competitive performance on 3D classification and segmentation tasks.
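The cyclic aggregation schedule is the paper's contribution; the basic move of feeding a 2D model slices of a volume along the three orthogonal planes and fusing the results can be sketched as follows (the `model2d` here is a trivial stand-in for a real 2D backbone such as DINOv3):

```python
def slices(volume, axis):
    """2D slices of a volume (nested lists, indexed [x][y][z]) along one axis."""
    nx, ny, nz = len(volume), len(volume[0]), len(volume[0][0])
    if axis == 0:
        return [volume[i] for i in range(nx)]
    if axis == 1:
        return [[volume[i][j] for i in range(nx)] for j in range(ny)]
    return [[[volume[i][j][k] for j in range(ny)] for i in range(nx)]
            for k in range(nz)]

def lift_2d_to_3d(volume, model2d):
    """Run a 2D model over axial/coronal/sagittal slices and average its outputs."""
    per_axis = []
    for ax in (0, 1, 2):
        outs = [model2d(s) for s in slices(volume, ax)]
        per_axis.append(sum(outs) / len(outs))
    return sum(per_axis) / 3

# Stand-in 2D "model": mean intensity of a slice.
mean2d = lambda s: sum(sum(row) for row in s) / (len(s) * len(s[0]))

volume = [[[1.0, 1.0], [1.0, 1.0]], [[0.0, 0.0], [0.0, 0.0]]]  # 2x2x2 toy volume
feat = lift_2d_to_3d(volume, mean2d)
```

PlaneCycle's cyclic distribution of aggregation across the planes is more structured than this flat averaging, but the slice-process-fuse pattern is the common starting point.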

AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

TATRA: Training-Free Instance-Adaptive Prompting Through Rephrasing and Aggregation

Researchers introduce TATRA, a training-free prompting method for Large Language Models that creates instance-specific few-shot prompts without requiring labeled training data. The method achieves state-of-the-art performance on mathematical reasoning benchmarks like GSM8K and DeepMath, matching or outperforming existing prompt optimization methods that rely on expensive training processes.