AI · Bullish · arXiv – CS AI · 6h ago · 4
🧠Researchers introduce RF-Agent, a framework that uses Large Language Models as agents to automatically design reward functions for control tasks through Monte Carlo Tree Search. The method improves upon existing approaches by better utilizing historical feedback and enhancing search efficiency across 17 diverse low-level control tasks.
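The abstract doesn't detail RF-Agent's search loop; as background, here is a minimal sketch of the UCT rule that standard Monte Carlo Tree Search uses to decide which child node to expand next (the node statistics and the constant c are illustrative, not the paper's settings):

```python
import math

def uct_score(child_value, child_visits, parent_visits, c=1.414):
    """UCB1 score used by MCTS to pick which child to expand next.
    child_value is the cumulative reward observed at the child node."""
    if child_visits == 0:
        return float("inf")  # always try unvisited children first
    exploit = child_value / child_visits
    explore = c * math.sqrt(math.log(parent_visits) / child_visits)
    return exploit + explore

# A well-performing but heavily visited child vs. a less explored one:
a = uct_score(child_value=8.0, child_visits=10, parent_visits=20)
b = uct_score(child_value=2.0, child_visits=3, parent_visits=20)
# The under-explored child (b) wins despite its lower mean reward.
```

The exploration term is what keeps the search from collapsing onto early winners; frameworks like RF-Agent reuse this balance at the level of candidate reward functions.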
AI · Bullish · arXiv – CS AI · 6h ago · 3
🧠Researchers developed a deep reinforcement learning approach using heterogeneous graph networks to solve Flexible Job Shop Scheduling Problems with limited buffers and material kitting constraints. The method outperforms traditional heuristics by improving buffer utilization and decision quality through better modeling of complex dependencies in production scheduling.
AI · Bullish · arXiv – CS AI · 6h ago · 3
🧠Researchers have developed a new quantum machine learning optimization technique using ternary encodings that significantly improves frequency tuning efficiency. The method achieves 22.8% better performance than existing approaches while requiring exponentially fewer encoding gates than traditional fixed-frequency methods.
AI · Bullish · arXiv – CS AI · 6h ago · 4
🧠Researchers introduced Rudder, a software module that uses Large Language Models (LLMs) to optimize data prefetching in distributed Graph Neural Network training. The system shows up to 91% performance improvement over baseline training and 82% over static prefetching by autonomously adapting to dynamic conditions.
AI · Bullish · arXiv – CS AI · 6h ago · 5
🧠Researchers from PKU-SEC-Lab have developed KEEP, a new memory management system that significantly improves the efficiency of AI-powered embodied planning by optimizing KV cache usage. The system achieves 2.68x speedup compared to text-based memory methods while maintaining accuracy, addressing a key bottleneck in memory-augmented Large Language Models for complex planning tasks.
AI · Bullish · arXiv – CS AI · 6h ago · 3
🧠Researchers developed UPath, a universal AI-powered pathfinding algorithm that improves A* search performance by up to 2.2x across diverse grid environments. The deep learning model generalizes across different map types without retraining, achieving near-optimal solutions within 3% of optimal cost on unseen tasks.
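The summary doesn't describe UPath's learned model; for reference, a minimal sketch of the classical A* baseline it accelerates, on a 4-connected grid with an admissible Manhattan heuristic (the grid below is illustrative):

```python
import heapq

def astar(grid, start, goal):
    """Plain A* on a 4-connected grid of 0 (free) / 1 (wall).
    Manhattan distance is an admissible heuristic for unit-cost moves."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start)]
    best_g = {start: 0}
    while open_heap:
        f, g, (r, c) = heapq.heappop(open_heap)
        if (r, c) == goal:
            return g  # cost of the shortest path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
cost = astar(grid, (0, 0), (2, 0))  # forced detour around the wall row
```

A learned variant like UPath typically keeps this search skeleton and replaces or augments the heuristic with a model's prediction.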
AI · Bullish · arXiv – CS AI · 6h ago · 4
🧠Researchers propose FedNSAM, a new federated learning algorithm that improves global model performance by addressing the inconsistency between local and global flatness in distributed training environments. The algorithm uses global Nesterov momentum to harmonize local and global optimization, showing superior performance compared to existing FedSAM approaches.
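FedNSAM's exact federated update isn't given in the summary; as background, a minimal sketch of the classical Nesterov momentum step it builds on, where the gradient is evaluated at a look-ahead point (the learning rate, momentum value, and toy quadratic objective are illustrative):

```python
def nesterov_step(w, v, grad_fn, lr=0.1, mu=0.9):
    """One classical Nesterov accelerated gradient update:
    evaluate the gradient at the look-ahead point w + mu*v."""
    g = grad_fn(w + mu * v)
    v = mu * v - lr * g
    return w + v, v

# Minimize f(w) = w^2 (gradient 2w) starting from w = 5.0.
w, v = 5.0, 0.0
for _ in range(100):
    w, v = nesterov_step(w, v, grad_fn=lambda x: 2 * x)
# w has converged close to the optimum at 0.
```

The look-ahead evaluation is what distinguishes Nesterov momentum from plain heavy-ball momentum, which evaluates the gradient at the current point.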
AI · Bullish · arXiv – CS AI · 6h ago · 9
🧠Researchers introduce LoRA-Pre, a memory-efficient optimizer that reduces memory overhead in training large language models by using low-rank approximation of momentum states. The method achieves superior performance on Llama models from 60M to 1B parameters while using only 1/8 the rank of baseline methods.
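The memory argument behind low-rank momentum is easy to sketch: storing the state as rank-r factors needs r·(rows + cols) floats instead of rows·cols. The 4096×4096 layer and rank 64 below are illustrative, not the paper's settings:

```python
def momentum_memory(rows, cols, rank=None):
    """Floats needed to store one momentum state for a rows x cols
    weight matrix: dense, or as rank-r factors U (rows x r), V (r x cols)."""
    if rank is None:
        return rows * cols           # dense momentum (e.g. Adam's m state)
    return rank * (rows + cols)      # low-rank factorized momentum

full = momentum_memory(4096, 4096)            # dense state
low = momentum_memory(4096, 4096, rank=64)    # factorized state
savings = full / low                          # 32x fewer floats here
```

The trade-off is that updates must be projected back into the low-rank factor space each step, which is where methods in this family differ.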
AI · Bullish · arXiv – CS AI · 6h ago · 6
🧠Researchers introduce SAGE (Self-Aware Guided Efficient Reasoning), a novel sampling paradigm that improves AI reasoning efficiency by helping large reasoning models know when to stop thinking. The approach targets redundant, lengthy reasoning chains that don't improve accuracy, cutting computational costs and response times.
AI · Neutral · arXiv – CS AI · 6h ago · 8
🧠Research reveals that reward model accuracy alone doesn't determine effectiveness in RLHF systems. The study proves that low reward variance can create flat optimization landscapes, making even perfectly accurate reward models inefficient teachers that underperform less accurate models with higher variance.
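The intuition can be sketched numerically: with a mean baseline, the policy-gradient signal scales with how far rewards spread around that baseline, so two reward models that rank responses identically can differ sharply in usable training signal (the reward values below are made up for illustration):

```python
def policy_gradient_scale(rewards):
    """Mean absolute advantage (reward minus a mean baseline) --
    a crude proxy for the magnitude of the REINFORCE gradient signal."""
    baseline = sum(rewards) / len(rewards)
    return sum(abs(r - baseline) for r in rewards) / len(rewards)

# Two reward models that rank the same 4 responses identically
# (both equally "accurate" as rankers), but with different variance:
flat = policy_gradient_scale([0.49, 0.50, 0.51, 0.52])   # low variance
sharp = policy_gradient_scale([0.1, 0.4, 0.6, 0.9])      # high variance
# The low-variance model yields a 25x weaker signal despite equal accuracy.
```

This is the "flat optimization landscape" effect in miniature: near-identical rewards leave almost nothing for the policy gradient to climb.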
AI · Bullish · arXiv – CS AI · 6h ago · 8
🧠Researchers developed Agentic Predictor, a lightweight AI system that uses multi-view encoding to optimize LLM-based agent workflows without expensive trial-and-error evaluations. The system incorporates code architecture, textual prompts, and interaction graphs to predict task success rates and select optimal configurations across different domains.
AI · Bullish · arXiv – CS AI · 6h ago · 8
🧠Researchers developed ToSFiT (Thompson Sampling via Fine-Tuning), a new Bayesian optimization method that uses fine-tuned large language models to improve search efficiency in complex discrete spaces. The approach eliminates computational bottlenecks by directly parameterizing reward probabilities and demonstrates superior performance across diverse applications including protein search and quantum circuit design.
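ToSFiT's LLM-based parameterization isn't described in the summary; as background, a minimal sketch of classic Thompson sampling on a Beta-Bernoulli bandit, the scheme it generalizes (the arm success probabilities are illustrative):

```python
import random

def thompson_pick(successes, failures):
    """Choose an arm by sampling each arm's posterior Beta(s+1, f+1)
    and taking the argmax -- classic Thompson sampling."""
    samples = [random.betavariate(s + 1, f + 1)
               for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=samples.__getitem__)

random.seed(0)
true_p = [0.2, 0.8]          # arm 1 is genuinely better
s, f = [0, 0], [0, 0]
for _ in range(500):
    arm = thompson_pick(s, f)
    if random.random() < true_p[arm]:
        s[arm] += 1
    else:
        f[arm] += 1
# Pull counts concentrate on the better arm as its posterior sharpens.
```

The posterior sampling step is what ToSFiT reportedly replaces with a fine-tuned LLM, sidestepping explicit posterior maintenance over huge discrete spaces like protein sequences.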
AI · Bullish · arXiv – CS AI · 6h ago · 7
🧠Researchers propose Generalized Primal Averaging (GPA), a new optimization method that improves training speed for large language models by 8-10% over standard AdamW while using less memory. GPA unifies and enhances existing averaging-based optimizers like DiLoCo by enabling smooth iterate averaging at every step without complex two-loop structures.
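The "smooth iterate averaging at every step" idea can be sketched as an exponential moving average of the weights maintained alongside the optimizer, with no outer loop (the beta value and toy noisy iterates are illustrative, not GPA's actual weighting):

```python
import random

def ema_update(avg, w, beta=0.99):
    """Smooth iterate averaging: exponential moving average of the
    weights, updated at every optimizer step (no two-loop structure)."""
    return [beta * a + (1 - beta) * x for a, x in zip(avg, w)]

# Noisy iterates oscillating around a known optimum [1.0, 2.0]:
random.seed(1)
opt = [1.0, 2.0]
w = opt[:]
avg = w[:]
for _ in range(2000):
    w = [o + random.gauss(0, 0.5) for o in opt]  # simulated noisy step
    avg = ema_update(avg, w)
# The running average sits much closer to the optimum than any raw iterate.
```

Averaging filters out the step-to-step noise of stochastic training; the appeal of doing it every step, as GPA does, is that the averaged model is always available without a separate synchronization phase.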
AI · Bullish · arXiv – CS AI · 6h ago · 5
🧠Researchers propose Trust Region Masking (TRM) to address off-policy mismatch problems in Large Language Model reinforcement learning pipelines. The method provides the first non-vacuous monotonic improvement guarantees for long-horizon LLM-RL tasks by masking entire sequences that violate trust region constraints.
AI · Bullish · arXiv – CS AI · 6h ago · 5
🧠Researchers developed Score Matched Actor-Critic (SMAC), a new offline reinforcement learning method that enables a smooth transition to online RL algorithms without performance drops. SMAC achieved successful transfer in all 6 D4RL tasks tested and reduced regret by 34-58% in 4 of 6 environments compared to the best baselines.
AI · Neutral · arXiv – CS AI · 6h ago · 1
🧠Researchers developed RL-CMSA, a hybrid reinforcement learning approach for solving the min-max Multiple Traveling Salesman Problem that combines probabilistic clustering, exact optimization, and solution refinement. The method outperforms existing algorithms by balancing exploration and exploitation to minimize the longest tour across multiple salesmen.
AI · Bullish · arXiv – CS AI · 6h ago · 1
🧠Researchers developed a bi-level AI optimization framework using reinforcement learning to improve winter road maintenance operations on UK highway networks. The system strategically partitions road networks and optimizes vehicle routing while reducing travel times below two hours and minimizing carbon emissions.
AI · Neutral · arXiv – CS AI · 6h ago · 1
🧠Researchers developed a new approach to minimize cost functions in shallow ReLU neural networks through explicit construction rather than gradient descent. The study provides mathematical upper bounds for cost minimization and characterizes the geometric structure of network minimizers in classification tasks.
AI · Neutral · arXiv – CS AI · 6h ago · 1
🧠Researchers propose a new framework for feature selection that uses permutation-invariant embedding and reinforcement learning to address limitations in current methods. The approach combines an encoder-decoder paradigm to preserve feature relationships without order bias and employs policy-based RL to explore embedding spaces without convexity assumptions.
AI · Neutral · arXiv – CS AI · 6h ago · 1
🧠Researchers introduce FLOP, a new causal discovery algorithm for linear models that significantly reduces computation time through fast parent selection and Cholesky-based score updates. The algorithm achieves near-perfect accuracy in standard benchmarks and makes discrete search approaches viable for causal structure learning.