y0news

#llm-training News & Analysis

52 articles tagged with #llm-training. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · 1d ago · 7/10

Lightning OPD: Efficient Post-Training for Large Reasoning Models with Offline On-Policy Distillation

Researchers introduce Lightning OPD, an offline on-policy distillation framework that eliminates the need for live teacher inference servers during large language model post-training. By enforcing 'teacher consistency' (using the same teacher model for both supervised fine-tuning and distillation), the method achieves performance comparable to standard OPD while delivering a 4x speedup and significantly reducing infrastructure costs.
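
The abstract doesn't give the training objective, but the core idea, scoring student-generated rollouts against teacher log-probs cached in an offline batch pass, can be sketched in a few lines (a toy illustration with assumed names and shapes, not the paper's code):

```python
import torch
import torch.nn.functional as F

def opd_distill_loss(student_logits, teacher_logprobs, mask):
    """Token-level distillation against teacher scores cached offline.

    student_logits:   [B, T, V] from the student being trained
    teacher_logprobs: [B, T, V] precomputed by the teacher in a batch pass,
                      so no live teacher inference server is needed
    mask:             [B, T] 1.0 on response tokens, 0.0 on prompt/padding
    """
    student_logprobs = F.log_softmax(student_logits, dim=-1)
    # KL(teacher || student), summed over the vocabulary at each position.
    kl = (teacher_logprobs.exp() * (teacher_logprobs - student_logprobs)).sum(-1)
    return (kl * mask).sum() / mask.sum().clamp(min=1.0)

B, T, V = 2, 8, 32  # toy shapes; the rollouts would come from the student itself
loss = opd_distill_loss(torch.randn(B, T, V, requires_grad=True),
                        F.log_softmax(torch.randn(B, T, V), dim=-1),
                        torch.ones(B, T))
print(float(loss))
```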

AI · Bullish · arXiv – CS AI · 2d ago · 7/10

Instructing LLMs to Negotiate using Reinforcement Learning with Verifiable Rewards

Researchers demonstrate that Reinforcement Learning with Verifiable Rewards (RLVR) can train large language models to negotiate effectively in incomplete-information games such as price bargaining. A 30B-parameter model trained with this method outperforms frontier models 10x its size and develops sophisticated persuasive strategies while generalizing to unseen negotiation scenarios.
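
What makes such a reward "verifiable" is that a bargaining outcome can be checked programmatically rather than judged by a model. A minimal sketch of a reward in that spirit (the exact formulation is an assumption, not taken from the paper):

```python
def bargaining_reward(deal_price, role, reservation_price, list_price):
    """Verifiable reward for one price-bargaining episode.

    Unlike a learned reward model, everything here is checkable from the
    transcript: either an agreed price exists or it does not, and the
    surplus is plain arithmetic.
    """
    if deal_price is None:                 # negotiation ended without a deal
        return 0.0
    if role == "buyer":
        surplus = reservation_price - deal_price   # paid under the limit
    else:
        surplus = deal_price - reservation_price   # sold above the floor
    span = abs(list_price - reservation_price)
    return max(0.0, surplus / span) if span else 0.0

# A buyer with a $100 limit closes at $90 on a $120 listing -> reward 0.5
print(bargaining_reward(90.0, "buyer", reservation_price=100.0, list_price=120.0))
```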

AI · Bullish · arXiv – CS AI · 2d ago · 7/10

Putting the Value Back in RL: Better Test-Time Scaling by Unifying LLM Reasoners With Verifiers

Researchers introduce RL^V, a reinforcement learning method that unifies LLM reasoners with generative verifiers to improve test-time compute scaling. The approach achieves over 20% accuracy gains on MATH benchmarks and enables 8-32x more efficient test-time scaling compared to existing RL methods by preserving and leveraging learned value functions.
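
At test time, a unified reasoner-verifier can score its own samples and combine them. A minimal sketch of verifier-weighted best-of-N selection, assuming the verifier emits a scalar correctness score per candidate (illustrative only, not the paper's procedure):

```python
from collections import defaultdict

def verifier_weighted_vote(candidates):
    """Best-of-N where the reasoner also scores its own candidates.

    candidates: list of (final_answer, verifier_score) pairs; the score is
    imagined as the model's own probability that the solution is correct.
    """
    totals = defaultdict(float)
    for answer, score in candidates:
        totals[answer] += score        # verifier-weighted majority vote
    return max(totals, key=totals.get)

print(verifier_weighted_vote([("42", 0.9), ("41", 0.2), ("42", 0.7)]))  # "42"
```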

AI · Bullish · arXiv – CS AI · 2d ago · 7/10

Solving Physics Olympiad via Reinforcement Learning on Physics Simulators

Researchers demonstrate that physics simulators can generate synthetic training data for large language models, enabling them to learn physical reasoning without relying on scarce internet QA pairs. Models trained on simulated data show 5-10 percentage point improvements on International Physics Olympiad problems, suggesting simulators offer a scalable alternative for domain-specific AI training.
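
The appeal is that a simulator yields unlimited question-answer pairs with ground-truth answers. A toy generator in that spirit, using drag-free projectile range as the "simulator" (this specific setup is an assumption for illustration):

```python
import math, random

def simulate_projectile(v0, angle_deg):
    """Closed-form 'simulator' for drag-free projectile range (g = 9.81)."""
    theta = math.radians(angle_deg)
    return v0 ** 2 * math.sin(2 * theta) / 9.81

def make_example(rng):
    v0 = rng.uniform(5.0, 50.0)
    angle = rng.uniform(10.0, 80.0)
    question = (f"A projectile is launched at {v0:.1f} m/s at {angle:.1f} degrees. "
                f"Neglecting drag, what horizontal distance does it travel?")
    answer = f"{simulate_projectile(v0, angle):.2f} m"
    return {"question": question, "answer": answer}

rng = random.Random(0)
dataset = [make_example(rng) for _ in range(1000)]  # unlimited verified pairs
print(dataset[0])
```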

AI · Bullish · arXiv – CS AI · 3d ago · 7/10

TensorHub: Scalable and Elastic Weight Transfer for LLM RL Training

TensorHub introduces Reference-Oriented Storage (ROS), a novel weight transfer system that enables efficient reinforcement learning training across distributed GPU clusters without physically copying model weights. The production-deployed system delivers significant performance improvements, cutting GPU stall time for rollout operations by up to 6.7x and speeding up cross-datacenter transfers by 19x.
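
A rough intuition for reference-oriented storage: only version and location metadata travels between trainer and rollout workers, and the payload is resolved on demand. An in-process toy sketch (the real system's API isn't described in this summary; everything below is illustrative):

```python
class ReferenceStore:
    """Toy 'reference-oriented storage': publish once, hand out references.

    In a real system the payload would live in GPU/RDMA-reachable memory and
    a reference is just (version, location) metadata; only the metadata is
    broadcast between trainer and rollout workers, not the weights.
    """
    def __init__(self):
        self._blobs = {}     # version -> weight payload
        self._latest = None

    def publish(self, version, named_weights):
        self._blobs[version] = named_weights
        self._latest = version
        return version       # the "reference" the trainer broadcasts

    def resolve(self, version=None):
        return self._blobs[version if version is not None else self._latest]

store = ReferenceStore()
ref = store.publish(1, {"layer0.weight": [0.1, 0.2]})   # trainer side
weights = store.resolve(ref)                             # rollout-worker side
print(weights)
```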

AI · Neutral · arXiv – CS AI · 3d ago · 7/10

Drift and selection in LLM text ecosystems

Researchers develop a mathematical framework showing how AI-generated text recursively shapes training corpora through drift and selection mechanisms. The study demonstrates that unfiltered reuse of generated content degrades linguistic diversity, while selective publication based on quality metrics can preserve structural complexity in training data.
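
A toy simulation in the spirit of that framework, where each generation republishes perturbed copies of existing texts, with and without a diversity-based selection filter (the parameters and diversity metric are assumptions for illustration):

```python
import random

def step(corpus, rng, select=False):
    """One generation: sample texts, perturb them ('drift'), republish."""
    offspring = []
    for _ in range(len(corpus)):
        text = list(rng.choice(corpus))
        text[rng.randrange(len(text))] = rng.choice(text)  # homogenizing drift
        offspring.append(tuple(text))
    if select:  # 'selective publication': republish only the most diverse half
        offspring.sort(key=lambda t: -len(set(t)))
        offspring = offspring[: len(offspring) // 2] * 2
    return offspring

def diversity(corpus):
    """Corpus-level diversity: distinct tokens still in circulation."""
    return len({tok for text in corpus for tok in text})

rng = random.Random(0)
seed_corpus = [tuple(rng.sample(range(100), 10)) for _ in range(200)]
for select in (False, True):
    c = list(seed_corpus)
    for _ in range(50):
        c = step(c, rng, select=select)
    print("selection" if select else "unfiltered", "->", diversity(c))
```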

AI · Neutral · arXiv – CS AI · 6d ago · 7/10

Rethinking Generalization in Reasoning SFT: A Conditional Analysis on Optimization, Data, and Model Capability

Researchers challenge the conventional wisdom that supervised finetuning (SFT) merely memorizes while reinforcement learning generalizes. Their analysis reveals that reasoning SFT with chain-of-thought supervision can generalize across domains, but success depends critically on optimization duration, data quality, and base model strength, with generalization improvements coming at the cost of degraded safety performance.

AI · Bullish · arXiv – CS AI · 6d ago · 7/10

WRAP++: Web discoveRy Amplified Pretraining

WRAP++ is a new pretraining technique that enhances language model training by discovering cross-document relationships through web hyperlinks and synthesizing multi-document question-answer pairs. By amplifying ~8.4B tokens into 80B tokens of relational QA data, the method enables models like OLMo to achieve significant performance improvements on factual retrieval tasks compared to single-document approaches.
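
A minimal sketch of the synthesis step, pairing hyperlinked documents and asking a model to write a question that needs both (the prompt and the `llm` callable are placeholders, not the paper's pipeline):

```python
def make_cross_doc_qa(docs, links, llm):
    """Turn hyperlink structure into multi-document QA pretraining data.

    docs:  {doc_id: text}
    links: iterable of (src_id, dst_id) hyperlinks discovered on the web
    llm:   any text-in/text-out callable used for synthesis (stub here)
    """
    examples = []
    for src, dst in links:
        prompt = (
            "Write a question whose answer requires BOTH passages, "
            "then give the answer.\n\n"
            f"Passage A:\n{docs[src]}\n\nPassage B:\n{docs[dst]}\n"
        )
        examples.append(llm(prompt))   # one relational QA pair per link
    return examples

docs = {1: "Marie Curie won two Nobel Prizes.",
        2: "Polonium was named after Poland."}
print(make_cross_doc_qa(docs, [(1, 2)], llm=lambda p: "Q: ... A: ...")[0])
```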

AI · Bullish · arXiv – CS AI · 6d ago · 7/10

SPICE: Submodular Penalized Information-Conflict Selection for Efficient Large Language Model Training

Researchers introduce SPICE, a data selection algorithm that reduces large language model training data requirements by 90% while maintaining performance by identifying and minimizing gradient conflicts between training samples. The method combines information-theoretic principles with practical efficiency improvements, enabling effective model tuning on just 10% of typical datasets across multiple benchmarks.
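
A greedy selection loop in that spirit trades off per-sample gradient magnitude against conflict (negative cosine similarity) with already-selected samples; this is a stand-in for the paper's submodular objective, not its implementation:

```python
import numpy as np

def spice_like_select(grads, k, lam=1.0):
    """Greedy selection: prize informative samples, penalize gradient conflict.

    grads: [N, D] per-sample gradient features (e.g. last-layer gradients).
    Picks k samples maximizing gradient norm minus a penalty for pointing
    against gradients already in the selected set.
    """
    g = grads / (np.linalg.norm(grads, axis=1, keepdims=True) + 1e-8)
    info = np.linalg.norm(grads, axis=1)
    chosen = []
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in range(len(grads)):
            if i in chosen:
                continue
            conflict = max((-(g[i] @ g[j]) for j in chosen), default=0.0)
            gain = info[i] - lam * max(conflict, 0.0)
            if gain > best_gain:
                best, best_gain = i, gain
        chosen.append(best)
    return chosen

print(spice_like_select(np.random.default_rng(0).normal(size=(100, 16)), k=10))
```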

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

Towards On-Policy SFT: Distribution Discriminant Theory and its Applications in LLM Training

Researchers propose a new framework called On-Policy SFT that bridges the performance gap between supervised fine-tuning and reinforcement learning in AI model training. The framework introduces Distribution Discriminant Theory (DDT) and two techniques, In-Distribution Finetuning and Hinted Decoding, that achieve better generalization while maintaining computational efficiency.

AI · Bullish · arXiv – CS AI · Mar 12 · 7/10

IH-Challenge: A Training Dataset to Improve Instruction Hierarchy on Frontier LLMs

OpenAI researchers introduce IH-Challenge, a reinforcement learning dataset designed to improve instruction hierarchy in frontier LLMs. Fine-tuning GPT-5-Mini with this dataset improved robustness by 10% and significantly reduced unsafe behavior while maintaining helpfulness.

🏢 OpenAI · 🏢 Hugging Face · 🧠 GPT-5
AI · Bullish · arXiv – CS AI · Mar 12 · 7/10

HTMuon: Improving Muon via Heavy-Tailed Spectral Correction

Researchers have developed HTMuon, an improved optimization algorithm for training large language models that builds upon the existing Muon optimizer. HTMuon addresses limitations in Muon's weight spectra by incorporating heavy-tailed spectral corrections, showing a perplexity reduction of up to 0.98 in LLaMA pretraining experiments.

🏢 Perplexity
AI · Bullish · arXiv – CS AI · Mar 12 · 7/10

The Curse and Blessing of Mean Bias in FP4-Quantized LLM Training

Researchers have identified a simple solution to training instability in 4-bit quantized large language models by removing mean bias, which causes the dominant spectral anisotropy. This mean-subtraction technique substantially improves FP4 training performance while being hardware-efficient, potentially enabling more accessible low-bit LLM training.
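
The fix is easy to emulate: subtract the mean in full precision, quantize the centered values, and add the mean back after dequantization. A simulated FP4 round-trip illustrating the effect (the E2M1-style grid and per-row scaling are assumptions, not the paper's exact recipe):

```python
import torch

FP4_GRID = torch.tensor([-6.0, -4.0, -3.0, -2.0, -1.5, -1.0, -0.5, 0.0,
                          0.5,  1.0,  1.5,  2.0,  3.0,  4.0,  6.0])

def fake_fp4_roundtrip(x, subtract_mean):
    """Simulated FP4 (E2M1-style grid) quantize/dequantize of a 2-D tensor.

    With subtract_mean=True the per-row mean is removed in full precision
    before scaling and restored afterwards, stripping the rank-one bias
    component blamed for the dominant spectral anisotropy.
    """
    mu = x.mean(dim=-1, keepdim=True) if subtract_mean else torch.zeros_like(x[:, :1])
    xc = x - mu
    scale = (xc.abs().amax(dim=-1, keepdim=True) / FP4_GRID.max()).clamp(min=1e-8)
    idx = (xc / scale).unsqueeze(-1).sub(FP4_GRID).abs().argmin(dim=-1)
    return FP4_GRID[idx] * scale + mu

x = torch.randn(4, 64) + 3.0     # strongly mean-biased activations
for flag in (False, True):
    err = (fake_fp4_roundtrip(x, flag) - x).pow(2).mean()
    print(f"subtract_mean={flag}: mse={err:.4f}")
```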

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10

ActiveUltraFeedback: Efficient Preference Data Generation using Active Learning

Researchers introduce ActiveUltraFeedback, an active learning pipeline that reduces the cost of training Large Language Models by using uncertainty estimates to identify the most informative responses for annotation. The system achieves comparable performance using only one-sixth of the annotated data compared to static baselines, potentially making LLM training more accessible for low-resource domains.

🏢 Hugging Face
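
One common way to turn uncertainty into an acquisition rule is ensemble disagreement: label the preference pairs the reward models disagree on most. A sketch of that idea (the specific uncertainty score is an assumption, not the paper's estimator):

```python
import numpy as np

def pick_for_annotation(scores, budget):
    """Select response pairs where an ensemble of reward models disagrees.

    scores: [M, N, 2] rewards from M ensemble members for N candidate
    (response_a, response_b) pairs. High spread and a small average margin
    mean the winner is genuinely uncertain, so labeling it is informative.
    """
    margins = scores[..., 0] - scores[..., 1]               # [M, N]
    uncertainty = margins.std(axis=0) - np.abs(margins.mean(axis=0))
    return np.argsort(-uncertainty)[:budget]                 # most uncertain first

rng = np.random.default_rng(0)
print(pick_for_annotation(rng.normal(size=(5, 1000, 2)), budget=100)[:10])
```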
AI · Bullish · arXiv – CS AI · Mar 11 · 7/10

Stepwise Guided Policy Optimization: Coloring your Incorrect Reasoning in GRPO

Researchers introduce Stepwise Guided Policy Optimization (SGPO), a new framework that improves upon Group Relative Policy Optimization (GRPO) by learning from incorrect reasoning responses in large language model training. SGPO addresses the limitation where GRPO fails to update policies when all responses in a group are incorrect, showing improved performance across multiple model sizes and reasoning benchmarks.
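
The failure mode is easy to see numerically: GRPO normalizes rewards within a group, so a group of identical (all-wrong) rewards yields zero advantages and no learning signal. A schematic illustration, with the guided response modeled simply as one extra correct rollout:

```python
import numpy as np

def grpo_advantages(rewards, eps=1e-8):
    """Group-normalized advantages as used in GRPO."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

group = [0.0, 0.0, 0.0, 0.0]            # every sampled response is wrong
print(grpo_advantages(group))            # all zeros: no gradient signal

# SGPO-flavored remedy (schematic): add a stepwise-guided rollout so the
# group is no longer degenerate; wrong attempts now get negative advantage.
print(grpo_advantages(group + [1.0]))
```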

AI · Neutral · arXiv – CS AI · Mar 11 · 7/10

PostTrainBench: Can LLM Agents Automate LLM Post-Training?

Researchers introduce PostTrainBench, a benchmark testing whether AI agents can autonomously perform LLM post-training optimization. While frontier agents show progress, they underperform official instruction-tuned models (23.2% vs 51.1%) and exhibit concerning behaviors like reward hacking and unauthorized resource usage.

🧠 GPT-5 · 🧠 Claude · 🧠 Opus
AI · Bullish · arXiv – CS AI · Mar 9 · 7/10

Whatever Remains Must Be True: Filtering Drives Reasoning in LLMs, Shaping Diversity

Researchers propose a new method for training large language models (LLMs) that addresses the diversity loss problem in reinforcement learning approaches. Their technique uses the α-divergence family to better balance precision and diversity in reasoning tasks, achieving state-of-the-art performance on theorem-proving benchmarks.

AI · Bullish · arXiv – CS AI · Mar 9 · 7/10

DataChef: Cooking Up Optimal Data Recipes for LLM Adaptation via Reinforcement Learning

Researchers introduce DataChef-32B, an AI system that uses reinforcement learning to automatically generate optimal data processing recipes for training large language models. The system eliminates the need for manual data curation by automatically designing complete data pipelines, achieving performance comparable to human experts across six benchmark tasks.

AI · Neutral · arXiv – CS AI · Mar 9 · 7/10

Experiences Build Characters: The Linguistic Origins and Functional Impact of LLM Personality

Researchers developed a method called "Personality Engineering" to create AI models with diverse personality traits through continued pre-training on domain-specific texts. The study found that AI performance peaks in two types: "Expressive Generalists" and "Suppressed Specialists," with reduced social traits actually improving complex reasoning abilities.

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Unbiased Dynamic Pruning for Efficient Group-Based Policy Optimization

Researchers introduce Dynamic Pruning Policy Optimization (DPPO), a new framework that accelerates AI language model training by 2.37x while maintaining accuracy. The method addresses computational bottlenecks in Group Relative Policy Optimization through unbiased gradient estimation and improved data efficiency.
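
Unbiased pruning typically means Horvitz-Thompson style reweighting: drop rollouts at random with known probabilities and scale the survivors by the inverse, so the expected gradient matches the unpruned one. A sketch of that mechanism (a stand-in for DPPO's estimator, not its actual rule):

```python
import numpy as np

def pruned_advantage_terms(advantages, rng, keep_frac=0.5):
    """Drop low-|advantage| rollouts but keep the gradient estimate unbiased.

    Each rollout is kept with probability p_i and its contribution is
    reweighted by 1/p_i (Horvitz-Thompson), so the expected sum matches the
    unpruned objective while roughly half the backward passes are skipped.
    """
    a = np.asarray(advantages, dtype=float)
    p = np.clip(keep_frac * len(a) * np.abs(a) / np.abs(a).sum(), 0.05, 1.0)
    kept = rng.random(len(a)) < p
    return [(i, a[i] / p[i]) for i in np.flatnonzero(kept)]

rng = np.random.default_rng(0)
print(pruned_advantage_terms([0.9, -0.8, 0.05, -0.1, 0.6, -0.7], rng))
```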

AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Agent Data Protocol: Unifying Datasets for Diverse, Effective Fine-tuning of LLM Agents

Researchers introduce Agent Data Protocol (ADP), a standardized format for unifying diverse AI agent training datasets across different formats and tools. The protocol enabled training on 13 unified datasets, achieving ~20% performance gains over base models and state-of-the-art results on coding, browsing, and tool use benchmarks.
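
The gist of such a protocol is a single trajectory schema that heterogeneous datasets get converted into. A minimal sketch of what a unified record might look like (field names are illustrative, not ADP's actual spec; requires Python 3.10+):

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    role: str                  # "user" | "assistant" | "tool"
    content: str               # text, tool call, or tool observation
    tool_name: str | None = None

@dataclass
class Trajectory:
    task: str                  # natural-language task description
    source: str                # original dataset this was converted from
    steps: list[Step] = field(default_factory=list)
    success: bool | None = None

traj = Trajectory(
    task="Find the repo's license",
    source="some-browsing-dataset",
    steps=[Step("assistant", "open('LICENSE')", tool_name="browser"),
           Step("tool", "MIT License ..."),
           Step("assistant", "The repository is MIT-licensed.")],
    success=True,
)
print(traj)
```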

AI · Bullish · arXiv – CS AI · Mar 4 · 6/10

CORE: Concept-Oriented Reinforcement for Bridging the Definition-Application Gap in Mathematical Reasoning

Researchers introduce CORE (Concept-Oriented REinforcement), a new training framework that improves large language models' mathematical reasoning by bridging the gap between memorizing definitions and applying concepts. The method uses concept-aligned quizzes and concept-primed trajectories to provide fine-grained supervision, showing consistent improvements over traditional training approaches across multiple benchmarks.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10

AReaL: A Large-Scale Asynchronous Reinforcement Learning System for Language Reasoning

Researchers have developed AReaL, a new asynchronous reinforcement learning system that dramatically improves the efficiency of training large language models for reasoning tasks. The system achieves up to 2.77x training speedup compared to traditional synchronous methods by decoupling generation from training processes.
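
Decoupling means rollout workers never wait for the optimizer and vice versa, with a staleness bound keeping the data near-on-policy. A threadbare sketch of that producer-consumer structure (the staleness rule and constants are assumptions):

```python
import queue, threading, time

rollout_q = queue.Queue(maxsize=64)
WEIGHTS_VERSION = [0]   # mutable cell shared between threads

def generator():
    """Rollout workers keep sampling with whatever weights they currently have."""
    while True:
        rollout_q.put({"version": WEIGHTS_VERSION[0], "tokens": "..."})
        time.sleep(0.01)                      # stands in for LLM generation

def trainer(steps=100, max_staleness=2):
    """The trainer consumes rollouts without ever blocking generation."""
    for _ in range(steps):
        batch = rollout_q.get()
        if WEIGHTS_VERSION[0] - batch["version"] > max_staleness:
            continue                          # too off-policy: discard
        WEIGHTS_VERSION[0] += 1               # stands in for a gradient update

threading.Thread(target=generator, daemon=True).start()
trainer()
print("finished after", WEIGHTS_VERSION[0], "updates")
```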

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Distributed LLM Pretraining During Renewable Curtailment Windows: A Feasibility Study

Researchers developed a system that trains large language models using renewable energy during curtailment periods when excess clean electricity would otherwise be wasted. The distributed training approach across multiple GPU clusters reduced operational emissions to 5-12% of traditional single-site training while maintaining model quality.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

Residual Koopman Spectral Profiling for Predicting and Preventing Transformer Training Instability

Researchers developed Residual Koopman Spectral Profiling (RKSP), a method that predicts transformer training instability from a single forward pass at initialization with 99.5% accuracy. The technique includes Koopman Spectral Shaping (KSS), which can prevent training divergence and enable 50-150% higher learning rates across various models, including GPT-2 and LLaMA-2.
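
Koopman-style profiling treats layer-to-layer activation flow as a dynamical system and inspects the spectrum of a fitted linear operator. A DMD-flavored toy version of that diagnostic (an assumption-laden stand-in, not the paper's method):

```python
import numpy as np

def spectral_profile(layer_acts):
    """Fit a linear map from layer t activations to layer t+1 and inspect
    its spectrum (a DMD-style stand-in for Koopman profiling).

    layer_acts: [L, N, D] residual-stream activations at initialization
                (L layers, N tokens, D hidden size) from ONE forward pass.
    Returns the spectral radius; values well above 1 suggest signals are
    amplified layer over layer, a plausible precursor of divergence.
    """
    X = np.concatenate(layer_acts[:-1], axis=0)   # states      [(L-1)*N, D]
    Y = np.concatenate(layer_acts[1:], axis=0)    # successors  [(L-1)*N, D]
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)     # X @ A ~= Y
    return np.abs(np.linalg.eigvals(A)).max()

rng = np.random.default_rng(0)
acts = rng.normal(size=(12, 256, 64))             # toy "forward pass"
print(f"spectral radius ~ {spectral_profile(acts):.2f}")
```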

Page 1 of 3