y0news

#llm News & Analysis

954 articles tagged with #llm. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · Mar 37/103
🧠

Scaf-GRPO: Scaffolded Group Relative Policy Optimization for Enhancing LLM Reasoning

Researchers introduced Scaf-GRPO, a new training framework that overcomes the 'learning cliff' problem in LLM reasoning by providing strategic hints when models plateau. The method boosted Qwen2.5-Math-7B performance on the AIME24 benchmark by 44.3% relative to baseline GRPO methods.
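The scaffolding idea can be sketched in a few lines: when every rollout in a GRPO group scores zero, the relative advantage is degenerate and no learning signal flows, so a hint is injected and the group is resampled. This is a toy illustration under assumed names (`rollout`, `scaf_grpo_group`, the hint-injection format), not the paper's implementation.

```python
import random

def rollout(prompt, solve_rate):
    """Toy stand-in for sampling a completion and scoring it (1 = correct)."""
    return 1 if random.random() < solve_rate else 0

def scaf_grpo_group(prompt, hints, group_size=8, base_rate=0.0, hint_boost=0.5):
    """Sample a reward group; if every reward is 0 (the 'learning cliff',
    where GRPO's group-relative advantage carries no signal), retry with
    progressively stronger hints appended to the prompt."""
    rewards = [rollout(prompt, base_rate) for _ in range(group_size)]
    level = 0
    while sum(rewards) == 0 and level < len(hints):
        scaffolded = prompt + "\nHint: " + hints[level]
        rewards = [rollout(scaffolded, base_rate + hint_boost)
                   for _ in range(group_size)]
        level += 1
    mean = sum(rewards) / group_size
    # GRPO-style advantage: each reward relative to the group mean
    advantages = [r - mean for r in rewards]
    return rewards, advantages
```

Note that the group-relative advantages always sum to zero by construction; the scaffold only changes whether any of them are nonzero.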

AI · Bullish · arXiv – CS AI · Mar 37/103
🧠

ZeroDVFS: Zero-Shot LLM-Guided Core and Frequency Allocation for Embedded Platforms

Researchers developed ZeroDVFS, a system that uses Large Language Models to optimize power management in embedded systems without requiring extensive profiling. The system achieves 7.09 times better energy efficiency and enables zero-shot deployment for new workloads in under 5 seconds through LLM-based code analysis.

AI · Bullish · arXiv – CS AI · Mar 37/104
🧠

Learning from Synthetic Data Improves Multi-hop Reasoning

Researchers demonstrated that large language models can improve multi-hop reasoning performance by training on rule-generated synthetic data instead of expensive human annotations or frontier LLM outputs. The study found that LLMs trained on synthetic fictional data performed better on real-world question-answering benchmarks by learning fundamental knowledge composition skills.

AI · Bullish · arXiv – CS AI · Mar 37/103
🧠

GenDB: The Next Generation of Query Processing -- Synthesized, Not Engineered

Researchers propose GenDB, a database system that uses Large Language Models to synthesize query-execution code instead of relying on a traditionally engineered query processor. Early prototype testing shows GenDB outperforming established systems such as DuckDB, Umbra, and PostgreSQL on OLAP workloads.

AI · Bullish · arXiv – CS AI · Mar 37/103
🧠

CharacterFlywheel: Scaling Iterative Improvement of Engaging and Steerable LLMs in Production

Meta presents CharacterFlywheel, an iterative process for improving large language models in production social chat applications across Instagram, WhatsApp, and Messenger. Starting from Llama 3.1, the system achieved significant gains over 15 generations of refinement, with the best models showing up to 8.8% improvement in engagement breadth and 19.4% in engagement depth while substantially improving instruction-following capabilities.

AI · Bullish · arXiv – CS AI · Mar 37/103
🧠

RLP: Reinforcement as a Pretraining Objective

Researchers introduce RLP (Reinforcement Learning Pretraining), a new training method that incorporates reinforcement learning exploration into the pretraining phase rather than only post-training. The approach treats chain-of-thought reasoning as exploratory actions and achieved 19% performance improvements on math and science benchmarks across different model architectures.

$COMP
AI · Bullish · arXiv – CS AI · Mar 37/104
🧠

Doctor-R1: Mastering Clinical Inquiry with Experiential Agentic Reinforcement Learning

Doctor-R1 is a new AI agent that combines accurate medical decision-making with strategic, empathetic patient consultation skills through reinforcement learning. The system outperforms existing open-source medical LLMs and proprietary models on clinical benchmarks while demonstrating superior communication quality and patient-centric performance.

AI × Crypto · Bullish · arXiv – CS AI · Mar 37/103
🤖

SymGPT: Auditing Smart Contracts via Combining Symbolic Execution with Large Language Models

Researchers have developed SymGPT, a new tool that combines large language models with symbolic execution to automatically audit smart contracts for ERC rule violations. The tool identified 5,783 violations in 4,000 real-world contracts, including 1,375 with clear attack paths for financial theft, outperforming existing automated analysis methods.

$ETH
AI · Bullish · arXiv – CS AI · Mar 37/103
🧠

GEM: A Gym for Agentic LLMs

Researchers introduced GEM (General Experience Maker), an open-source environment simulator designed for training large language models through experience-based learning rather than static datasets. The framework provides a standardized interface similar to OpenAI-Gym but specifically optimized for LLMs, featuring diverse environments, integrated tools, and compatibility with popular RL training frameworks.
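A Gym-style interface for language agents just swaps arrays for strings: `reset()` returns a prompt, `step()` takes the model's text action and returns observation, reward, done, and info. The class and method names below are a hypothetical minimal sketch of that interface shape, not GEM's actual API.

```python
class TextEnv:
    """Minimal Gym-style environment whose observations and actions are
    strings, illustrating the interface pattern GEM standardizes for
    agentic LLMs (hypothetical names; GEM's real API may differ)."""

    def __init__(self, target: str, max_turns: int = 5):
        self.target = target.lower()
        self.max_turns = max_turns
        self.turns = 0

    def reset(self) -> str:
        """Start an episode and return the initial observation (a prompt)."""
        self.turns = 0
        return "Guess the secret word."

    def step(self, action: str):
        """Return (observation, reward, done, info), as in OpenAI Gym."""
        self.turns += 1
        correct = action.strip().lower() == self.target
        done = correct or self.turns >= self.max_turns
        reward = 1.0 if correct else 0.0
        obs = "Correct!" if correct else "Wrong, try again."
        return obs, reward, done, {"turns": self.turns}
```

An RL training loop then interacts with the environment exactly as it would with a classic Gym task, except the policy is an LLM emitting text.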

$MKR
AI · Bullish · arXiv – CS AI · Mar 37/104
🧠

Rewriting Pre-Training Data Boosts LLM Performance in Math and Code

Researchers released two open-source datasets, SwallowCode and SwallowMath, that significantly improve large language model performance in coding and mathematics through systematic data rewriting rather than filtering. The datasets boost Llama-3.1-8B performance by +17.0 on HumanEval for coding and +12.4 on GSM8K for math tasks.

AI · Neutral · arXiv – CS AI · Mar 37/103
🧠

When Agents "Misremember" Collectively: Exploring the Mandela Effect in LLM-based Multi-Agent Systems

Researchers have identified and studied the 'Mandela effect' in AI multi-agent systems, where groups of AI agents collectively develop false memories or misremember information. The study introduces MANBENCH, a benchmark to evaluate this phenomenon, and proposes mitigation strategies that achieved a 74.40% reduction in false collective memories.

AI · Bullish · arXiv – CS AI · Mar 37/102
🧠

The FM Agent

Researchers have developed FM Agent, a multi-agent AI framework that combines large language models with evolutionary search to autonomously solve complex research problems. The system achieved state-of-the-art results across multiple domains including operations research, machine learning, and GPU optimization without human intervention.

AI · Neutral · arXiv – CS AI · Mar 37/105
🧠

DAG-Math: Graph-of-Thought Guided Mathematical Reasoning in LLMs

Researchers introduce DAG-Math, a new framework for evaluating mathematical reasoning in Large Language Models that models Chain-of-Thought as rule-based processes over directed acyclic graphs. The framework includes a 'logical closeness' metric that reveals significant differences in reasoning quality between LLM families, even when final answer accuracy appears comparable.
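The DAG view of a reasoning chain can be made concrete: each step is a conclusion derived from earlier statements, and a chain is well-formed when every premise a step cites was already established. The toy metric below (the function name and the exact scoring are assumptions; the paper's 'logical closeness' is defined differently) scores the fraction of steps whose premises are all grounded.

```python
def logical_closeness(steps):
    """steps: list of (conclusion, premises) pairs in the order the model
    emitted them. A step is 'grounded' if every premise it cites was the
    conclusion of an earlier step -- i.e. the chain respects a directed
    acyclic graph over statements. Returns the grounded fraction (a toy
    proxy for DAG-Math's metric)."""
    derived = set()
    grounded = 0
    for conclusion, premises in steps:
        if all(p in derived for p in premises):
            grounded += 1
        derived.add(conclusion)
    return grounded / len(steps) if steps else 1.0
```

A model can reach the right final answer while citing facts it never derived; a metric like this separates those two failure modes, which final-answer accuracy conflates.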

AI · Bullish · arXiv – CS AI · Mar 37/103
🧠

AceGRPO: Adaptive Curriculum Enhanced Group Relative Policy Optimization for Autonomous Machine Learning Engineering

Researchers introduce AceGRPO, a new reinforcement learning framework for Autonomous Machine Learning Engineering that addresses behavioral stagnation in current LLM-based agents. The Ace-30B model trained with this method achieves 100% valid submission rate on MLE-Bench-Lite and matches performance of proprietary frontier models while outperforming larger open-source alternatives.

AI · Bullish · arXiv – CS AI · Mar 37/104
🧠

A Convergence Analysis of Adaptive Optimizers under Floating-point Quantization

Researchers introduce the first theoretical framework analyzing convergence of adaptive optimizers like Adam and Muon under floating-point quantization in low-precision training. The study shows these algorithms maintain near full-precision performance when mantissa length scales logarithmically with iterations, with Muon proving more robust than Adam to quantization errors.
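The mechanism behind the mantissa-length condition is easy to demonstrate: the relative rounding error of a floating-point value is bounded by roughly 2^-m for an m-bit mantissa, so modest mantissa growth buys exponential error reduction. A minimal sketch of mantissa-only quantization (not the paper's analysis framework):

```python
import math

def quantize(x, mantissa_bits):
    """Round x to a float with the given mantissa length, mimicking
    low-precision storage (sign and exponent kept exact)."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)          # x = m * 2**e, with 0.5 <= |m| < 1
    scale = 2 ** mantissa_bits
    return math.ldexp(round(m * scale) / scale, e)

def rel_err(x, bits):
    """Relative rounding error introduced by an `bits`-bit mantissa."""
    return abs(quantize(x, bits) - x) / abs(x)
```

For example, quantizing pi to 8, 16, and 23 mantissa bits gives strictly shrinking relative error, which is why a mantissa that grows only logarithmically with the iteration count can keep accumulated optimizer error near full-precision levels.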

AI · Neutral · arXiv – CS AI · Mar 37/104
🧠

Not All Models Suit Expert Offloading: On Local Routing Consistency of Mixture-of-Expert Models

Researchers analyzed 20 Mixture-of-Experts (MoE) language models to study local routing consistency, finding a trade-off between routing consistency and local load balance. The study introduces new metrics to measure how well expert offloading strategies can optimize memory usage on resource-constrained devices while maintaining inference speed.
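Why routing consistency matters for offloading: if consecutive tokens route to overlapping expert sets, the experts already resident in fast memory keep getting hits. A toy proxy for that locality is the mean Jaccard overlap between neighboring tokens' expert sets (the function name and this exact metric are illustrative assumptions; the paper defines its own measures):

```python
def local_routing_consistency(expert_sets):
    """expert_sets: one set of routed expert ids per token. Returns the
    mean Jaccard overlap between each token's experts and the next
    token's. High overlap means offloaded experts can stay cached in
    fast memory across consecutive decoding steps."""
    if len(expert_sets) < 2:
        return 1.0
    overlaps = [len(a & b) / len(a | b)
                for a, b in zip(expert_sets, expert_sets[1:])]
    return sum(overlaps) / len(overlaps)
```

A model that balances load globally may still route erratically token-to-token, which is exactly the trade-off the study reports.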

AI · Bullish · arXiv – CS AI · Mar 37/104
🧠

Distribution-Aligned Decoding for Efficient LLM Task Adaptation

Researchers introduce SVDecode, a new method for adapting large language models to specific tasks without extensive fine-tuning. The technique uses steering vectors during decoding to align output distributions with task requirements, improving accuracy by up to 5 percentage points while adding minimal computational overhead.
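The decoding-time mechanism can be sketched simply: add a scaled task vector to the logits before the softmax, which tilts the next-token distribution toward the task without touching the weights. This is a minimal illustration of steering-vector decoding in general; SVDecode's actual vector construction is more involved.

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [v / total for v in exps]

def steered_decode_step(logits, steering, alpha=1.0):
    """Shift the next-token distribution toward a task by adding a
    scaled steering vector to the logits before softmax -- a generic
    sketch of steering-vector decoding, not SVDecode's exact method."""
    return softmax([l + alpha * s for l, s in zip(logits, steering)])
```

Because the intervention happens per decoding step, it adds only a vector addition per token, consistent with the minimal-overhead claim.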

AI · Bullish · arXiv – CS AI · Mar 37/103
🧠

FROGENT: An End-to-End Full-process Drug Design Multi-Agent System

Researchers have developed FROGENT, an AI multi-agent system that uses large language models to automate the entire drug discovery pipeline from target identification to synthesis planning. The system outperformed existing AI approaches across eight benchmarks and demonstrated practical applications in real-world drug design scenarios.

AI · Bullish · arXiv – CS AI · Mar 37/102
🧠

Reasoning on Time-Series for Financial Technical Analysis

Researchers introduce Verbal Technical Analysis (VTA), a framework that combines Large Language Models with time-series analysis to produce interpretable stock forecasts. The system converts stock price data into textual annotations and uses natural language reasoning to achieve state-of-the-art forecasting accuracy across U.S., Chinese, and European markets.
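The text-conversion step can be illustrated with a toy annotator that turns a closing-price series into a sentence an LLM can reason over (the function name and output format are assumptions; VTA's annotations also encode technical indicators):

```python
def annotate_prices(prices):
    """Render a closing-price series as a short textual annotation --
    a toy version of the series-to-text conversion VTA performs."""
    moves = []
    for prev, cur in zip(prices, prices[1:]):
        pct = 100.0 * (cur - prev) / prev
        direction = "up" if pct > 0 else "down" if pct < 0 else "flat"
        moves.append(f"{direction} {abs(pct):.1f}%")
    trend = "uptrend" if prices[-1] > prices[0] else "downtrend"
    return f"Daily moves: {', '.join(moves)}. Overall {trend}."
```

Once the series is text, the forecasting step becomes a natural-language reasoning problem, which is what makes the resulting forecasts interpretable.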

AI · Bullish · arXiv – CS AI · Mar 37/104
🧠

Train Once, Answer All: Many Pretraining Experiments for the Cost of One

Researchers developed a method to conduct multiple AI training experiments simultaneously within a single pretraining run, reducing computational costs while maintaining research validity. The approach was validated across ten experiments using models up to 2.7B parameters trained on 210B tokens, with minimal impact on training dynamics.

AI · Bullish · arXiv – CS AI · Mar 37/103
🧠

LongWriter-Zero: Mastering Ultra-Long Text Generation via Reinforcement Learning

Researchers introduce LongWriter-Zero, a reinforcement learning approach that enables large language models to generate ultra-long, high-quality text without relying on synthetic training data. The 32B parameter model outperforms traditional supervised fine-tuning methods and even surpasses larger 100B+ models on long-form writing benchmarks.

AI · Bullish · arXiv – CS AI · Mar 37/103
🧠

Cache What Lasts: Token Retention for Memory-Bounded KV Cache in LLMs

Researchers propose TRIM-KV, a novel approach that learns token importance for memory-bounded LLM inference through lightweight retention gates, addressing the quadratic cost of self-attention and a key-value cache that grows with context length. The method outperforms existing eviction baselines across multiple benchmarks, and the learned retention scores offer a window into LLM interpretability.
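Score-based cache eviction reduces to a simple operation: keep the budget's worth of highest-scoring past tokens and drop the rest. The sketch below assumes the retention scores are already given (TRIM-KV learns them with lightweight gates); the function name is illustrative.

```python
def evict_kv(cache, scores, budget):
    """cache: list of (key, value) pairs, one per past token.
    scores: a retention score per token (learned in TRIM-KV; here
    supplied directly). Keep the `budget` highest-scoring tokens,
    preserving their original order so positions stay meaningful."""
    if len(cache) <= budget:
        return cache
    keep = sorted(range(len(cache)),
                  key=lambda i: scores[i], reverse=True)[:budget]
    return [cache[i] for i in sorted(keep)]
```

With a fixed budget, attention cost per step stays bounded regardless of how long the context grows; the quality question is entirely about how well the scores predict which tokens will matter later.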

AI · Bullish · arXiv – CS AI · Mar 37/104
🧠

Scaling with Collapse: Efficient and Predictable Training of LLM Families

Researchers demonstrate that training loss curves for large language models can collapse onto universal trajectories when hyperparameters are optimally set, enabling more efficient LLM training. They introduce Celerity, a competitive LLM family developed using these insights, and show that deviation from collapse can serve as an early diagnostic for training issues.

AI · Bullish · arXiv – CS AI · Mar 37/102
🧠

Model Collapse Is Not a Bug but a Feature in Machine Unlearning for LLMs

Researchers propose Partial Model Collapse (PMC), a novel machine unlearning method for large language models that removes private information without directly training on sensitive data. The approach leverages model collapse - where models degrade when trained on their own outputs - as a feature to deliberately forget targeted information while preserving general utility.