y0news

#llm-optimization News & Analysis

45 articles tagged with #llm-optimization. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bullish · arXiv – CS AI · 1d ago · 7/10

Transferable Expertise for Autonomous Agents via Real-World Case-Based Learning

Researchers propose a case-based learning framework enabling LLM-based autonomous agents to extract and reuse knowledge from past tasks, improving performance on complex real-world problems. The method outperforms traditional zero-shot, few-shot, and prompt-based baselines across six task categories, with gains increasing as task complexity rises.

AI · Bullish · arXiv – CS AI · 1d ago · 7/10

AdaMCoT: Rethinking Cross-Lingual Factual Reasoning through Adaptive Multilingual Chain-of-Thought

Researchers introduce AdaMCoT, a framework that improves multilingual reasoning in large language models by dynamically routing intermediate thoughts through optimal 'thinking languages' before generating target-language responses. The approach achieves significant performance gains in low-resource languages without requiring additional pretraining, addressing a key limitation in current multilingual AI systems.

AI · Bullish · arXiv – CS AI · 1d ago · 7/10

OSC: Hardware Efficient W4A4 Quantization via Outlier Separation in Channel Dimension

Researchers present OSC, a hardware-efficient framework that addresses the challenge of deploying Large Language Models with 4-bit quantization by intelligently separating activation outliers into a high-precision processing path while maintaining low-precision computation for standard values. The technique achieves 1.78x speedup over standard 8-bit approaches while limiting accuracy degradation to under 2.2% on state-of-the-art models.

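The outlier-separation idea in the OSC summary can be illustrated with a toy sketch (this is not OSC's actual kernel; the top-k channel selection, per-tensor scaling, and function names here are my assumptions):

```python
import numpy as np

def quantize_sym(x, bits=4):
    """Symmetric quantization to signed `bits`-bit integers with one scale."""
    qmax = 2 ** (bits - 1) - 1                      # 7 for 4-bit
    scale = float(np.max(np.abs(x))) / qmax
    if scale == 0.0:
        scale = 1.0
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def split_outlier_channels(acts, k=2):
    """Route the k largest-magnitude channels through a high-precision path;
    quantize the remaining channels to 4 bits."""
    mags = np.max(np.abs(acts), axis=0)             # per-channel peak magnitude
    outlier_idx = np.argsort(mags)[-k:]             # top-k outlier channels
    normal_idx = np.setdiff1d(np.arange(acts.shape[1]), outlier_idx)
    q, scale = quantize_sym(acts[:, normal_idx])
    recon = np.empty_like(acts)
    recon[:, normal_idx] = q.astype(np.float32) * scale   # low-precision path
    recon[:, outlier_idx] = acts[:, outlier_idx]          # kept at full precision
    return recon, outlier_idx
```

Because outliers no longer stretch the quantization range, the remaining channels use the 4-bit grid far more effectively.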
AI · Bullish · arXiv – CS AI · 2d ago · 7/10

ExecTune: Effective Steering of Black-Box LLMs with Guide Models

Researchers introduce ExecTune, a training methodology for optimizing black-box LLM systems where a guide model generates strategies executed by a core model. The approach improves accuracy by up to 9.2% while reducing inference costs by 22.4%, enabling smaller models like Claude Haiku to match larger competitors at significantly lower computational expense.

🧠 Claude · 🧠 Haiku · 🧠 Sonnet
AI · Bullish · arXiv – CS AI · 2d ago · 7/10

Think in Sentences: Explicit Sentence Boundaries Enhance Language Model's Capabilities

Researchers demonstrate that inserting sentence boundary delimiters in LLM inputs significantly enhances model performance across reasoning tasks, with improvements up to 12.5% on specific benchmarks. This technique leverages the natural sentence-level structure of human language to enable better processing during inference, tested across model scales from 7B to 600B parameters.

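The delimiter-insertion technique described above is simple to sketch (the `<sent>` marker and the naive regex splitter are my placeholders; the paper's actual boundary token and segmentation method may differ):

```python
import re

SENT_DELIM = "<sent>"  # hypothetical boundary marker

def mark_sentence_boundaries(text, delim=SENT_DELIM):
    """Insert an explicit delimiter between sentences so the model sees
    sentence-level structure directly at inference time."""
    # Naive splitter: a sentence ends at ., ! or ? followed by whitespace.
    parts = re.split(r"(?<=[.!?])\s+", text.strip())
    return f" {delim} ".join(parts)
```

For example, a three-sentence input gains two boundary markers, one between each adjacent sentence pair.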
AI · Bullish · arXiv – CS AI · 2d ago · 7/10

Disco-RAG: Discourse-Aware Retrieval-Augmented Generation

Researchers introduce Disco-RAG, a discourse-aware framework that enhances Retrieval-Augmented Generation (RAG) systems by explicitly modeling discourse structures and rhetorical relationships between retrieved passages. The method achieves state-of-the-art results on question answering and summarization tasks without fine-tuning, demonstrating that structural understanding of text significantly improves LLM performance on knowledge-intensive tasks.

AI · Bullish · arXiv – CS AI · 6d ago · 7/10

MoBiE: Efficient Inference of Mixture of Binary Experts under Post-Training Quantization

Researchers introduce MoBiE, a novel binarization framework designed specifically for Mixture-of-Experts large language models that achieves significant efficiency gains through weight compression while maintaining model performance. The method addresses unique challenges in quantizing MoE architectures and demonstrates over 2× inference speedup with substantial perplexity reductions on benchmark models.

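Weight binarization of the kind MoBiE builds on can be sketched in its classic sign-plus-scale form (this is the generic 1-bit scheme, not MoBiE's MoE-specific method; function names are mine):

```python
import numpy as np

def binarize_rows(w):
    """Binarize a weight matrix row-wise: each row becomes sign(w) times a
    scalar alpha = mean(|w|), the L1-optimal scale for 1-bit weights."""
    alpha = np.mean(np.abs(w), axis=1, keepdims=True)   # (rows, 1) scales
    b = np.where(w >= 0, 1.0, -1.0)                     # 1-bit sign matrix
    return b, alpha

def binary_linear(x, b, alpha):
    """Dequantized matmul x @ (alpha * b).T; real kernels pack the signs into
    bits and replace the multiply with XNOR/popcount operations."""
    return x @ (alpha * b).T
```

Each expert's weights shrink to one bit per value plus one scale per row, which is where the compression and speedup come from.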
๐Ÿข Perplexity
AI · Bullish · arXiv – CS AI · 6d ago · 7/10

AgentOpt v0.1 Technical Report: Client-Side Optimization for LLM-Based Agent

AgentOpt v0.1, a new Python framework, addresses client-side optimization for AI agents by intelligently allocating models, tools, and API budgets across pipeline stages. Using search algorithms like Arm Elimination and Bayesian Optimization, the tool reduces evaluation costs by 24-67% while achieving near-optimal accuracy, with cost differences between model combinations reaching up to 32x at matched performance levels.

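Arm Elimination, one of the search algorithms the AgentOpt summary names, can be sketched as successive halving over candidate model/tool configurations (the halving schedule and batch size here are my simplification, not AgentOpt's actual implementation):

```python
def arm_elimination(arms, evaluate, samples_per_round=8):
    """Successive arm elimination: score every surviving configuration on a
    small evaluation batch, drop the worse half, repeat until one remains.
    Budget concentrates on the leading arms instead of evaluating all arms
    exhaustively."""
    survivors = list(arms)
    while len(survivors) > 1:
        scores = {
            a: sum(evaluate(a) for _ in range(samples_per_round)) / samples_per_round
            for a in survivors
        }
        survivors.sort(key=lambda a: scores[a], reverse=True)
        survivors = survivors[: max(1, len(survivors) // 2)]
    return survivors[0]
```

Halving the candidate pool each round is what yields the reported evaluation-cost savings relative to scoring every configuration on the full benchmark.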
AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

StatePlane: A Cognitive State Plane for Long-Horizon AI Systems Under Bounded Context

Researchers introduce StatePlane, a model-agnostic cognitive state management system that enables AI systems to maintain coherent reasoning over long interaction horizons without expanding context windows or retraining models. The system uses episodic, semantic, and procedural memory mechanisms inspired by cognitive psychology to overcome current limitations in large language models.

AI · Bullish · arXiv – CS AI · Mar 4 · 7/10 · 4

You Only Fine-tune Once: Many-Shot In-Context Fine-Tuning for Large Language Models

Researchers propose Many-Shot In-Context Fine-tuning (ManyICL), a novel approach that significantly improves large language model performance by treating multiple in-context examples as supervised training targets rather than just prompts. The method narrows the performance gap between in-context learning and dedicated fine-tuning while reducing catastrophic forgetting issues.

AI · Bullish · arXiv – CS AI · Mar 4 · 6/10 · 2

ScaleDoc: Scaling LLM-based Predicates over Large Document Collections

ScaleDoc is a new system that enables efficient semantic analysis of large document collections using LLMs by combining offline document representation with lightweight online filtering. The system achieves 2x speedup and reduces expensive LLM calls by up to 85% through contrastive learning and adaptive cascade mechanisms.

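The adaptive-cascade mechanism behind ScaleDoc's LLM-call savings can be sketched as a confidence-banded filter (the band thresholds and function names are my assumptions, not ScaleDoc's API):

```python
def cascade_filter(docs, cheap_score, llm_predicate, lo=0.2, hi=0.8):
    """Adaptive cascade: a cheap offline score decides confident accepts and
    rejects; only documents in the uncertain middle band trigger an
    expensive LLM call."""
    accepted, llm_calls = [], 0
    for doc in docs:
        s = cheap_score(doc)
        if s >= hi:                    # confident accept, no LLM call
            accepted.append(doc)
        elif s > lo:                   # uncertain band: fall back to the LLM
            llm_calls += 1
            if llm_predicate(doc):
                accepted.append(doc)
        # s <= lo: confident reject, no LLM call
    return accepted, llm_calls
```

The fraction of documents landing in the uncertain band bounds the LLM cost, which is how a well-calibrated cheap scorer can eliminate most expensive calls.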
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 5

Arbor: A Framework for Reliable Navigation of Critical Conversation Flows

Researchers introduce Arbor, a framework that decomposes large language model decision-making into specialized node-level tasks for critical applications like healthcare triage. The system improves accuracy by 29.4 percentage points while reducing latency by 57.1% and costs by 14.4x compared to single-prompt approaches.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4

Bridging Draft Policy Misalignment: Group Tree Optimization for Speculative Decoding

Researchers introduce Group Tree Optimization (GTO), a new training method that improves speculative decoding for large language models by aligning draft model training with actual decoding policies. GTO achieves 7.4% better acceptance length and 7.7% additional speedup over existing state-of-the-art methods across multiple benchmarks and LLMs.

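The acceptance-length metric GTO improves can be seen in a minimal greedy draft-and-verify step (a toy sketch of speculative decoding in general, not of GTO's tree-based training; the deterministic next-token functions are stand-ins for real models):

```python
def speculative_step(prefix, draft_next, target_next, k=4):
    """One speculative-decoding step: the draft model proposes k greedy
    tokens, the target model verifies them in order, and the accepted
    prefix plus one target-model token is returned together with the
    acceptance length."""
    proposal, ctx = [], list(prefix)
    for _ in range(k):
        t = draft_next(tuple(ctx))
        proposal.append(t)
        ctx.append(t)
    accepted, ctx = [], list(prefix)
    for t in proposal:
        if target_next(tuple(ctx)) == t:   # target agrees: accept draft token
            accepted.append(t)
            ctx.append(t)
        else:
            break                          # first mismatch ends the run
    ctx.append(target_next(tuple(ctx)))    # correction token from the target
    return ctx, len(accepted)
```

The longer the accepted run, the more target-model forward passes are amortized per step, which is why GTO's 7.4% longer acceptance translates directly into extra speedup.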
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3

FreeKV: Boosting KV Cache Retrieval for Efficient LLM Inference

Researchers introduce FreeKV, a training-free optimization framework that dramatically improves KV cache retrieval efficiency for large language models with long context windows. The system achieves up to 13x speedup compared to existing methods while maintaining near-lossless accuracy through speculative retrieval and hybrid memory layouts.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 5

Towards Autonomous Memory Agents

Researchers introduce U-Mem, an autonomous memory agent system that actively acquires and validates knowledge for large language models. The system uses cost-aware knowledge extraction and semantic Thompson sampling to improve performance, showing significant gains on benchmarks like HotpotQA and AIME25.

AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Leveraging Weighted Syntactic and Semantic Context Assessment Summary (wSSAS) Towards Text Categorization Using LLMs

Researchers introduce wSSAS, a deterministic framework that enhances Large Language Model text categorization by combining hierarchical classification with signal-to-noise filtering to improve accuracy and reproducibility. Testing across Google Business, Amazon Product, and Goodreads reviews demonstrates significant improvements in clustering integrity and reduced categorization entropy.

🧠 Gemini
AI · Neutral · arXiv – CS AI · 1d ago · 6/10

Local-Splitter: A Measurement Study of Seven Tactics for Reducing Cloud LLM Token Usage on Coding-Agent Workloads

Researchers present a systematic study of seven tactics for reducing cloud LLM token consumption in coding-agent workloads, demonstrating that local routing combined with prompt compression can achieve 45-79% token savings on certain tasks. The open-source implementation reveals that optimal cost-reduction strategies vary significantly by workload type, offering practical guidance for developers deploying AI coding agents at scale.

๐Ÿข OpenAI
AIBullisharXiv โ€“ CS AI ยท 2d ago6/10
๐Ÿง 

TInR: Exploring Tool-Internalized Reasoning in Large Language Models

Researchers propose Tool-Internalized Reasoning (TInR), a framework that embeds tool knowledge directly into Large Language Models rather than relying on external tool documentation during reasoning. The TInR-U model uses a three-phase training pipeline combining knowledge alignment, supervised fine-tuning, and reinforcement learning to improve reasoning efficiency and performance across various tasks.

AI · Bullish · arXiv – CS AI · 2d ago · 6/10

Advancing Polish Language Modeling through Tokenizer Optimization in the Bielik v3 7B and 11B Series

Researchers have optimized the Bielik v3 language models (7B and 11B parameters) by replacing universal tokenizers with Polish-specific vocabulary, addressing inefficiencies in morphological representation. This optimization reduces token fertility, lowers inference costs, and expands effective context windows while maintaining multilingual capabilities through advanced training techniques including supervised fine-tuning and reinforcement learning.

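Token fertility, the metric the Bielik tokenizer work optimizes, is straightforward to compute (the chunk-based stand-in tokenizer below is mine; a real comparison would use the actual universal and Polish-specific vocabularies):

```python
def token_fertility(words, tokenize):
    """Fertility = average number of subword tokens per word; lower means the
    vocabulary covers the language's morphology more efficiently."""
    return sum(len(tokenize(w)) for w in words) / len(words)

def chunk_tokenize(word, size):
    """Stand-in subword tokenizer: fixed-size character chunks (real BPE
    merges are learned, not fixed-width)."""
    return [word[i:i + size] for i in range(0, len(word), size)]
```

A vocabulary that captures whole Polish morphemes behaves like the larger chunk size here: fewer tokens per word, hence fewer tokens billed per request and more text fitting in a fixed context window.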
AI · Bullish · arXiv – CS AI · 2d ago · 6/10

The Past Is Not Past: Memory-Enhanced Dynamic Reward Shaping

Researchers introduce MEDS, a memory-enhanced reward shaping framework that addresses a critical reinforcement learning failure mode where language models repeatedly generate similar errors. By tracking historical behavioral patterns and penalizing recurring mistake clusters, the method achieves consistent performance improvements across multiple datasets and models while increasing sampling diversity.

AI · Bullish · arXiv – CS AI · 2d ago · 6/10

Low-rank Optimization Trajectories Modeling for LLM RLVR Acceleration

Researchers propose NExt, a nonlinear extrapolation framework that accelerates reinforcement learning with verifiable rewards (RLVR) for large language models by modeling low-rank parameter trajectories. The method reduces computational overhead by approximately 37.5% while remaining compatible with various RLVR algorithms, addressing a key bottleneck in scaling LLM training.

AI · Neutral · arXiv – CS AI · 2d ago · 6/10

Characterizing Performance-Energy Trade-offs of Large Language Models in Multi-Request Workflows

Researchers present the first systematic study of performance-energy trade-offs in multi-request LLM inference workflows, using NVIDIA A100 GPUs and vLLM/Parrot serving systems. The study identifies batch size as the most impactful optimization lever, though effectiveness varies by workload type, and reveals that workflow-aware scheduling can reduce energy consumption under power constraints.

๐Ÿข Nvidia
AIBullisharXiv โ€“ CS AI ยท 2d ago6/10
๐Ÿง 

An Iterative Utility Judgment Framework Inspired by Philosophical Relevance via LLMs

Researchers propose ITEM, an iterative utility judgment framework that enhances retrieval-augmented generation (RAG) systems by aligning with philosophical principles of relevance. The framework improves how large language models prioritize and process information from retrieval results, demonstrating measurable improvements across multiple benchmarks in ranking, utility assessment, and answer generation.

Page 1 of 2