45 articles tagged with #llm-optimization. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bullish · arXiv – CS AI · 1d ago · 7/10
🧠 Researchers propose a case-based learning framework enabling LLM-based autonomous agents to extract and reuse knowledge from past tasks, improving performance on complex real-world problems. The method outperforms traditional zero-shot, few-shot, and prompt-based baselines across six task categories, with gains increasing as task complexity rises.
AI · Bullish · arXiv – CS AI · 1d ago · 7/10
🧠 Researchers introduce AdaMCoT, a framework that improves multilingual reasoning in large language models by dynamically routing intermediate thoughts through optimal 'thinking languages' before generating target-language responses. The approach achieves significant performance gains in low-resource languages without requiring additional pretraining, addressing a key limitation in current multilingual AI systems.
AI · Bullish · arXiv – CS AI · 1d ago · 7/10
🧠 Researchers present OSC, a hardware-efficient framework that addresses the challenge of deploying Large Language Models with 4-bit quantization by intelligently separating activation outliers into a high-precision processing path while maintaining low-precision computation for standard values. The technique achieves 1.78x speedup over standard 8-bit approaches while limiting accuracy degradation to under 2.2% on state-of-the-art models.
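The outlier-separation idea can be sketched in a few lines: large-magnitude activations bypass quantization on a high-precision path, while the remaining values are quantized to 4 bits. This is a generic illustration under assumed thresholds and a symmetric 4-bit scheme, not the paper's OSC implementation.

```python
# Generic outlier-separated quantization sketch (assumptions: symmetric
# 4-bit codes in [-8, 7], a fixed magnitude threshold for outliers).

def quantize_4bit(values, scale):
    """Map floats to integer codes in the 4-bit range [-8, 7]."""
    return [max(-8, min(7, round(v / scale))) for v in values]

def dequantize_4bit(codes, scale):
    return [c * scale for c in codes]

def split_outliers(activations, threshold):
    """Route large-magnitude activations to a full-precision side path;
    quantize the remaining (inlier) values to 4 bits."""
    outliers = {i: v for i, v in enumerate(activations) if abs(v) > threshold}
    inliers = [0.0 if i in outliers else v for i, v in enumerate(activations)]
    scale = threshold / 7  # inliers fit the 4-bit range by construction
    codes = quantize_4bit(inliers, scale)
    return outliers, codes, scale

def reconstruct(outliers, codes, scale):
    out = dequantize_4bit(codes, scale)
    for i, v in outliers.items():  # outliers pass through losslessly
        out[i] = v
    return out

acts = [0.1, -0.3, 12.5, 0.05, -9.8, 0.2]
outliers, codes, scale = split_outliers(acts, threshold=1.0)
recon = reconstruct(outliers, codes, scale)
```

The outliers are reproduced exactly, and every quantized inlier lands within half a quantization step of its original value.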
AI · Bullish · arXiv – CS AI · 2d ago · 7/10
🧠 Researchers introduce ExecTune, a training methodology for optimizing black-box LLM systems where a guide model generates strategies executed by a core model. The approach improves accuracy by up to 9.2% while reducing inference costs by 22.4%, enabling smaller models like Claude Haiku to match larger competitors at significantly lower computational expense.
🧠 Claude · 🧠 Haiku · 🧠 Sonnet
AI · Bullish · arXiv – CS AI · 2d ago · 7/10
🧠 Researchers demonstrate that inserting sentence boundary delimiters in LLM inputs significantly enhances model performance across reasoning tasks, with improvements up to 12.5% on specific benchmarks. This technique leverages the natural sentence-level structure of human language to enable better processing during inference, tested across model scales from 7B to 600B parameters.
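The preprocessing step itself is simple enough to sketch: split the input at sentence boundaries and rejoin with an explicit marker token. The `<sent>` token and the regex-based splitter below are assumptions for illustration, not the paper's exact setup.

```python
# Sketch: make sentence structure explicit in the prompt by inserting a
# boundary token (DELIM is a hypothetical marker, not from the paper).
import re

DELIM = "<sent>"

def add_sentence_delimiters(text):
    """Split on sentence-ending punctuation followed by whitespace and
    rejoin with an explicit boundary token."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return f" {DELIM} ".join(sentences)

prompt = "The cache is cold. Latency spikes follow. Prewarm it at deploy time."
marked = add_sentence_delimiters(prompt)
```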
AI · Bullish · arXiv – CS AI · 2d ago · 7/10
🧠 Researchers introduce Disco-RAG, a discourse-aware framework that enhances Retrieval-Augmented Generation (RAG) systems by explicitly modeling discourse structures and rhetorical relationships between retrieved passages. The method achieves state-of-the-art results on question answering and summarization tasks without fine-tuning, demonstrating that structural understanding of text significantly improves LLM performance on knowledge-intensive tasks.
AI · Bullish · arXiv – CS AI · 6d ago · 7/10
🧠 Researchers introduce MoBiE, a novel binarization framework designed specifically for Mixture-of-Experts large language models that achieves significant efficiency gains through weight compression while maintaining model performance. The method addresses unique challenges in quantizing MoE architectures and demonstrates over 2× inference speedup with substantial perplexity reductions on benchmark models.
🏢 Perplexity
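The core of any weight-binarization scheme is replacing each float row with a single sign bit per weight plus one shared scale. The sketch below shows the standard per-row sign-and-scale code (the L1-optimal 1-bit approximation); it is generic machinery in the spirit of binarized compression, not MoBiE's actual method.

```python
# Generic 1-bit weight binarization sketch: w ≈ alpha * sign(w), with
# alpha the mean absolute value of the row (L1-optimal choice of scale).

def binarize_row(weights):
    alpha = sum(abs(w) for w in weights) / len(weights)
    signs = [1 if w >= 0 else -1 for w in weights]
    return alpha, signs

def debinarize(alpha, signs):
    """Reconstruct the approximate row from the scale and sign bits."""
    return [alpha * s for s in signs]

row = [0.5, -0.25, 0.75, -1.0]
alpha, signs = binarize_row(row)
approx = debinarize(alpha, signs)
```

Storage drops from one float per weight to one bit per weight plus a single scale per row, which is where the inference speedup of binarized experts comes from.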
AI · Bullish · arXiv – CS AI · 6d ago · 7/10
🧠 AgentOpt v0.1, a new Python framework, addresses client-side optimization for AI agents by intelligently allocating models, tools, and API budgets across pipeline stages. Using search algorithms like Arm Elimination and Bayesian Optimization, the tool reduces evaluation costs by 24-67% while achieving near-optimal accuracy, with cost differences between model combinations reaching up to 32x at matched performance levels.
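Arm elimination is the simpler of the two search strategies mentioned: treat each candidate configuration as a bandit arm, evaluate all survivors a few times per round, and drop the weaker half each round. The configs and noisy scoring stub below are invented for illustration; AgentOpt's actual API will differ.

```python
# Hedged sketch of arm elimination over agent configurations: halve the
# candidate pool each round so weak configs stop consuming evaluation budget.
import random

def evaluate(config, rng):
    """Stand-in for one noisy end-to-end pipeline evaluation."""
    return config["true_quality"] + rng.uniform(-0.05, 0.05)

def arm_elimination(configs, evals_per_round=4, seed=0):
    rng = random.Random(seed)
    arms = list(configs)
    total_evals = 0
    while len(arms) > 1:
        scores = []
        for arm in arms:
            s = sum(evaluate(arm, rng) for _ in range(evals_per_round))
            total_evals += evals_per_round
            scores.append((s, arm))
        scores.sort(reverse=True, key=lambda t: t[0])  # best arms first
        arms = [arm for _, arm in scores[: max(1, len(arms) // 2)]]
    return arms[0], total_evals

configs = [{"name": f"cfg{i}", "true_quality": q}
           for i, q in enumerate([0.55, 0.70, 0.62, 0.90, 0.58, 0.65, 0.60, 0.72])]
best, spent = arm_elimination(configs)
```

With 8 arms this spends 32 + 16 + 8 = 56 evaluations instead of scoring every configuration to the same depth, which is the flavor of cost reduction the summary describes.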
AI · Bullish · arXiv – CS AI · Mar 17 · 7/10
🧠 Researchers introduce StatePlane, a model-agnostic cognitive state management system that enables AI systems to maintain coherent reasoning over long interaction horizons without expanding context windows or retraining models. The system uses episodic, semantic, and procedural memory mechanisms inspired by cognitive psychology to overcome current limitations in large language models.
AI · Neutral · arXiv – CS AI · Mar 11 · 7/10
🧠 Research analyzes FP4 quantization sensitivity across different layers in large language models using NVFP4 and MXFP4 formats on Qwen2.5 models. The study finds MLP projection layers are most sensitive to quantization, while attention layers show substantial robustness to FP4 precision reduction.
AI · Bullish · arXiv – CS AI · Mar 9 · 7/10
🧠 Researchers propose Traversal-as-Policy, a method that distills AI agent execution logs into Gated Behavior Trees (GBTs) to create safer, more efficient autonomous agents. The approach significantly improves success rates while reducing safety violations and computational costs across multiple benchmarks.
AI · Bullish · arXiv – CS AI · Mar 4 · 7/10 · 4
🧠 Researchers propose Many-Shot In-Context Fine-tuning (ManyICL), a novel approach that significantly improves large language model performance by treating multiple in-context examples as supervised training targets rather than just prompts. The method narrows the performance gap between in-context learning and dedicated fine-tuning while reducing catastrophic forgetting issues.
AI · Bullish · arXiv – CS AI · Mar 4 · 6/10 · 2
🧠 ScaleDoc is a new system that enables efficient semantic analysis of large document collections using LLMs by combining offline document representation with lightweight online filtering. The system achieves 2x speedup and reduces expensive LLM calls by up to 85% through contrastive learning and adaptive cascade mechanisms.
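The cascade mechanism can be sketched independently of the learned representations: a cheap proxy score accepts or rejects documents when it is confident, and only the uncertain middle band pays for an LLM call. The proxy score, thresholds, and `llm_judge` stub below are illustrative assumptions, not ScaleDoc's components.

```python
# Sketch of a cheap-filter / LLM cascade: confident accepts and rejects
# skip the LLM entirely; only ambiguous documents escalate.

def proxy_score(doc):
    """Stand-in for a cheap offline-representation relevance score in [0, 1]."""
    return min(1.0, doc.count("refund") / 3)

def llm_judge(doc):
    """Stand-in for an expensive LLM relevance call."""
    return "refund" in doc

def cascade_filter(docs, low=0.2, high=0.8):
    kept, llm_calls = [], 0
    for doc in docs:
        s = proxy_score(doc)
        if s >= high:          # confident accept: no LLM call
            kept.append(doc)
        elif s <= low:         # confident reject: no LLM call
            continue
        else:                  # uncertain band: escalate to the LLM
            llm_calls += 1
            if llm_judge(doc):
                kept.append(doc)
    return kept, llm_calls

docs = [
    "refund refund refund policy",    # confident accept
    "shipping update",                # confident reject
    "refund requested by customer",   # uncertain -> LLM call
]
kept, llm_calls = cascade_filter(docs)
```

Only one of the three documents triggers an LLM call, which is the shape of the 85% call reduction the summary reports.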
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 5
🧠 Researchers introduce Arbor, a framework that decomposes large language model decision-making into specialized node-level tasks for critical applications like healthcare triage. The system improves accuracy by 29.4 percentage points while reducing latency by 57.1% and costs by 14.4x compared to single-prompt approaches.
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 4
🧠 Researchers introduce Group Tree Optimization (GTO), a new training method that improves speculative decoding for large language models by aligning draft model training with actual decoding policies. GTO achieves 7.4% better acceptance length and 7.7% additional speedup over existing state-of-the-art methods across multiple benchmarks and LLMs.
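For readers unfamiliar with the "acceptance length" metric: in speculative decoding, a small draft model proposes a block of tokens and the target model verifies them in one forward pass; the number of tokens accepted per verification determines the speedup. The greedy token-match rule below is a simplification of the real probabilistic acceptance rule.

```python
# Simplified sketch of speculative-decoding verification: accept the
# longest agreeing prefix, then emit the target's correction token.

def verify_draft(draft_tokens, target_tokens):
    accepted = []
    for d, t in zip(draft_tokens, target_tokens):
        if d == t:
            accepted.append(d)
        else:
            accepted.append(t)  # target's correction ends the block
            break
    return accepted

draft = ["the", "cat", "sat", "on", "a"]
target = ["the", "cat", "sat", "in", "the"]
accepted = verify_draft(draft, target)
acceptance_length = len(accepted)  # tokens gained per target forward pass
```

Here four tokens are produced for one target-model pass; methods like GTO train the draft model so this number rises on average.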
AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 3
🧠 Researchers introduce FreeKV, a training-free optimization framework that dramatically improves KV cache retrieval efficiency for large language models with long context windows. The system achieves up to 13x speedup compared to existing methods while maintaining near-lossless accuracy through speculative retrieval and hybrid memory layouts.
AI · Bullish · arXiv – CS AI · Feb 27 · 7/10 · 5
🧠 Researchers introduce U-Mem, an autonomous memory agent system that actively acquires and validates knowledge for large language models. The system uses cost-aware knowledge extraction and semantic Thompson sampling to improve performance, showing significant gains on benchmarks like HotpotQA and AIME25.
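Thompson sampling itself is standard bandit machinery: keep a Beta posterior per knowledge source, sample a plausible success rate from each, and query the source with the highest sample. The sketch below shows that loop; the sources and reward simulation are invented, and U-Mem's "semantic" variant adds structure this toy omits.

```python
# Beta-Bernoulli Thompson sampling over knowledge sources (generic
# bandit sketch, not U-Mem's implementation).
import random

class ThompsonSelector:
    def __init__(self, sources, seed=0):
        self.rng = random.Random(seed)
        self.stats = {s: [1, 1] for s in sources}  # Beta(successes+1, failures+1)

    def pick(self):
        """Sample a success rate per source; query the highest sample."""
        samples = {s: self.rng.betavariate(a, b)
                   for s, (a, b) in self.stats.items()}
        return max(samples, key=samples.get)

    def update(self, source, success):
        self.stats[source][0 if success else 1] += 1

sel = ThompsonSelector(["wiki", "web", "notes"])
# Simulated environment: "wiki" answers 90% of queries, the others 20%.
true_rate = {"wiki": 0.9, "web": 0.2, "notes": 0.2}
for _ in range(300):
    src = sel.pick()
    sel.update(src, sel.rng.random() < true_rate[src])
```

After a few hundred rounds the selector concentrates its queries on the most reliable source while still occasionally probing the others.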
AI · Neutral · arXiv – CS AI · 1d ago · 6/10
🧠 Researchers introduce wSSAS, a deterministic framework that enhances Large Language Model text categorization by combining hierarchical classification with signal-to-noise filtering to improve accuracy and reproducibility. Testing across Google Business, Amazon Product, and Goodreads reviews demonstrates significant improvements in clustering integrity and reduced categorization entropy.
🧠 Gemini
AI · Neutral · arXiv – CS AI · 1d ago · 6/10
🧠 Researchers present a systematic study of seven tactics for reducing cloud LLM token consumption in coding-agent workloads, demonstrating that local routing combined with prompt compression can achieve 45-79% token savings on certain tasks. The open-source implementation reveals that optimal cost-reduction strategies vary significantly by workload type, offering practical guidance for developers deploying AI coding agents at scale.
🏢 OpenAI
AI · Bullish · arXiv – CS AI · 2d ago · 6/10
🧠 Researchers propose Tool-Internalized Reasoning (TInR), a framework that embeds tool knowledge directly into Large Language Models rather than relying on external tool documentation during reasoning. The TInR-U model uses a three-phase training pipeline combining knowledge alignment, supervised fine-tuning, and reinforcement learning to improve reasoning efficiency and performance across various tasks.
AI · Bullish · arXiv – CS AI · 2d ago · 6/10
🧠 Researchers have optimized the Bielik v3 language models (7B and 11B parameters) by replacing universal tokenizers with Polish-specific vocabulary, addressing inefficiencies in morphological representation. This optimization reduces token fertility, lowers inference costs, and expands effective context windows while maintaining multilingual capabilities through advanced training techniques including supervised fine-tuning and reinforcement learning.
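"Token fertility" is simply the average number of tokens a tokenizer produces per word; a language-specific vocabulary drives it toward 1. Both toy tokenizers below are assumptions for illustration, not Bielik's actual vocabularies.

```python
# Sketch of the token-fertility metric: tokens produced per word, compared
# across a fragmenting "universal" tokenizer and a whole-word stand-in.

def fertility(tokenize, words):
    return sum(len(tokenize(w)) for w in words) / len(words)

def char_bigram_tokenizer(word):
    """Stand-in for a universal tokenizer that fragments Polish words."""
    return [word[i:i + 2] for i in range(0, len(word), 2)]

def whole_word_tokenizer(word):
    """Stand-in for a language-specific vocabulary covering whole words."""
    return [word]

polish_words = ["przepraszam", "dziękuję", "cześć"]
universal = fertility(char_bigram_tokenizer, polish_words)
specialized = fertility(whole_word_tokenizer, polish_words)
```

Lower fertility means fewer tokens per sentence, which is exactly why inference cost drops and the effective context window grows.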
AI · Bullish · arXiv – CS AI · 2d ago · 6/10
🧠 Researchers introduce MEDS, a memory-enhanced reward shaping framework that addresses a critical reinforcement learning failure mode where language models repeatedly generate similar errors. By tracking historical behavioral patterns and penalizing recurring mistake clusters, the method achieves consistent performance improvements across multiple datasets and models while increasing sampling diversity.
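The shaping idea can be sketched without any RL machinery: cluster errors by a signature and subtract a penalty that grows with how often that cluster has recurred. The signature function and linear penalty schedule below are assumptions, not MEDS's actual components.

```python
# Sketch of memory-enhanced reward shaping: repeated mistakes from the
# same cluster are penalized more heavily each time they recur.
from collections import Counter

class MistakeMemory:
    def __init__(self, penalty_per_repeat=0.1):
        self.counts = Counter()
        self.penalty_per_repeat = penalty_per_repeat

    def signature(self, error):
        """Crude cluster key: error type plus the first word of the message."""
        kind, message = error
        return (kind, message.split()[0])

    def shaped_reward(self, base_reward, error=None):
        """Subtract a penalty proportional to prior occurrences of this
        mistake cluster, then record the new occurrence."""
        if error is None:
            return base_reward
        key = self.signature(error)
        penalty = self.penalty_per_repeat * self.counts[key]
        self.counts[key] += 1
        return base_reward - penalty

mem = MistakeMemory()
r1 = mem.shaped_reward(0.0, error=("syntax", "unbalanced parens"))
r2 = mem.shaped_reward(0.0, error=("syntax", "unbalanced parens"))
r3 = mem.shaped_reward(0.0, error=("syntax", "unbalanced parens"))
```

The escalating penalty is what pushes the policy away from its habitual failure modes, which is also why sampling diversity increases.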
AI · Bullish · arXiv – CS AI · 2d ago · 6/10
🧠 Researchers propose NExt, a nonlinear extrapolation framework that accelerates reinforcement learning with verifiable rewards (RLVR) for large language models by modeling low-rank parameter trajectories. The method reduces computational overhead by approximately 37.5% while remaining compatible with various RLVR algorithms, addressing a key bottleneck in scaling LLM training.
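The underlying trick is to predict a later checkpoint from recent ones instead of paying for the intervening training steps. NExt fits nonlinear low-rank trajectories; the per-parameter quadratic (constant-second-difference) extrapolation below is a deliberately simplified stand-in.

```python
# Toy sketch of parameter-trajectory extrapolation: predict the next
# checkpoint from the last three using second differences.

def extrapolate(checkpoints, steps_ahead=1):
    """Constant-second-difference (quadratic) extrapolation, repeated
    steps_ahead times past the final checkpoint."""
    p0, p1, p2 = (list(p) for p in checkpoints[-3:])
    for _ in range(steps_ahead):
        nxt = [3 * c - 3 * b + a for a, b, c in zip(p0, p1, p2)]
        p0, p1, p2 = p1, p2, nxt
    return p2

# A parameter following w(t) = 0.01 * t^2 is recovered exactly.
snaps = [[0.00], [0.01], [0.04]]  # snapshots at t = 0, 1, 2
pred = extrapolate(snaps, steps_ahead=1)
```

Each extrapolated checkpoint replaces a block of real optimizer steps, which is where the reported compute savings come from.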
AI · Neutral · arXiv – CS AI · 2d ago · 6/10
🧠 Researchers present the first systematic study of performance-energy trade-offs in multi-request LLM inference workflows, using NVIDIA A100 GPUs and vLLM/Parrot serving systems. The study identifies batch size as the most impactful optimization lever, though effectiveness varies by workload type, and reveals that workflow-aware scheduling can reduce energy consumption under power constraints.
🏢 Nvidia
AI · Bullish · arXiv – CS AI · 2d ago · 6/10
🧠 Researchers propose ITEM, an iterative utility judgment framework that enhances retrieval-augmented generation (RAG) systems by aligning with philosophical principles of relevance. The framework improves how large language models prioritize and process information from retrieval results, demonstrating measurable improvements across multiple benchmarks in ranking, utility assessment, and answer generation.