53 articles tagged with #performance-optimization. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Bullish · arXiv › CS AI · Apr 7 · 7/10
🧠 Researchers have developed Combee, a new framework that enables parallel prompt learning for AI language model agents, achieving up to 17x speedup over existing methods. The system allows multiple AI agents to learn simultaneously from their collective experiences without quality degradation, addressing scalability limitations in current single-agent approaches.
AI · Bullish · arXiv › CS AI · Mar 17 · 7/10
🧠 Researchers introduce directional routing, a lightweight mechanism for transformer models that adds only 3.9% parameter cost but significantly improves performance. The technique gives attention heads learned suppression directions controlled by a shared router, reducing perplexity by 31-56% and becoming the dominant computational pathway in the model.
AI · Bullish · arXiv › CS AI · Mar 17 · 7/10
🧠 Researchers propose p²RAG, a new privacy-preserving Retrieval-Augmented Generation system that supports arbitrary top-k retrieval while being 3-300x faster than existing solutions. The system uses an interactive bisection method instead of sorting and employs secret sharing across two servers to protect user prompts and database content.
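The paper's protocol runs under secret sharing, but the core bisection-instead-of-sorting idea can be sketched in the clear: search for a score threshold that leaves exactly k survivors, using only comparisons and counts (the operations that are cheap under secure computation). A minimal plaintext sketch, with illustrative names and assuming no tied scores:

```python
import numpy as np

def topk_by_bisection(scores, k, iters=60):
    """Indices of the k largest scores, found by bisecting on a
    threshold rather than sorting. Each round needs only element-wise
    comparisons and a count, not an ordering of the data."""
    lo, hi = float(scores.min()), float(scores.max())
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        count = int((scores > mid).sum())  # comparisons only, no sort
        if count > k:
            lo = mid      # threshold too low: too many survivors
        elif count < k:
            hi = mid      # threshold too high: too few survivors
        else:
            break
    return np.flatnonzero(scores > mid)

scores = np.array([0.1, 0.9, 0.3, 0.7, 0.5])
idx = topk_by_bisection(scores, 2)   # the two largest: 0.9 and 0.7
```

In the secure setting each comparison and count would be evaluated over secret shares split between the two servers, so neither server sees the scores or the query.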
AI · Bullish · arXiv › CS AI · Mar 17 · 7/10
🧠 Researchers introduce Orla, a new library that simplifies the development and deployment of LLM-based multi-agent systems by providing a serving layer that separates workflow execution from policy decisions. The library offers stage mapping, workflow orchestration, and memory management capabilities that improve performance and reduce costs compared to single-model baselines.
AI · Bullish · arXiv › CS AI · Mar 17 · 7/10
🧠 Researchers developed MegaScale-Data, an industrial-grade distributed data loading architecture that significantly improves training efficiency for large foundation models using multiple data sources. The system achieves up to 4.5x training throughput improvement and 13.5x reduction in CPU memory usage through disaggregated preprocessing and centralized data orchestration.
AI · Bullish · arXiv › CS AI · Mar 16 · 7/10
🧠 Researchers introduce OnlineSpec, a framework that uses online learning to continuously improve draft models in speculative decoding for large language model inference acceleration. The approach leverages verification feedback to evolve draft models dynamically, achieving up to 24% speedup improvements across seven benchmarks and three foundation models.
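OnlineSpec's contribution is the online updating; the verification loop it builds on is standard speculative decoding. A toy sketch of one greedy draft-and-verify round, with stand-in callables instead of real language models, showing where the "verification feedback" (the accept count) comes from:

```python
def speculative_step(draft_next, target_next, context, gamma=4):
    """One round of greedy speculative decoding.

    draft_next / target_next: callables mapping a token sequence to the
    next token -- toy stand-ins for the cheap draft LM and the large
    target LM. Returns the tokens emitted this round and the number of
    draft tokens accepted (the feedback signal an online learner could
    train the draft model on)."""
    # 1. Draft gamma tokens autoregressively with the cheap model.
    drafted, ctx = [], list(context)
    for _ in range(gamma):
        t = draft_next(ctx)
        drafted.append(t)
        ctx.append(t)
    # 2. Verify: accept the longest prefix the target agrees with.
    accepted, ctx = [], list(context)
    for t in drafted:
        if target_next(ctx) == t:
            accepted.append(t)
            ctx.append(t)
        else:
            break
    # 3. The target supplies the token after the accepted prefix, so
    #    every round makes progress even if nothing was accepted.
    correction = target_next(ctx)
    return accepted + [correction], len(accepted)

# Toy integer-token models: the draft agrees with the target except on
# every third position.
target = lambda ctx: len(ctx) % 10
draft = lambda ctx: len(ctx) % 10 if len(ctx) % 3 else (len(ctx) % 10) + 1

tokens, n_accepted = speculative_step(draft, target, [0], gamma=4)
```

Because verification checks all drafted tokens against the target in one pass, the speedup grows with the accept rate, which is exactly what evolving the draft model online tries to raise.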
AI · Bullish · arXiv › CS AI · Mar 12 · 7/10
🧠 Researchers developed HyMEM, a brain-inspired hybrid memory system that significantly improves GUI agents' ability to interact with computers. The system uses graph-based structured memory combining symbolic nodes with trajectory embeddings, enabling smaller 7B/8B models to match or exceed performance of larger closed-source models like GPT-4o.
AI · Bullish · arXiv › CS AI · Mar 12 · 7/10
🧠 Researchers developed KernelSkill, a multi-agent framework that optimizes GPU kernel performance using expert knowledge rather than trial-and-error approaches. The system achieved 100% success rates and significant speedups (1.92x to 5.44x) over existing methods, addressing a critical bottleneck in AI system efficiency.
AI · Bullish · arXiv › CS AI · Mar 11 · 7/10
🧠 Researchers have developed Zipage, a new high-concurrency inference engine for large language models that uses Compressed PagedAttention to solve memory bottlenecks. The system achieves 95% performance of full KV inference engines while delivering over 2.1x speedup on mathematical reasoning tasks.
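The baseline Zipage compresses is PagedAttention-style KV-cache management, whose bookkeeping is easy to sketch: each sequence's KV entries live in fixed-size physical blocks handed out on demand, so memory scales with actual sequence length rather than per-request maximums. A minimal illustration (names and sizes are made up; a real engine stores key/value tensors in the blocks):

```python
class PagedKVCache:
    """Minimal block-table bookkeeping in the PagedAttention style.
    A block table maps each sequence to the physical blocks holding
    its KV entries, allocated one block at a time as tokens arrive."""

    def __init__(self, block_size=16, num_blocks=64):
        self.block_size = block_size
        self.free = list(range(num_blocks))   # pool of free physical blocks
        self.tables = {}                      # seq_id -> list of block ids
        self.lengths = {}                     # seq_id -> tokens stored

    def append(self, seq_id, kv_entry):
        """Reserve a slot for one token's KV entry (kv_entry is a
        placeholder here; a real cache would write tensors into it)."""
        table = self.tables.setdefault(seq_id, [])
        n = self.lengths.get(seq_id, 0)
        if n % self.block_size == 0:          # current block is full
            table.append(self.free.pop())     # grab a fresh block
        self.lengths[seq_id] = n + 1

    def blocks_used(self, seq_id):
        return len(self.tables.get(seq_id, []))

cache = PagedKVCache(block_size=16)
for _ in range(33):        # 33 tokens need ceil(33/16) = 3 blocks
    cache.append("req-0", None)
```

Compressing the cached KV entries, as the summary describes, shrinks what each block holds, which is how such a system trades a small accuracy cost for much higher concurrency.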
AI · Bullish · arXiv › CS AI · Mar 11 · 7/10
🧠 Researchers propose SEER (Self-Enhancing Efficient Reasoning), a framework that compresses Chain-of-Thought reasoning in Large Language Models while maintaining accuracy. The study found that longer reasoning chains don't always improve performance and can increase latency by up to 5x; SEER cuts CoT length by 42.1% while improving accuracy.
AI · Bullish · arXiv › CS AI · Mar 5 · 7/10
🧠 Researchers have released mlx-snn, the first spiking neural network library built natively for Apple's MLX framework, targeting Apple Silicon hardware. The library demonstrates 2-2.5x faster training and 3-10x lower GPU memory usage compared to existing PyTorch-based solutions, achieving 97.28% accuracy on MNIST classification tasks.
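The basic unit a spiking-network library provides is the leaky integrate-and-fire (LIF) neuron: potentials decay, integrate input, and emit a binary spike on crossing a threshold. A plain-NumPy sketch of one LIF timestep for illustration (mlx-snn itself operates on MLX arrays; the parameter values here are arbitrary):

```python
import numpy as np

def lif_step(v, x, decay=0.9, threshold=1.0):
    """One timestep of a leaky integrate-and-fire layer.
    v: membrane potentials; x: input current.
    Returns (new potentials, binary spike vector)."""
    v = decay * v + x                          # leaky integration
    spikes = (v >= threshold).astype(x.dtype)  # fire where threshold crossed
    v = v * (1.0 - spikes)                     # hard reset fired neurons
    return v, spikes

# Drive three neurons with constant currents of different strengths.
v = np.zeros(3)
spike_counts = np.zeros(3)
for _ in range(5):
    v, s = lif_step(v, np.array([0.3, 0.6, 1.2]))
    spike_counts += s
```

Stronger input makes a neuron fire more often, which is how SNNs encode magnitude in spike rates; stacking such steps over time is what the library accelerates on Apple Silicon.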
AI · Bullish · arXiv › CS AI · Mar 5 · 7/10
🧠 Researchers introduce the Visual Attention Score (VAS) to analyze multimodal reasoning models, discovering that higher visual attention correlates strongly with better performance (r=0.9616). Their AVAR framework achieves 7% performance gains on Qwen2.5-VL-7B across multimodal reasoning benchmarks.
AI · Bullish · arXiv › CS AI · Mar 4 · 6/10
🧠 Researchers have developed EvoSkill, an automated framework that enables AI agents to discover and refine domain-specific skills through iterative failure analysis. The system demonstrated significant performance improvements on specialized tasks, with accuracy gains of 7.3% on financial data analysis and 12.1% on search-augmented QA, while showing transferable capabilities across different domains.
AI · Bullish · arXiv › CS AI · Mar 4 · 6/10
🧠 Researchers introduce T³, a new method to improve large language model (LLM) agents' reasoning abilities by tracking and correcting 'belief deviation' - when AI agents lose accurate understanding of problem states. The technique achieved up to 30-point performance gains and 34% token cost reduction across challenging tasks.
AI · Bullish · arXiv › CS AI · Mar 4 · 6/10
🧠 ScaleDoc is a new system that enables efficient semantic analysis of large document collections using LLMs by combining offline document representation with lightweight online filtering. The system achieves 2x speedup and reduces expensive LLM calls by up to 85% through contrastive learning and adaptive cascade mechanisms.
AI · Bullish · arXiv › CS AI · Mar 4 · 6/10
🧠 Researchers developed GPUTOK, a GPU-accelerated tokenizer for large language models that processes text significantly faster than existing CPU-based solutions. The optimized version shows 1.7x speed improvement over tiktoken and 7.6x over HuggingFace's GPT-2 tokenizer while maintaining output quality.
AI · Bullish · arXiv › CS AI · Mar 3 · 7/10
🧠 Researchers have developed Hierarchical Speculative Decoding (HSD), a new method that significantly improves AI inference speed while maintaining accuracy by solving joint intractability problems in verification processes. The technique shows over 12% performance gains when integrated with existing frameworks like EAGLE-3, establishing new state-of-the-art efficiency standards.
AI · Neutral · arXiv › CS AI · Mar 3 · 7/10
🧠 New research analyzing 92 open-source language models reveals that factors beyond model size and training data significantly impact performance. The study shows that incorporating design features like data composition and architectural choices can improve performance prediction by 3-28% compared to using scale alone.
AI · Bullish · arXiv › CS AI · Mar 3 · 7/10
🧠 Researchers developed a new scaling law for large language models that optimizes both accuracy and inference efficiency by examining architectural factors like hidden size, MLP-to-attention ratios, and grouped-query attention. Testing over 200 models from 80M to 3B parameters, they found optimized architectures achieve 2.1% higher accuracy and 42% greater inference throughput compared to LLaMA-3.2.
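Joint scaling laws of this kind are typically fit by regressing a quality metric on log-parameter-count plus architectural features. A synthetic illustration of the fitting step (the data, coefficients, and feature choice here are entirely made up; the paper's actual functional form may differ):

```python
import numpy as np

# Synthetic "model zoo": 200 models from 80M to 3B parameters, each
# with an MLP-to-attention ratio, and an accuracy generated from a
# known log-linear law plus noise -- purely for demonstration.
rng = np.random.default_rng(0)
n_params = rng.uniform(8e7, 3e9, size=200)
mlp_ratio = rng.uniform(2.0, 8.0, size=200)
acc = 0.05 * np.log(n_params) + 0.01 * mlp_ratio + rng.normal(0, 0.01, 200)

# Ordinary least squares on the design matrix [1, log N, ratio].
X = np.column_stack([np.ones(200), np.log(n_params), mlp_ratio])
coef, *_ = np.linalg.lstsq(X, acc, rcond=None)
```

With the fitted coefficients in hand, one can then search over architectural settings for the accuracy/throughput frontier at a fixed parameter budget, which is the kind of optimization the summary describes.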
AI · Bullish · arXiv › CS AI · Feb 27 · 7/10
🧠 Researchers introduce GUIPruner, a training-free framework that addresses efficiency bottlenecks in high-resolution GUI agents by eliminating spatiotemporal redundancy. The system achieves 3.4x reduction in computational operations and 3.3x speedup while maintaining 94% of original performance, enabling real-time navigation with minimal resource consumption.
AI · Bullish · arXiv › CS AI · Feb 27 · 7/10
🧠 Researchers have released LLMServingSim 2.0, a unified simulator that models the complex interactions between heterogeneous hardware and disaggregated software in large language model serving infrastructures. The simulator achieves 0.97% average error compared to real deployments while maintaining 10-minute simulation times for complex configurations.
AI · Bullish · arXiv › CS AI · Feb 27 · 7/10
🧠 Researchers propose Generalized On-Policy Distillation (G-OPD), a new AI training framework that improves upon standard on-policy distillation by introducing flexible reference models and reward scaling factors. The method, particularly ExOPD with reward extrapolation, enables smaller student models to surpass their teacher's performance in math reasoning and code generation tasks.
AI · Bullish · arXiv › CS AI · Feb 27 · 7/10
🧠 ArchAgent, an AI-driven system built on AlphaEvolve, has achieved breakthrough results in automated computer architecture discovery by designing state-of-the-art cache replacement policies. The system achieved 5.3% performance improvements in just 2 days and 0.9% improvements in 18 days, working 3-5x faster than human-developed solutions.
AI · Bullish · arXiv › CS AI · Feb 27 · 7/10
🧠 Researchers introduce veScale-FSDP, a redesigned Fully Sharded Data Parallel system that overcomes limitations of current FSDP implementations used for training large-scale AI models. The new system features flexible RaggedShard format and structure-aware planning, achieving 5-66% higher throughput and 16-30% lower memory usage while supporting advanced training methods and scaling to tens of thousands of GPUs.
AI · Bullish · OpenAI News · Jul 28 · 7/10
🧠 OpenAI has released Triton 1.0, an open-source Python-like programming language that allows researchers without CUDA expertise to write highly efficient GPU code for neural networks. The tool aims to democratize GPU programming by making it accessible to those without specialized hardware programming knowledge while maintaining performance comparable to expert-level code.
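Triton's core abstraction is writing one program per block of elements, with masked loads and stores guarding the ragged final block. A real kernel needs the `triton` package and a GPU (`@triton.jit`, `tl.arange`, `tl.load`/`tl.store`); the programming model itself can be mimicked in plain NumPy for illustration:

```python
import numpy as np

def add_kernel_style(x, y, BLOCK=4):
    """NumPy mimicry of Triton's block-level model for vector addition:
    each "program instance" handles one BLOCK of elements, and a mask
    guards lanes that fall past the end of the array. On a GPU, Triton
    launches these instances in parallel and compiles the body to
    efficient device code."""
    n = x.shape[0]
    out = np.empty_like(x)
    num_programs = -(-n // BLOCK)                  # ceil(n / BLOCK)
    for pid in range(num_programs):                # parallel on real hardware
        offs = pid * BLOCK + np.arange(BLOCK)      # this block's offsets
        mask = offs < n                            # out-of-bounds guard
        a = np.where(mask, x[np.minimum(offs, n - 1)], 0)  # masked "load"
        b = np.where(mask, y[np.minimum(offs, n - 1)], 0)
        out[offs[mask]] = (a + b)[mask]            # masked "store"
    return out

x = np.arange(10.0)
y = np.ones(10)
z = add_kernel_style(x, y)
```

The point of the abstraction is that the author reasons about one block's offsets and masks, while Triton handles the thread-level parallelism, memory coalescing, and scheduling that CUDA would make explicit.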