y0news

#inference-optimization News & Analysis

69 articles tagged with #inference-optimization. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Neutral · arXiv – CS AI · Mar 9 · 6/10
🧠

Restoring Linguistic Grounding in VLA Models via Train-Free Attention Recalibration

Researchers have identified a critical failure mode in Vision-Language-Action (VLA) robotic models, dubbed 'linguistic blindness': when visual cues and language instructions conflict, the robot follows the visuals and ignores the instruction. They developed the ICBench benchmark to measure the effect and proposed IGAR, a train-free method that recalibrates attention to restore the influence of language instructions without retraining the model.
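
The core idea of train-free attention recalibration can be sketched in a few lines: boost the attention mass assigned to language-instruction tokens and renormalize, with no gradient updates. The gain factor and token split below are illustrative assumptions, not IGAR's actual mechanism.

```python
# Hypothetical sketch of train-free attention recalibration: scale up the
# attention on language-instruction tokens, then renormalize to a
# probability distribution. Gain and token indices are made-up examples.

def recalibrate_attention(weights, text_idx, gain=2.0):
    """Scale attention on text tokens by `gain`, renormalize to sum to 1."""
    boosted = [w * gain if i in text_idx else w for i, w in enumerate(weights)]
    total = sum(boosted)
    return [w / total for w in boosted]

# Toy example: 4 tokens, the last two belong to the language instruction.
weights = [0.4, 0.4, 0.1, 0.1]   # model over-attends to visual tokens
fixed = recalibrate_attention(weights, text_idx={2, 3})
```

Because the adjustment happens purely at inference time, it can be applied to an off-the-shelf VLA model.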

AI · Bullish · arXiv – CS AI · Mar 9 · 6/10
🧠

VLMQ: Token Saliency-Driven Post-Training Quantization for Vision-language Models

Researchers introduced VLMQ, a post-training quantization framework specifically designed for vision-language models that addresses visual over-representation and modality gaps. The method achieves significant performance improvements, including 16.45% better results on MME-RealWorld under 2-bit quantization compared to existing approaches.
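
For context, here is what the generic uniform low-bit baseline that methods like VLMQ improve on looks like: weights are snapped to a handful of levels and dequantized back. The scale rule and values are illustrative only.

```python
# Minimal sketch of uniform 2-bit post-training quantization (the generic
# baseline such methods compete with). Per-tensor symmetric scale; the
# choice of scale and the sample weights are illustrative assumptions.

def quantize_2bit(values):
    """Map floats to 4 integer levels (2 bits) and dequantize back."""
    scale = max(abs(v) for v in values) / 1.5   # spread levels over the range
    q = [max(-2, min(1, round(v / scale))) for v in values]  # signed int2
    return [qi * scale for qi in q]             # dequantized approximation

w = [0.9, -0.4, 0.05, -1.2]
w_hat = quantize_2bit(w)
```

At 2 bits only four levels survive, which is why naively quantized vision-language models degrade and token-saliency-aware schemes help.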

AI · Neutral · arXiv – CS AI · Mar 4 · 5/10 · 3
🧠

The Price of Prompting: Profiling Energy Use in Large Language Models Inference

Researchers introduce MELODI, a framework for monitoring energy consumption during large language model inference, revealing substantial disparities in energy efficiency across different deployment scenarios. The study creates a comprehensive dataset analyzing how prompt attributes like length and complexity correlate with energy expenditure, highlighting significant opportunities for optimization in LLM deployment.
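
The measurement idea can be sketched simply: estimate energy per request as power draw times inference time and correlate it with prompt attributes. The `run_inference` stub and the power figure below are stand-ins, not part of the MELODI framework.

```python
# Illustrative profiling loop in the spirit of energy monitoring for LLM
# inference: energy (joules) = average power (watts) x wall-clock seconds.
# `run_inference` and `avg_power_watts` are hypothetical placeholders.

import time

def run_inference(prompt):           # placeholder for a real LLM call
    time.sleep(len(prompt) * 1e-4)   # latency grows with prompt length

def profile_energy(prompt, avg_power_watts=250.0):
    start = time.perf_counter()
    run_inference(prompt)
    seconds = time.perf_counter() - start
    return avg_power_watts * seconds

e_short = profile_energy("Hi")
e_long = profile_energy("Explain KV caching in transformers. " * 4)
```

Even this toy setup reproduces the headline observation: longer prompts cost measurably more energy per request.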

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 6
🧠

Draft-Thinking: Learning Efficient Reasoning in Long Chain-of-Thought LLMs

Researchers propose Draft-Thinking, a new approach to improve the efficiency of large language models' reasoning processes by reducing unnecessary computational overhead. The method achieves an 82.6% reduction in reasoning budget with only a 2.6% performance drop on mathematical problems, addressing the costly overthinking problem in current chain-of-thought reasoning.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 9
🧠

GAM-RAG: Gain-Adaptive Memory for Evolving Retrieval in Retrieval-Augmented Generation

Researchers introduce GAM-RAG, a training-free framework that improves Retrieval-Augmented Generation by building adaptive memory from past queries instead of relying on static indices. The system uses uncertainty-aware updates inspired by cognitive neuroscience to balance stability and adaptability, achieving 3.95% better performance while reducing inference costs by 61%.
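
A minimal sketch of the idea, under loose assumptions: cache answers from past queries and hit the expensive retriever only when an entry is new or its uncertainty is high. The threshold, uncertainty values, and `retrieve` stub are hypothetical, not GAM-RAG's actual update rule.

```python
# Toy gain-adaptive memory for RAG: reuse stable past answers, re-retrieve
# only under high uncertainty. All numbers and names are illustrative.

memory = {}        # query -> (answer, uncertainty)
retrievals = 0     # counts calls to the expensive retrieval pipeline

def retrieve(query):
    """Stand-in for retriever + generator; returns answer and uncertainty."""
    global retrievals
    retrievals += 1
    return f"answer:{query}", 0.1   # confident answer

def answer(query, threshold=0.5):
    if query in memory and memory[query][1] < threshold:
        return memory[query][0]      # stable entry: serve from memory
    memory[query] = retrieve(query)  # new or uncertain: refresh memory
    return memory[query][0]

a1 = answer("what is RAG?")
a2 = answer("what is RAG?")          # served from memory, no second retrieval
```

Skipping retrieval for stable entries is what drives the reported inference-cost reduction.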

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10 · 7
🧠

QuickGrasp: Responsive Video-Language Querying Service via Accelerated Tokenization and Edge-Augmented Inference

Researchers propose QuickGrasp, a video-language querying system that combines local processing with edge computing to achieve both fast response times and high accuracy. The system achieves up to 12.8x reduction in response delay while maintaining the accuracy of large video-language models through accelerated tokenization and adaptive edge augmentation.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 8
🧠

AdaFocus: Knowing When and Where to Look for Adaptive Visual Reasoning

AdaFocus is a new training-free framework for adaptive visual reasoning in Multimodal Large Language Models that addresses perceptual redundancy and spatial attention issues. The system uses a two-stage pipeline with confidence-based cropping decisions and semantic-guided localization, achieving 4x faster inference than existing methods while improving accuracy.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 8
🧠

AlignVAR: Towards Globally Consistent Visual Autoregression for Image Super-Resolution

Researchers introduced AlignVAR, a new visual autoregressive framework for image super-resolution that delivers 10x faster inference with 50% fewer parameters than leading diffusion-based approaches. The system addresses key challenges in image reconstruction through improved spatial consistency and hierarchical constraints, establishing a more efficient paradigm for high-quality image enhancement.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 8
🧠

ATA: Bridging Implicit Reasoning with Attention-Guided and Action-Guided Inference for Vision-Language Action Models

Researchers propose ATA, a training-free framework that improves Vision-Language-Action (VLA) models through implicit reasoning without requiring additional data or annotations. The approach uses attention-guided and action-guided strategies to enhance visual inputs, achieving better task performance while maintaining inference efficiency.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 4
🧠

EasySteer: A Unified Framework for High-Performance and Extensible LLM Steering

Researchers have developed EasySteer, a unified framework for controlling large language model behavior at inference time that achieves 10.8-22.3x speedup over existing frameworks. The system offers modular architecture with pre-computed steering vectors for eight application domains and transforms steering from a research technique into production-ready capability.
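
Activation steering itself is simple to sketch: add a precomputed direction to a layer's hidden state, scaled by a strength knob. The vector, dimensions, and function below are illustrative assumptions; EasySteer's real API will differ.

```python
# Minimal sketch of inference-time activation steering: shift a hidden
# state along a precomputed steering direction. All values are toy data.

def steer(hidden, vector, strength=1.0):
    """Add `strength * vector` to a hidden-state vector elementwise."""
    return [h + strength * v for h, v in zip(hidden, vector)]

hidden = [0.2, -0.1, 0.7]
positivity = [0.5, 0.0, -0.5]   # hypothetical precomputed steering vector
steered = steer(hidden, positivity, strength=2.0)
```

Because the vectors are precomputed, the per-token cost is one vector addition, which is why steering frameworks can be fast at inference time.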

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 4
🧠

Distillation of Large Language Models via Concrete Score Matching

Researchers propose Concrete Score Distillation (CSD), a new knowledge distillation method that improves efficiency of large language models by better preserving logit information compared to traditional softmax-based approaches. CSD demonstrates consistent performance improvements across multiple models including GPT-2, OpenLLaMA, and GEMMA while maintaining training stability.
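
For reference, the classic softmax-based distillation objective that CSD is positioned against looks like this: the student is trained to match the teacher's soft next-token distribution under a KL divergence. The logits below are toy values.

```python
# Sketch of the traditional softmax-based knowledge-distillation loss
# (the baseline CSD improves on): KL(teacher || student) over next-token
# distributions. Logit values are illustrative.

import math

def softmax(logits):
    m = max(logits)                      # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(teacher_logits, student_logits):
    p = softmax(teacher_logits)
    q = softmax(student_logits)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

loss_far = kd_loss([2.0, 0.0, -1.0], [0.0, 2.0, -1.0])   # mismatched student
loss_near = kd_loss([2.0, 0.0, -1.0], [1.9, 0.1, -1.0])  # close student
```

The softmax collapses absolute logit magnitudes into relative probabilities; preserving more of that logit information is the gap CSD targets.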

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10 · 4
🧠

TTOM: Test-Time Optimization and Memorization for Compositional Video Generation

Researchers introduce TTOM (Test-Time Optimization and Memorization), a training-free framework that improves compositional video generation in Video Foundation Models during inference. The system uses layout-attention optimization and parametric memory to better align text prompts with generated video outputs, showing strong transferability across different scenarios.

AI · Bullish · arXiv – CS AI · Mar 2 · 6/10 · 12
🧠

Task-Centric Acceleration of Small-Language Models

Researchers propose TASC (Task-Adaptive Sequence Compression), a framework for accelerating small language models through two methods: TASC-ft for fine-tuning with expanded vocabularies and TASC-spec for training-free speculative decoding. The methods demonstrate improved inference efficiency while maintaining task performance across low output-variability generation tasks.
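
The speculative-decoding mechanism behind TASC-spec can be sketched generically: a cheap draft model proposes several tokens, the target model verifies them in one pass, and the longest matching prefix is kept. Both "models" below are toy stubs, not the paper's components.

```python
# Generic speculative-decoding sketch: draft k tokens cheaply, verify them
# with the target model in one pass, accept the agreeing prefix.
# draft_model and target_model are hypothetical stand-ins.

def draft_model(prefix, k=3):
    """Fast proposer: emits k candidate tokens."""
    return ["the", "cat", "sat"][:k]

def target_model(prefix, proposals):
    """One verification pass: keep the longest prefix the target agrees with."""
    truth = ["the", "cat", "ran"]        # what the target would have decoded
    accepted = []
    for tok, ref in zip(proposals, truth):
        if tok != ref:
            break
        accepted.append(tok)
    return accepted

proposals = draft_model([])
accepted = target_model([], proposals)   # "sat" is rejected at position 3
```

On low output-variability tasks the draft agrees with the target often, so several tokens are accepted per verification pass and throughput rises.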

AI · Bullish · arXiv – CS AI · Mar 2 · 7/10 · 16
🧠

DiffuMamba: High-Throughput Diffusion LMs with Mamba Backbone

Researchers introduce DiffuMamba, a new diffusion language model using Mamba backbone architecture that achieves up to 8.2x higher inference throughput than Transformer-based models while maintaining comparable performance. The model demonstrates linear scaling with sequence length and represents a significant advancement in efficient AI text generation systems.

AI · Bullish · Hugging Face Blog · Nov 7 · 6/10 · 6
🧠

Make your llama generation time fly with AWS Inferentia2

The article shows how to accelerate Llama model inference on AWS's Inferentia2 chips, promising significant performance improvements for AI workloads. It reflects AWS's continued push into specialized AI hardware to compete with NVIDIA's dominance in the AI acceleration market.

AI · Neutral · Hugging Face Blog · Jun 4 · 4/10 · 8
🧠

KV Cache from scratch in nanoVLM

The article discusses the implementation of KV (Key-Value) cache mechanisms in nanoVLM, a lightweight vision-language model framework. This technical implementation focuses on optimizing memory usage and inference speed for multimodal AI applications.
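
The KV-cache idea itself is easy to sketch: at each decoding step, keys and values for past tokens are read from a cache rather than recomputed, so attention over a length-n prefix costs one new projection instead of n. The scalar "projections" below are toy stand-ins for the real key/value matrices in nanoVLM.

```python
# Bare-bones KV-cache sketch: append the new token's key/value once per
# step; attention reuses the cached pairs for all earlier tokens.
# Scalar embeddings and 0.5x/2.0x "projections" are illustrative toys.

class KVCache:
    def __init__(self):
        self.keys, self.values = [], []

    def step(self, token_embedding):
        # project only the newest token, then append to the cache
        self.keys.append(token_embedding * 0.5)
        self.values.append(token_embedding * 2.0)
        # attention sees the full cached past plus the new entry
        return list(zip(self.keys, self.values))

cache = KVCache()
for emb in [1.0, 2.0, 3.0]:
    kv = cache.step(emb)
```

The memory cost grows linearly with sequence length, which is why cache layout and eviction matter for multimodal models with long visual token prefixes.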

AI · Bullish · Hugging Face Blog · Apr 3 · 4/10 · 5
🧠

Blazing Fast SetFit Inference with 🤗 Optimum Intel on Xeon

The article discusses optimizing SetFit inference performance with Hugging Face's Optimum Intel library on Intel Xeon processors, an advance in efficient AI model optimization and deployment on enterprise hardware.

โ† PrevPage 3 of 3