y0news

#inference News & Analysis

73 articles tagged with #inference. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · Apr 7 · 7/10

Diagonal-Tiled Mixed-Precision Attention for Efficient Low-Bit MXFP Inference

Researchers have developed a new low-bit mixed-precision attention kernel called Diagonal-Tiled Mixed-Precision Attention (DMA) that significantly speeds up large language model inference on NVIDIA B200 GPUs while maintaining generation quality. The technique uses microscaling floating-point (MXFP) data format and kernel fusion to address the high computational costs of transformer-based models.

๐Ÿข Nvidia
AI · Bullish · arXiv – CS AI · Mar 26 · 7/10

QUARK: Quantization-Enabled Circuit Sharing for Transformer Acceleration by Exploiting Common Patterns in Nonlinear Operations

Researchers have developed QUARK, a quantization-enabled FPGA acceleration framework that significantly improves Transformer model performance by optimizing nonlinear operations through circuit sharing. The system achieves up to 1.96x speedup over GPU implementations while reducing hardware overhead by more than 50% compared to existing approaches.

AI · Bullish · arXiv – CS AI · Mar 26 · 7/10

You only need 4 extra tokens: Synergistic Test-time Adaptation for LLMs

Researchers developed SyTTA, a test-time adaptation framework that improves large language models' performance in specialized domains without requiring additional labeled data. The method achieved over 120% improvement on agricultural question answering tasks using just 4 extra tokens per query, addressing the challenge of deploying LLMs in domains with limited training data.

๐Ÿข Perplexity
AI · Bullish · arXiv – CS AI · Mar 26 · 7/10

PLDR-LLMs Reason At Self-Organized Criticality

Researchers demonstrate that PLDR-LLMs trained at self-organized criticality exhibit enhanced reasoning capabilities at inference time. The study shows that reasoning ability can be quantified using an order parameter derived from global model statistics, with models performing better when this parameter approaches zero at criticality.

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

Reducing Cost of LLM Agents with Trajectory Reduction

Researchers introduce AgentDiet, a trajectory reduction technique that cuts computational costs for LLM-based agents by 39.9%-59.7% in input tokens and 21.1%-35.9% in total costs while maintaining performance. The approach removes redundant and expired information from agent execution trajectories during inference time.
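To make the "redundant and expired" framing concrete, here is a toy trajectory filter; the two rules below (drop exact-duplicate observations, truncate tool output that is older than the last few steps) are my illustration, not AgentDiet's actual reduction policy:

```python
def reduce_trajectory(steps, keep_last=4, max_expired_len=40):
    """steps: list of {"action": ..., "observation": ...} dicts.
    Drops observations already present in context and shortens stale ones."""
    seen = set()
    reduced = []
    n = len(steps)
    for i, step in enumerate(steps):
        obs = step["observation"]
        if obs in seen:
            continue  # redundant: this observation is already in context
        seen.add(obs)
        if i < n - keep_last and len(obs) > max_expired_len:
            obs = obs[:max_expired_len] + " …[truncated]"  # expired detail
        reduced.append({"action": step["action"], "observation": obs})
    return reduced
```

The savings come straight from the input side: every dropped or truncated observation is that many fewer tokens resent to the model on each agent step.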

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

Inference-time Alignment in Continuous Space

Researchers propose Simple Energy Adaptation (SEA), a new algorithm for aligning large language models with human feedback at inference time. SEA uses gradient-based sampling in continuous latent space rather than searching discrete response spaces, achieving up to 77.51% improvement on AdvBench and 16.36% on MATH benchmarks.
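"Gradient-based sampling in continuous latent space" is the family of Langevin-style energy samplers; a toy version on a hand-written energy function shows the mechanic (my illustration of the general idea; SEA's energy and latent space are the paper's own):

```python
import numpy as np

def langevin_sample(grad_energy, z0, steps=200, lr=0.1, noise=0.01, seed=0):
    """Descend an energy over a continuous latent z with noisy gradient
    steps, instead of searching a discrete response space."""
    rng = np.random.default_rng(seed)
    z = np.array(z0, dtype=float)
    for _ in range(steps):
        z = z - lr * grad_energy(z) + noise * rng.standard_normal(z.shape)
    return z
```

With an alignment-derived energy, low-energy latents correspond to preferred responses, so sampling doubles as inference-time alignment.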

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

Justitia: Fair and Efficient Scheduling of Task-parallel LLM Agents with Selective Pampering

Justitia is a new scheduling system for task-parallel LLM agents that optimizes GPU server performance through selective resource allocation based on completion order prediction. The system uses memory-centric cost quantification and virtual-time fair queuing to achieve both efficiency and fairness in LLM serving environments.

๐Ÿข Meta
AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

Rationale-Enhanced Decoding for Multi-modal Chain-of-Thought

Researchers have developed rationale-enhanced decoding (RED), a new inference-time strategy that improves chain-of-thought reasoning in large vision-language models. The method addresses the problem where LVLMs ignore generated rationales by harmonizing visual and rationale information during decoding, showing consistent improvements across multiple benchmarks.

AI · Bullish · arXiv – CS AI · Mar 11 · 7/10

Unveiling the Potential of Quantization with MXFP4: Strategies for Quantization Error Reduction

Researchers have developed two software techniques (OAS and MBS) that dramatically improve MXFP4 quantization accuracy for Large Language Models, reducing the performance gap with NVIDIA's NVFP4 from 10% to below 1%. The result makes MXFP4 a viable alternative while preserving its 12% hardware-efficiency advantage in tensor cores.

๐Ÿข Nvidia
AI · Bullish · arXiv – CS AI · Mar 11 · 7/10

ARKV: Adaptive and Resource-Efficient KV Cache Management under Limited Memory Budget for Long-Context Inference in LLMs

Researchers propose ARKV, a new framework for managing memory in large language models that reduces KV cache memory usage by 4x while preserving 97% of baseline accuracy. The adaptive system dynamically allocates precision levels to cached tokens based on attention patterns, enabling more efficient long-context inference without requiring model retraining.
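One way to picture "dynamically allocating precision levels to cached tokens" is a greedy bit-budget allocator: heavily attended tokens keep high precision, and the least-attended ones are demoted first until the memory budget is met. This is my illustration of the general shape, not ARKV's actual policy:

```python
import numpy as np

def allocate_bits(attn_scores, budget_bits, levels=(8, 4, 2)):
    """Assign a bit-width per cached token so that total bits fit the
    budget, demoting the least-attended tokens to lower precision first."""
    n = len(attn_scores)
    order = np.argsort(attn_scores)        # least attended first
    bits = np.full(n, levels[0])
    li = 1
    while bits.sum() > budget_bits and li < len(levels):
        for idx in order:
            if bits.sum() <= budget_bits:
                break
            bits[idx] = levels[li]         # demote this token one level
        li += 1
    return bits
```

In the test below the highest-attention token keeps 8 bits while the rest drop to 4, squeezing under a 20-bit budget.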

AI · Bullish · arXiv – CS AI · Mar 6 · 7/10

Thin Keys, Full Values: Reducing KV Cache via Low-Dimensional Attention Selection

Researchers propose asymmetric transformer attention where keys use fewer dimensions than queries and values, achieving 75% key cache reduction with minimal quality loss. The technique enables 60% more concurrent users for large language models by saving 25GB of KV cache per user for 7B parameter models.

๐Ÿข Perplexity
AI · Bullish · arXiv – CS AI · Mar 5 · 7/10

Draft-Conditioned Constrained Decoding for Structured Generation in LLMs

Researchers introduce Draft-Conditioned Constrained Decoding (DCCD), a training-free method that improves structured output generation in large language models by up to 24 percentage points. The technique uses a two-step process that first generates an unconstrained draft, then applies constraints to ensure valid outputs like JSON and API calls.
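DCCD's two-step shape (free-form draft first, constraints second) can be caricatured in a few lines. The harvesting step below is my stand-in for the paper's constrained decoding pass, which conditions token-level constraints on the draft; here the second step just lifts `key: value` pairs out of the draft and emits schema-valid JSON:

```python
import json
import re

def draft_then_constrain(draft, schema_keys):
    """Step 1 (outside this function): generate `draft` unconstrained.
    Step 2: extract each schema field from the draft, guaranteeing the
    output is valid JSON with exactly the expected keys."""
    fields = {}
    for key in schema_keys:
        m = re.search(rf"{key}\s*[:=]\s*([A-Za-z0-9_-]+)", draft, re.IGNORECASE)
        fields[key] = m.group(1) if m else None
    return json.dumps(fields)
```

The point of drafting first is that the model reasons freely before any grammar mask distorts its token choices.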

AI · Bullish · arXiv – CS AI · Mar 4 · 6/10

Concept Heterogeneity-aware Representation Steering

Researchers introduce CHaRS (Concept Heterogeneity-aware Representation Steering), a new method for controlling large language model behavior that uses optimal transport theory to create context-dependent steering rather than global directions. The approach models representations as Gaussian mixture models and derives input-dependent steering maps, showing improved behavioral control over existing methods.

AI · Bullish · arXiv – CS AI · Mar 4 · 6/10

xLLM Technical Report

xLLM is a new open-source Large Language Model inference framework that delivers significantly improved performance for enterprise AI deployments. The framework achieves 1.7-2.2x higher throughput compared to existing solutions like MindIE and vLLM-Ascend through novel architectural optimizations including decoupled service-engine design and intelligent scheduling.

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10

LightMem: Lightweight and Efficient Memory-Augmented Generation

Researchers introduce LightMem, a new memory system for Large Language Models that mimics human memory structure with three stages: sensory, short-term, and long-term memory. The system achieves up to 7.7% better QA accuracy while reducing token usage by up to 106x and API calls by up to 159x compared to existing methods.
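A minimal sensory/short-term/long-term pipeline of the kind the summary describes might look like this; the stage rules (fixed-size sensory buffer, keyword compression, promotion after repeated hits) are my own toy choices, not LightMem's:

```python
class ThreeStageMemory:
    """Raw inputs land in a small sensory buffer, get compressed into
    short-term notes on eviction, and are promoted to long-term storage
    only after being touched repeatedly."""
    def __init__(self, sensory_cap=3, promote_after=2):
        self.sensory, self.short_term, self.long_term = [], {}, {}
        self.sensory_cap, self.promote_after = sensory_cap, promote_after

    def observe(self, text):
        self.sensory.append(text)
        if len(self.sensory) > self.sensory_cap:
            old = self.sensory.pop(0)
            key = old.split()[0]  # toy "compression": index by first word
            note = self.short_term.setdefault(key, {"text": old, "hits": 0})
            note["hits"] += 1
            if note["hits"] >= self.promote_after:
                self.long_term[key] = self.short_term.pop(key)["text"]
```

The token savings come from the same place as in the paper: the model is shown compact notes instead of full transcripts.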

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10

Cache What Lasts: Token Retention for Memory-Bounded KV Cache in LLMs

Researchers propose TRIM-KV, a novel approach that learns token importance for memory-bounded LLM inference through lightweight retention gates, addressing the quadratic cost of self-attention and growing key-value cache issues. The method outperforms existing eviction baselines across multiple benchmarks and provides insights into LLM interpretability through learned retention scores.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

UniQL: Unified Quantization and Low-rank Compression for Adaptive Edge LLMs

Researchers introduce UniQL, a unified framework for quantizing and compressing large language models to run efficiently on mobile devices. The system achieves 4x-5.7x memory reduction and 2.7x-3.4x speed improvements while maintaining accuracy within 5% of original models.

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

RAGdb: A Zero-Dependency, Embeddable Architecture for Multimodal Retrieval-Augmented Generation on the Edge

Researchers introduce RAGdb, a revolutionary architecture that consolidates Retrieval-Augmented Generation into a single SQLite container, eliminating the need for cloud infrastructure and GPUs. The system achieves 100% entity retrieval accuracy while reducing disk footprint by 99.5% compared to traditional Docker-based RAG stacks, enabling truly portable AI applications for edge computing and privacy-sensitive environments.
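The "everything in one SQLite container" design is easy to taste with the standard library; this in-memory toy (my sketch, not RAGdb's schema or ranking) keeps corpus and retrieval in a single database handle with zero external dependencies:

```python
import sqlite3

def build_store(docs):
    """Single-container document store: one SQLite database, one table."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")
    db.executemany("INSERT INTO docs (body) VALUES (?)", [(d,) for d in docs])
    return db

def retrieve(db, query, k=2):
    """Rank documents by how many query terms they contain."""
    terms = query.lower().split()
    rows = db.execute("SELECT body FROM docs").fetchall()
    scored = sorted(rows, key=lambda r: -sum(t in r[0].lower() for t in terms))
    return [r[0] for r in scored[:k]]
```

Swapping `":memory:"` for a file path gives the portable single-file deployment the card highlights.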

AI · Bullish · Hugging Face Blog · Mar 7 · 7/10

LLM Inference on Edge: A Fun and Easy Guide to run LLMs via React Native on your Phone!

The article provides a guide for running Large Language Models (LLMs) directly on mobile devices using React Native, enabling edge inference capabilities. This development represents a significant step toward decentralized AI processing, reducing reliance on cloud-based services and improving privacy and latency for mobile AI applications.

AI · Bullish · Hugging Face Blog · Jan 18 · 7/10

How we sped up transformer inference 100x for 🤗 API customers

Hugging Face announced they achieved a 100x speed improvement for transformer inference in their API services. The optimization breakthrough significantly enhances performance for AI model deployment and reduces latency for customers using their platform.

AI · Neutral · arXiv – CS AI · 6d ago · 6/10

Reasoning Fails Where Step Flow Breaks

Researchers introduce Step-Saliency, a diagnostic tool that reveals how large reasoning models fail during multi-step reasoning tasks by identifying two critical information-flow breakdowns: shallow layers that ignore context and deep layers that lose focus on reasoning. They propose StepFlow, a test-time intervention that repairs these flows and improves model accuracy without retraining.

AI · Bullish · arXiv – CS AI · Apr 7 · 6/10

Memory Intelligence Agent

Researchers have developed Memory Intelligence Agent (MIA), a new AI framework that improves deep research agents through a Manager-Planner-Executor architecture with advanced memory systems. The framework enables continuous learning during inference and demonstrates superior performance across eleven benchmarks through enhanced cooperation between parametric and non-parametric memory systems.

AI · Bullish · arXiv – CS AI · Apr 6 · 6/10

QAPruner: Quantization-Aware Vision Token Pruning for Multimodal Large Language Models

Researchers developed QAPruner, a new framework that simultaneously optimizes vision token pruning and post-training quantization for Multimodal Large Language Models (MLLMs). The method addresses the problem where traditional token pruning can discard important activation outliers needed for quantization stability, achieving 2.24% accuracy improvement over baselines while retaining only 12.5% of visual tokens.

Page 1 of 3