
#token-pruning News & Analysis

8 articles tagged with #token-pruning. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

🧠 AI · Bullish · arXiv – CS AI · 6d ago · 7/10

SVD-Prune: Training-Free Token Pruning For Efficient Vision-Language Models

SVD-Prune introduces a training-free token pruning method for Vision-Language Models that uses Singular Value Decomposition to reduce computational overhead. The approach maintains model performance while cutting the vision token count to as few as 16-32, addressing efficiency challenges in multimodal AI systems without any retraining.
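
For intuition, here is a minimal sketch of what SVD-guided token selection could look like, assuming tokens are scored by their energy in the top singular directions; the function name and scoring rule are ours, not the paper's exact algorithm:

```python
import torch

def svd_prune(tokens: torch.Tensor, keep: int = 32) -> torch.Tensor:
    """Illustrative SVD-based token selection (assumed details). tokens: [N, d]."""
    centered = tokens - tokens.mean(dim=0, keepdim=True)
    # Low-rank basis of the token matrix via SVD -- no training involved.
    U, S, Vh = torch.linalg.svd(centered, full_matrices=False)
    r = min(keep, S.numel())
    # Score each token by its energy in the top-r singular directions.
    scores = (centered @ Vh[:r].T).pow(2).sum(dim=-1)      # [N]
    # Keep the highest-scoring tokens, preserving original order.
    idx = scores.topk(min(keep, tokens.size(0))).indices.sort().values
    return tokens[idx]
```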

🧠 AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

EvoPrune: Early-Stage Visual Token Pruning for Efficient MLLMs

Researchers developed EvoPrune, a new method that prunes visual tokens during the encoding stage of Multimodal Large Language Models (MLLMs) rather than after encoding. The technique achieves a 2x inference speedup with less than 1% performance loss on video datasets, addressing efficiency bottlenecks in models that process high-resolution images and videos.
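
A rough sketch of encoder-stage pruning in this spirit, assuming a single prune step between encoder blocks and a token-norm saliency score (both are our assumptions; the paper's criterion may differ):

```python
import torch
import torch.nn as nn

class EarlyPruneEncoder(nn.Module):
    """Hypothetical early-stage pruning wrapper, not EvoPrune's exact design."""
    def __init__(self, blocks: nn.ModuleList, keep_ratio: float = 0.5, prune_at: int = 2):
        super().__init__()
        self.blocks, self.keep_ratio, self.prune_at = blocks, keep_ratio, prune_at

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: [B, N, d]
        for i, block in enumerate(self.blocks):
            x = block(x)
            if i == self.prune_at:                         # prune once, early in the stack
                keep = max(1, int(x.size(1) * self.keep_ratio))
                scores = x.norm(dim=-1)                    # [B, N] assumed saliency
                idx = scores.topk(keep, dim=1).indices.sort(dim=1).values
                x = x.gather(1, idx.unsqueeze(-1).expand(-1, -1, x.size(-1)))
        return x
```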

🧠 AI · Bullish · arXiv – CS AI · Mar 5 · 6/10

Index-Preserving Lightweight Token Pruning for Efficient Document Understanding in Vision-Language Models

Researchers have developed a lightweight token pruning framework that reduces computational costs for vision-language models in document understanding tasks by filtering out non-informative background regions before processing. The approach uses a binary patch-level classifier and max-pooling refinement to maintain accuracy while substantially lowering compute demands.
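
A minimal sketch of the patch-filtering idea, assuming a precomputed patch-level classifier score and a 3x3 max-pool refinement; names and threshold are illustrative:

```python
import torch
import torch.nn.functional as F

def prune_background(patches, logits, grid_hw, threshold=0.5):
    """Illustrative index-preserving background filter (assumed details).

    patches: [H*W, d] patch embeddings; logits: [H*W] binary classifier scores;
    grid_hw: (H, W) patch grid. Returns kept patches and their original indices.
    """
    H, W = grid_hw
    keep = (logits.sigmoid() > threshold).float().view(1, 1, H, W)
    # Max-pool so any patch adjacent to an informative one is also kept.
    keep = F.max_pool2d(keep, kernel_size=3, stride=1, padding=1).view(-1).bool()
    idx = keep.nonzero(as_tuple=True)[0]   # original patch indices are preserved
    return patches[idx], idx
```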

🧠 AI · Bullish · arXiv – CS AI · 5d ago · 6/10

CLASP: Class-Adaptive Layer Fusion and Dual-Stage Pruning for Multimodal Large Language Models

Researchers introduce CLASP, a token reduction framework that optimizes Multimodal Large Language Models by intelligently pruning visual tokens through class-adaptive layer fusion and dual-stage pruning. The approach addresses computational inefficiency in MLLMs while maintaining performance across diverse benchmarks and architectures.
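
One plausible reading of the two ideas, sketched with assumed details (mean fusion of class-token attention, fixed two-stage keep ratios): fuse attention across layers, then prune in two passes:

```python
import torch

def clasp_style_prune(attn_per_layer, tokens, keep1=0.5, keep2=0.25):
    """Loose sketch of dual-stage pruning guided by fused class-token attention.

    attn_per_layer: list of [N] attention weights from the class token, one per
    layer; tokens: [N, d]. Fusion rule and ratios are illustrative assumptions
    (keep2 < keep1 is required).
    """
    fused = torch.stack(attn_per_layer).mean(dim=0)        # layer-fused scores, [N]
    N = tokens.size(0)
    # Stage 1: coarse prune on the fused attention scores.
    idx1 = fused.topk(max(1, int(N * keep1))).indices.sort().values
    tokens, fused = tokens[idx1], fused[idx1]
    # Stage 2: finer prune on the survivors.
    idx2 = fused.topk(max(1, int(N * keep2))).indices.sort().values
    return tokens[idx2]
```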

🧠 AI · Bullish · arXiv – CS AI · Apr 6 · 6/10

Efficient3D: A Unified Framework for Adaptive and Debiased Token Reduction in 3D MLLMs

Researchers have developed Efficient3D, a framework that accelerates 3D Multimodal Large Language Models (MLLMs) through adaptive token pruning while maintaining accuracy. The system uses a Debiased Visual Token Importance Estimator and Adaptive Token Rebalancing to reduce computational overhead without sacrificing performance, showing a +2.57% CIDEr improvement on benchmarks.
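
A loose sketch of debiased importance scoring, assuming positional bias can be approximated by the per-position batch average of attention (our simplification, not the paper's estimator):

```python
import torch

def debiased_prune(attn: torch.Tensor, tokens: torch.Tensor, keep: float = 0.25):
    """Illustrative debiased token importance. attn: [B, N]; tokens: [B, N, d]."""
    # Raw attention is often biased toward fixed positions; subtracting the
    # per-position batch average keeps tokens for sample-specific salience.
    baseline = attn.mean(dim=0, keepdim=True)              # [1, N] bias estimate
    debiased = attn - baseline
    k = max(1, int(attn.size(1) * keep))
    idx = debiased.topk(k, dim=1).indices.sort(dim=1).values   # [B, k]
    return tokens.gather(1, idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1)))
```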

🧠 AI · Bullish · arXiv – CS AI · Apr 6 · 6/10

QAPruner: Quantization-Aware Vision Token Pruning for Multimodal Large Language Models

Researchers developed QAPruner, a new framework that jointly optimizes vision token pruning and post-training quantization for Multimodal Large Language Models (MLLMs). The method addresses the problem that conventional token pruning can discard the activation outliers needed for quantization stability, achieving a 2.24% accuracy improvement over baselines while retaining only 12.5% of visual tokens.
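
A sketch of how pruning might be made quantization-aware, assuming tokens holding channel-wise activation outliers are exempted from removal; the z-score outlier test and names are illustrative, not QAPruner's exact rule:

```python
import torch

def quant_aware_prune(tokens, importance, keep=0.125, outlier_z=3.0):
    """Illustrative outlier-protecting prune. tokens: [N, d]; importance: [N]."""
    mu, sigma = tokens.mean(dim=0), tokens.std(dim=0)
    z = (tokens - mu).abs() / (sigma + 1e-6)
    has_outlier = (z > outlier_z).any(dim=-1)              # [N] outlier-bearing tokens
    k = max(1, int(tokens.size(0) * keep))
    keep_mask = torch.zeros_like(has_outlier)
    keep_mask[importance.topk(k).indices] = True
    # Protect outlier tokens so quantization ranges stay stable after pruning.
    keep_mask |= has_outlier
    return tokens[keep_mask], keep_mask.nonzero(as_tuple=True)[0]
```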

🧠 AI · Bullish · arXiv – CS AI · Mar 9 · 6/10

Energy-Driven Adaptive Visual Token Pruning for Efficient Vision-Language Models

Researchers developed E-AdaPrune, an energy-driven adaptive pruning framework that optimizes Vision-Language Models by dynamically allocating visual tokens according to each image's information density. The method shows up to a 0.6% average improvement across benchmarks, with a notable 5.1% boost on reasoning tasks, while adding only 8 ms of latency per image.
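
A toy version of an energy-driven token budget, using mean gradient magnitude as a stand-in for information density (the paper's energy measure is almost certainly different):

```python
import torch

def adaptive_budget(image: torch.Tensor, min_tokens=16, max_tokens=256) -> int:
    """Illustrative budget rule: busier images keep more visual tokens.

    image: [C, H, W] with values in [0, 1]. Gradient energy is a crude proxy;
    typical values sit near the low end, which this sketch does not calibrate.
    """
    gray = image.mean(dim=0)                               # [H, W]
    dy = (gray[1:, :] - gray[:-1, :]).abs().mean()
    dx = (gray[:, 1:] - gray[:, :-1]).abs().mean()
    energy = ((dx + dy) / 2).clamp(0, 1).item()            # assumed density proxy
    return int(min_tokens + energy * (max_tokens - min_tokens))
```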

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 7/10

What Do Visual Tokens Really Encode? Uncovering Sparsity and Redundancy in Multimodal Large Language Models

Researchers developed EmbedLens, a tool to analyze how multimodal large language models process visual information, finding that only 60% of visual tokens carry meaningful image-specific information. The study reveals significant inefficiencies in current MLLM architectures and proposes optimizations through selective token pruning and mid-layer injection.
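
A rough probe in the spirit of that finding, assuming image-specific information can be approximated by how much a token position's embedding varies across images (our metric, not EmbedLens's):

```python
import torch

def image_specific_fraction(token_embeds: torch.Tensor, var_threshold=0.1) -> float:
    """Illustrative redundancy probe. token_embeds: [num_images, N, d] embeddings
    of the same N token positions across a batch of images."""
    # A position whose embedding barely varies across images carries little
    # image-specific information and is a candidate for pruning.
    var = token_embeds.var(dim=0).mean(dim=-1)             # [N] per-position variance
    specific = var > var_threshold * var.max()
    return specific.float().mean().item()                  # fraction of informative positions
```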