y0news

#parallel-computing News & Analysis

5 articles tagged with #parallel-computing. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · Mar 17 · 7/10

Why Inference in Large Models Becomes Decomposable After Training

Researchers have discovered that large AI models develop decomposable internal structures during training, with many parameter dependencies remaining statistically unchanged from initialization. They propose a post-training method to identify and remove unsupported dependencies, enabling parallel inference without modifying model functionality.
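The pruning idea can be sketched in a few lines. This is a toy illustration, not the paper's method; the function name, the tolerance, and the element-wise comparison are all assumptions standing in for the statistical test the authors actually use:

```python
# Toy sketch: flag parameters whose values are (near-)unchanged from
# initialization and zero them out, so the surviving dependency blocks
# can be dispatched independently at inference time.
def prune_unsupported(init_weights, trained_weights, tol=1e-3):
    """Return a mask keeping only weights that moved during training."""
    return [abs(w - w0) > tol for w0, w in zip(init_weights, trained_weights)]

init = [0.10, -0.50, 0.30, 0.00]
trained = [0.10, -1.20, 0.30, 0.75]   # positions 0 and 2 never moved
mask = prune_unsupported(init, trained)
pruned = [w if keep else 0.0 for w, keep in zip(trained, mask)]
print(mask)    # [False, True, False, True]
print(pruned)  # [0.0, -1.2, 0.0, 0.75]
```

Because pruning only removes dependencies the training run never supported, the remaining computation is claimed to be functionally identical to the original model.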

AI · Bullish · arXiv – CS AI · Feb 27 · 7/10

veScale-FSDP: Flexible and High-Performance FSDP at Scale

Researchers introduce veScale-FSDP, a redesigned Fully Sharded Data Parallel system that overcomes limitations of current FSDP implementations used for training large-scale AI models. The new system features a flexible RaggedShard format and structure-aware planning, achieving 5-66% higher throughput and 16-30% lower memory usage while supporting advanced training methods and scaling to tens of thousands of GPUs.
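The contrast with classic FSDP can be illustrated with a small sketch. Note this is an assumption about what "ragged" sharding means here, not the veScale-FSDP API: where standard FSDP flattens and pads parameters into equal-size chunks per rank, a ragged layout lets shard sizes differ, so whole parameters can be packed without padding:

```python
# Illustrative only: greedily pack whole parameters onto the currently
# lightest-loaded rank, producing unequal ("ragged") shard sizes.
def ragged_shards(param_sizes, world_size):
    """Return (param indices per rank, total elements per rank)."""
    shards = [[] for _ in range(world_size)]
    loads = [0] * world_size
    for idx, size in sorted(enumerate(param_sizes), key=lambda p: -p[1]):
        rank = loads.index(min(loads))  # lightest-loaded rank
        shards[rank].append(idx)
        loads[rank] += size
    return shards, loads

shards, loads = ragged_shards([70, 50, 30, 30, 20], world_size=2)
print(shards, loads)  # [[0, 3], [1, 2, 4]] [100, 100]
```

An equal-chunk layout for the same 200 elements would split each flattened parameter at an arbitrary offset; the ragged packing keeps parameter boundaries intact, which is what makes structure-aware planning possible.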

AI · Bullish · arXiv – CS AI · Mar 17 · 6/10

NCCL EP: Towards a Unified Expert Parallel Communication API for NCCL

Researchers have developed NCCL EP, a new communication library for Mixture-of-Experts (MoE) AI model architectures that improves GPU-initiated communication performance. The library provides unified APIs supporting both low-latency inference and high-throughput training modes, built entirely on NVIDIA's NCCL Device API.

๐Ÿข Nvidia
AI · Neutral · arXiv – CS AI · Feb 27 · 4/10

From Prompts to Performance: Evaluating LLMs for Task-based Parallel Code Generation

Researchers evaluated Large Language Models' ability to generate parallel code across three parallel programming approaches (OpenMP, C++, HPX) using different input prompts. The study found LLMs show varying performance depending on problem complexity and framework, revealing both capabilities and limitations in high-performance computing applications.
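The core of such an evaluation is a correctness check of generated code against a serial reference. The harness below is a hypothetical minimal version, not the paper's benchmark; the function names and scoring rule are assumptions:

```python
# Hypothetical evaluation loop: score a model-generated routine by
# functional agreement with a trusted serial reference over test inputs.
def reference_sum(xs):
    return sum(xs)

def candidate_sum(xs):           # stand-in for LLM-generated parallel code
    total = 0
    for x in xs:
        total += x
    return total

def score(candidate, reference, inputs):
    """Fraction of test inputs on which candidate matches reference."""
    passed = sum(candidate(x) == reference(x) for x in inputs)
    return passed / len(inputs)

acc = score(candidate_sum, reference_sum, [[1, 2, 3], [], [5, -5]])
print(acc)  # 1.0
```

A full study layers performance measurement on top of this: code that is correct but gains no speedup over the serial reference still counts as a failure for HPC purposes.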

AI · Neutral · arXiv – CS AI · Mar 3 · 4/10

Rethinking Policy Diversity in Ensemble Policy Gradient in Large-Scale Reinforcement Learning

Researchers propose Coupled Policy Optimization (CPO), a new reinforcement learning method that regulates policy diversity through KL constraints to improve exploration efficiency in large-scale parallel environments. The method outperforms existing baselines like PPO and SAPG across multiple tasks, demonstrating that controlled diverse exploration is key to stable and sample-efficient learning.
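
The idea of regulating diversity through a KL term can be sketched for discrete policies. This is a toy illustration of the concept only; the penalty form, target band, and coefficient are assumptions, not the CPO objective:

```python
import math

# Toy sketch: couple ensemble members with a KL-based penalty so their
# pairwise divergence stays near a target band -- diverse enough to
# explore, close enough to remain stable.
def kl(p, q):
    """KL divergence between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def diversity_penalty(policies, target_kl=0.1, coef=1.0):
    """Penalize policy pairs whose KL strays from the target."""
    pen = 0.0
    for i in range(len(policies)):
        for j in range(i + 1, len(policies)):
            pen += coef * abs(kl(policies[i], policies[j]) - target_kl)
    return pen

pi_a = [0.5, 0.5]
pi_b = [0.7, 0.3]
pen = diversity_penalty([pi_a, pi_b])
```

Subtracting such a penalty from the ensemble's policy-gradient objective pushes members apart when they collapse onto one behavior and pulls them together when they drift too far, which is the trade-off the paper argues drives sample-efficient learning at large parallel scale.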