AI · Neutral · arXiv – CS AI · 8h ago · 6/10
🧠
CommFuse: Hiding Tail Latency via Communication Decomposition and Fusion for Distributed LLM Training
Researchers introduce CommFuse, a communication-computation overlap technique that hides tail latency in distributed LLM training by decomposing collective operations into peer-to-peer communications and fusing them with computation. The method improves efficiency for both tensor parallelism and data parallelism across GPU/TPU/NPU clusters, achieving higher throughput and model FLOPS utilization (MFU) than existing overlap solutions.
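To make the core idea concrete, here is a minimal sketch of the general decomposition-and-overlap pattern: an all-gather is broken into ring-style point-to-point sends/receives, and a partial matmul runs on each already-held chunk while the next transfer is in flight. This illustrates the technique the summary describes, not CommFuse's actual scheduler; the function name `ring_allgather_overlap` and the matmul workload are hypothetical choices for the example.

```python
# Sketch: decompose an all-gather into ring P2P steps and overlap each
# step with per-chunk computation (assumes an initialized process group,
# e.g. dist.init_process_group("nccl"), and one tensor shard per rank).
import torch
import torch.distributed as dist

def ring_allgather_overlap(shard: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    """All-gather `shard` across ranks via ring P2P transfers, computing a
    partial matmul on the chunk in hand while the next transfer proceeds."""
    world = dist.get_world_size()
    rank = dist.get_rank()
    send_to = (rank + 1) % world
    recv_from = (rank - 1) % world

    chunk = shard.clone()              # working buffer; don't clobber caller's shard
    recv_buf = torch.empty_like(shard)
    partials = [None] * world
    owner = rank                       # rank whose shard `chunk` currently holds

    for _ in range(world - 1):
        # Launch non-blocking P2P ops for the next chunk.
        reqs = dist.batch_isend_irecv([
            dist.P2POp(dist.isend, chunk, send_to),
            dist.P2POp(dist.irecv, recv_buf, recv_from),
        ])
        # Overlap: compute on the chunk we already hold while data moves.
        partials[owner] = chunk @ weight
        for req in reqs:
            req.wait()
        chunk, recv_buf = recv_buf, chunk   # swap buffers for the next step
        owner = (owner - 1) % world         # received chunk came from the previous rank

    partials[owner] = chunk @ weight        # final chunk, no transfer pending
    return torch.cat(partials, dim=0)
```

Because each P2P step moves only one shard, a slow link delays a single small transfer rather than stalling the whole collective, which is the sense in which decomposition helps hide tail latency.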