AIBullish · arXiv CS AI · 7h ago · 6/10
🧠
Mixed Precision Training of Neural ODEs
Researchers present a mixed precision training framework for neural ODEs that reduces memory usage by ~50% and achieves up to 2x speedup while maintaining accuracy. The approach uses low-precision computations for velocity evaluations and intermediate states while preserving high precision for weights and gradient accumulation, addressing computational and memory bottlenecks in continuous-time neural network architectures.
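For intuition, here is a minimal sketch of the general recipe described above, using PyTorch automatic mixed precision: the ODE vector field ("velocity") and intermediate states are evaluated in float16, while the parameters and gradient accumulation stay in float32. The fixed-step Euler integrator, the `ODEFunc` module, and all hyperparameters are illustrative assumptions, not the paper's actual framework.

```python
# Illustrative sketch only: mixed precision training of a toy neural ODE.
# ODEFunc, the Euler solver, and hyperparameters are assumptions for
# illustration; they are not taken from the paper.
import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    """Vector field f(t, h) parameterized by a small MLP (hypothetical)."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, t, h):
        return self.net(h)

def euler_integrate(func, h0, t0=0.0, t1=1.0, steps=10):
    """Fixed-step Euler solver; velocity evaluations and intermediate
    states inherit the low precision active under autocast."""
    h, dt = h0, (t1 - t0) / steps
    for i in range(steps):
        h = h + dt * func(t0 + i * dt, h)  # low-precision velocity evaluation
    return h

device = "cuda"
func = ODEFunc(dim=32).to(device)          # weights kept in float32
head = nn.Linear(32, 10).to(device)
opt = torch.optim.Adam(list(func.parameters()) + list(head.parameters()), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()       # loss scaling; grads accumulate in float32

x = torch.randn(64, 32, device=device)
y = torch.randint(0, 10, (64,), device=device)

opt.zero_grad()
with torch.cuda.amp.autocast(dtype=torch.float16):  # low-precision forward pass
    hT = euler_integrate(func, x)
    loss = nn.functional.cross_entropy(head(hT), y)
scaler.scale(loss).backward()              # scaled backward to avoid fp16 underflow
scaler.step(opt)                           # optimizer step on float32 master weights
scaler.update()
```

The memory and speed gains come from storing the many per-step intermediate states in half precision, while keeping weights and gradient accumulation in full precision preserves training stability, which is the split the summary describes.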