AI · Neutral · arXiv – CS AI · 4d ago
๐Ÿง 

On the Rate of Convergence of GD in Non-linear Neural Networks: An Adversarial Robustness Perspective

Researchers prove that gradient descent in neural networks converges to optimal robustness margins at an extremely slow rate of Θ(1/ln(t)), even in simplified two-neuron settings. This establishes the first explicit lower bound on convergence rates for robustness margins in non-linear models, revealing fundamental limitations in neural network training efficiency.
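To get a feel for how slow a Θ(1/ln(t)) rate is, the sketch below plugs numbers into a hypothetical gap function `c/ln(t)` (the constant `c` and the function name are illustrative assumptions, not from the paper). Because ln(t²) = 2·ln(t), halving the distance to the optimal margin requires *squaring* the number of gradient-descent steps:

```python
import math

def margin_gap(t: float, c: float = 1.0) -> float:
    """Hypothetical gap to the optimal robustness margin after t steps,
    assuming the Theta(1/ln(t)) rate with an unknown constant c."""
    return c / math.log(t)

# Halving the gap requires squaring t:
# gap(t^2) = c / ln(t^2) = c / (2 ln t) = gap(t) / 2
for t in (1e2, 1e4, 1e8):
    print(f"t = {t:.0e}: gap ~ {margin_gap(t):.4f}")
```

Going from 10² to 10⁸ iterations only cuts the gap by a factor of four, which is the sense in which the bound exposes a fundamental training-efficiency limit.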