AIBullish · arXiv – CS AI · 8h ago · 6/10
🧠
Revisiting Adam for Streaming Reinforcement Learning
Researchers challenge the conventional wisdom that deep reinforcement learning requires replay buffers by demonstrating that classical update methods like C51 perform competitively in streaming online settings when paired with proper optimization techniques. The study identifies two properties as essential for stable learning: bounded objective derivatives and variance-adjusted weight updates. Building on these, the authors propose a new algorithm, Adaptive Q(λ), that substantially outperforms existing streaming approaches.
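The two properties the summary highlights can be illustrated with a minimal sketch. This is a hypothetical update rule, not the paper's actual algorithm: it bounds the objective derivative via gradient-norm clipping and scales each step by a running second-moment estimate (the variance-adjusted mechanism Adam uses). The function name `streaming_update` and all hyperparameter values are illustrative assumptions.

```python
import numpy as np

def streaming_update(w, grad, v, lr=0.01, beta=0.999, eps=1e-8, clip=1.0):
    """One hypothetical streaming weight update combining:
    (1) bounded objective derivatives, via gradient-norm clipping, and
    (2) variance-adjusted steps, via an Adam-style second-moment estimate."""
    # Property 1: bound the derivative so a single sample cannot blow up the step.
    norm = np.linalg.norm(grad)
    if norm > clip:
        grad = grad * (clip / norm)
    # Property 2: track a running second moment of the (clipped) gradient.
    v = beta * v + (1 - beta) * grad ** 2
    # High-variance coordinates take smaller steps; low-variance ones, larger.
    w = w - lr * grad / (np.sqrt(v) + eps)
    return w, v

# Toy usage: drive the weights of a quadratic loss ||w||^2 toward zero,
# one gradient sample at a time, as a stand-in for a streaming setting.
w = np.array([2.0, -3.0])
v = np.zeros_like(w)
for _ in range(5000):
    w, v = streaming_update(w, 2.0 * w, v)   # grad of ||w||^2 is 2w
```

Because each sample is processed once and discarded, the update needs no replay buffer; the running second-moment vector `v` is the only extra state carried between samples.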