Gradient Boosting within a Single Attention Layer
Researchers introduce gradient-boosted attention, a new method that improves transformer performance by applying gradient boosting principles within a single attention layer. The technique uses a second attention pass to correct the residual errors of the first pass, and achieves lower perplexity on WikiText-103 than standard attention (67.9 versus 72.2).
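To make the idea concrete, here is a minimal PyTorch sketch of the boosting-style structure described above: a first attention pass produces an initial output, and a second pass is applied to the residual so its output acts as an additive correction. The module name `BoostedAttention`, the residual-as-target formulation, and the shrinkage factor `alpha` are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn


class BoostedAttention(nn.Module):
    """Hypothetical two-stage attention: the second pass fits the residual
    of the first, analogous to a boosting stage fitting previous errors."""

    def __init__(self, d_model: int, n_heads: int, alpha: float = 0.5):
        super().__init__()
        self.attn1 = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.attn2 = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        self.alpha = alpha  # shrinkage factor, like boosting's learning rate (assumed)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # First pass: standard self-attention over the input.
        out1, _ = self.attn1(x, x, x)

        # Residual the first pass failed to capture (the boosting-style target).
        residual = self.norm(x - out1)

        # Second pass: attend over the residual and add a shrunken correction.
        out2, _ = self.attn2(residual, residual, residual)
        return out1 + self.alpha * out2


if __name__ == "__main__":
    layer = BoostedAttention(d_model=64, n_heads=4)
    tokens = torch.randn(2, 16, 64)  # (batch, sequence, embedding)
    print(layer(tokens).shape)       # torch.Size([2, 16, 64])
```

In this sketch the correction is scaled by `alpha` before being added, mirroring the shrinkage used in gradient boosting; how the actual method combines the two passes is not specified in the summary above.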