AINeutral · arXiv · CS AI · 3d ago · 4/10
Correction of Transformer-Based Models with Smoothing Pseudo-Projector
Researchers have developed a pseudo-projector technique that can be integrated into existing transformer-based language models to improve their robustness and training dynamics without changing the core architecture. The method, inspired by multigrid paradigms, acts as a hidden-representation corrector: it reduces sensitivity to noise by suppressing directions associated with label-irrelevant input content.
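The summary does not give the paper's exact construction, but the core idea of suppressing label-irrelevant directions can be illustrated with a standard orthogonal projector. The sketch below assumes the label-irrelevant directions have already been estimated (how they are found is not specified here); it builds `P = I - V Vᵀ` and applies it to a hidden vector, removing its components along those directions.

```python
import numpy as np

def suppressing_projector(noise_dirs):
    """Build P = I - V V^T, where V is an orthonormal basis for the
    (assumed pre-estimated) label-irrelevant directions.

    noise_dirs: array of shape (k, d), k direction vectors in d dims.
    Returns a (d, d) projector onto the orthogonal complement.
    """
    # Orthonormalize the direction vectors (columns of V span them).
    V, _ = np.linalg.qr(noise_dirs.T)
    d = noise_dirs.shape[1]
    return np.eye(d) - V @ V.T

# Toy example: correct one hidden-state vector.
rng = np.random.default_rng(0)
d, k = 8, 2
noise_dirs = rng.normal(size=(k, d))  # hypothetical estimated directions
P = suppressing_projector(noise_dirs)

h = rng.normal(size=d)   # a hidden representation
h_corr = P @ h           # corrected: noise-direction components removed
```

After correction, `h_corr` has (numerically) zero component along every row of `noise_dirs`, and `P` is idempotent (`P @ P == P`), as any orthogonal projector must be. A real in-model corrector would apply such an operator to hidden states between layers; this sketch only shows the linear-algebra step.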