AIBullish · arXiv – CS AI · 5h ago
Mitigating Label Shift in Tabular In-Context Learning via Test-Time Posterior Adjustment
Researchers introduce DistPFN, a test-time adjustment method that addresses TabPFN's vulnerability to label shift, a common problem where the class distribution at test time differs from the one seen during training, causing models to over-predict classes that were frequent in the training data. The method rescales predicted probabilities without requiring architectural changes or retraining, reporting significant improvements across 250+ datasets while maintaining performance in standard settings where no shift is present.
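The summary does not spell out DistPFN's exact procedure, but the classic prior-correction recipe it builds on can be sketched as follows: reweight each predicted class probability by the ratio of the estimated test-time prior to the training prior, then renormalize. The function name `adjust_posterior` and the toy priors below are illustrative assumptions, not the paper's API.

```python
import numpy as np

def adjust_posterior(probs, train_prior, test_prior, eps=1e-12):
    """Rescale predicted probabilities for label shift.

    probs: (n_samples, n_classes) array of predicted class probabilities.
    train_prior / test_prior: class frequencies under training and test
    distributions. eps guards against division by zero.
    """
    # Per-class importance weights q(y) / p(y)
    w = np.asarray(test_prior, dtype=float) / (np.asarray(train_prior, dtype=float) + eps)
    adjusted = probs * w  # elementwise rescaling, one weight per class
    # Renormalize so each row is a valid probability distribution again
    return adjusted / adjusted.sum(axis=1, keepdims=True)

# Toy example: a model calibrated on balanced classes, deployed where
# class 1 is four times as common as class 0 (label shift).
probs = np.array([[0.6, 0.4],
                  [0.5, 0.5]])
adjusted = adjust_posterior(probs, train_prior=[0.5, 0.5], test_prior=[0.2, 0.8])
```

No retraining touches the model itself; only its output probabilities are rescaled, which matches the post's claim that the fix needs no architectural changes.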