arXiv – CS AI · 4h ago
GF-Score: Certified Class-Conditional Robustness Evaluation with Fairness Guarantees
Researchers introduce GF-Score, a framework that evaluates neural-network robustness class by class while quantifying fairness disparities; its self-calibration step removes the need for expensive adversarial attacks. Tests across 22 models reveal consistent vulnerability patterns and show that, paradoxically, more robust models exhibit larger class-level fairness disparities.
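The summary does not specify GF-Score's formula, but the core idea of class-conditional evaluation with a fairness-disparity measure can be sketched as follows. The per-class robustness values and the max-min gap metric here are illustrative assumptions, not the paper's actual method:

```python
# Hypothetical per-class robust accuracies for a 5-class model.
# GF-Score's self-calibration procedure is not described in the summary;
# these values are illustrative stand-ins.
robust_acc = {0: 0.82, 1: 0.64, 2: 0.91, 3: 0.55, 4: 0.78}

# Class-conditional view: robustness is reported per class rather than
# as a single aggregate number.
mean_robustness = sum(robust_acc.values()) / len(robust_acc)

# One simple fairness-disparity measure: the gap between the most
# and least robust classes (the paper may use a different statistic).
disparity_gap = max(robust_acc.values()) - min(robust_acc.values())

print(f"mean robustness: {mean_robustness:.2f}")
print(f"disparity gap:   {disparity_gap:.2f}")
```

A model with a high mean but a large gap would score well on aggregate robustness yet poorly on class-level fairness, which is the tension the paper's finding highlights.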