Adaptive auditing of AI systems with anytime-valid guarantees
Researchers introduce an adaptive auditing framework for AI systems that maintains statistical rigor while evaluating generative AI failure modes from limited observations. Built on Safe Anytime-Valid Inference (SAVI), the method uses sequential hypothesis testing to let auditors draw reliable conclusions from as few as 20 test cases, addressing a critical bottleneck in AI safety evaluation.
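The core idea behind anytime-valid sequential testing can be illustrated with a betting-style e-process. The following is a minimal sketch, not the paper's actual method: it assumes the null hypothesis "failure rate ≤ p0", binary audit outcomes, and a fixed betting fraction `lam` (all illustrative choices). Under the null, the wealth process is a nonnegative supermartingale, so by Ville's inequality the probability it ever crosses 1/alpha is at most alpha, which is what lets the auditor stop at any time without inflating the error rate.

```python
# Illustrative anytime-valid audit via a betting e-process.
# Assumptions (not from the article): H0 is "failure rate <= p0";
# each audited query yields x in {0, 1}, with 1 meaning a failure.

def audit(observations, p0=0.05, alpha=0.05, lam=5.0):
    """Return (rejected, step) for the sequential test of H0: rate <= p0.

    lam must lie in [0, 1/p0] so each wealth factor 1 + lam*(x - p0)
    stays nonnegative; lam=5.0 is valid here since 1/p0 = 20.
    """
    wealth = 1.0
    for t, x in enumerate(observations, start=1):
        # Under H0, E[1 + lam*(x - p0)] <= 1, so wealth is a supermartingale.
        wealth *= 1.0 + lam * (x - p0)
        if wealth >= 1.0 / alpha:   # Ville's inequality: crossing prob <= alpha
            return True, t          # reject H0 and stop early
    return False, len(observations)
```

With these parameters, a short run of observed failures multiplies wealth by 5.75 per failure, so the 1/alpha = 20 threshold is crossed after just two failures, while a clean run of successes shrinks wealth and never triggers a rejection.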