InterveneBench: Benchmarking LLMs for Intervention Reasoning and Causal Study Design in Real Social Systems
Researchers introduced InterveneBench, a benchmark of 744 peer-reviewed studies designed to evaluate large language models' ability to reason about policy interventions and causal inference in social science contexts. Current state-of-the-art LLMs struggle with this type of reasoning, a finding that prompted the development of STRIDES, a multi-agent framework that substantially improves performance on these tasks.