Can We Formally Verify Neural PDE Surrogates? SMT Compilation of Small Fourier Neural Operators
Researchers demonstrate that Fourier Neural Operators (FNOs) used for PDE simulation can be formally verified using SMT solvers by exploiting their piecewise-linear structure once weights are fixed. While exact encoding provides sound proofs and counterexamples on small models, scalability remains limited, revealing a fundamental tradeoff between formal verification rigor and practical applicability for production neural operators.
This research addresses a critical gap in neural network reliability for scientific computing. FNOs accelerate partial differential equation simulations but typically lack formal guarantees that outputs preserve physical constraints such as positivity or mass conservation. By recognizing that an FNO becomes a piecewise-linear map once its trained weights are fixed, the researchers compiled these operators into linear real arithmetic for the Z3 SMT solver, enabling exact formal verification.
The work represents an important methodological advance for neural surrogate models in scientific domains. Traditional machine learning focuses on classification accuracy, but physics-informed neural networks must maintain invariant properties reflecting conservation laws and physical bounds. The ability to generate sound counterexamples, concrete inputs that demonstrably violate an expected property, is as valuable as proving compliance, because it reveals exactly when a surrogate fails in critical ways.
The results expose the core challenge facing neural verification at scale. On tiny 1D models with 85-117 parameters, exact encoding proves positivity for linear variants but times out on ReLU-based models. The frozen-activation encoding scales better but abandons soundness guarantees, providing only heuristic checking. This soundness-scalability tradeoff directly determines whether neural operators can be trusted in safety-critical applications like computational fluid dynamics or climate modeling.
For practitioners deploying FNO surrogates, this work provides both hope and caution. Formal verification frameworks exist and produce valuable guarantees, but current tools struggle with production-scale models. The research points toward hybrid approaches: exact verification for critical components, approximate checking for others, and better solver techniques specifically designed for neural operators rather than general nonlinear systems.
- →FNOs become piecewise-linear systems once their weights are fixed, enabling exact SMT-based verification that is infeasible for general nonlinear networks.
- →Exact encoding produced sound proofs and counterexamples on small models but timed out on ReLU variants, revealing scalability limits.
- →Approximate frozen encoding verified models 8-10x faster at grid size 64 but sacrificed formal guarantees for the original FNO.
- →SMT solvers found weaker counterexamples than gradient-based methods on 70% of mass-violation queries, suggesting algorithm-specific limitations.
- →Current formal verification tools are viable for tiny surrogates but require fundamental advances for production-scale neural operators in scientific computing.
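The gradient-based baseline mentioned in the takeaways can be sketched as projected gradient ascent on a violation objective. The surrogate, the mass-conservation defect |Σy − Σx|, and all step sizes below are illustrative assumptions, not the paper's setup; the point is only to show how such a search produces stronger (larger-magnitude) counterexamples than a solver's first satisfying assignment.

```python
# Hedged sketch: gradient-based search for mass-conservation violations
# of an illustrative frozen affine surrogate y = Wx + b on [0,1]^4.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.5, size=(4, 4))   # stand-in for frozen weights
b = rng.normal(0.0, 0.1, size=4)

def violation(x):
    """Mass-conservation defect |sum(y) - sum(x)| for y = Wx + b."""
    return abs((W @ x + b).sum() - x.sum())

x = rng.uniform(0.0, 1.0, size=4)
for _ in range(200):
    # d/dx [sum(Wx + b) - sum(x)] = W^T 1 - 1; chain rule adds the sign.
    g = (W.sum(axis=0) - 1.0) * np.sign((W @ x + b).sum() - x.sum())
    x = np.clip(x + 0.05 * g, 0.0, 1.0)  # project back into the box

print(f"violation found: {violation(x):.3f}")
```

Unlike the SMT route, this search offers no proof when it fails to find a violation, which is precisely the complementarity the hybrid approaches above would exploit.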