The Weight of a Bit: EMFI Sensitivity Analysis of Embedded Deep Learning Models
Researchers demonstrate that embedded neural network models using integer representations (8-bit and 4-bit) are significantly more resilient to electromagnetic fault injection (EMFI) attacks than models using floating-point formats (32-bit and 16-bit). The study reveals that floating-point models suffer near-complete accuracy degradation from a single fault, while 8-bit integer representations maintain robust performance, with implications for securing AI systems deployed on edge devices.
This research addresses a critical vulnerability in edge AI deployment by systematically evaluating how numerical representation choices affect resistance to electromagnetic fault injection attacks. The work bridges an important gap in embedded security literature by providing the first comprehensive comparison across multiple representation formats, moving beyond previous isolated resilience studies.
The findings carry significant implications for hardware security in AI applications. As neural networks proliferate on embedded devices—from IoT sensors to autonomous systems—their susceptibility to physical attacks becomes increasingly relevant. The stark performance difference between floating-point and integer representations stems from how each format encodes magnitude: a single corrupted exponent bit in a floating-point weight can change its value by orders of magnitude, or produce a NaN/Inf that propagates through every downstream layer, whereas a flipped bit in an int8 weight shifts it by at most 128 quantization steps, bounding the damage a single fault can do.
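The asymmetry described above is easy to reproduce in software. The sketch below (helper names are my own, for illustration) flips one bit in a float32 weight and in an int8 weight and compares the resulting error:

```python
import struct

def flip_float32_bit(w: float, bit_index: int) -> float:
    """Flip a single bit of a float32 (bit 0 = LSB of the little-endian encoding)."""
    b = bytearray(struct.pack("<f", w))
    b[bit_index // 8] ^= 1 << (bit_index % 8)
    return struct.unpack("<f", bytes(b))[0]

def flip_int8_bit(q: int, bit: int) -> int:
    """Flip one bit of an 8-bit two's-complement value."""
    raw = (q & 0xFF) ^ (1 << bit)
    return raw - 256 if raw > 127 else raw

# Float32: flipping the most significant exponent bit (bit 30) of 0.5
# turns it into 2**127, an error of ~1.7e38.
corrupted = flip_float32_bit(0.5, 30)
print(0.5, "->", corrupted)

# Int8: the worst single-bit flip is the sign bit, an error of exactly 128
# quantization steps; every other flip is smaller still.
print(64, "->", flip_int8_bit(64, 7))
```

Because the int8 error is bounded while the float32 error is not, a single EMFI-induced flip that barely perturbs a quantized weight can make a floating-point weight dominate (or poison) the entire inference.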
For industry practitioners, these results suggest quantization serves dual purposes beyond model compression and efficiency gains. The 70% Top-1 accuracy retention in 8-bit VGG-11 models after EMFI attacks demonstrates that aggressive quantization paradoxically enhances security. This creates an attractive optimization landscape where developers can reduce model size, improve inference speed, and simultaneously strengthen physical attack resistance.
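For readers unfamiliar with the quantization step, a minimal per-tensor symmetric int8 scheme looks like the sketch below (illustrative only; production toolchains typically use per-channel scales and zero points, and the function names here are assumptions):

```python
def quantize_int8(weights):
    """Map float weights to int8 codes with a single shared scale."""
    scale = max(abs(w) for w in weights) / 127.0
    codes = [max(-128, min(127, round(w / scale))) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Reconstruct approximate float weights from int8 codes."""
    return [c * scale for c in codes]

w = [0.9, -0.3, 0.05, -0.62]
q, s = quantize_int8(w)
print(q)                 # int8 codes in [-128, 127]
print(dequantize(q, s))  # reconstruction, accurate to within one scale step
```

Each weight is stored as a small bounded integer plus one shared scale, which is exactly why a corrupted byte can only move a weight a limited distance in value space.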
The research indicates that organizations deploying sensitive AI systems should prioritize integer-based models, particularly in hostile environments where attackers can gain the physical proximity that electromagnetic attacks require. Future work should explore whether this resilience extends across different architectures and attack vectors, and whether intentional design modifications could further harden models against fault injection while maintaining accuracy thresholds.
- Integer quantization (8-bit, 4-bit) provides superior electromagnetic fault injection resistance compared to floating-point representations
- Floating-point models lose nearly all accuracy from a single EMFI attack, while 8-bit models retain 70% Top-1 accuracy on large networks such as VGG-11
- The relationship between model compression and physical attack resilience suggests quantization offers multi-dimensional security benefits
- Fault pattern analysis shows corrupted bytes cluster at 0xFE/0xFF values, with effects that differ across number representations
- Edge AI security requires quantization as a co-design consideration alongside inference optimization
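The 0xFE/0xFF byte pattern mentioned above can be simulated in software. The sketch below assumes a simplified fault model, forcing one byte of a stored weight to 0xFF (the real EMFI fault distribution is hardware-dependent, and the function names are my own):

```python
import math
import struct

def fault_float32(w: float, byte_idx: int) -> float:
    """Force one byte of a float32's little-endian encoding to 0xFF."""
    b = bytearray(struct.pack("<f", w))
    b[byte_idx] = 0xFF
    return struct.unpack("<f", bytes(b))[0]

def fault_int8(q: int) -> int:
    """An int8 byte forced to 0xFF reads back as -1 in two's complement."""
    return -1

# Corrupting the top byte (sign + high exponent bits) of a float32 weight
# yields either a NaN or a value near +/-2**127, both catastrophic.
for w in [0.1, -0.25, 0.5]:
    print(w, "->", fault_float32(w, 3))

# The same fault on an int8 weight lands on -1, a small in-range value,
# so the worst-case error is bounded by 128 quantization steps.
print(100, "->", fault_int8(100))
```

This bounded-versus-unbounded error behavior under the same byte-level fault is one plausible mechanism behind the accuracy gap the study reports.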