AIBearish · arXiv – CS AI · 7h ago · 7/10
🧠
On the (In-)Security of the Shuffling Defense in Transformer Secure Inference
Researchers demonstrate that the shuffling defense used to protect Transformer model weights during secure inference can be broken by an alignment attack, allowing adversaries to recover the weights at minimal cost. The attack collects multiple shuffled activations and aligns them to recover their common permutation, undermining a key security assumption in privacy-preserving machine learning.
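To illustrate the core idea, here is a minimal sketch (not the paper's actual attack) of how a permutation shared across several shuffled activation vectors can be recovered by column alignment: with enough samples, each hidden dimension's value profile acts as a fingerprint, so matching columns of the shuffled observations against predicted reference activations reveals the permutation. All names and the matching criterion below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_samples = 8, 5

# Hidden permutation applied by the shuffling defense (illustrative).
perm = rng.permutation(dim)

# Reference activations the adversary can predict for chosen inputs
# (n_samples x dim), and the shuffled activations observed during
# secure inference: same values, columns permuted by `perm`.
A = rng.normal(size=(n_samples, dim))
B = A[:, perm]

# Alignment step: each column of B is matched to the closest column
# of A; across several samples this match is unambiguous, so the
# common permutation is recovered exactly.
recovered = np.array([
    int(np.argmin(np.linalg.norm(A - B[:, [j]], axis=0)))
    for j in range(dim)
])

assert np.array_equal(recovered, perm)
```

A single sample could already suffice when activation values are distinct, but multiple samples make the column fingerprints robust, which is why exploiting several shuffled activations is the key lever described in the paper.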