🧠 AI · 🔴 Bearish · Importance 7/10 · Actionable
Sirens' Whisper: Inaudible Near-Ultrasonic Jailbreaks of Speech-Driven LLMs
arXiv – CS AI | Zijian Ling, Pingyi Hu, Xiuyong Gao, Xiaojing Ma, Man Zhou, Jun Feng, Songfeng Lu, Dongmei Zhang, Bin Benjamin Zhu
🤖 AI Summary
Researchers present SWhisper, a framework that embeds covert jailbreak prompts in near-ultrasonic audio to attack speech-driven AI systems. The injected commands are inaudible to humans yet bypass AI safety measures with up to 94% success on commercial models.
Key Takeaways
- SWhisper enables inaudible prompt-injection attacks against speech-driven LLMs using near-ultrasound frequencies.
- The framework achieves up to a 94% success rate in bypassing commercial AI safety systems.
- Attacks use commodity hardware and work under realistic black-box conditions.
- In controlled studies, human listeners could not distinguish the malicious audio from background noise.
- The vulnerability affects both commercial and open-source speech-driven AI models.
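This summary doesn't spell out SWhisper's exact signal chain. Prior inaudible-voice attacks (e.g., DolphinAttack-style work) typically amplitude-modulate a spoken command onto a near-ultrasonic carrier so that nonlinearity in the target's microphone demodulates it back into the audible band. A minimal sketch of that modulation step, with an illustrative carrier frequency and modulation depth that are assumptions, not values from the paper:

```python
import numpy as np

SR = 48_000          # sample rate high enough to represent near-ultrasound
CARRIER_HZ = 19_000  # illustrative near-ultrasonic carrier, above most adults' hearing

def modulate_near_ultrasonic(speech: np.ndarray, sr: int = SR,
                             carrier_hz: float = CARRIER_HZ,
                             depth: float = 0.8) -> np.ndarray:
    """Amplitude-modulate a baseband speech signal onto a near-ultrasonic carrier.
    Microphone nonlinearity can demodulate such a signal into the audible band,
    while human listeners perceive little or nothing."""
    speech = speech / (np.max(np.abs(speech)) + 1e-12)  # normalize to [-1, 1]
    t = np.arange(len(speech)) / sr
    carrier = np.cos(2 * np.pi * carrier_hz * t)
    # Classic AM: DC offset plus scaled baseband, mixed up to the carrier band.
    return 0.5 * (1.0 + depth * speech) * carrier

# Stand-in "command": a 1-second 300 Hz tone in place of real speech.
t = np.arange(SR) / SR
command = np.sin(2 * np.pi * 300 * t)
attack_audio = modulate_near_ultrasonic(command)
```

After modulation, all of the signal's energy sits around 19 kHz (carrier plus sidebands), which is why it registers as silence or faint noise to listeners while still reaching the device's microphone.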
#ai-security #jailbreak #speech-recognition #vulnerability #ultrasonic #prompt-injection #llm-safety #acoustic-attacks