AIBullisharXiv – CS AI · 5h ago
🧠
Safety Guardrails for LLM-Enabled Robots
Researchers developed RoboGuard, a two-stage safety architecture that protects LLM-enabled robots from harmful behaviors caused by LLM hallucinations and adversarial jailbreak attacks. In testing, the system reduced the rate at which unsafe plans were executed from over 92% to below 3%, while preserving performance on safe tasks.
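To make the two-stage idea concrete, here is a minimal, hypothetical sketch: stage one grounds generic safety rules in the robot's observed context (in the actual system, a trusted LLM emits temporal-logic specifications for this), and stage two screens each proposed plan against the grounded rules before execution. All names, the rule format, and the string-based plan representation are illustrative assumptions, not the paper's API.

```python
# Hypothetical two-stage guardrail sketch (not RoboGuard's real interface).
GENERIC_RULES = [
    # (rule id, action predicate that must never appear in an executed plan)
    ("no_harm_humans", "strike"),
    ("no_restricted_zones", "enter_restricted"),
]

def ground_rules(scene_objects):
    """Stage 1: specialize generic safety rules to the observed scene.

    Stand-in for the trusted-LLM step that contextualizes rules into
    formal specs; here we simply attach the scene's objects to each rule.
    """
    return [(rule_id, forbidden, set(scene_objects))
            for rule_id, forbidden in GENERIC_RULES]

def check_plan(plan, grounded):
    """Stage 2: flag any plan step whose action matches a forbidden predicate."""
    violations = []
    for step in plan:                      # steps like "move_to(shelf)"
        action = step.split("(")[0]
        for rule_id, forbidden, _scope in grounded:
            if action == forbidden:
                violations.append((step, rule_id))
    return violations

grounded = ground_rules(["person", "shelf", "lab_door"])
print(check_plan(["move_to(shelf)", "pick(box)"], grounded))        # []
print(check_plan(["move_to(person)", "strike(person)"], grounded))  # flags "strike(person)"
```

A real implementation would verify plans against temporal-logic specs rather than string-matching action names, but the control flow (ground, then check before executing) is the same.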