OpenAI News · Sep 5

GPT-5 bio bug bounty call

OpenAI has launched a Bio Bug Bounty program inviting researchers to probe GPT-5's safety protocols with universal jailbreak prompts. The program offers rewards of up to $25,000 for successfully identifying vulnerabilities in the model's biological safety measures.