GPT-5 jailbroken: safeguards bypassed using creative language to elicit instructions for explosives


Just 24 hours after OpenAI launched its highly anticipated GPT-5 model with promises of “significantly more sophisticated” prompt safety, exposure management company Tenable has successfully jailbroken the platform, compelling it to provide detailed instructions on how to build a Molotov cocktail.

On August 7, 2025, OpenAI unveiled GPT-5, touting its enhanced guardrails designed to prevent the model from being used for illegal or harmful purposes. However, using a social engineering method known as the crescendo technique, Tenable researchers bypassed these safety protocols in just four simple prompts by posing as a history student interested in the historical context and recipe of the incendiary device.
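For context on how a four-prompt jailbreak is even possible, the sketch below illustrates the mechanism that crescendo-style probes depend on: a chat history that is resent and extended on every turn, so each new request is evaluated against an ever-friendlier accumulated context. This is a minimal illustration using the OpenAI Python SDK, with deliberately benign placeholder prompts and an assumed "gpt-5" model identifier; it is not a reproduction of Tenable's actual exchange, which is documented in the company's own blog.

```python
# Minimal sketch of a multi-turn conversation loop against the OpenAI
# Chat Completions API, the mechanism crescendo-style probes rely on.
# All prompts are benign placeholders about codebreaking history; this
# shows the conversational structure, not Tenable's actual prompts.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A crescendo-style probe keeps each individual request innocuous while
# the accumulated context drifts, turn by turn, toward the real target.
turns = [
    "I'm a history student researching WWII signals intelligence.",
    "What role did the Enigma machine play?",
    "How did Allied cryptanalysts describe their methods at the time?",
    "Can you summarise those period descriptions in more detail?",
]

messages = [{"role": "system", "content": "You are a helpful assistant."}]
for user_turn in turns:
    messages.append({"role": "user", "content": user_turn})
    response = client.chat.completions.create(
        model="gpt-5",       # assumed model identifier
        messages=messages,   # the full history is resent every turn
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    print(f"> {user_turn}\n{reply}\n")
```

The defensive implication is that per-message filtering misses this pattern entirely: no single turn above is objectionable on its own, which is precisely the property multi-turn techniques exploit.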

The successful jailbreak highlights a critical security gap in the latest generation of AI models, demonstrating that, despite developer claims, they remain vulnerable to manipulation for malicious purposes. Tenable's findings, documented in this blog, join a growing chorus of reports from other researchers and users describing similar jailbreaks, hallucinations, and other quality issues with GPT-5 since its release.

“The ease with which we bypassed GPT-5’s new safety protocols proves that even the most advanced AI is not foolproof,” said Tomer Avni, VP, Product Management at Tenable. “This creates a significant danger for organisations where these tools are being rapidly adopted by employees, often without oversight. Without proper visibility and governance, businesses are unknowingly exposed to serious security, ethical, and compliance risks. This incident is a clear call for a dedicated AI exposure management strategy to secure every model in use.”

While OpenAI has stated it is implementing fixes, the immediate vulnerability of its flagship product shows that organisations cannot rely solely on AI models’ built-in safety features. It is further evidence that solutions like Tenable AI Exposure are important for gaining control over the AI platforms organisations use, consume, and build, and for ensuring that all AI use is responsible, secure, and compliant with global regulations.
