Researchers demonstrate how multi-turn “storytelling” attacks bypass prompt-level filters, exposing systemic weaknesses in GPT-5’s defenses.
The post “Red Teams Jailbreak GPT-5 With Ease, Warn It’s ‘Nearly Unusable’ for Enterprise” appeared first on SecurityWeek.