Researchers have uncovered a jailbreak method, dubbed the "Echo Chamber" technique, that bypasses GPT-5's restrictions by exploiting its storytelling capabilities to elicit prohibited responses. The finding has renewed concerns about the vulnerabilities of advanced AI models. Separately, zero-click prompt injection attacks have been identified that enable unauthorized access to sensitive data held in connected services such as Google Drive, Jira, and Microsoft Copilot Studio, as well as potential takeover of smart home systems. Full report: https://thehackernews.com/2025/08/researchers-uncover-gpt-5-jailbreak-and.html