The National Information Technology Development Agency (NITDA) has issued a warning about newly discovered vulnerabilities in ChatGPT, a popular AI-powered chatbot. These vulnerabilities could expose users to data-leakage attacks, according to the agency’s advisory. Researchers have identified seven vulnerabilities affecting GPT-4 and GPT-5 models, which can be exploited through indirect prompt injection. This allows attackers to manipulate ChatGPT by embedding hidden instructions in webpages, comments, or URLs, triggering unintended commands during regular browsing, summarization, or search actions.
The warning comes amid growing concerns about the interaction between AI-powered tools and unsafe web content, as well as the increasing reliance on ChatGPT for business, research, and public-sector tasks. NITDA notes that some flaws enable attackers to bypass safety controls by masking malicious content behind trusted domains, while others take advantage of markdown rendering bugs to pass hidden instructions undetected. In severe cases, attackers can poison ChatGPT’s memory, forcing the system to retain malicious instructions that influence future conversations.
The agency warns that these vulnerabilities could lead to a range of cybersecurity threats, including unauthorized actions carried out by the model, unintended exposure of user information, manipulated or misleading outputs, and long-term behavioral changes caused by memory poisoning. Users may unknowingly trigger these attacks without clicking or interacting with anything, especially when ChatGPT processes search results or webpages containing hidden malicious instructions.
While OpenAI has fixed parts of the issue, NITDA notes that large language models (LLMs) still struggle to reliably separate genuine user intent from malicious data. To stay safe, the agency advises Nigerians, businesses, and government institutions to adopt precautionary steps, such as limiting or disabling the browsing and summarization of untrusted websites within enterprise environments and enabling features like browsing or memory only when necessary. Regular updates to deployed GPT-4 and GPT-5 models are also recommended to ensure known vulnerabilities are patched.
The discovery of these vulnerabilities highlights the importance of cybersecurity measures in the development and use of AI-powered tools. As the use of ChatGPT and similar technologies continues to grow, it is essential for users to be aware of the potential risks and take steps to protect themselves. By taking a proactive approach to cybersecurity, individuals and organizations can minimize the risks associated with these vulnerabilities and ensure the safe and effective use of AI-powered tools.