Nigeria’s National Information Technology Development Agency (NITDA) has issued a warning to citizens regarding newly discovered vulnerabilities in OpenAI’s GPT-4.0 and GPT-5 series. The agency’s Director of Corporate Affairs and External Relations, Hadiza Umar, announced that seven critical weaknesses have been identified in the models, which could expose users to data leakage.
These vulnerabilities allow attackers to manipulate the system through indirect prompt injection by embedding hidden instructions in webpages, comments, or crafted URLs. This can cause ChatGPT to execute unintended commands during normal browsing, summarization, or search actions. Furthermore, some flaws enable attackers to bypass safety filters using trusted domains and exploit markdown rendering bugs to hide malicious content. This can even “poison” ChatGPT’s memory, allowing injected instructions to persist across future interactions.
Although OpenAI has addressed part of the issue, large language models still struggle to distinguish between genuine user intent and malicious embedded data. The technique involves embedding hidden instructions in webpages, online comments, or crafted URLs, which can mislead ChatGPT into executing unintended actions during routine browsing or search activities. The vulnerabilities pose substantial risks, including unauthorized actions, information leakage, manipulated outputs, and long-term behavioral influence due to memory poisoning.
To mitigate these risks, NITDA urges organizations to limit or disable the browsing and summarization of untrusted websites within enterprise environments. Additionally, the agency recommends only enabling ChatGPT capabilities like browsing or memory when operationally necessary. Regular updates and patches of the GPT-4.0 and GPT-5 models are also essential to address any known vulnerabilities.
The discovery of these vulnerabilities highlights the ongoing challenges in ensuring the security and reliability of artificial intelligence systems. As AI technology continues to evolve and become more integrated into daily life, it is crucial for developers, organizations, and users to prioritize cybersecurity and take proactive measures to protect against potential threats. By taking these precautions, individuals and organizations can minimize the risks associated with using AI models like ChatGPT and ensure a safer online experience.