Reports have emerged that X, a social media platform, may still be allowing the posting of AI-generated, sexualized images of real individuals, despite a recent announcement banning such content. According to an investigation by The Guardian, journalists were able to take images of fully clothed women and generate videos depicting those women undressing to bikinis. The videos were created using the standalone Grok app and then successfully posted to X without any apparent moderation.
The incident raises concerns about the platform's ability to regulate and remove explicit content. Earlier this week, X announced that it had implemented measures to prevent images of real people from being edited to depict them in revealing clothing, including bikinis. The company stated that this restriction applies to all users, including paid subscribers. X has faced criticism in recent weeks over the circulation of sexualized, AI-generated images on the platform, with governments in multiple countries investigating or moving to restrict Grok.
The issue has sparked widespread concern, particularly with regard to the creation of sexualized images of minors. X has emphasized its commitment to maintaining a “zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content.” The company’s safety update aims to reassure users that it is taking steps to address the issue.
However, The Guardian's findings suggest that more needs to be done to prevent the creation and dissemination of explicit content on the platform. That such content could be created and uploaded without moderation calls into question the effectiveness of X's safety measures. As the platform continues to face scrutiny over its handling of AI-generated content, it remains to be seen what further steps it will take.
Amid growing concern about the use of AI to create explicit content, the incident underscores the need for social media platforms to prioritize user safety and to develop effective measures for regulating and removing such material. With governments and regulatory bodies increasingly focused on the issue, X and other platforms will likely face ongoing scrutiny and pressure to ensure that their safety policies and practices are adequate.