AI Image Site GenNomis Exposed 47GB of Underage Deepfakes
A major data leak incident at GenNomis, a platform run by South Korean AI firm AI-NOMIS, has brought serious concerns about the risks of unmonitored AI-generated content to the forefront.
GenNomis is an AI-powered image-generation platform that allows users to create unrestricted images from text prompts, generate AI personas, and perform face-swapping. It offers over 45 artistic styles and a marketplace for buying and selling user-generated images.
What Data Was Exposed?
According to vpnMentor’s report, shared with Hackread.com, cybersecurity researcher Jeremiah Fowler discovered a publicly accessible and misconfigured database containing a whopping 47.8 gigabytes of data, including 93,485 images and JSON files.
This trove of information revealed a disturbing collection of explicit AI-generated material, face-swapped images, and depictions involving what appeared to be underage individuals. A limited examination of the exposed records showed a prevalence of x-rated content, including AI-generated imagery that raised red flags about the exploitation of minors.
The incident supports warnings from a UK-based internet watchdog, which reported that dark web pedophiles are using open-source AI tools to produce child sexual abuse material (CSAM).
Fowler reported seeing numerous images that appeared to depict minors in explicit situations, as well as celebrities portrayed as children, including figures like Ariana Grande and Michelle Obama. The database also contained JSON files that logged command prompts and links to generated images, offering a glimpse into the platform’s inner workings.
Aftermath and Dangers
Fowler discovered that the database lacked basic security measures such as password protection or encryption, but explicitly stated that he is not implying any wrongdoing by GenNomis or AI-NOMIS in connection with the incident.
He promptly sent a responsible disclosure notice to the company, and the database was taken down after the GenNomis and AI-NOMIS websites went offline. Notably, a folder in the database labelled “Face Swap” had already disappeared before he sent the disclosure notice.

This incident highlights the growing problem of “nudify” or deepfake pornography, where AI is used to create realistic explicit images without consent. Fowler noted that an estimated 96% of deepfakes online are pornographic, and that 99% of those involve women who did not consent.
The potential for misuse of the exposed data in extortion, reputation damage, and revenge scenarios is substantial. Moreover, this exposure contradicts the platform’s stated guidelines, which explicitly prohibit explicit content involving children.
Fowler described the data exposure as a “wake-up call” regarding the potential for abuse within the AI image generation industry, highlighting the need for greater developer responsibility. He advocates for detection systems that flag and block the creation of explicit deepfakes, particularly those involving minors, and stresses the importance of identity verification and watermarking technologies to prevent misuse and facilitate accountability.
At the time of writing, the GenNomis website was offline.