Opinion | Deepfake porn scandal in Hong Kong exposes need to update laws

The deepfake scandal at the University of Hong Kong – hundreds of non-consensual, sexually explicit composites reportedly found on a student’s laptop – feels like a sequel to a much older story from 2008, when intimate photographs copied from actor Edison Chen’s computer were leaked on the internet.

Then, the harm was privacy invasion: real images, created in private, thrust into public view. Now, the harm arrives through fabricated images, generated by artificial intelligence (AI) with enough likeness to stain reputations. The difference? Anyone today with a social media footprint can be targeted.

The Edison Chen episode marked a turning point in our internet culture and gender discussion. Police pursued distributors of the stolen photos under computer misuse laws. Reflecting gender double standards, the women celebrities who were victims bore the brunt of the fallout. Framed as a question of morality rather than theft, the women’s consent to having their photos taken in private was conflated with consent to their public display.

Deepfakes invert that logic. When synthetic images leak, the knee-jerk dismissal is that they are not real. Yet their mere existence is enough to tar reputations. All it takes is a single selfie scraped from social media, which can be repurposed offline.

Hong Kong’s legal architecture, including a privacy statute drafted in the 1990s, does not map perfectly onto this shift. Some argue that the Personal Data (Privacy) Ordinance, designed for conventional data-processing models, may not neatly address hyperrealistic fabrications produced with machine-learning techniques. Harvesting publicly available photos may not breach its collection rules, and where fabricated images are never distributed, a legal grey zone opens up.

Hong Kong does not recognise a general right to control one’s likeness, and the traditional legal tools against harassment and defamation were developed for conduct predating AI. Therefore, harm-inflicting actions could escape sanction until they spill into distribution, by which time the damage is often irreparable.

The city’s privacy watchdog has launched a criminal investigation into the deepfake case, but declined to comment further. Its ultimate stance will clarify whether creation or possession before distribution is covered by existing law, and what remedies are available.
