
Then, the harm was privacy invasion: real images, created in private, thrust into public view. Now, the harm arrives through fabricated images generated by artificial intelligence (AI), bearing enough likeness to real people to stain reputations. The difference? Today, anyone with a social media footprint can be targeted.
The Edison Chen episode marked a turning point in our internet culture and gender discourse. Police pursued distributors of the stolen photos under computer misuse laws. Reflecting gender double standards, the women celebrities among the victims bore the brunt of the fallout. Because the scandal was framed as a question of morality rather than theft, their consent to having the photos taken in private was conflated with consent to their public display.
Deepfakes invert that logic. When synthetic images leak, the knee-jerk dismissal is that they are not real. Yet their mere existence is enough to tar reputations. All it takes is a single selfie scraped from social media, which can be repurposed offline.
Hong Kong does not recognise a general right to control one’s likeness, and the traditional legal tools against harassment and defamation were developed for conduct predating AI. As a result, harmful acts can escape sanction until they spill into distribution, by which time the damage is often irreparable.