A complaint lodged with the Austrian data protection authority (DSB) by the privacy advocacy group Noyb alleged that ChatGPT’s generation of inaccurate information about individuals violates the European Union’s General Data Protection Regulation (GDPR).
The Vienna-based digital rights group, founded by the well-known privacy activist Max Schrems, said in its complaint that ChatGPT’s practice of guessing personal data rather than providing accurate information violates GDPR requirements.
Under GDPR, an individual’s personal details, including date of birth, are considered personal data and are subject to stringent handling requirements.
The complaint contends that ChatGPT breaches GDPR provisions on privacy, data accuracy, and the right to rectify inaccurate information. Noyb claimed that OpenAI, the company behind ChatGPT, refused to correct or delete erroneous responses and withheld information about its data processing, its data sources, and the recipients of that data.
Noyb’s data protection lawyer, Maartje de Graaf, said, “If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.”
Citing a report from The New York Times, which found that “chatbots invent information at least 3% of the time – and as high as 27%,” Noyb emphasized the prevalence of inaccurate responses generated by AI systems such as ChatGPT.
OpenAI’s ‘Privacy by Pressure’ Approach
Luiza Jarovsky, chief executive officer of Implement Privacy, has previously said that artificial intelligence-based large language models follow a “privacy by pressure” approach, “only acting when something goes wrong, when there is a public backlash, or when it is legally told to do so.”
She cited, as an example, an incident in which ChatGPT users’ chat histories were exposed to other users; shortly afterward, Jarovsky noticed a warning being displayed to everyone accessing ChatGPT.
At the beginning of 2023, Jarovsky prompted ChatGPT for information about herself, even sharing a link to her LinkedIn profile. The only correct information in the chatbot’s response was that she is Brazilian.
Although the fabricated bio seemed inoffensive, “showing wrong information about people can lead to various types of harm, including reputational harm,” Jarovsky said. “This is not acceptable,” she tweeted.
She argued that if ChatGPT produces “hallucinations,” prompts about individuals should come back empty, with no output containing personal data at all.
“This is especially important given that core data subjects’ rights established by the GDPR, such as the right of access (Article 15), right to rectification (Article 16), and right to erasure (Article 17), don’t seem feasible/applicable in the context of generative AI/LLMs, due to the way these systems are trained,” Jarovsky said.
A Call to Investigate ChatGPT’s Alleged GDPR Violations
The complaint urges the Austrian authority to investigate OpenAI’s handling of personal data to ensure compliance with GDPR. It also demands that OpenAI disclose individuals’ personal data upon request and seeks the imposition of an “effective, proportionate, dissuasive, administrative fine.”
The potential consequences of GDPR violations are significant: under Article 83(5), fines can reach €20 million or 4% of a company’s total worldwide annual turnover, whichever is higher.
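As a rough illustration of how that ceiling works, here is a minimal sketch in Python; the function name and the turnover figure are hypothetical, chosen only for the example, and are not drawn from the complaint or from OpenAI’s financials.

```python
def gdpr_max_fine(annual_turnover_eur: float) -> float:
    """Ceiling on fines under GDPR Article 83(5): the higher of
    EUR 20 million or 4% of total worldwide annual turnover."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# Hypothetical example: a company with EUR 2 billion in annual turnover
# would face a ceiling of 0.04 * 2e9 = EUR 80 million.
print(f"EUR {gdpr_max_fine(2_000_000_000):,.0f}")  # EUR 80,000,000
```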
OpenAI has yet to respond publicly to the allegations, and the company faces scrutiny from other European regulators as well. Last year, Italy’s data protection authority temporarily banned ChatGPT in the country over similar GDPR concerns, after which the European Data Protection Board established a task force to coordinate national privacy regulators’ efforts regarding ChatGPT.