OpenAI is facing another privacy complaint in Europe over its viral AI chatbot's tendency to hallucinate false information, and this one may prove difficult for regulators to ignore.
Privacy rights advocacy group Noyb is supporting an individual in Norway who was horrified to find ChatGPT returning made-up information claiming he had been convicted of murdering two of his children and attempting to kill a third.
Earlier privacy complaints about ChatGPT generating incorrect personal data have involved issues such as a wrong date of birth or inaccurate biographical details. One concern is that OpenAI does not offer a way for individuals to correct the false information the AI generates about them; typically, OpenAI has instead offered to block responses to such prompts. But under the European Union's General Data Protection Regulation (GDPR), Europeans have a suite of data access rights, including a right to rectification of personal data.
Another component of this data protection law requires data controllers to make sure the personal data they produce about individuals is accurate, and that is the concern Noyb is flagging with its latest ChatGPT complaint.
"The GDPR is clear. Personal data has to be accurate," Joakim Söderberg, a data protection lawyer at Noyb, said in a statement. "And if it's not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn't enough. You can't just spread false information and in the end add a small disclaimer saying that everything you said may just not be true."
A confirmed breach of the GDPR can carry penalties of up to 4% of global annual turnover.
Enforcement can also force changes to AI products. Notably, an early GDPR intervention by Italy's data protection watchdog, which temporarily blocked ChatGPT access in the country in spring 2023, led OpenAI to make changes to the information it discloses to users, for example. The watchdog subsequently went on to fine OpenAI 15 million euros for processing people's data without a proper legal basis.
Since then, however, it is fair to say that privacy watchdogs around Europe have adopted a more cautious approach to GenAI as they try to work out how best to apply the GDPR to these buzzy AI tools.
Two years ago, the Irish Data Protection Commission (DPC), which has a lead GDPR enforcement role on a previous Noyb ChatGPT complaint, urged against rushing to ban GenAI tools, for example, suggesting that regulators should instead take time to work out how the law applies.
And it is worth noting that a privacy complaint against ChatGPT that has been under investigation by Poland's data protection watchdog since September 2023 still has not produced a decision.
Noyb's new ChatGPT complaint looks intended to shake privacy regulators awake when it comes to the risks of hallucinating AIs.
The nonprofit shared a screenshot (below) with TechCrunch, showing an interaction with ChatGPT in which the AI responds to the question "Who is Hjalmar Holmen?" — the name of the individual bringing the complaint — by producing a tragic fiction claiming he was convicted of child murder and sentenced to 21 years in prison for killing two of his own sons.

While the defamatory claim that Hjalmar Holmen is a child murderer is entirely false, Noyb notes that ChatGPT's response does include some accurate details: the individual in question does have three children, the chatbot got the genders of his children right, and his hometown is correctly named. That only makes it stranger and more unsettling that the AI hallucinated such horrifying falsehoods on top.
A Noyb spokesperson said they were unable to determine why the chatbot produced such a specific yet false history for this individual. "We did research to make sure that this wasn't just a mix-up with another person," the spokesperson said, adding they had looked through newspaper archives but could not find an explanation for why the AI fabricated the child-murder story.
Since large language models, such as the one underlying ChatGPT, essentially perform next-word prediction at vast scale, one can speculate that the datasets used to train the tool contained many stories of filicide that influenced the word choices in response to a question about the named man.
Whatever the explanation, it is clear that such outputs are completely unacceptable.
Noyb's contention is that they are also unlawful under EU data protection rules. And while OpenAI displays a tiny disclaimer at the bottom of the screen saying "ChatGPT can make mistakes. Check important info," the group says this cannot, under the GDPR, absolve the AI developer of its duty not to produce egregious falsehoods about people in the first place.
OpenAI has been contacted for a response to the complaint.
While this GDPR complaint pertains to one named individual, Noyb points to other instances of ChatGPT fabricating legally compromising information, including an Australian mayor who said he was implicated in a bribery and corruption scandal and a German journalist who was falsely named as a child abuser, making clear this is not an isolated issue for the AI tool.
One important thing to note is that, following an update to the underlying AI model powering ChatGPT, Noyb says the chatbot has stopped generating the dangerous falsehoods about Hjalmar Holmen.
In our own test, ChatGPT responded with a slightly odd combination, displaying photos of different people apparently sourced from sites such as Instagram, SoundCloud, and Discogs. A second attempt returned a response identifying Hjalmar Holmen as a "Norwegian musician and songwriter" whose albums include "Honky Tonk Inferno."

While the dangerous falsehoods ChatGPT generated about Hjalmar Holmen appear to have stopped, both Noyb and Hjalmar Holmen remain concerned that incorrect and defamatory information about him could be retained within the AI model.
"Adding a disclaimer that you do not comply with the law does not make the law go away," said Kleanthi Sardeli, another data protection lawyer at Noyb, in a statement. "AI companies can also not just 'hide' false information from users while they internally still process false information."
"AI companies should stop acting as if the GDPR does not apply to them, when it clearly does," she added. "If hallucinations are not stopped, people can easily suffer reputational damage."
Noyb has filed the complaint against OpenAI with the Norwegian data protection authority, targeting it at OpenAI's U.S. entity, as Noyb argues the company's Irish office is not solely responsible for product decisions affecting Europeans.
However, an earlier Noyb-backed GDPR complaint against OpenAI, filed in Austria in April 2024, was referred by the regulator to Ireland's DPC on account of a change OpenAI had made earlier that year to designate its Irish division as the provider of the ChatGPT service to regional users.
Where is that complaint now? Still sitting on an Irish desk.
"Having been sent from the Austrian supervisory authority in September 2024, the DPC commenced the formal handling of the complaint, and it is still ongoing," a communications officer for the DPC told TechCrunch when asked for an update.
He did not offer any indication of when the DPC's investigation into ChatGPT's hallucinations is expected to conclude.