Italy’s data protection watchdog fined OpenAI €15 million for ChatGPT’s improper collection of personal data.
Italy’s privacy watchdog, the Garante Privacy, fined OpenAI €15 million after investigating ChatGPT’s personal data collection practices. The Garante also ordered OpenAI to run a six-month informational campaign addressing ChatGPT’s data management violations. The decision stems from an investigation opened in March 2023 and aligns with the EDPB’s guidance on AI-driven services and personal data processing.
“According to the Italian Data Protection Authority, the US company, which created and manages the generative artificial intelligence chatbot, did not notify the Authority of the data breach it underwent in March 2023, it has processed users’ personal data to train ChatGPT without first identifying an appropriate legal basis and has violated the principle of transparency and the related information obligations toward users.” reads the press release published by Italy’s Garante. “Furthermore, OpenAI has not provided for mechanisms for age verification, which could lead to the risk of exposing children under 13 to inappropriate responses with respect to their degree of development and self-awareness.”
The content of the informational campaign, which must be approved by the Authority, will raise public awareness of how ChatGPT collects personal data for AI training and of users’ rights to object, rectify, and delete their data. The campaign will inform both users and non-users of how to oppose the use of their personal data for AI training, ensuring they can exercise their GDPR rights.
OpenAI claims the fine is “disproportionate” and announced it will appeal.
“They’ve since recognized our industry-leading approach to protecting privacy in AI, yet this fine is nearly 20 times the revenue we made in Italy during the relevant period,” an OpenAI spokesperson told Reuters.