ChatGPT faces new legal challenges in Europe
OpenAI has been accused of breaching data protection rules under the GDPR, including obligations on transparency, fairness, data access rights, and privacy by design.
A security and privacy researcher has accused OpenAI, the maker of ChatGPT, of a series of data protection breaches under the EU General Data Protection Regulation (GDPR). Lukasz Olejnik filed a complaint with the Polish Data Protection Authority, arguing that OpenAI processed his data ‘unlawfully, unfairly, and in a non-transparent manner.’
The complaint alleges that OpenAI violated the GDPR on multiple counts, including lawful basis, transparency, fairness, data access rights, and privacy by design. It also argues that OpenAI failed to meet the prior-consultation requirement of Article 36: if a risk assessment identifies high risks to individuals’ rights that cannot be mitigated, the company must consult the supervisory authority before processing begins. According to the complaint, OpenAI launched ChatGPT in Europe without any such engagement with local regulators.
Why does it matter?
These new allegations could have serious consequences for OpenAI, including hefty fines and reputational damage. They also call into question the company’s ability to comply with current and upcoming European rules.
OpenAI has already faced legal action over data privacy in both the US and the EU. The company is the subject of a US federal class-action lawsuit for allegedly scraping personal information without consent or compensation, and it is under investigation by the US Federal Trade Commission (FTC). Previously, Japan’s privacy watchdog issued a warning over data privacy, and the Italian Data Protection Authority accused OpenAI of violating the GDPR, briefly banning ChatGPT in Italy.