ChatGPT faces scrutiny from EU privacy watchdog over data accuracy
National privacy watchdogs have raised concerns about the widely used AI service, leading to ongoing investigations.
The EU’s privacy watchdog task force has raised concerns over OpenAI’s ChatGPT chatbot, stating that the measures taken to ensure transparency are insufficient to comply with the data accuracy principle. In a report released on Friday, the task force emphasised that while efforts to prevent misinterpretation of ChatGPT’s output are beneficial, they do not fully address concerns about data accuracy.
The task force was established by Europe’s national privacy watchdogs following concerns raised by authorities in Italy about ChatGPT’s use of personal data. Although national regulators are still conducting their own investigations, a comprehensive overview of their results has yet to be published. The findings presented in the report reflect a common understanding among the national authorities.
Data accuracy is a fundamental principle of the EU’s data protection rules. The report highlights the probabilistic nature of ChatGPT’s system, which can lead to biased or false outputs. Furthermore, the report warns that users may perceive ChatGPT’s outputs as factually accurate regardless of their actual accuracy, posing potential risks, especially where the information concerns individuals.