Privacy and data protection

AI, privacy, and data protection

Though AI can be used to improve processes and policies that safeguard privacy, it can also be misused to breach privacy rights and undermine data protection.

How can AI tools and techniques infringe on people’s privacy?

When personal data is analysed with the help of AI, the entities holding that data can learn far more about individuals than ever before. AI also makes it possible to gather data from a larger yet more precisely defined population. Existing ways in which personal data can be misused – such as discriminatory practices – risk being amplified, whether intentionally or inadvertently. Algorithms can also be used to profile and target certain groups of people (including children) with customised ads and messages.

In addition, one of the biggest concerns is the use of facial recognition technology (FRT) by governments, the private sector, or malicious actors to track people’s locations or activities without their knowledge or consent, raising serious legal and ethical questions. This information can also be used for profiling, advertising, and even discriminating against users. FRT can be used to create databases of biometric data that can then be sold or shared with third parties, raising further privacy concerns. And as more data is collected, the risk of it being stolen or misused also increases.

Can AI safeguard privacy? 

But it is not all doom and gloom. AI can also help strengthen anonymisation techniques that protect individuals’ privacy by removing personally identifiable information from datasets, thus minimising the risk of re-identification. AI can be used to develop and improve privacy-enhancing technologies such as encryption, secure protocols, and privacy-preserving algorithms. These technologies help protect sensitive information and prevent unauthorised access or breaches, thereby safeguarding privacy. 
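To make the idea of a privacy-preserving algorithm more concrete, the sketch below implements the Laplace mechanism of differential privacy, a widely used technique for releasing aggregate statistics while masking any single individual’s contribution. The dataset, the query, and the epsilon value are illustrative assumptions, not drawn from the text.

```python
import math
import random

def laplace_noise(scale):
    # Sample from a Laplace(0, scale) distribution via inverse-CDF sampling.
    u = random.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon=0.5):
    """Differentially private count query.

    The sensitivity of a count is 1 (adding or removing one person changes
    the result by at most 1), so Laplace noise with scale 1/epsilon is
    enough to mask any single individual's presence in the dataset.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical dataset: ages of individuals in some survey.
ages = [34, 29, 41, 52, 38, 27, 45]

# Release a noisy answer to "how many respondents are over 30?"
noisy = private_count(ages, lambda a: a > 30)
print(round(noisy))  # a noisy count; the exact value varies run to run
```

A smaller epsilon gives stronger privacy but noisier answers; choosing it is a policy decision as much as a technical one, which is precisely where governance frameworks come in.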

AI algorithms can also analyse and summarise privacy policies to help individuals understand how their data will be used, shared, and protected. This empowers individuals to make informed decisions about sharing their data and enhances transparency in data handling practices.

Learn more on AI Governance.

Privacy and data protection are two interrelated internet governance issues. Data protection is a legal mechanism that ensures privacy. Privacy is usually defined as the right of any citizen to control their own personal information and to decide whether or not to disclose it. Privacy is a fundamental human right, recognised in the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights (ICCPR), and many other international and regional human rights conventions. The July 2015 appointment of the first UN Special Rapporteur on the Right to Privacy in the Digital Age reflects the rising importance of privacy in global digital policy, and the recognition of the need to address privacy rights at the global as well as national levels.

Frameworks for safeguarding the right to privacy and data protection

The ICCPR is the main global legal instrument for the protection of privacy. At the regional level, the main instrument on privacy and data protection in Europe is the Council of Europe (CoE) Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data of 1981. Although it was adopted by a regional organisation (the CoE), it is open for accession by non-European states. Since the convention is technology neutral, it has withstood the test of time. The EU Data Protection Directive (Directive 95/46/EC) has also formed an important legislative framework for the processing of personal data in the EU, and has had a vast impact on the development of national legislation, not only in Europe but also globally. The directive has also entered a reform process, in order to cope with new developments and ensure effective privacy protection in the current technological environment.

Another key international – non-binding – document on privacy and data protection is the OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data from 1980. These guidelines and the OECD’s subsequent work have inspired many international, regional, and national regulations on privacy and data protection. Today, virtually all OECD countries have enacted privacy laws and empowered authorities to enforce those laws.

While the principles of the OECD guidelines have been widely accepted, the main difference lies in the way they are implemented, notably between the European and US approaches. Europe has comprehensive data protection legislation, while in the USA privacy regulation is developed sector by sector, covering, for example, financial privacy (the Gramm-Leach-Bliley Act), children’s privacy (the Children’s Online Privacy Protection Act), and medical privacy (under the Health Insurance Portability and Accountability Act).

Another major difference is that, in Europe, privacy legislation is enforced by public authorities, while in the USA enforcement rests principally on the private sector and self-regulation: businesses set their own privacy policies, and it is up to companies and individuals to decide on them. The main criticism of the US approach is that individuals are placed in a comparatively weak position, as they are seldom aware of the significance of the options offered by privacy policies and commonly agree to them without informing themselves.

These two approaches – the US and the EU – to privacy protection have generated conflict. The main problem stems from the use of personal data by businesses. How can the EU ensure that data about its citizens is protected according to the rules specified in its Data Protection Directive? And according to whose rules – the EU’s or the USA’s – is data transferred through a company’s network from the EU to the USA handled?

A working solution was found in 2000 when the European Commission decided that EU regulations could be applied to US companies inside a legal ‘safe harbour’. US companies handling EU citizens’ data could voluntarily sign up to observe the EU’s privacy protection requirements. Having signed, companies were required to observe the formal enforcement mechanisms agreed upon between the EU and the USA.

The so-called Safe Harbour Agreement was received with great hope, as a legal tool that could solve similar problems with other countries. However, it was criticised by the European Parliament for not sufficiently protecting the privacy of EU citizens.

In a turning point for data transfers between the EU and the USA, in October 2015 the Court of Justice of the European Union (CJEU) struck down this long-standing agreement and declared the Safe Harbour Agreement invalid. The Court found that the European Commission had failed to examine whether the USA afforded a level of protection equivalent to that guaranteed in the EU, and had instead merely examined the Safe Harbour scheme. In the USA, the scheme applies only to the undertakings that adhere to it; public authorities are not subject to it, and national security, public interest, and law enforcement requirements prevail over the scheme. The scheme therefore enables interference by public authorities in a manner that would not be permitted under EU law. The Court also found that the powers of national supervisory authorities could not be diminished other than by the Court itself.

Given the high importance of privacy and data protection in EU–US relations after the Snowden revelations, increased pressure to find a post-Safe Harbour solution is to be expected.

See also: