The FTC warns of AI-enabled misuse of biometric data
The US Federal Trade Commission (FTC) has issued a policy statement highlighting the risks of biometric data misuse through AI, emphasising potential threats to privacy and civil rights and warning against the unlawful handling of such data.
The US Federal Trade Commission (FTC) has issued a policy statement to alert consumers and businesses to the potential risks of biometric data being misused with the help of emerging technologies such as generative AI and machine learning. Biometric data describe or depict a person’s physical, biological, or behavioural characteristics, such as body measurements or a voiceprint. Biometric surveillance technologies are becoming more advanced and widespread, posing new threats to consumers’ privacy and civil rights.
The FTC listed several examples of how consumers’ biometric data could be misused to violate their privacy and civil rights, such as revealing sensitive personal information about their health, religion, or political affiliation based on their location. Large databases of biometric data could also be attractive targets for hackers. The FTC’s policy statement further provided examples of unlawful handling of biometric information by businesses, including the failure to assess foreseeable harms, monitor the technologies used, train staff, reduce risks, or evaluate third-party practices. The FTC also warned that deepfakes could ‘allow bad actors to convincingly impersonate individuals in order to commit fraud or to defame or harass the individuals depicted.’