Meta introduces new privacy controls for generative AI to safeguard user data
Meta’s Privacy Center now offers users the option to opt out of having their personal information used for AI model training. The request can be submitted through a simple feedback form.
Meta has added controls in its Privacy Center that let users choose not to have their personal information used for AI training. Through a simple feedback form, users can request the removal of personal data obtained from third parties and used for generative AI. Meta has also added a generative AI overview to its Privacy Center, giving users a clearer understanding of how AI models are trained and what role their data plays in the process. The update keeps pace with the EU Digital Services Act (DSA), regulations that will soon take effect and give people more control over their personal data and how online platforms use it.
To train its models, Meta combines information that is freely accessible online with licensed data and data from its own products and services. Some of this publicly available information may include personal data, but it is not linked to any Meta account. Meta says that, through its Privacy Review process, it aims to give people awareness of and control over how their data is used while ensuring privacy protection, identifying potential privacy risks and developing measures to mitigate them.
The use of publicly available content in derivative creations remains a challenge under copyright law. Meta says it will prioritize letting individuals remove their information and work. The company is also investing heavily in generative AI, citing the need for data from a wide range of sources. Further regulation is expected, given the involvement of the litigious record publishing industry.
Why does it matter?
By introducing new privacy controls and emphasizing transparency in generative AI training, Meta is taking steps toward protecting user privacy and complying with evolving regulations. What remains to be seen is whether these measures can maintain the balance between innovation and data protection.