US White House Science Adviser calls for stronger AI safeguards

The White House science adviser expresses concerns about a range of AI applications, citing issues such as biased facial recognition and privacy risks, while highlighting the need for collaboration with major tech companies to establish AI safety standards.


Arati Prabhakar, the White House science adviser and head of the Office of Science and Technology Policy, expressed worries about a range of AI applications, including biased facial recognition systems and privacy issues stemming from the aggregation of personal data. President Joe Biden frequently consults Prabhakar on the implications of AI and stresses the importance of practical solutions.

When addressing AI-related concerns, Prabhakar highlights the technical challenge posed by the inherent opacity of AI models, a consequence of their deep learning architecture. She draws a comparison to pharmaceuticals, which achieved safety and effectiveness through clinical trials. Accordingly, Prabhakar underscores the importance of collaboration with major tech companies such as Amazon, Google, Microsoft, and Meta, which have voluntarily committed to upholding AI safety standards.

Prabhakar also pointed to imminent government actions aimed at ensuring accountability in AI, driven by the president's sense of urgency.

Why does this matter?

This highlights the efforts underway at the highest levels of the US government to address the risks and challenges associated with AI. Arati Prabhakar, the White House science adviser, is playing a pivotal role in shaping the nation's strategy for managing AI-related concerns. As AI technology becomes increasingly integrated into society, ensuring its responsible and safe deployment is paramount. The collaboration between Prabhakar and major tech companies underscores the importance of industry-government cooperation in establishing AI safety standards.