Bipartisan effort in US House of Representatives to forge AI regulatory path
Despite multiple high-level meetings, hearings, and legislative initiatives over the last year, efforts to adopt AI legislation in Congress have not been successful so far.
The leaders of the United States House of Representatives have announced the creation of a bipartisan task force to investigate potential legislation addressing concerns about AI. The move is part of broader steps the House is taking to engage with the fast-evolving field of artificial intelligence (AI), alongside new legislation aimed at regulating the use of AI by federal agencies and protecting individuals’ rights.
Bipartisan AI Task Force
As efforts to pass laws governing AI technology have stalled, House leaders, including Speaker Mike Johnson and Minority Leader Hakeem Jeffries, have launched a bipartisan AI task force. The initiative seeks to explore how the US can lead in AI innovation and development while addressing potential risks such as fake content, misinformation, and job losses. The task force is co-chaired by Reps. Jay Obernolte and Ted Lieu, both of whom have backgrounds in computer science. It will produce a comprehensive report with guiding principles, recommendations, and policy proposals developed within House committees of jurisdiction. Obernolte, the Republican co-chair of the 24-member task force, said that the report will include “the regulatory standards and Congressional actions needed to both protect consumers and foster continued investment and innovation in AI.”
Legislative Efforts
In addition to the new task force, recent legislative efforts include the introduction of the Federal Artificial Intelligence Risk Management Act by a bipartisan group of representatives. The act would require federal agencies to incorporate the National Institute of Standards and Technology’s (NIST) ‘AI Risk Management Framework’ into their processes. It would also direct the Office of Management and Budget (OMB) to publish guidance for agencies and contractors on adopting elements of the framework, with the aim of mitigating AI risks and ensuring safety.
Another legislative proposal is the ‘No Artificial Intelligence Fake Replicas And Unauthorized Duplications Act’ (“No AI FRAUD” Act), which seeks to regulate the use of AI to clone individuals’ voices and likenesses. The bill would establish federal guardrails protecting individuals’ voice and likeness rights, addressing concerns over AI’s potential to create “fakes and forgeries.”
Why does it matter?
Coming after the White House unveiled President Biden’s landmark AI executive order last October, these initiatives reflect Congress’s increasing recognition of the importance of regulating AI technology to safeguard against its potential threats while harnessing its benefits. This month, Commerce Secretary Gina Raimondo announced that major AI companies, such as OpenAI, Microsoft, Nvidia, Google, Apple, Anthropic, Amazon, and Meta, were among the 200 organisations joining a new consortium endorsing safe AI deployment.
The formation of the AI task force and the introduction of new bills show bipartisan attempts to build a comprehensive approach to AI governance. This includes ensuring responsible and ethical use of AI by federal agencies, protecting individuals’ rights, and upholding the US’s leadership in AI innovation.
The task force and legislative proposals come amid wider debates on AI risks and a global race to regulate the new technology. With AI applications progressing rapidly, these efforts underscore the urgency of establishing clear guidelines and frameworks to address the ethical, legal, and societal implications of AI.