US Congress members introduce AI risk mitigation measures for federal agencies
A bipartisan group of US lawmakers has introduced legislation to regulate the risks associated with AI. If enacted, the bill would require federal agencies to adopt AI guidelines developed by the Commerce Department and direct the department to set specific standards for AI suppliers.
A bipartisan group of US Congress members has introduced legislation addressing the risks associated with AI within federal agencies and their vendors. Sponsored by Democrats Ted Lieu and Don Beyer alongside Republicans Zach Nunn and Marcus Molinaro, the bill has a plausible path to becoming law, as a companion Senate version was introduced by Republican Jerry Moran and Democrat Mark Warner in November.
If approved, the legislation would require federal agencies to adopt AI guidelines the Commerce Department established the previous year. It would also direct the Commerce Department to develop specific standards for AI suppliers to the US government, emphasising that suppliers must provide adequate access to data, models, and parameters to enable thorough testing and evaluation.
The focus on AI risk mitigation reflects broader concerns surrounding generative AI, which can create content such as text, photos, and videos from open-ended prompts. While generative AI holds promise, there are fears that it could render certain jobs obsolete and make it harder to distinguish factual information from misinformation, especially during elections. At the extreme, there are concerns that bad actors could exploit AI to gain access to critical infrastructure.
Why does it matter?
The proposed legislation marks a modest yet tangible step in the US government's efforts to regulate AI. While Europe has made more significant strides, the United States has proceeded incrementally. In October, President Joe Biden signed an executive order to enhance AI safety, requiring developers of AI systems that pose potential risks to national security, the economy, or public health to share safety test results with the government before public release. This legislative move underscores the ongoing effort to balance harnessing the potential benefits of AI with mitigating its risks.