AI researchers call for access to generative AI systems to enable safety testing
The letter calls for a legal and technical safe harbor, plus direct channels for researchers to report problems, arguing that independent investigation is crucial for uncovering vulnerabilities in AI systems.
More than 100 prominent AI researchers have signed an open letter urging generative AI companies to grant investigators access to their systems, arguing that strict company policies are impeding safety testing of their tools. The letter contends that while these policies are intended to prevent misuse of AI systems, they have a chilling effect on independent research: researchers fear having their accounts banned, or being sued, for testing AI models without a company’s permission.
The signatories include experts in AI research, policy, and law, such as Stanford University’s Percy Liang, Pulitzer Prize-winning journalist Julia Angwin, and former member of the European Parliament Marietje Schaake. The letter was sent to companies including OpenAI, Meta, Anthropic, Google, and Midjourney, calling for a legal and technical safe harbor that would let researchers examine their products. The researchers argue that generative AI companies should avoid repeating the mistakes of social media platforms, many of which have banned research intended to hold them accountable. AI firms have recently become more aggressive in shutting external auditors out of their systems.
OpenAI, for example, claimed that The New York Times’ attempt to identify copyright violations constituted ‘hacking’ its ChatGPT chatbot. Meta’s updated terms threaten to revoke the license to its latest large language model, LLaMA 2, if a user alleges intellectual property infringement.
The open letter is part of a wider effort to increase transparency and collaboration between AI companies and independent researchers, as well as to protect academic safety research.
Other signatories include Renée DiResta of the Stanford Internet Observatory; Mozilla fellow Deb Raji, who has pioneered research into auditing AI models; and Brown University professor Suresh Venkatasubramanian, a former adviser to the White House Office of Science and Technology Policy.