Japan to Establish AI Safety Institute in January 2024
Japanese Prime Minister Fumio Kishida announced plans to establish an AI safety institute in January 2024, speaking at an expert panel meeting of the AI Strategy Council, chaired by Professor Yutaka Matsuo of the University of Tokyo.
‘International awareness of AI safety has been growing,’ Kishida said. ‘We need an organization to conduct research on safety evaluation methods, create standards, and carry out other matters.’
The new institute’s envisioned functions include assisting in the development of software that assesses AI risks, serving as an accreditation body for AI companies, and coordinating internationally with the United States and the United Kingdom, both of which plan to set up similar institutes.
During the meeting, experts drafted a list of 10 principles as guidelines for AI companies, highlighting ‘human-centredness,’ ‘safety,’ ‘transparency,’ and ‘human controllability.’ Drawing on the Group of Seven’s (G7) comprehensive guidelines for generative AI, the draft also calls for generative AI developers to introduce technology that helps identify AI-generated content and trace the provenance of information.
Why does it matter?
The G7 has been taking concrete actions to lead the discussion on AI regulation, from the Hiroshima AI Process in May and the 11-point voluntary Code of Conduct for companies developing AI systems in October to the comprehensive guidelines for generative AI in December. Japan’s next step of launching an AI Safety Institute will complement this trend and lay down the necessary infrastructure, such as third-party certification systems, paving the way for meaningful adoption of voluntary guidelines and effective regulation.