South Korea to boost trust in AI with watermarking initiative
The Korean strategy is part of a wider global AI regulation trend amid a flurry of recent policy developments.
The South Korean government has announced plans to enhance the trustworthiness of AI-generated content through watermarking. It wants to set up a framework that encourages voluntary AI reliability verification and certification by private organizations. Pilot projects, due to launch by December, will first involve companies developing high-risk AI applications.
The Korean Ministry of Science and ICT made the announcement at the LG Science Park in Seoul during the ‘4th AI High-Level Strategic Dialogue.’ The initiative is part of the government’s broader ‘AI Ethics and Trustworthiness Assurance Plan,’ which aims to strengthen the country’s competitive position in AI technology. The plan rests on several pillars: technological and institutional foundations, private sector self-regulation, and raising AI awareness across society.
Why does it matter?
Tech companies continue to explore watermarking as a viable solution to fight misinformation, AI-generated deepfakes, and copyright infringement. Watermarking is a technique in which a signal is hidden inside a piece of content to identify it as AI-generated. For instance, Google DeepMind has launched a watermarking tool that labels AI-generated images. South Korea’s move to enhance AI safety and trustworthiness with watermarking aligns with its National Strategy for AI. Announced in 2019, the strategy aims to promote ‘trustworthy AI’ so as to increase the benefits of responsible use of the technology while addressing its risks.
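For illustration only, the minimal Python sketch below shows the basic idea of hiding an identifying signal in content: it writes a short text tag into the least significant bits of an image’s pixel values and reads it back out. This toy scheme is an assumption for demonstration purposes, not how production tools such as Google DeepMind’s work; real watermarks are engineered to survive compression, cropping, and other edits, which this one would not.

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, tag: str) -> np.ndarray:
    """Hide `tag` in the least significant bit of each pixel value.

    A toy least-significant-bit scheme, purely illustrative: any
    re-encoding of the image would destroy the hidden signal.
    """
    bits = np.unpackbits(np.frombuffer(tag.encode("utf-8"), dtype=np.uint8))
    flat = pixels.flatten()
    if bits.size > flat.size:
        raise ValueError("image too small to hold the tag")
    # Clear each target pixel's lowest bit, then write one tag bit into it.
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, tag_length: int) -> str:
    """Read `tag_length` bytes back out of the lowest bits."""
    bits = pixels.flatten()[: tag_length * 8] & 1
    return np.packbits(bits).tobytes().decode("utf-8")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in for an AI-generated grayscale image.
    image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    marked = embed_watermark(image, "AI-generated")
    print(extract_watermark(marked, len("AI-generated")))  # -> AI-generated
```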
The Korean strategy is part of a wider global AI regulation trend. Among a flurry of recent policy developments, the UN has established a high-level body on AI, the EU is close to finalizing its AI Act, the White House has unveiled a landmark AI executive order, and the UK Prime Minister has hosted an international AI safety summit.