South Korea ramps up AI chip investment

South Korean President Yoon Suk Yeol has unveiled plans to invest $6.94 billion in AI by 2027, aiming to solidify the nation’s position as a leader in cutting-edge semiconductor chips. The commitment also includes a separate 1.4 trillion won (roughly $1 billion) fund for nurturing AI semiconductor firms.

The investment comes as South Korea faces pressure to keep pace with other major players such as the US, China, and Japan, which are also bolstering their semiconductor supply chains through significant policy support.

Semiconductors are a crucial pillar of South Korea’s export-oriented economy. In March, chip exports reached a 21-month high of $11.7 billion, comprising nearly one-fifth of the country’s total exports. President Yoon emphasised the intense global competition in the semiconductor industry, characterising it as an industrial and national battleground during a meeting with policymakers and chip industry executives.

South Korea plans to channel investment towards expanding research and development in AI chips, including neural processing units (NPUs) and next-generation high-bandwidth memory (HBM) chips, to advance its semiconductor capabilities. Additionally, the government aims to promote the development of next-generation artificial general intelligence (AGI) and safety technologies that go beyond current models.

Why does it matter?

President Yoon has set ambitious goals for South Korea to rank among the top three countries in AI technology, including chips, and secure a 10% or higher share of the global system semiconductor market by 2030. He envisions South Korea writing a new semiconductor narrative with AI chips, akin to its dominant position in memory chips over the past three decades. Despite potential disruptions like the recent earthquake in Taiwan, a key semiconductor hub, Yoon emphasised the importance of thorough preparation to mitigate uncertainties for South Korean companies.

Microsoft reveals Chinese groups use AI content to undermine US elections

Microsoft Corp. has identified Chinese groups using social media and AI-generated images to incite controversy and gain insights into American perspectives on divisive issues during the election year. According to a report by Microsoft, these groups have spread conspiratorial content, such as blaming the US government for the 2023 wildfires on the Hawaiian island of Maui. The disinformation campaign involved posts in 31 languages, alleging that the US government intentionally caused the blaze, accompanied by AI-generated images of burning coastal roads.

The investigation into the Maui wildfires is ongoing, with a focus on whether power lines owned by Hawaiian Electric Industries Inc. may have sparked the flames. Microsoft noted that these fabricated images demonstrate how Chinese government-affiliated groups are adopting new tactics to advance geopolitical priorities through disinformation and cyberattacks. However, it remains to be seen whether AI has significantly amplified the effectiveness of these efforts.

Microsoft’s report suggests that the accounts responsible for spreading this disinformation are likely operated by the Chinese government or entities aligned with state interests. The Chinese Embassy did not respond to requests for comment; Beijing has consistently denied involvement in such activities. Researchers have noted the use of AI to create convincing images and manipulated videos, although Microsoft’s assessment suggests that the impact of such content in swaying audiences remains limited.

Why does it matter?

Since last fall, Microsoft has observed a gradual increase in social media accounts linked to China disseminating inflammatory narratives. These influence campaigns have targeted Taiwan’s election and exacerbated rifts in the Asia-Pacific region. On Taiwan’s election day, a Chinese-associated propaganda group reportedly used an AI-generated audio recording to imply an endorsement from Terry Gou, founder of Foxconn Technology Group and a former presidential candidate, for another candidate.

Microsoft’s efforts coincide with US government warnings about Chinese hacking groups targeting critical infrastructure, including communications and transportation systems. Microsoft has also been subject to criticism in a recent US government report regarding its response to suspected Chinese cyberespionage campaigns.

Musicians demand protection against AI

A coalition of over 200 high-profile musicians, which includes icons like Billie Eilish, Nicki Minaj, and Stevie Wonder, has come together to address the growing concerns surrounding the use of artificial intelligence (AI) in the music industry. Spearheaded by the Artist Rights Alliance, this diverse group represents various musical genres and eras, highlighting the widespread impact of AI on the creative landscape. Their collective effort underscores a shared commitment to protecting artists’ rights and preserving the integrity of musical expression in the face of evolving technological advancements.

The coalition issued an open letter through the Artist Rights Alliance, focused primarily on advocating for safeguards against the predatory use of AI to mimic human artists’ voices, likenesses, and sound. In the letter, the signatories emphasise the importance of preventing the unauthorised replication of artists’ work through AI tools and call on technology companies to pledge not to develop AI that could undermine or replace human creativity, highlighting the need to maintain the authenticity of artistic expression and safeguard artists’ intellectual property rights.

Why does it matter?

This initiative reflects a broader industry-wide pushback against the unethical use of generative AI, seen in recent legislative actions such as Tennessee’s ‘ELVIS Act’, aimed at protecting musicians’ vocal identities. The coalition’s advocacy has sparked discussion about AI’s ethical implications and prompted collaborative efforts to integrate the technology responsibly, in line with artistic integrity and creators’ rights.

US and UK AI Safety Institutes partner for advanced model testing

The US and UK have announced a partnership on the science of AI safety, with a particular focus on developing tests for the most advanced AI models.

US Commerce Secretary Gina Raimondo and British Technology Secretary Michelle Donelan signed a memorandum of understanding in Washington to collaborate on advanced AI model testing after agreements during the AI Safety Summit at Bletchley Park last November. The joint program will involve the UK’s and US’s AI Safety Institutes working together on research, safety evaluations, and guidance for AI safety.

Why does it matter?

The partnership aims to accelerate the work of both institutes across the full spectrum of AI risks, from national security concerns to broader societal issues. The UK and US plan to conduct at least one joint testing exercise on a publicly accessible model and are considering staff exchanges between the institutes. The two partners are among several countries that have created public AI safety institutions.

In October, British Prime Minister Rishi Sunak announced that the UK’s AI Safety Institute would investigate and test new AI models. The US announced in November that it was establishing its own institute to assess threats from frontier AI models, and in February, Secretary Raimondo launched the AI Safety Institute Consortium (AISIC), a partnership with 200 firms and organisations. The US-UK partnership is intended to strengthen the special relationship between the two countries and contribute to the global effort to ensure the safe development of AI.

Adobe report reveals growing adoption of Generative AI among Americans

The most recent Adobe study documents growing acceptance of generative AI (Gen AI) among Americans, with more than half of individuals already engaging with it and anticipating increased use across various facets of their lives. The research, based on a survey of 3,000 US consumers, indicates that Gen AI is used predominantly for personal purposes, although its application in professional and educational settings is on the rise.

Consumers are employing Gen AI for diverse tasks such as research, generating content, designing visuals, and even coding. A majority view Gen AI as a tool capable of augmenting creativity and productivity, with many foreseeing it streamlining their lives and fostering greater innovation in the future. However, there remains a prevalent belief that, despite its potency, Gen AI will never outmatch human creativity.

Looking forward, consumers express enthusiasm for Gen AI’s potential to facilitate skill acquisition, simplify shopping experiences, enhance customer support, and enable content creation on social media platforms. Additionally, a significant portion of consumers anticipates brands integrating Gen AI into their customer interactions, particularly through avenues like chatbots and tailored experiences. Despite current usage, there’s room for improvement in how brands leverage Gen AI to enhance customer experiences, as consumers desire swifter, more personalised, and more imaginative interactions.

Why does it matter?

The growing acceptance of Gen AI reflects broader trends in technology adoption and societal attitudes towards AI. For brands, educators, and employers, the survey offers an early signal of where consumers expect the technology to appear next, which can inform product and engagement strategies across industries.

Canada tightens foreign investment rules for AI, Space Tech

Canada has tightened regulations on foreign investment, particularly in critical sectors such as AI, quantum computing, space technology, and critical minerals. Non-Canadian entities are now required to notify the government before investing in these areas, a measure intended to safeguard national interests.

The government has also enhanced its review process to assess potential risks, thereby protecting against threats to national security and economic sovereignty. While emphasising national security, Canada recognises the importance of foreign investment for economic growth and innovation. Consequently, efforts to strengthen partnerships with international allies, particularly in minerals production and technology innovation, aim to bolster economic resilience.

Canada is actively monitoring and regulating Chinese investment activities, implementing specific measures to prevent undue influence or control. This reflects broader concerns among Western nations regarding national security and economic sovereignty.

How will AI transform the UK’s job landscape?

The Institute for Public Policy Research (IPPR) has released its report, ‘Transformed by AI’, signalling a potential structural change in employment due to advances in artificial intelligence (AI). The analysis argues that the rapid progression of generative AI technologies has brought the labour market to a critical juncture.

The report enters the ongoing debate about the effects of automation on employment, suggesting that the swift uptake of generative AI might pave the way for unprecedented changes.

The IPPR’s findings suggest that up to 8 million positions in the UK could be endangered by automation, with women, younger employees, and lower-wage earners as the most susceptible groups.

The first wave of AI adoption 

The emergence of generative AI, characterised by its ability to generate text and code and to demonstrate high-level reasoning, is already making significant inroads into the UK’s economic framework. The IPPR report highlights the susceptibility of entry-level, part-time, and administrative roles to automation facilitated by generative AI technologies. These technologies, capable of understanding and creating text, data, and software, present a formidable challenge to conventional job functions. Notably, the analysis warns of a gender disparity in automation’s impact, with women at higher risk because they are overrepresented in the jobs most exposed to automation.

The second wave of AI adoption

The report also signals a more pronounced impact during the next phase of AI integration. As businesses embed AI more deeply into their operations, extending to non-routine tasks that affect higher-earning jobs, a discussion about the workforce’s future and the need for strategic intervention becomes pressing.

Policy responses to mitigate AI’s impact

The trajectory of these developments, as the IPPR report suggests, will largely depend on the immediate policy decisions and strategic planning. 

The IPPR report calls for a job-centric industrial strategy for AI, focusing on protecting jobs, fostering the creation of new roles, and addressing the fallout from automation. This strategy includes recommendations like ring-fencing certain tasks to ensure they remain human-centric, adjusting fiscal policies to encourage job augmentation over replacement, and exploring new avenues for job creation in sectors less susceptible to automation, such as green jobs and social care.

Carsten Jung, senior economist at IPPR, emphasises that “Technology isn’t destiny, and a jobs apocalypse is not inevitable [...] Government, employers, and unions have the opportunity to make crucial design decisions now that ensure we manage this new technology well.”

China aims to establish advanced metaverse industrial cluster

China has unveiled a national plan to develop its own metaverse by 2025, with the goal of creating three to five globally influential metaverse companies. This plan was published by five Chinese ministries led by the Ministry of Industry and Information Technology in a policy document.

The policy blueprint, covering 2023 to 2025, highlights the application of metaverse technology in various industries, such as home appliances, automotive, and aerospace.

The development of artificial intelligence, blockchain, and virtual reality technologies will be key to achieving the metaverse vision, and the Chinese government aims to establish three to five industrial clusters around these emerging technologies. The document also suggests that manufacturing industries, including steel and textiles, can adopt related technologies to optimise scheduling, material calculation, and other parts of the production process.

Previously, some local authorities in China, such as Henan province and Shanghai, have issued their own policies to promote metaverse development, emphasising how it can support the economy and traditional industries.

Mexico City cafe introduces iris scanning for global digital ID project

Enthusiastic early adopters recently gathered at a café in Mexico City, where they underwent iris scanning using a futuristic sphere. The sphere is part of an ambitious initiative, the Worldcoin project, which aims to establish a unique digital identity for every individual worldwide and is active in around thirty countries, including Mexico.

Led by Sam Altman, CEO of OpenAI, and Tools for Humanity, the crypto company he co-founded, the project seeks to differentiate humans from bots online, offering participants a cryptocurrency incentive. The orb captures users’ iris images and converts them into unique numerical codes known as iris codes, which are used solely for confirming a user’s identity; the actual iris images are deleted by default. Project leaders assure that only the unique iris codes are retained for identity verification, but critics raise questions about data-processing timelines and the ownership of iris codes.
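In rough terms, the retention model described above can be sketched in a few lines of code. This is an illustrative simplification only: real iris codes are fuzzy biometric templates matched by similarity rather than exact hashes, and every function and value below is hypothetical.

```python
import hashlib


def iris_code(scan: bytes) -> str:
    # Stand-in transform: a real iris code is a binary feature vector
    # compared by Hamming distance, not a cryptographic hash. A hash is
    # used here only to show that a derived code, not the raw image,
    # is what gets stored.
    return hashlib.sha256(scan).hexdigest()


def enroll(scan: bytes, registry: set) -> str:
    """Store only the derived code; the raw scan is not retained."""
    code = iris_code(scan)
    registry.add(code)
    return code


def verify(scan: bytes, registry: set) -> bool:
    """Re-derive the code from a fresh scan and check membership."""
    return iris_code(scan) in registry


registry = set()
enroll(b"example-iris-scan", registry)
print(verify(b"example-iris-scan", registry))  # True
print(verify(b"someone-else", registry))       # False
```

The point of the design, as the project describes it, is that verification only ever needs the derived code, so the raw biometric can be discarded at enrolment time.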

Why does it matter?

While the Worldcoin project could revolutionise online interactions, it raises important ethical questions about data ownership, retention, and the potential misuse of biometric information. One participant, Jose Incera, who exchanged his iris scan for approximately $54 worth of Worldcoin cryptocurrency, observed that sharing personal information feels all but inevitable in the current digital era. This suggests that consumers are not opposed to sharing personal data as long as there are tangible benefits they can readily embrace. Ethical data collection, transparency, and ensuring that users fully comprehend the trade-offs remain crucial for responsible data-driven practices in the digital age.

AI will have a significant impact on jobs, the OECD said

According to the Organisation for Economic Co-operation and Development (OECD), more than a quarter of jobs in their member countries rely on skills that could be easily automated with the rise of AI. Furthermore, the survey found that three out of five workers fear losing their jobs to AI in the next decade. While the impact of AI on jobs has not been significant yet, the OECD believes this may be due to the early stages of the AI revolution.

The OECD further highlighted that, on average, 27% of the labour force in their member countries holds jobs that have a high risk of automation. Particularly in Eastern European countries, there is a significant vulnerability to job automation. The OECD defines jobs at the highest risk as those involving the utilisation of more than 25 out of the 100 skills and abilities that AI experts consider easily automatable.
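The OECD’s threshold can be illustrated with a toy classifier. The skill identifiers and job profiles below are hypothetical placeholders, not the OECD’s actual skill taxonomy; the sketch only shows how the ‘more than 25 of 100’ rule would be applied.

```python
# Hypothetical placeholder for the 100 skills and abilities that AI
# experts consider easily automatable (the OECD's real list is a
# curated taxonomy, not numbered IDs).
AUTOMATABLE = {f"skill_{i}" for i in range(100)}


def highest_risk(job_skills: set) -> bool:
    """A job is at highest risk if it uses more than 25 automatable skills."""
    return len(job_skills & AUTOMATABLE) > 25


# Illustrative job profiles
clerical = {f"skill_{i}" for i in range(30)}                 # 30 automatable skills
care_work = {f"skill_{i}" for i in range(10)} | {"empathy"}  # 10 automatable skills

print(highest_risk(clerical))   # True
print(highest_risk(care_work))  # False
```

Note that the rule is strictly ‘more than 25’: a job drawing on exactly 25 automatable skills would fall below the highest-risk cut-off.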

The aim of the survey was to assess the forthcoming AI revolution in OECD countries, where rapid progress, falling costs, and the growing availability of AI-skilled workers may lead to increased adoption of AI in firms. To prepare effectively for this revolution, gathering comprehensive data on AI implementation and its impact on jobs and skills is crucial.

Despite these concerns, two-thirds of workers who are already utilising AI indicated that automation had alleviated the monotony and risk associated with their jobs.