Google and University of Tokyo to launch AI initiative for regional solutions

Google LLC and the University of Tokyo are teaming up to leverage generative AI to tackle local challenges in Japan, such as the nation’s shrinking workforce. The initiative, featuring prominent AI researcher Professor Yutaka Matsuo, will be piloted in Osaka and Hiroshima prefectures, with plans to expand successful models nationwide by 2027.

In Osaka, the project aims to address employment mismatches by using AI to suggest job opportunities and career paths that job seekers might not have considered. That approach differs from traditional job placement agencies and will draw from extensive online data to offer more tailored job suggestions.

The specific focus for Hiroshima has yet to be determined. However, Hiroshima Governor Hidehiko Yuzaki expressed a vision for AI to provide detailed responses to relocation inquiries, signalling AI’s potential to shape the prefecture’s future.

Beyond these initial projects, Google suggests that generative AI could enhance medical care on remote islands and automate agriculture, forestry, and fisheries tasks. Professor Matsuo emphasised that effectively utilising generative AI presents a significant opportunity for Japan.

UNESCO warns of AI’s role in distorting Holocaust history

A new UNESCO report highlights the growing risk of Holocaust distortion through AI-generated content as young people increasingly rely on generative AI for information. The report, published with the World Jewish Congress, warns that AI can amplify biases and spread misinformation, as many AI systems are trained on internet data that includes harmful content. Such content has already led to fabricated testimonies and distorted historical records, including deepfake images and false quotes.

The report notes that generative AI models can ‘hallucinate’ or invent events due to insufficient or incorrect data. Examples include ChatGPT fabricating Holocaust events that never happened and Google’s Bard generating fake quotes. These kinds of ‘hallucinations’ not only distort historical facts but also undermine trust in experts and simplify complex histories by focusing on a narrow range of sources.

UNESCO calls for urgent action to implement its Recommendation on the Ethics of Artificial Intelligence, emphasising fairness, transparency, and human rights. It urges governments to adopt these guidelines and tech companies to integrate them into AI development. UNESCO also stresses the importance of working with Holocaust survivors and historians to ensure accurate representation and educating young people to develop critical thinking and digital literacy skills.

Ukrainian student’s identity misused by AI on Chinese social media platforms

Olga Loiek, a 21-year-old University of Pennsylvania student from Ukraine, experienced a disturbing twist after launching her YouTube channel last November. Her image was hijacked and manipulated through AI to create digital alter egos on Chinese social media platforms. These AI-generated avatars, such as ‘Natasha,’ posed as Russian women fluent in Chinese, promoting pro-Russian sentiments and selling products like Russian candies. These fake accounts amassed hundreds of thousands of followers in China, far surpassing Loiek’s own online presence.

Loiek’s experience highlights a broader trend of AI-generated personas on Chinese social media, presenting themselves as supportive of Russia and fluent in Chinese while selling various products. Experts reveal that these avatars often use clips of real women without their knowledge, aiming to appeal to single Chinese men. Some posts include disclaimers about AI involvement, but the followers and sales figures remain significant.

Why does it matter?

These events underscore the ethical and legal concerns surrounding AI’s misuse. As generative AI systems like ChatGPT become more widespread, issues related to misinformation, fake news, and copyright violations are growing.

In response, governments are starting to regulate the industry. China proposed guidelines to standardise AI by 2026, while the EU’s new AI Act imposes strict transparency requirements. However, experts like Xin Dai from Peking University warn that regulations struggle to keep pace with rapid AI advancements, raising concerns about the unchecked proliferation of AI-generated content worldwide.

Anthropic unveils Claude 3.5 Sonnet AI model

Anthropic, a startup backed by Google and Amazon, has introduced a new AI model named Claude 3.5 Sonnet alongside a revamped user interface to enhance productivity. The release comes just three months after the launch of its Claude 3 family of AI models. Claude 3.5 Sonnet surpasses its predecessor, Claude 3 Opus, in benchmark exam performance, speed, and cost efficiency, costing developers one-fifth as much.

CEO Dario Amodei emphasised AI’s flexibility and rapid advancement, noting that, unlike physical products, AI models can be quickly updated and improved. Anthropic’s latest technology is now available for free on Claude.ai and through an iOS app. Additionally, users can opt into a new feature called ‘Artifacts,’ which organises generated content in a side window, facilitating collaborative work and the production of finished products.

Anthropic’s rapid development cycle reflects the competitive nature of the AI industry, with companies like OpenAI and Google also pushing forward with significant AI advancements. The startup plans to release more models, including Claude 3.5 Opus, within the year while focusing on safety and usability.

US House of Representatives unlikely to pass broad AI regulations this year

The US House of Representatives is unlikely to pass broad AI regulation this year. House Majority Leader Steve Scalise said that he opposes extensive regulations, fearing they might hinder the US in AI development compared to China. Instead, he suggests focusing on existing laws and targeted fixes rather than creating new regulatory structures.

This stance contrasts with that of Senate Majority Leader Chuck Schumer, whose bipartisan AI working group report recommended a $32 billion annual investment in non-defense AI innovation and a comprehensive regulatory framework. The House’s bipartisan AI task force is also cautious about large-scale regulations.

Chair Rep. Jay Obernolte suggests that some targeted AI legislation might be feasible, while Rep. French Hill advocates for a sector-specific review under existing laws rather than a broad regulatory framework. This division between the House and Senate reduces the likelihood of significant AI legislation this year, but the House may consider smaller, urgent AI-related bills to address immediate issues.

Why does it matter?

The US Congress has seen a surge in proposed AI legislation in both the Senate and the House, driven by the rise of advanced AI models like ChatGPT and DeepAI and by growing issues with ‘deepfake’ content, particularly around elections and scams. However, the division between the chambers reduces the likelihood of significant AI legislation passing this year, though smaller, urgent AI-related bills may still be approved.

Amazon expands AI tools for European sellers

Amazon has expanded its generative AI tools for product listings to sellers in France, Germany, Italy, Spain, and the UK. These tools, designed to streamline the process of creating and enhancing product listings, can generate product descriptions, titles, and details and fill in any missing information. The rollout follows an initial introduction in the US and a quieter launch in the UK earlier this month.

The new AI tools aim to help sellers list products more quickly by allowing them to enter relevant keywords or upload product photos, after which the AI generates a product title, bullet points, and descriptions. While the AI-generated content can be edited, Amazon advises sellers to review the generated listings thoroughly to avoid inaccuracies. The company continuously improves these tools to make them more effective and user-friendly.

Earlier this year, Amazon also introduced a tool enabling sellers to create product listings by posting a URL to their existing website, though it remains uncertain when this feature will be available outside the US. The expansion of AI tools to European markets raises regulatory concerns, particularly regarding GDPR and the Digital Services Act, which require transparency in AI applications.

Why does it matter?

Despite these regulatory challenges, Amazon’s use of generative AI marks a significant advancement in e-commerce. By leveraging diverse sources of information, Amazon’s AI models can infer product details with high accuracy, improving the quality and efficiency of product listings at scale. However, the precise data used to train these models remains unclear, highlighting ongoing concerns about data privacy and usage.

African ministers endorse AI strategy for digital growth

African ICT and Communications Ministers have collectively endorsed the ‘Continental Artificial Intelligence (AI) Strategy and African Digital Compact’ to accelerate the continent’s digital evolution by harnessing the potential of new digital technologies. The decision came during the 2nd Extraordinary Session of the Specialized Technical Committee on Communication and ICT, attended by over 130 African ministers and experts. The aim is to drive digital transformation amidst rapid advancements fueled by AI technology and applications.

The Continental AI Strategy aims to guide African nations in utilising AI to fulfil development goals while ensuring ethical use, mitigating risks, and capitalising on opportunities. It emphasises an Africa-owned, people-centred, and inclusive approach to enhance the continent’s AI capabilities across infrastructure, talent, datasets, innovation, and partnerships while prioritising safeguards against potential threats.

African Union Commissioner for Infrastructure and Energy, Dr Amani Abou-Zeid, highlighted AI’s significant opportunities for positive transformation, economic growth, and social progress in Africa. The strategy underscores the importance of AI-enabled systems in fostering homegrown solutions and driving economic growth and sustainable development, in alignment with the AU Agenda 2063 and the UN Sustainable Development Goals. Leaders stressed the need for collective efforts to leverage AI to advance Africa’s digital agenda and achieve long-term developmental objectives.

UK parliamentary candidate introduces AI lawmaker concept

In a bold move highlighting the intersection of technology and politics, businessman Steve Endacott is running in the 4 July national election in Britain, aiming to become a member of parliament (MP) with the aid of an AI-generated avatar. The campaign leaflet for Endacott features not his own face but that of an AI avatar dubbed ‘AI Steve.’ The initiative, if successful, would result in the world’s first AI-assisted lawmaker.

Endacott, founder of Neural Voice, presented his AI avatar to the public in Brighton, engaging with locals on various issues through real-time interactions. The AI discusses topics like LGBTQ rights, housing, and immigration and then offers policy ideas, seeking feedback from citizens. Endacott aims to demonstrate how AI can enhance voter access to their representatives, advocating for a reformed democratic process where people are more connected to their MPs.

Despite some scepticism, with concerns about the effectiveness and trustworthiness of an AI MP, Endacott insists that the AI will serve as a co-pilot, formulating policies reviewed by a group of validators to ensure security and integrity. The Electoral Commission clarified that the elected candidate would remain the official MP, not the AI. While public opinion is mixed, the campaign underscores the growing role of AI in various sectors and sparks an important conversation about its potential in politics.

SoftBank to expand US power generation for AI

Founder Masayoshi Son announced that Japan’s SoftBank Group plans to expand its power generation business in the US to support global generative AI projects. SB Energy, backed by SoftBank, focuses on developing and operating renewable energy projects across the US. The initiative aligns with SoftBank’s strategy to explore new investment opportunities outside Japan.

Why does it matter?

At the annual shareholder meeting of SoftBank Corp, the group’s telecom arm, Son highlighted the importance of seeking innovative investments. He emphasised that SoftBank’s future growth would rely on identifying and nurturing emerging technologies and markets beyond Japan.

The current strategy reflects SoftBank’s commitment to advancing its global presence and influence in the tech and renewable energy sectors.

OpenAI co-founder to launch new AI company

Ilya Sutskever, co-founder and former chief scientist at OpenAI, announced on Wednesday the launch of a new AI company named Safe Superintelligence. The company aims to create a secure AI environment amidst the competitive generative AI industry. Based in Palo Alto and Tel Aviv, Safe Superintelligence will prioritise safety and security over short-term commercial pressures.

Sutskever made the announcement on social media, emphasising the company’s focused approach without the distractions of traditional management overhead or product cycles. Joining him as co-founders are Daniel Levy, a former OpenAI researcher, and Daniel Gross, co-founder of Cue and former AI lead at Apple.

Sutskever’s departure from Microsoft-backed OpenAI in May followed his involvement in the dramatic firing and rehiring of CEO Sam Altman in November of the previous year. His new venture underscores a commitment to advancing AI technology in a manner that ensures safety and long-term progress.