NewsBreak’s AI error sparks controversy

Last Christmas Eve, NewsBreak, a popular news app, published a false report about a shooting in Bridgeton, New Jersey. The Bridgeton police quickly debunked the story, which had been generated by AI, stating that no such event had occurred. NewsBreak, which operates out of Mountain View, California, and has offices in Beijing and Shanghai, removed the erroneous article four days later, attributing the mistake to its content source.

NewsBreak, known for filling the void left by shuttered local news outlets, uses AI to rewrite news from various sources. However, this method has led to multiple errors, including incorrect information about local charities and fictitious bylines. In response to growing criticism, NewsBreak added a disclaimer about potential inaccuracies to its homepage. With over 50 million monthly users, the app primarily targets a demographic of suburban or rural women over 45 without college degrees.

The company has also faced legal challenges over its AI-generated content: NewsBreak paid Patch Media $1.75 million to settle a copyright infringement lawsuit, and it reached a settlement with Emmerich Newspapers in a similar case. The company’s ties to China have drawn scrutiny as well, since half of its employees are based there, prompting worries about data privacy and security.

Despite these issues, NewsBreak maintains that it complies with US data laws and operates on US-based servers. Its CEO, Jeff Zheng, emphasises the company’s identity as a US business, which he says is crucial to its long-term credibility and success.

Young Americans show mixed embrace of AI, survey reveals

Young Americans are beginning to embrace generative AI, but few use it daily and many have never tried it, according to a recent survey by Common Sense Media, Hopelab, and Harvard’s Center for Digital Thriving. The survey, conducted in October and November 2023 with 1,274 US teens and young adults aged 14-22, found that only 4% use AI tools daily, 41% have never used them, and 8% do not know what they are. The most common uses among respondents are seeking information (53%) and brainstorming (51%).

Use also differs by demographic: 40% of white respondents report using AI for schoolwork, compared with 62% of Black respondents and 48% of Latino respondents. Looking ahead, 41% believe AI will have both positive and negative impacts in the next decade. Notably, 28% of LGBTQ+ respondents expect mostly negative impacts, compared with 17% of cisgender/straight respondents. Opinions vary widely overall: some view AI as a sign of a changing world and are enthusiastic about its future, while others find it unsettling and concerning.

Why does it matter?

Young people globally share these concerns over AI, which the IMF predicts will affect nearly 40% of jobs worldwide, and up to 60% in advanced economies. For comparison with the results above, a survey of 1,000 young Hungarians (aged 15-29) found that frequent users of AI apps are more positive about its benefits, while 38% of occasional users remain sceptical. Additionally, 54% believe humans will maintain control over AI, though women fear a loss of control more often than men (54% versus 37%).

China launches AI chatbot based on Xi Jinping’s ideology

China has unveiled an AI chatbot built around President Xi Jinping’s political ideology. The chatbot, named ‘Xue Xi’, aims to propagate ‘Xi Jinping Thought’ through conversational interactions with users. Xi Jinping Thought, formally ‘Xi Jinping Thought on Socialism with Chinese Characteristics for a New Era’, comprises 14 principles, including ensuring the absolute power of the Chinese Communist Party, strengthening national security and socialist values, and improving people’s livelihoods and well-being.

Developed by a team at Tsinghua University, ‘Xue Xi’ utilises natural language processing to engage users in discussions about Xi Jinping’s ideas on governance, socialism with Chinese characteristics, and national rejuvenation. The chatbot was trained on seven databases, six of them related to information technologies, supplied by China’s internet watchdog, the Cyberspace Administration of China (CAC).

The chatbot is the latest step in a broader strategy to spread the Chinese leader’s ideology, using technology to strengthen ideological education and promote loyalty among citizens. Students already have to take classes on Xi Jinping Thought in schools, and an app called Study Xi Strong Nation was rolled out in 2019 to let users learn about and take quizzes on his ideology.

Why does it matter?

The launch of Xue Xi raises important questions about the intersection of AI technology and political ideology. It represents China’s innovative approach to using AI for ideological dissemination, aiming to ensure widespread adherence to Xi Jinping Thought. By deploying AI in this manner, China advances its technological capabilities and seeks to shape public discourse and reinforce state-approved narratives. Critics argue that such initiatives could exacerbate issues related to censorship and surveillance, potentially limiting freedom of expression and promoting conformity to government viewpoints. Moreover, the development of ‘Xue Xi’ underscores China’s broader ambition to lead in AI development, positioning itself as a pioneer in using technology for ideological governance.

Adobe removes AI imitations after Ansel Adams estate complaint

Adobe faced backlash this weekend after the Ansel Adams estate criticised the company for selling AI-generated imitations of the famous photographer’s work. The estate posted a screenshot on Threads showing ‘Ansel Adams-style’ images on Adobe Stock, stating that Adobe’s actions had pushed them to their limit. Adobe allows AI-generated images on its platform but requires users to have appropriate rights and prohibits content created using prompts with other artists’ names.

In response, Adobe removed the offending content and reached out to the Adams estate, which claimed it had been contacting Adobe since August 2023 without resolution. The estate urged Adobe to respect intellectual property and support the creative community proactively. Adobe Stock’s Vice President, Matthew Smith, noted that moderators review all submissions, and the company can block users who violate rules.

Adobe’s Director of Communications, Bassil Elkadi, confirmed they are in touch with the Adams estate and have taken appropriate steps to address the issue. The Adams estate has thanked Adobe for the removal and expressed hope that the issue is resolved permanently.

Microsoft notes little AI impact on EU election disinformation

Microsoft’s president, Brad Smith, said the company has yet to observe significant use of AI to create disinformation campaigns around the upcoming European Parliament elections. The remarks came as Microsoft announced plans to invest 33.7 billion Swedish crowns ($3.21 billion) to expand its cloud and AI infrastructure in Sweden over the next two years. Smith acknowledged the risks of AI-generated deepfakes and abusive content but noted that the European elections have not been heavily targeted by such efforts.

Smith highlighted that while AI-generated fakes have been increasingly used in elections in countries like India, the United States, Pakistan, and Indonesia, the European context appears less affected. For instance, in India, deepfake videos of Bollywood actors criticising Prime Minister Narendra Modi and supporting the opposition went viral. In the EU, a Russian-language video falsely claimed that citizens were fleeing Poland for Belarus, but the EU’s disinformation team debunked it.

Ahead of the European Parliament elections on June 6-9, Microsoft’s training for candidates on monitoring AI-related disinformation appears to be paying off. While careful not to declare victory prematurely, Smith emphasised that current threats focus more on events like the Olympics than on the elections; this follows the International Olympic Committee’s ban on the Russian Olympic Committee for recognising councils in Russian-occupied regions of Ukraine. Microsoft plans to release a detailed report on the issue soon.

Survey reveals concerns over potential AI abuse in US presidential election

A recent survey conducted by the Elon University Poll and the Imagining the Digital Future Center at Elon University has revealed widespread concerns among American adults regarding the impact of AI on the upcoming presidential election. According to the survey, more than three-fourths of respondents believe that abuses involving AI systems will influence the election outcome. Specifically, 73% of respondents fear AI will be used to manipulate social media, while 70% anticipate the spread of fake information through AI-generated content like deepfakes.

Moreover, the survey highlights concerns about targeted AI manipulation to dissuade certain voters from participating in the election, with 62% of respondents expressing apprehension about this possibility. Overall, 78% of Americans anticipate at least one form of AI abuse affecting the election, while over half believe all three identified forms are likely to occur. Lee Rainie, director of Elon University’s Imagining the Digital Future Center, notes that voters in the USA anticipate facing significant challenges in navigating misinformation and voter manipulation tactics facilitated by AI during the campaign period.

The survey underscores a strong consensus among Americans regarding the accountability of political candidates who maliciously alter or fake photos, videos, or audio files. A resounding 93% of respondents believe such candidates should face punishment, with opinions split between removal from office (46%) and criminal prosecution (36%). Additionally, the survey reveals concerns about the public’s ability to discern faked media, as 69% of respondents lack confidence in most voters’ ability to detect altered content.

Chinese researchers develop AI hospital town to revolutionise healthcare

AI is making significant strides in the healthcare sector, with Chinese researchers developing an AI hospital town that promises to revolutionise medical training and treatment. Dubbed ‘Agent Hospital’, this virtual environment, created by Tsinghua University researchers, features large language model (LLM)-powered intelligent agents that act as doctors, nurses, and patients, all capable of autonomous interaction. These AI agents can treat thousands of patients quickly, achieving a 93.06% accuracy rate on medical exam questions. This innovative approach aims to enhance the training of medical professionals by allowing them to practise in a risk-free, simulated environment.
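
How such a system might work can be sketched in miniature: role-conditioned LLM agents exchange messages in a loop, each keeping its own conversational memory. The Python sketch below is purely illustrative and rests on our own assumptions; the `complete` stub, the `Agent` class, and the consultation loop are hypothetical stand-ins, not the Tsinghua team’s published design.

```python
from dataclasses import dataclass, field

def complete(role_prompt: str, dialogue: list[str]) -> str:
    """Placeholder for an LLM call; a real system would query a model API here."""
    return f"({role_prompt}) reply to: {dialogue[-1]}"

@dataclass
class Agent:
    name: str
    role_prompt: str                  # fixed persona, e.g. doctor or patient
    memory: list[str] = field(default_factory=list)

    def respond(self, incoming: str) -> str:
        self.memory.append(incoming)  # agents accumulate conversational memory
        reply = complete(self.role_prompt, self.memory)
        self.memory.append(reply)
        return reply

def consultation(doctor: Agent, patient: Agent, turns: int = 3) -> list[str]:
    """Run one simulated doctor-patient exchange and return the transcript."""
    transcript = []
    message = "Please describe your symptoms."
    for _ in range(turns):
        message = patient.respond(message)   # patient reports symptoms
        transcript.append(f"{patient.name}: {message}")
        message = doctor.respond(message)    # doctor asks follow-ups or diagnoses
        transcript.append(f"{doctor.name}: {message}")
    return transcript

if __name__ == "__main__":
    doctor = Agent("Dr Agent", "an AI doctor in a simulated hospital")
    patient = Agent("Patient 001", "a simulated patient with flu-like symptoms")
    for line in consultation(doctor, patient):
        print(line)
```

Running many such consultations at scale is, per the researchers, how the agent doctors accumulate treatment experience, the mechanism behind the exam-accuracy figure cited above.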

The AI hospital town not only offers advanced training opportunities for medical students but also has the potential to transform real-world healthcare delivery. The AI hospital can provide valuable insights and predictions by simulating various medical scenarios, including the spread of infectious diseases. The system utilises a vast repository of medical knowledge, enabling AI doctors to handle numerous cases efficiently and accurately, paving the way for high-quality, affordable, and convenient healthcare services.

While the future of AI in healthcare appears promising, significant challenges remain in implementing and promoting AI-driven medical solutions. Ensuring strict adherence to medical regulations, validating technological maturity, and developing effective AI-human collaboration mechanisms are essential to mitigate risks to public health. Experts emphasise that despite the impressive capabilities of AI, it can only partially replace the human touch in medicine. Personalised care, compassion, and legal responsibilities are aspects that AI cannot replicate, highlighting the indispensable role of human doctors in healthcare.

Microsoft to invest $3.21 billion in Sweden’s cloud and AI infrastructure

Microsoft announced on Monday a significant investment of 33.7 billion Swedish crowns ($3.21 billion) to enhance its cloud and AI infrastructure in Sweden over the next two years. This investment marks the company’s largest commitment to Sweden to date and includes plans to train 250,000 individuals in AI skills, aiming to boost the country’s competitiveness in the tech sector. Microsoft Vice Chair and President Brad Smith emphasised that this initiative goes beyond technology, focusing on providing widespread access to essential tools and skills for Sweden’s people and economy.

As part of this investment, Microsoft plans to deploy 20,000 advanced graphics processing units (GPUs) across its data centre sites in Sandviken, Gavle, and Staffanstorp. GPUs excel at the highly parallel computations that training and running AI models demand, expanding the efficiency and capability of Microsoft’s AI infrastructure. Smith was scheduled to meet with Swedish Prime Minister Ulf Kristersson in Stockholm to discuss the investment and its implications for the country’s tech landscape.

In addition to bolstering AI infrastructure in Sweden, Microsoft is committed to promoting AI adoption throughout the Nordic region, which includes Denmark, Finland, Iceland, and Norway. The strategic move underscores Microsoft’s dedication to fostering innovation and equipping the Nordic countries with the necessary resources to thrive in the evolving AI era.

AMD unveils new AI chips to challenge Nvidia

Advanced Micro Devices (AMD) unveiled its latest AI processors at the Computex technology trade show in Taipei on Monday, signalling its commitment to challenging Nvidia’s dominance in the AI semiconductor market. AMD CEO Lisa Su introduced the MI325X accelerator, set for release in late 2024, and outlined the company’s ambitious roadmap to develop new AI chips annually. The move aligns with Nvidia’s strategy, as both companies race to meet the soaring demand for advanced AI data centre chips essential for generative AI programs.

AMD is not only aiming to compete with Nvidia but also to surpass it with innovations like the MI350 series, expected in 2025, which promises a 35-fold improvement in AI inference performance over current models. The company also previewed the MI400 series, set for 2026, featuring a new architecture called ‘Next’. Su emphasised that AI is the company’s top priority, driving a focus on rapid product development to maintain a competitive edge in the market.

The shift towards an annual product cycle reflects the growing importance of AI capabilities in the tech industry. Investors who have been keenly following the AI chip market have seen AMD’s shares more than double since the start of 2023, though Nvidia’s shares have surged even more dramatically. AMD has also raised its 2024 AI chip sales projection by $500 million to $4 billion and plans to introduce new central processing units (CPUs) and neural processing units (NPUs) for AI tasks in PCs.

Why does it matter?

As the PC market looks to rebound from a prolonged slump, AMD is banking on its advanced AI capabilities to drive growth. Major PC providers like HP and Lenovo are set to incorporate AMD’s AI chips in their devices, which already meet Microsoft’s Copilot+ PC requirements. This strategic focus on AI-enhanced hardware highlights AMD’s commitment to staying at the forefront of technological innovation and market demand.

OpenAI uncovers misuse of AI in deceptive campaigns

OpenAI, led by Sam Altman, announced it had disrupted five covert influence operations that misused its AI models for deceptive activities online. Over the past three months, actors from Russia, China, Iran, and Israel used AI to generate fake comments, articles, and social media profiles. These operations targeted issues such as Russia’s invasion of Ukraine, the Gaza conflict, Indian elections, and politics in Europe and the US, aiming to manipulate public opinion and influence political outcomes.

Despite these efforts, OpenAI said the deceptive campaigns did not achieve increased audience engagement. The company emphasised that the operations combined AI-generated and manually created content. The announcement highlights ongoing concerns about the use of AI technology to spread misinformation.

In response to these threats, OpenAI has formed a Safety and Security Committee, led by CEO Sam Altman and other board members, to oversee the training of its next AI model. Additionally, Meta Platforms reported similar findings of likely AI-generated content used deceptively on Facebook and Instagram, underscoring the broader issue of AI misuse in digital platforms.