Chinese scientists develop world’s first AI military commander

China’s AI military commander substitutes for human military leaders in simulated war games hosted by the Joint Operations College of the National Defence University, amid growing tensions with the US over the use of militarised AI in combat. The bots, the first of their kind, are fully automated, possess the perception and reasoning skills of human military leaders, and are learning at an exponential rate. They have also been programmed to exhibit the weaknesses of some of the country’s most celebrated military leaders, such as generals Peng Dehuai and Lin Biao.

The AI arms race between the two countries presents a paradox: both have expressed interest in regulating the use of these unmanned implements on the battlefield, yet there is increasing media coverage of ongoing experiments and caged prototypes in both countries. These include rifle-toting robot dogs and surveillance and attack drones, some of which have reportedly already been used on battlefields in Gaza and Ukraine. The situation renders international rule-making in the space increasingly difficult, particularly as other players, such as NATO, seek to ramp up investments in tech-driven defence systems.

IMF calls for new fiscal policies to address AI’s economic and environmental impacts

The International Monetary Fund (IMF) has recommended fiscal policies for governments grappling with the economic impacts of AI, including taxes on excess profits and a levy to address AI-related carbon emissions. In a recent report, the IMF highlighted the rapid advancement of generative AI technologies like ChatGPT, which can simulate human-like text, voices, and images from simple prompts, noting their potential to spread quickly across industries.

One key suggestion from the IMF involves implementing a carbon tax to account for the significant energy consumption of AI servers used in data centres, which currently account for up to 1.5% of global emissions. The IMF emphasised the need to factor these environmental costs into the price of AI technologies.

The report also raised concerns about AI’s impact on job markets, predicting potential wage declines as a proportion of national income and increased inequality. It warned that AI could exacerbate job losses across various sectors, affecting white-collar professions such as law and finance and blue-collar jobs in manufacturing and trade.

Why does it matter?

To address these challenges, the IMF proposed measures such as enhancing capital income taxes, including corporation tax and personal income taxes on capital gains. It suggested reconsidering corporate income tax policies to prevent profit shifting and ensure fair taxation across sectors.

Additionally, the IMF recommended policies to support workers affected by AI-driven automation, including extending unemployment insurance and focusing on education and training programs tailored to new technologies. While the report expressed caution about universal basic income due to potential fiscal implications, it acknowledged the need for future considerations if AI disruption intensifies.

Era Dabla-Norris, co-author of the report and deputy director of the IMF’s fiscal affairs department, highlighted the importance of preparing for potential disruptions from AI and designing effective policies to mitigate their impacts on economies and societies.

AI chatbot’s mayoral bid halted by legal and ethical concerns in Wyoming

Victor Miller, 42, has stirred controversy by filing to run for mayor of Cheyenne, Wyoming, using a customised AI chatbot named VIC (virtual integrated citizen). Miller argued that VIC, powered by OpenAI technology, could effectively make political decisions and govern the city. However, OpenAI quickly shut down Miller’s access to their tools for violating policies against AI use in political campaigning.

The emergence of AI in politics underscores ongoing debates about its responsible use as technology outpaces legal and regulatory frameworks. Wyoming Secretary of State Chuck Gray clarified that state law requires candidates to be ‘qualified electors,’ meaning VIC, as an AI bot, does not meet the criteria. Despite this setback, Miller intends to continue promoting VIC’s capabilities using his own ChatGPT account.

Meanwhile, similar AI-driven campaigns have surfaced globally, including in the UK, where another candidate utilises AI models for parliamentary campaigning. Critics, including experts like Jen Golbeck from the University of Maryland, caution that while AI can support decision-making and manage administrative tasks, ultimate governance decisions should remain human-led. Despite the attention these AI candidates attract, observers like David Karpf from George Washington University dismiss them as gimmicks, highlighting the serious nature of elections and the need for informed human leadership.

Miller remains optimistic about the potential for AI candidates to influence politics worldwide. Still, the current consensus suggests that AI’s role in governance should be limited to supportive functions rather than decision-making responsibilities.

New social network app blends AI personas with user interactions

Butterflies, a new social network where humans and AI interact, has launched publicly on iOS and Android after five months in beta. Founded by former Snap engineering manager Vu Tran, the app allows users to create AI personas, called Butterflies, that post, comment, and message like real users. Each Butterfly has unique backstories, opinions, and emotions, enhancing the interaction beyond typical AI chatbots.

Tran developed Butterflies to provide a more creative and substantial AI experience. Unlike other AI chatbots from companies like Meta and Snap, Butterflies aims to integrate AI personas into a traditional social media feed, where AI and human users can engage with each other’s content. The app’s beta phase attracted tens of thousands of users, with some spending hours creating and interacting with hundreds of AI personas.

Butterflies’ unique approach has led to diverse user interactions, from creating alternate universe personas to role-playing in popular fictional settings. Vu Tran believes the app offers a wholesome way to interact with AI, helping people form connections that might be difficult in traditional social settings due to social anxiety or other barriers.

Initially free, Butterflies may introduce a subscription model and brand interactions in the future. Backed by a $4.8 million seed round led by Coatue and other investors, Butterflies aims to expand its functionality and continue to offer a novel way for users to explore AI and social interaction.

London cinema cancels AI-written film premiere after public backlash

A central London cinema has cancelled the premiere of a film written entirely by AI following a public backlash. The Prince Charles Cinema in Soho was set to host the world debut of ‘The Last Screenwriter,’ created by ChatGPT, but concerns about ‘the use of AI in place of a writer’ led to the screening being axed.

In a statement, the cinema explained that customer feedback highlighted significant concerns regarding AI’s role in the arts. The film, directed by Peter Luisi, was marketed as the first feature film written entirely by AI, and its plot centres on a screenwriter who grapples with an AI scriptwriting system that surpasses his abilities.

The cinema stated that the film was intended as an experiment to spark discussion about AI’s impact on the arts. However, the strong negative response from their audience prompted them to cancel the screening, emphasising their commitment to their patrons and the movie industry.

The controversy over AI’s role in the arts reflects broader industry concerns, as seen in last year’s SAG-AFTRA strike in Hollywood. The debate continues, with UK MPs now calling for measures to ensure fair compensation for artists whose work is used by AI developers.

AI boosts Bayer’s fight against resistant weeds

Bayer’s crop science division is leveraging AI to combat herbicide-resistant weeds, aiming to speed up the discovery of new solutions. With traditional herbicides losing effectiveness, Bayer urgently needs innovative approaches to help farmers manage these resilient weeds. The company’s Icafolin product, set to launch in Brazil in 2028, will be its first herbicide with a new mode of action in three decades.

Frank Terhorst, Bayer’s executive vice president of strategy and sustainability, highlighted that AI significantly enhances the efficiency of finding new herbicides by matching weed protein structures with targeted molecules. This AI-driven process allows for the use of vast amounts of data, making it faster and more reliable.

Bob Reiter, head of research and development at Bayer, noted that AI tools have already tripled the number of new modes of action discovered compared with a decade ago. This technological advancement promises to shorten the timeline for developing effective herbicides, offering a critical advantage in the ongoing fight against crop-destroying weeds.

G7 Italy summit unveils AI action plan to balance AI risks and opportunities

Adopted on June 14, 2024, at the G7 Summit in Apulia, Italy, the Group of Seven (G7) Leaders’ Communiqué expresses the wealthiest nations’ common pledges and actions to address multiple global issues. A portion of the declaration closing the Italian summit focuses on AI and other digital matters.

G7 leaders called for an action plan to manage AI’s risks and benefits, including developing and implementing an International Code of Conduct for organisations developing advanced AI systems, as unveiled last October under the Japanese G7 presidency. To maximise the advantages of AI while mitigating its threats, G7 nations commit to deepening their cooperation.

An action plan for the use of AI in the workplace was announced, together with the creation of a brand to promote the implementation and use of the International Code of Conduct for advanced AI systems, in cooperation with the OECD. G7 leaders stressed the importance of global partnership to bridge the digital divide and ensure that people around the world have access to the benefits of AI and other technologies. The goal is to advance science, improve public health, accelerate the clean energy transition, and promote the sustainable development goals.

Why does it matter?

The G7 is encouraging global collaboration within the group of countries, with the OECD, with other initiatives such as the Global Partnership on AI (GPAI), and with the developing world, to facilitate the equitable distribution of the benefits of AI and other emerging technologies while minimising their threats. G7 leaders aim to bridge technological gaps and address AI’s impact on workers. G7 labour ministers are tasked with designing measures to capitalise on AI’s potential, promote quality employment, and empower people, while also tackling potential barriers and risks to workers and labour markets.

G7 leaders agreed to intensify efforts to promote AI safety and enhance interoperability between diverse approaches to AI governance and risk management. That means strengthening collaboration between AI Safety Institutes in the US, UK, and equivalent bodies in other G7 nations and beyond, to improve global standards for AI development and implementation. The G7 also formed a ‘Semiconductors Point of Contact Group’ to strengthen cooperative efforts on addressing challenges affecting this critical industry that drives the AI ecosystem.

G7 nations’ commitments are consistent with the recent Seoul AI safety summit efforts and align with the intended goals of the upcoming United Nations Summit of the Future. Echoing the UN General Assembly’s landmark resolution on ‘seizing the opportunities of safe, secure, and trustworthy AI systems for sustainable development’ and Pope Francis’s historic address to the G7 leaders, the communiqué reflects the group’s unified stance on AI safety and the need for a framework for AI’s responsible development and use in the military.

Snapchat introduces advanced AI-powered AR features

Snap Inc, the owner of Snapchat, has unveiled a new iteration of its generative AI technology, enabling users to apply more realistic special effects when using their phone cameras. That move aims to keep Snapchat ahead of its social media competitors by enhancing its augmented reality (AR) capabilities, which superimpose digital effects onto real-world photos and videos.

In addition to this AI upgrade, Snap has introduced an enhanced version of its developer program, Lens Studio. The upgrade will significantly reduce the time required to create AR effects, cutting it from weeks to hours. The new Lens Studio also incorporates generative AI tools, including an AI assistant to help developers and a feature that can generate 3D images from text prompts.

Bobby Murphy, Snap’s chief technology officer, highlighted that these tools expand creative possibilities and are user-friendly, allowing even newcomers to create unique AR effects quickly. Plans for Snap include developing full-body AR experiences, such as generating new outfits, which are currently challenging to produce.

SewerAI utilises AI to detect sewer pipe issues

Sewage failures exacerbated by climate change and ageing infrastructure are becoming increasingly costly and common across the United States. The Environmental Protection Agency estimates that nearly $700 billion is required over the next two decades to maintain existing wastewater and stormwater systems. In response to these challenges, Matthew Rosenthal and Billy Gilmartin, veterans of the wastewater treatment industry, founded SewerAI five years ago. Their goal was to leverage AI to improve the inspection and management of sewer infrastructure.

SewerAI’s AI-driven platform offers cloud-based subscription products tailored for municipalities, utilities, and private contractors. Their tools, such as Pioneer and AutoCode, streamline field inspections and data management by enabling inspectors to upload data and automatically tag issues. That approach enhances efficiency and helps project managers plan and prioritise infrastructure repairs based on accurate 3D models generated from inspection videos.

Unlike traditional methods that rely on outdated on-premise software, SewerAI’s technology increases productivity and reduces costs by facilitating more daily inspections. The company has distinguished itself in the competitive AI-assisted pipe inspection market by leveraging a robust dataset derived from 135 million feet of sewer pipe inspections. This data underpins their AI models, enabling precise defect detection and proactive infrastructure management.

Recently, SewerAI secured $15 million in funding from investors like Innovius Capital, bringing their total raised capital to $25 million. This investment will support SewerAI’s expansion efforts, including AI model refinement, hiring initiatives, and diversification of their product offerings beyond inspection tools. The company anticipates continued growth as it meets rising demand and deploys its technology to empower organisations to achieve more with existing infrastructure budgets.

Award-winning ‘AI’ headless flamingo photo found to be real

A controversial photo of a seemingly headless flamingo, honoured in the AI category of the 1839 Awards’ Color Photography Contest, has ignited a heated debate over the ethical implications of AI in art and technology. The image has drawn criticism and concern from various sectors, including artists, technologists, and ethicists.

The photo, titled ‘F L A M I N G O N E,’ depicts what appears to be a flamingo without its head. Contrary to initial impressions, it wasn’t generated by an AI model from a text prompt at all: it is a real photograph of a very much intact flamingo, which photographer Miles Astray captured on the beaches of Aruba two years ago. After the photo won both third place in the category and the People’s Vote award, Astray revealed the truth, leading to his disqualification.

Proponents of AI-generated art assert that such creations push the boundaries of artistic expression, offering new and innovative ways to explore and challenge traditional concepts of art. They argue that the AI’s ability to produce unconventional and provocative images can be seen as a form of artistic evolution, allowing for greater diversity and creativity in the art world. However, detractors highlight the potential risks and ethical dilemmas posed by such technology. The headless flamingo photo, in particular, has been described as unsettling and inappropriate, sparking a broader conversation about the limits of AI-generated content. Concerns have been raised about the potential for AI to produce harmful or distressing images, and the need for guidelines and oversight to ensure responsible use.

The release of the headless flamingo photo has prompted a range of responses from the art and tech communities. Some artists view the image as a provocative statement on the nature of AI and its role in society, while others see it as a troubling example of the technology’s potential to create disturbing content. Tech experts emphasise the importance of developing ethical frameworks and guidelines for AI-generated art. They argue that while AI has the potential to revolutionise creative fields, it is crucial to establish clear boundaries and standards to prevent misuse and ensure that the technology is used responsibly.

‘“F L A M I N G O N E” accomplished its mission by sending a poignant message to a world grappling with ever-advancing, powerful technology and the profusion of fake images it brings. My goal was to show that nature is just so fantastic and creative, and I don’t think any machine can beat that. But, on the other hand, AI imagery has advanced to a point where it’s indistinguishable from real photography. So where does that leave us? What are the implications and the pitfalls of that? I think that is a very important conversation that we need to be having right now,’ Miles Astray told The Washington Post.

Why does it matter?

The controversy surrounding the supposedly AI-generated headless flamingo photo highlights the broader ethical challenges posed by artificial intelligence in creative fields. As AI technology continues to advance, it is increasingly capable of producing highly realistic and complex images. That raises important questions about the role of AI in art, the responsibilities of creators and developers, and the need for ethical guidelines to navigate these new frontiers.