Although Judy Garland never recorded herself reading ‘The Wonderful Wizard of Oz,’ fans will soon be able to hear her rendition thanks to a new app by ElevenLabs. The AI company has launched the Reader app, which can convert text into voice-overs using digitally produced voices of deceased celebrities, including Garland, James Dean, and Burt Reynolds. The app can transform articles, e-books, and other text formats into audio.
Dustin Blank, head of partnerships at ElevenLabs, emphasised the company’s respect for the legacies of these celebrities. The company has reached agreements with the actors’ estates, though compensation details remain undisclosed. The initiative highlights AI’s potential in Hollywood, especially for creating content with synthetic voices, but it also raises important questions about the licensing and ethical use of AI.
The use of AI-generated celebrity voices comes amid growing concerns about authenticity and copyright in creative industries. ElevenLabs had previously faced scrutiny when its tool was reportedly used to create a fake robocall from President Joe Biden. Similar controversies have arisen, such as OpenAI’s introduction of a voice similar to Scarlett Johansson’s, which she publicly criticised.
As AI technology advances, media companies are increasingly utilising it for voiceovers. NBC recently announced the use of an AI version of sportscaster Al Michaels for Olympics recaps on its Peacock streaming platform, with Michaels receiving compensation. While the market for AI-generated voices remains uncertain, the demand for audiobooks narrated by recognisable voices suggests a promising future for this technology.
Meta is set to integrate more generative AI technology into its virtual, augmented, and mixed-reality games, aiming to boost its struggling metaverse strategy. According to a recent job listing, the company plans to create new gaming experiences that change with each playthrough and follow unpredictable paths. The initiative will initially focus on Horizon, Meta’s suite of metaverse games and applications, but could extend to other platforms like smartphones and PCs.
These developments are part of Meta’s broader effort to enhance its metaverse offerings and address the financial challenges faced by Reality Labs, the division responsible for its metaverse projects. Despite selling millions of Quest headsets, Meta has struggled to attract users to its Horizon platform and mitigate substantial operating losses. Recently, the company began allowing third-party manufacturers to license Quest software features and increased investment in metaverse gaming, spurred by CEO Mark Zuckerberg’s growing interest in the field.
Meta’s interest in generative AI is not new. In 2022, Zuckerberg demonstrated a prototype called Builder Bot, which let users create virtual worlds from simple prompts. Additionally, Meta’s CTO, Andrew Bosworth, has highlighted the potential of generative AI tools to democratise content creation within the metaverse, likening their impact to that of Instagram on personal content creation.
Generative AI is already making waves in game development, with companies like Disney-backed Inworld using the technology to enhance game dialogues and narratives. While some game creators are concerned about the impact on their jobs, Meta is committed to significant investments in generative AI, even though CEO Zuckerberg cautioned that it might take years for these investments to become profitable.
Google’s annual sustainability report reveals a nearly 50% increase in greenhouse gas emissions since 2019, primarily due to its data centres and supply chain. The 2024 Environmental Report indicates that Google emitted 14.3 million tons of CO2 equivalent last year, raising concerns about its goal of reaching net zero by 2030. The company expects emissions to rise further before declining, attributing the trend to the growing energy demands of AI integration and increased investment in technical infrastructure.
Efforts to make data centres more efficient, such as using a new generation of tensor processing units (TPUs), have been offset by the rising energy consumption required for AI. Scope 2 emissions, mainly from data centre electricity use, increased by 37% compared to 2022. The rise outpaced the company’s ability to implement carbon-free energy projects, particularly in the United States and Asia-Pacific region. Differences between Google’s global approach to carbon-free energy and the regional guidelines of the GHG Protocol have also contributed to this mismatch.
Scope 3 emissions, which account for 75% of Google’s overall emissions, rose by 8% year-on-year. These indirect emissions from the supply chain are expected to continue increasing due to capital expenditures and investments in AI-related infrastructure. A single generative AI query consumes nearly ten times the power of a regular Google search, highlighting the significant energy demands of AI technology.
Why does it matter?
Additionally, Google’s data centres consume more than three times as much water for cooling as Microsoft’s, underscoring the environmental challenges posed by the tech giant’s operations. The report suggests that while Google is making strides in efficiency, the rapid growth of AI and its associated infrastructure presents significant sustainability challenges.
A recent research paper from Google reveals that generative AI already distorts socio-political reality and scientific consensus. The paper, titled ‘Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data,’ was co-authored by researchers from Google DeepMind, Jigsaw, and Google.org.
It categorises various ways generative AI tools are misused, analysing around 200 incidents reported in the media and academic papers between January 2023 and March 2024. Unlike warnings about hypothetical future risks, this research focuses on the real harm generative AI is currently causing, such as flooding the internet with generated text, audio, images, and videos.
The researchers found that most AI misuse involves exploiting system capabilities rather than attacking the models themselves. However, this misuse blurs the line between authentic and deceptive content, undermining public trust. AI-generated content is being used for impersonation, creating non-consensual intimate images, and amplifying harmful content. These activities often stay within the terms of service of AI tools, highlighting a significant challenge in regulating such misuse.
Google’s research also emphasises the environmental impact of generative AI. The increasing integration of AI into various products drives energy consumption, making it difficult to reduce emissions. Despite efforts to improve data centre efficiency, the overall rise in AI use has outpaced these gains. The paper calls for a multi-faceted approach to mitigate AI misuse, involving collaboration between policymakers, researchers, industry leaders, and civil society.
According to Sabine Keller-Busse, head of UBS’s domestic Swiss business, the bank is experiencing a significant shift in AI-driven client interactions. She compared the change to patients visiting doctors with pre-formed ideas about their ailments, noting that clients now use AI to generate proposals for the bank.
Speaking at the Point Zero Forum in Zurich, Keller-Busse highlighted the impact of tools like ChatGPT in making more data available, emphasising that UBS must adapt to this new client behaviour.
The bank has been integrating AI into its services and products, launching a pilot programme last year for instant credit aimed at small and mid-sized businesses with urgent liquidity needs. The service bypasses credit officers, expediting approval of standard credit products. Keller-Busse described this as the beginning of AI’s transformative potential in the banking industry.
As AI continues to evolve, UBS is keenly aware of its growing role in shaping client interactions and service delivery. The bank’s early adoption of AI-driven solutions demonstrates its commitment to leveraging technology to meet its clients’ changing needs, promising future innovations.
AI is revolutionising diagnostic testing by identifying diseases much earlier than traditional methods. Its ability to analyse vast amounts of data is uncovering signs of disease that previously went undetected. For instance, researchers at Peking University have discovered that facial temperature patterns, captured with thermal cameras and analysed by AI, can indicate chronic illnesses such as diabetes and high blood pressure.
Recent advancements highlight AI’s potential in diagnostics. University of British Columbia researchers found a new subtype of endometrial cancer, and another study revealed that AI could identify Parkinson’s disease up to seven years before symptoms appear. These breakthroughs demonstrate how AI can sift through large datasets to identify patterns and markers that traditional methods might miss.
Why does it matter?
The integration of AI in diagnostics is making testing more personalised and predictive. AI analyses data from individual patient records and real-time wearables to tailor diagnoses and treatment plans. Despite concerns about AI infringing on doctors’ roles, experts like John Halamka from the Mayo Clinic emphasise that AI enhances doctors’ capabilities rather than replacing them. However, ensuring data transparency and addressing biases in AI algorithms remain critical challenges.
As AI continues to evolve, patients can expect more personalised and early detection of diseases during routine tests. This technology promises to provide new insights and recommendations that can significantly impact healthcare outcomes.
Apple Inc. has secured an observer role on OpenAI’s board, further solidifying their growing partnership. Phil Schiller, head of Apple’s App Store and former marketing chief, will take on this position. As an observer, Schiller will attend board meetings without voting rights or other director powers. The development follows Apple’s announcement of integrating ChatGPT into its devices, such as the iPhone, iPad, and Mac, as part of its AI suite.
The observer role puts Apple on a par with Microsoft Corp., OpenAI’s principal backer, and offers the company valuable insights into OpenAI’s decision-making processes. However, the rivalry between Microsoft and Apple might lead to Schiller’s exclusion from certain discussions, particularly those concerning future AI initiatives between OpenAI and Microsoft. Schiller’s extensive experience with Apple’s brand makes him a suitable candidate for the role, despite his lack of direct involvement in Apple’s AI projects.
The partnership with OpenAI is a key part of Apple’s broader AI strategy, which includes a variety of in-house features under Apple Intelligence. These features range from summarising articles and notifications to creating custom emojis and transcribing voice memos. The integration of OpenAI’s chatbot will meet current consumer demand, and a paid version of ChatGPT could generate App Store fees. No money changes hands in the arrangement itself: OpenAI gains access to Apple’s vast user base, while Apple benefits from the chatbot’s capabilities.
Apple is also in discussions with Alphabet Inc.’s Google, startup Anthropic, and Chinese companies Baidu Inc. and Alibaba Group Holding Ltd. to offer more chatbot options to its customers. Initially, Apple Intelligence will be available in American English, with plans for an international rollout. Furthermore, a collaboration like this marks a rare instance of an Apple executive joining the board of a major partner, highlighting the significance of the partnership in Apple’s AI strategy.
Brazil’s National Data Protection Authority (ANPD) has taken immediate action to halt the implementation of Meta’s new privacy policy concerning the use of personal data to train generative AI systems within the country.
The ANPD’s precautionary measure, announced in Brazil’s official gazette, suspends the processing of personal data across all Meta products, extending to individuals who are not users of the tech company’s platforms. The regulatory body, operating under Brazil’s Justice Ministry, has imposed a daily fine of 50,000 reais ($8,836.58) for any violation of the directive.
The decision by the ANPD was motivated by the perceived ‘imminent risk of serious and irreparable or difficult-to-repair damage to the fundamental rights of affected individuals.’ As a result, Meta is mandated to revise its privacy policy to eliminate the segment related to the processing of personal data for generative AI training. Additionally, Meta must issue an official statement confirming the suspension of personal data processing for this purpose.
In response to the ANPD’s ruling, Meta expressed disappointment, characterising the move as a setback for innovation and predicting a delay in delivering AI benefits to the Brazilian population. Meta defended its practices by pointing to its transparency relative to other industry players that have used public content to train models and products, asserting that its approach complies with Brazil’s privacy laws and regulations.
Vodafone has called for the establishment of a ‘Connectivity Union’ to accelerate Europe’s digital ambitions and bolster its global competitiveness. Emphasising the crucial role of next-generation connectivity, particularly 5G standalone technology, Vodafone argues that this is essential for European businesses to fully harness the industrial value of the internet and emerging technologies such as AI. The company warns that Europe risks falling behind in the global digital race unless current connectivity issues are addressed.
The European Commission has identified several challenges in the connectivity sector, including fragmentation, excessive costs, and regulations that apply inconsistently to companies offering similar services. These issues threaten the achievement of Europe’s digital decade targets and put the region at a significant competitive disadvantage.
Vodafone stresses that Europe needs urgent action from policymakers to close the 5G investment gap and turn its digital future around. Joakim Reiter, Chief External & Corporate Affairs Officer at Vodafone, highlighted the urgency of resetting Europe’s telecoms policy regime. He proposed a new Connectivity Union that would bring together the European Commission, national governments, and industry stakeholders to tackle the shortcomings in Europe’s connectivity sector more aggressively.
In response to the European Commission’s consultation paper, Vodafone outlined five key policy pillars for a new Digital Communications Framework for Europe. These include enhancing investment competition in mobile and fixed markets, advocating for pro-investment spectrum policies, ensuring fair regulation based on services offered, implementing a harmonised security framework, and creating a stable policy environment that incorporates sustainability requirements. These pillars aim to end the piecemeal policy approach to telecoms and lay the foundation for a robust Connectivity Union.
Florida International University’s Moss Department of Construction Management is at the forefront of a revolution in the industry. They’re equipping students with the tools to leverage AI for increased efficiency and safety on construction sites.
Imagine generating blueprints with just a few specifications or having a watchful eye constantly monitoring a site for safety hazards. These are just a few ways AI is transforming construction. Students like Kaelan Dodd are already putting this knowledge to work. ‘An AI tool I tried at my job based on what I learned at FIU lets us create blueprints in seconds,’ Dodd said, impressed by the technology’s potential.
But FIU’s course goes beyond simply using AI. Professor Lufan Wang believes students should not just use the technology but understand it. By teaching them to code, she gives them a ‘translator’ for communicating with AI and for providing the feedback needed to improve its capabilities. This approach prepares students not only to navigate the constantly evolving world of AI but also to shape its future applications in construction.
The benefits of AI extend far beyond efficiency. Construction is a field where safety is paramount, and AI can be a valuable ally. Imagine having a tireless AI assistant analyse thousands of construction site photos to identify potential hazards or sending an AI-powered robot into a dangerous situation to gather information. These are a few ways AI can minimise risk and potentially save lives. While AI won’t replace human construction managers entirely, it can take on the most dangerous tasks, allowing human expertise to focus on what it does best – guiding and overseeing complex projects.