McDonald’s halts AI ordering test in drive-thrus

McDonald’s has decided to discontinue the use of AI ordering technology that was being tested at over 100 drive-thru locations in the US. The company had collaborated with IBM to develop and test this AI-driven, voice-automated system. Despite this decision, McDonald’s remains committed to exploring AI solutions, noting that IBM will remain a trusted partner in other areas. The discontinuation of this specific technology is set to occur by 26 July 2024.

The partnership between McDonald’s and IBM began in 2021 as part of McDonald’s ‘Accelerating the Arches’ growth plan, which aimed to enhance customer experience through Automated Order Taking (AOT) technology. IBM highlighted the AOT’s capabilities as being among the most advanced in the industry, emphasising its speed and accuracy. Nonetheless, McDonald’s is reassessing its strategy for implementing AOT and intends to find long-term, scalable AI solutions by the end of 2024.

McDonald’s move to pause its AI ordering technology reflects broader challenges within the fast-food industry’s adoption of AI. Other chains like White Castle and Wendy’s have also experimented with similar technologies. However, these initiatives have faced hurdles, including customer complaints about incorrect orders due to the AI’s difficulty in understanding different accents and filtering out background noise. Despite these setbacks, the fast-food sector continues to push forward with AI innovations to improve operational efficiency and customer service.

FCC names Royal Tiger as first official AI robocall scammer gang

The US Federal Communications Commission (FCC) has identified Royal Tiger as the first official AI robocall scammer gang, marking a milestone in efforts to combat sophisticated cyber fraud. Royal Tiger has used advanced techniques like AI voice cloning to impersonate government agencies and financial institutions, deceiving millions of Americans through robocall scams.

These scams involve automated systems that mimic legitimate entities to trick individuals into divulging sensitive information or making fraudulent payments. Despite the FCC’s actions, experts warn that AI-driven scams will likely increase, posing significant challenges in protecting consumers from evolving tactics such as caller ID spoofing and persuasive social engineering.

While the FCC’s move aims to raise awareness and disrupt criminal operations, individuals are urged to remain vigilant. Recommended precautions include treating unsolicited calls with scepticism, using call-blocking services, and verifying a caller’s identity by contacting the organisation’s official number directly. Avoiding sharing personal information over the phone until a caller’s legitimacy is confirmed is crucial to mitigating the risks posed by these scams.

Why does it matter?

As technology continues to evolve, coordinated efforts between regulators, companies, and the public are essential in staying ahead of AI-enabled fraud and ensuring robust consumer protection measures are in place. Vigilance and proactive reporting of suspicious activities remain key in safeguarding against the growing threat of AI-driven scams.

AI tools struggle with election questions, raising voter confusion concerns

As the ‘year of global elections’ reaches its midpoint, AI chatbots and voice assistants are still struggling with basic election questions, risking voter confusion. The Washington Post found that Amazon’s Alexa often failed to correctly identify Joe Biden as the 2020 US presidential election winner, sometimes providing irrelevant or incorrect information. Similarly, Microsoft’s Copilot and Google’s Gemini refused to answer such questions, redirecting users to search engines instead.

Tech companies are increasingly investing in AI to provide definitive answers rather than lists of websites. This feature is particularly important as false claims about the 2020 election being stolen persist, even after multiple investigations found no fraud. Trump faced federal charges for attempting to overturn the victory of Biden, who won decisively with over 51% of the popular vote.

OpenAI’s ChatGPT and Apple’s Siri, however, correctly answered election questions. Seven months ago, Amazon claimed to have fixed Alexa’s inaccuracies, and recent tests showed Alexa correctly stating that Biden won the 2020 election. Nonetheless, inconsistencies were spotted last week. Microsoft and Google, in turn, said they avoid answering election-related questions to reduce risks and prevent misinformation, a policy also applied in Europe, where a new law requires safeguards against misinformation.

Why does it matter?

Tech companies are increasingly tasked with distinguishing fact from fiction as they develop AI-enabled assistants. Recently, Apple announced a partnership with OpenAI to enhance Siri with generative AI capabilities. Concurrently, Amazon is set to launch a new AI version of Alexa as a subscription service in September, although it remains unclear how it will handle election queries. An early prototype struggled with accuracy, and internal doubts about its readiness persist. The new AI assistants from Amazon and Apple aim to merge traditional voice commands with conversational capabilities, but experts warn this integration may pose new challenges.

G7 summit underscores ethical AI, digital inclusion, and global solidarity

The G7 leaders met with counterparts from several countries, including Algeria, Argentina, Brazil, and India, along with heads of major international organisations such as the African Development Bank and the UN, to address global challenges impacting the Global South. They emphasised the need for a unified and equitable international response to these issues, underscoring solidarity and shared responsibility to ensure inclusive solutions.

Pope Francis made an unprecedented appearance at the summit, contributing valuable insights on AI. The leaders discussed AI’s potential to enhance industrial productivity while cautioning against its possible negative impacts on the labour market and society. They stressed the importance of developing AI that is ethical, transparent, and respects human rights, advocating for AI to improve services while protecting workers.

The leaders highlighted the necessity of bridging digital divides and promoting digital inclusion, supporting Italy’s proposal for an AI Hub for Sustainable Development. The hub aims to strengthen local AI ecosystems and advance AI’s role in sustainable development.

They also emphasised the importance of education, lifelong learning, and international mobility to equip workers with the necessary skills to work with AI. Finally, the leaders committed to fostering cooperation with developing and emerging economies to close digital gaps, including the gender digital divide, and achieve broader digital inclusion.

AI in news sparks global concerns

A new report from the Reuters Institute for the Study of Journalism highlights growing global concerns about the use of AI in news production and the spread of misinformation. The Digital News Report, based on surveys of nearly 100,000 people across 47 countries, reveals that consumers are particularly uneasy about AI-generated news, especially on sensitive topics like politics. In the US, 52% of respondents expressed discomfort with AI-produced news; this figure was 63% in the UK.

The report underscores the challenges newsrooms face in maintaining revenue and trust. Concerns about the reliability of AI-generated content are significant, with 59% of global respondents worried about false news, which rises to 81% in South Africa and 72% in the US, both of which are holding elections this year. Additionally, the reluctance of audiences to pay for news subscriptions remains a problem, with only 17% of respondents in 20 countries paying for online news, a figure unchanged for three years.

Why does it matter?

A significant trend noted in the report is the growing influence of news personalities on platforms like TikTok. Among 5,600 TikTok users surveyed, 57% said they primarily follow individual personalities for news, compared to 34% who follow journalists or news brands. The report suggests that newsrooms must establish direct relationships with their audiences and strategically use social media to reach younger, more elusive viewers. The shift is illustrated by figures like Vitus ‘V’ Spehar, a TikTok creator known for delivering the news in a distinctive, engaging style.

Pope Francis advocates for ethical AI and human oversight at G7 summit

Pope Francis became the first pontiff to address a Group of Seven (G7) summit, warning world leaders that artificial intelligence (AI) must never gain the upper hand over humanity. World leaders greeted the 87-year-old pope around a massive oval table.

During his historic address to the G7 leaders, the pope focused on the ethical implications and potential dangers of AI, defining it as an ‘epochal transformation’ for humanity. He stressed the importance of strict control of the fast-evolving technology to safeguard human life and dignity.

Key takeaways from Pope Francis’s address

-Human oversight and ethical concerns
Pope Francis acknowledged the dual nature of AI, describing it as both ‘terrifying and fascinating.’ He stressed the importance of human control over AI, particularly in life-or-death decisions, and called for a ban on autonomous weapons. He warned that allowing machines to make such critical decisions could strip humanity of its autonomy and judgment, leading to a loss of hope and human dignity.

-AI’s potential vs threats
The pontiff acknowledged AI’s potential to democratize access to knowledge, advance scientific research, and reduce difficult human tasks. However, he also cautioned that AI could deepen inequalities between advanced and developing nations and between different social classes, potentially leading to greater injustice and exclusion.

-The ‘techno-human condition’
Pope Francis introduced the concept of the ‘techno-human condition,’ observing that humans have always mediated their relationship with the environment through tools. He reasoned that this is not a weakness but a positive aspect of human nature, demonstrating our openness to others and to God. That same openness is the source of our artistic and intellectual creativity.

-Call for ‘algor-ethics’
The pope emphasized the need for ‘algor-ethics,’ a set of global and pluralistic ethical principles to guide AI development. While AI remains shaped by its creators’ worldview, there is a growing difficulty in reaching consensus on major societal issues. Therefore, developing shared principles to address and resolve ethical dilemmas is paramount.

-Global collaboration and swift political action
Pope Francis called for immediate political action to address AI’s challenges and promises. He exhorted world leaders to establish frameworks guaranteeing human control over AI systems. He also explained that the goal is not to stifle human creativity but to channel it along new, ethical tracks.

Why does it matter?

The G7 summit brings together the leaders of the US, Germany, the UK, France, Italy, Canada, and Japan. On Friday, Italy’s Prime Minister Giorgia Meloni invited ten additional countries to the discussions, including the presidents of Algeria, Argentina, Kenya, and Turkey, as well as Jordan’s king and India’s prime minister.

The event marked the first time a pope addressed a G7 gathering, highlighting the significance of his message. His address was well received by the G7 leaders, who acknowledged AI’s potential for societal progress while stressing the importance of responsible deployment, especially in warfare. The G7 heads of state and government pledged to work towards frameworks that provide human oversight of AI technologies.

The pontiff’s speech highlights the critical need for ethical oversight in AI development, striking a balance between technological progress and moral responsibility. Pope Francis’s call for a ban on autonomous weapons and the development of ‘algor-ethics’ echoes a wider effort to safeguard human dignity, welfare, and global equity.

IOC implements AI for athlete safety at Paris Olympics

The International Olympic Committee (IOC) will deploy AI to combat social media abuse directed at 15,000 athletes and officials during the Paris Olympics next month, IOC President Thomas Bach announced on Friday. With the Games set to begin on 26 July, more than 10,500 athletes will compete across 32 sports, and the event is expected to generate over half a billion social media engagements.

The AI system aims to safeguard athletes by monitoring and automatically erasing abusive posts, providing extensive protection against cyber abuse. The initiative comes amid ongoing global conflicts, including the wars in Ukraine and Gaza, which have already led to cases of social media abuse.

Russian and Belarusian athletes, who will compete as neutral athletes without their national flags, are included in the protective measures. The IOC did not specify the level of access athletes would need to grant for the AI monitoring.

Despite recent political developments in France, including a snap parliamentary election called by President Emmanuel Macron, Bach assured that preparations for the Olympics remain on track. He emphasised that both the government and opposition are determined to ensure that France presents itself well during the Games.

LinkedIn unveils AI-driven features to enhance job hunting and recruitment

LinkedIn is using AI to streamline job hunting, aiming to ease the search process for its users. The professional networking giant announced a suite of AI-driven features designed to match job seekers with opportunities more efficiently, ensuring that both employers and potential employees find the best fit with minimal effort. “We’ve been building with AI since 2007. We use it heavily for connecting people… for defense and how we keep trust in the ecosystem. It’s one of our most powerful tools,” its head of product, Tomer Cohen, said in an interview.

What is new?

Central to LinkedIn’s new offerings is an AI-powered recommendation engine that analyses user profiles, past job searches, and application history to suggest relevant job openings. The tool not only personalizes job recommendations but also learns from user interactions to refine its suggestions over time. LinkedIn’s goal is to significantly reduce the time and effort required for job seekers to find suitable roles, increasing the chances of matching them with positions that align closely with their skills and career aspirations.
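LinkedIn has not published the internals of this engine, but the core idea of content-based matching can be illustrated with a toy sketch: represent a candidate profile and each job posting as term-frequency vectors and rank postings by cosine similarity. All job titles and data below are invented for illustration and have no connection to LinkedIn’s actual system.

```python
import math
from collections import Counter

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recommend(profile: str, jobs: dict, top_n: int = 2) -> list:
    """Rank job postings by term overlap with a candidate profile."""
    profile_vec = Counter(profile.lower().split())
    scored = {
        title: cosine_similarity(profile_vec, Counter(desc.lower().split()))
        for title, desc in jobs.items()
    }
    return sorted(scored, key=scored.get, reverse=True)[:top_n]

# Hypothetical postings, described as bags of skill terms
jobs = {
    "Backend Engineer": "python sql api design distributed systems",
    "Data Analyst": "sql excel dashboards reporting",
    "ML Engineer": "python machine learning model deployment",
}
print(recommend("python api machine learning", jobs, top_n=2))
# → ['ML Engineer', 'Backend Engineer']
```

A production system would add many more signals (application history, interaction feedback, learned embeddings), but the ranking-by-similarity skeleton is the same.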

LinkedIn is also rolling out AI tools designed to assist users in crafting more effective resumes and cover letters. These tools provide real-time feedback, highlighting key areas for improvement and suggesting changes to better align documents with job descriptions. By leveraging natural language processing, LinkedIn aims to help job seekers present their qualifications in the best possible light, ultimately increasing their chances of securing interviews.
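The alignment feedback described above can be approximated, at its simplest, as a term-gap check: flag job-description keywords that never appear in the resume. This is an illustrative sketch under that assumption, not LinkedIn’s actual tooling, which would use far richer language models.

```python
def missing_keywords(resume: str, job_description: str) -> set:
    """Return job-description terms that never appear in the resume."""
    resume_terms = set(resume.lower().split())
    job_terms = set(job_description.lower().split())
    return job_terms - resume_terms

# Hypothetical resume excerpt and keyword list
gaps = missing_keywords(
    "Built REST APIs in Python and maintained SQL databases",
    "python sql docker kubernetes",
)
print(sorted(gaps))
# → ['docker', 'kubernetes']
```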

To further support job seekers, LinkedIn is introducing AI-enhanced skill assessments and training modules. These features allow users to identify gaps in their skill sets and access personalized learning resources to address these deficiencies. The AI system recommends specific courses and certifications that can improve a user’s profile, making them more attractive to potential employers.

In addition to its AI-driven tools, LinkedIn is expanding the availability of Recruiter 2024, a comprehensive recruitment platform that leverages AI to help companies find and engage top talent more effectively. The platform will now include more tools for marketers, enabling them to reach and connect with their target audiences more efficiently. LinkedIn is also introducing enhanced premium company pages for small businesses, providing them with advanced features to showcase their brand and attract potential employees.

Why does it matter?

That move highlights the transformative potential of AI in professional networking. As job markets become more competitive and fast-paced, LinkedIn’s embrace of AI technology represents a significant step towards making the job hunting process more efficient and effective for both job seekers and employers:

  • Efficiency and personalization: AI-driven features can drastically reduce the time and effort required for job seekers to find relevant positions, leading to a more personalized and efficient job search experience.
  • Competitive edge: By assisting users in creating more compelling resumes and cover letters, LinkedIn’s AI tools can give job seekers a competitive edge in the increasingly crowded job market.
  • Skills development: The focus on personalized skill assessments and training can help job seekers stay relevant in their fields, addressing the skills gap that many industries face today.
  • Employer benefits: For employers, these AI-driven tools can lead to better job matches, reducing turnover and ensuring that new hires are well-suited for their roles.

Microsoft delays AI ‘Recall’ feature amid privacy concerns

Microsoft has decided to delay the rollout of its AI-powered ‘Recall’ feature, which tracks and stores computer usage histories, citing privacy concerns. Initially planned for launch with new computers next week, Recall will now undergo a preview phase within its Windows Insider Program (WIP) in the coming weeks rather than being widely available to Copilot+ PC users starting 18 June.

The Recall feature, designed to record everything from web browsing to voice chats for later retrieval, aims to help users remember past activities even months later. Microsoft emphasised that the delay is part of their commitment to ensuring a trusted and secure customer experience, seeking additional feedback before a broader release.

Copilot+ PCs, introduced in May, integrate AI capabilities and were set to include Recall as a key feature. The WIP, which allows enthusiastic users to test upcoming Windows features, will play a crucial role in gathering feedback on Recall before its eventual wider availability.

Privacy concerns surfaced swiftly after Recall’s announcement, with critics suggesting potential misuse for surveillance purposes. Elon Musk likened the feature to a scenario from the dystopian TV series ‘Black Mirror’, reflecting broader anxieties about the implications of pervasive technology on personal privacy and security.

Spotify unveils in-house creative agency, trials AI voiceover ads

Spotify has unveiled Creative Labs, an in-house advertising agency designed to assist brands in creating effective audio and visual ads on its platform, in-app digital experiences, and interactive formats like call-to-action (CTA) cards. The initiative aims to streamline the ad creation process for advertisers, providing them with tools and expertise to craft compelling content tailored for Spotify’s vast user base. Creative Labs will offer a range of services, including concept development, production, and analytics, ensuring that advertisers can effectively reach their target audiences through engaging, high-quality ads.

In addition, Spotify will begin testing generative AI ads and is developing ‘Quick Audio,’ a tool enabling advertisers to create scripts and voiceovers using AI. The tool will soon be available in Spotify Ads Manager. A company spokesperson highlighted that ‘every campaign Creative Lab touches is highly customised to each specific brand and business need.’ Previously, a Spotify executive mentioned the company’s interest in using AI to generate host-read ads for podcasters.

Why does it matter?

This move underscores the growing importance of personalised, engaging content in digital advertising and signals a broader shift as generative AI tools such as Quick Audio enter the ad industry. AI enables more efficient and creative ad production, allowing for greater customisation and engagement. That benefits advertisers by enhancing their reach and impact, and it enriches the overall user experience by delivering more relevant and captivating content. As AI continues to evolve, its role in transforming advertising will likely expand, making platforms like Spotify essential in driving innovation and effectiveness in the industry.