French AI industry fears impact of proposed immigration cuts

Leading figures in France’s tech industry have expressed concern that immigration restrictions proposed by the far-right National Rally (RN) party could hinder the nation’s ambition to become Europe’s top AI hub. Following significant losses for his Renaissance party in the European Parliament election, President Emmanuel Macron has called for snap elections in the lower house, set for 30 June and 7 July.

Macron has prioritised support for domestic tech companies by easing hiring from abroad, lobbying against stringent EU regulations, and attracting investments from giants like Amazon and Microsoft. However, the RN, expected to win the most seats in the upcoming election, aims to reduce migrant worker numbers and increase scrutiny on foreign investments, which tech executives fear will undermine AI advancements.

Julien Launay, CEO of AI startup Adaptive ML, emphasised that skilled immigration is crucial for bringing talent to France, noting that many skilled professionals start as students and interns before entering the workforce. Camille Lemardeley, general director of the education startup Superprof, warned that the RN’s policies could create a less welcoming environment for international professionals, potentially stifling innovation and competitiveness across the tech sector.

Hugo Weber, head of public affairs at e-commerce firm Mirakl, echoed these concerns, stating that the RN’s policies could jeopardise France’s tech ecosystem by limiting access to global talent and venture capital. As France seeks to solidify its position as an AI leader, the proposed immigration restrictions pose a significant threat to the growth and sustainability of its tech industry.

Italian PM and Pope to address AI ethics at G7

Italian Prime Minister Giorgia Meloni and Pope Francis are teaming up to warn global leaders that embracing AI without ethical guardrails could have catastrophic consequences. The collaboration, long in the making, will culminate in Pope Francis attending the G7 summit in southern Italy at Meloni’s invitation, where he aims to brief leaders on the potential dangers posed by AI.

Concerned about AI’s societal and economic impacts, Meloni has been vocal about her fears regarding job losses and widening inequalities. She recently highlighted these concerns at the UN, invoking the term ‘Algorethics’ to emphasise the need for ethical boundaries in technological advancements. Paolo Benanti, a Franciscan friar and advisor to both Meloni and the Pope, stressed the growing power of multinational corporations in AI development, raising alarms about the concentration of wealth and power.

Pope Francis, known for advocating social justice issues, has previously called for an AI ethics conference at the Vatican, drawing global tech giants and international organisations into the discussion. His upcoming address at the G7 summit is expected to focus on AI’s impact on vulnerable populations and could touch on concerns about autonomous weaponry. Meloni, in turn, is poised to advocate for stronger regulations to ensure AI technologies adhere to ethical standards and serve societal interests.

Despite AI hype, recent studies suggest the promised financial benefits for businesses implementing AI projects have been underwhelming. That challenges the optimistic narratives often associated with AI, indicating a need for more cautious and balanced approaches to its development and deployment.

Oracle shares soar on AI cloud demand

Oracle’s stock soared nearly 9% on Wednesday, propelled by surging demand for its cost-effective cloud infrastructure services, particularly from AI applications. The jump stands to add more than $28 billion to the company’s market valuation, currently around $340 billion. With shares up 18% since the beginning of the year, Oracle is capitalising on the momentum of its cloud infrastructure unit, which offers computing and storage services to businesses at competitive prices, positioning itself against major rivals like Google, Microsoft, and Amazon.

Oracle’s cloud infrastructure has garnered attention from AI startups, including Elon Musk’s xAI, thanks to its affordability compared to competitors. In a strategic move, Oracle recently announced partnerships with ChatGPT-maker OpenAI and Google Cloud to expand its cloud infrastructure offerings. That collaboration strengthens Oracle’s position as an AI platform and extends its database services distribution, as Evercore analyst Kirk Materne highlighted.

Oracle trades at 19.59 times forward earnings, lower than its major competitors, yet its fourth-quarter results missed expectations. The company also faces challenges in its legacy database and enterprise resource planning (ERP) software business due to increasing competition from more cost-effective alternatives. Morningstar analyst Julie Sharma suggests that Oracle may experience customer churn as businesses undergo significant digital transformations, opting for cheaper database and ERP solutions over Oracle’s offerings.

Particle teams up with Reuters to reinvent news delivery

Particle, a news-reader startup developed by former Twitter engineers, is partnering with publishers to navigate the evolving landscape of news consumption in the AI era. By leveraging AI technology, Particle aims to provide news summaries from various publishers through its app, offering readers a comprehensive understanding of current events from multiple perspectives. That approach seeks to address concerns within the publishing industry about potential revenue loss due to AI-driven news summaries.

In a significant move, Particle has now teamed up with Reuters to explore new business models, subscribing to the Reuters newswire to enhance its news delivery capabilities. Additionally, Particle secured $10.9 million in Series A funding led by Lightspeed Venture Partners, with investments from media giant Axel Springer. These partnerships and investments underscore Particle’s commitment to collaborating with publishers to address their needs and goals in the rapidly evolving media landscape.

Particle’s co-founder, Sara Beykpour, emphasises the startup’s focus on delivering value to news consumers beyond AI summaries. With a mission to help readers cut through the noise and understand the news faster, Particle offers a personalised news experience while ensuring exposure to diverse viewpoints. By presenting news stories holistically and integrating perspectives from multiple outlets, Particle aims to combat information overload and mitigate media bias.

Why does it matter?

Despite its innovative approach, Particle has yet to finalise its business model. The startup actively engages with publishers to develop a sustainable model that benefits readers and publishers. Possibilities include revenue sharing, advertising, and more, with input from industry stakeholders shaping the future direction of Particle’s business strategy.

Pope Francis to address AI ethics at G7 summit

Pope Francis is set to make history at the upcoming G7 summit in Italy’s Puglia region by becoming the first pope to address the gathering’s discussions on AI. His participation underscores his commitment to ensuring that AI development aligns with human values and serves the common good. The 87-year-old pontiff recognises the potential of AI for positive change but also emphasises the need for careful regulation to prevent its misuse and safeguard against potential risks.

At the heart of the pope’s message is the call for an ethical framework to guide AI development and usage. Through initiatives like the ‘Rome Call for AI Ethics’, the Vatican seeks to promote transparency, inclusion, responsibility, and impartiality in AI endeavours. Notably, major tech companies like Microsoft, IBM, Cisco Systems, and international organisations have endorsed these principles.

During the G7 summit, Pope Francis is expected to advocate for international cooperation in AI regulation. He emphasises the importance of addressing global inequalities in access to technology and mitigating threats like AI-controlled weapons and the spread of misinformation. His presence at the summit signifies a proactive engagement with contemporary issues, reflecting his vision of a Church actively involved in shaping the world’s future.

The pope’s decision to address AI at the G7 summit follows concerns about the rise of ‘deepfake’ technology, exemplified by manipulated images of himself circulating online. He recognises the transformative potential of AI in the 21st century and seeks to ensure its development aligns with human dignity and social justice. Through his participation, Pope Francis aims to contribute to the creation of an ethical and regulatory framework that promotes the responsible use of AI for the benefit of all humanity.

Google tests AI anti-theft feature for phones in Brazil

Alphabet’s Google announced that Brazil will be the first country to test a new anti-theft feature for Android phones, utilising AI to detect and lock stolen devices. The initial test phase will offer three locking mechanisms. One uses AI to identify movement patterns typical of theft and lock the screen. Another allows users to remotely lock their screens by entering their phone number and completing a security challenge from another device. The third feature locks the screen automatically if the device remains offline for an extended period.
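The three locking triggers described above can be illustrated with a toy decision sketch. This is not Google’s implementation: the class name, thresholds, and event hooks below are all invented for illustration, and real theft detection would rely on an on-device machine-learning model rather than a single acceleration cutoff.

```python
import time
from dataclasses import dataclass


@dataclass
class MotionSample:
    """One accelerometer reading (magnitude in m/s^2)."""
    acceleration: float
    timestamp: float


class TheftGuard:
    """Toy sketch of the three locking triggers; thresholds are hypothetical."""

    SNATCH_ACCEL = 25.0        # jerk suggestive of a grab-and-run (invented value)
    OFFLINE_LIMIT = 12 * 3600  # lock after 12 hours offline (invented value)

    def __init__(self) -> None:
        self.locked = False
        self.last_online = time.time()

    def on_motion(self, sample: MotionSample) -> None:
        # Trigger 1: a movement pattern typical of theft locks the screen.
        if sample.acceleration > self.SNATCH_ACCEL:
            self.locked = True

    def on_remote_lock(self, challenge_passed: bool) -> None:
        # Trigger 2: the owner locks the screen from another device after
        # entering their phone number and passing a security challenge.
        if challenge_passed:
            self.locked = True

    def on_heartbeat(self, online: bool, now: float) -> None:
        # Trigger 3: the screen locks automatically after an extended
        # period without connectivity.
        if online:
            self.last_online = now
        elif now - self.last_online > self.OFFLINE_LIMIT:
            self.locked = True
```

The key design point the feature shares with this sketch is that each trigger fails safe: any one of the three conditions is sufficient to lock the device, and unlocking requires the owner’s credentials.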

These features will be available to Brazilian users with phones running Android 10 or higher starting in July, with a gradual rollout to other countries planned for later this year. Phone theft is a significant issue in Brazil: nearly 1 million cell phones were reported stolen in 2022, a 16.6% increase from the previous year.

In response to the rising theft rates, the Brazilian government launched an app called Celular Seguro in December, allowing users to report stolen phones and block access via a trusted person’s device. As of last month, approximately 2 million people had registered with the app, leading to the blocking of 50,000 phones, according to the Justice Ministry.

Turkish student jailed for using AI to cheat on exam

Turkish authorities have arrested a student for using a makeshift device linked to AI software to cheat during a university entrance exam. The student, who was acting suspiciously, was detained by police during the exam and later formally arrested and sent to jail pending trial. Another individual involved in helping the student was also detained.

A police video from Isparta province showed the student’s setup: a camera disguised as a shirt button connected to AI software through a router hidden in the sole of their shoe. The system allowed the AI to generate correct answers, relayed to the student through an earpiece.

This incident highlights the increasing use of advanced technology in cheating, prompting concerns about exam security and integrity. The authorities are now investigating the extent of this cheating method and considering measures to prevent similar occurrences in the future.

Meta develops AI technology tailored specifically for Europe

Meta Platforms, the owner of Facebook, announced it is developing AI technology tailored specifically for Europe, taking into account the region’s linguistic, geographic, and cultural nuances. The company will train its large language models using publicly shared content from its platforms, including Instagram and Facebook, ensuring that private posts are excluded to maintain user privacy.

Last month, Meta revealed plans to inform Facebook and Instagram users in Europe and the UK about how their public information is utilised to enhance and develop AI technologies. The move aims to increase transparency and reassure users about data privacy.

By focusing on localised AI development, Meta hopes to serve the European market better, reflecting the region’s diverse characteristics in its technology offerings. That effort underscores Meta’s commitment to respecting user privacy while advancing its AI capabilities.

Apple to showcase AI innovations at developer conference

At Apple’s annual developer conference on Monday, the tech giant is anticipated to unveil how it’s integrating AI across its software suite. The integration includes updates to its Siri voice assistant and a potential collaboration with OpenAI, the owner of ChatGPT. With its reputation on the line, Apple aims to reassure investors that it remains competitive in the AI landscape, especially against rivals like Microsoft.

Apple faces the challenge of demonstrating the value of AI to its vast user base, many of whom are not tech enthusiasts. Analysts suggest that Apple needs to showcase how AI can enhance user experiences, a shift from its previous emphasis on enterprise applications. Despite using AI behind the scenes for years, Apple has been reserved in highlighting its role in device functionality, unlike Microsoft’s more vocal approach with OpenAI.

The spotlight is on Siri’s makeover, which is expected to enable more seamless control over various apps. Apple aims to make Siri smarter by integrating generative AI, potentially through a partnership with OpenAI. The move is anticipated to improve user interactions with Siri across different apps, enhancing its usability and effectiveness.

Apple also recently introduced an AI-focused chip in its latest iPad Pro models, signalling its commitment to AI development. Analysts predict that Apple will provide developers with insights into leveraging these capabilities to support AI computing. Additionally, reports suggest Apple may discuss plans for using its chips in data centres, which could enhance cloud computing capabilities while maintaining privacy and security features.

The Apple Worldwide Developers Conference (WWDC 2024) will run until Friday, offering developers insights into app updates and new tools. Investors are hopeful that Apple’s AI advancements will drive sales of new iPhones and boost the company’s competitive edge amid fierce global competition.

Google Play cracks down on AI apps amid deepfake concerns

Google has issued new guidance for developers building AI apps distributed through Google Play in response to growing concerns over the proliferation of AI-powered apps designed to create deepfake nude images. The platform recently announced a crackdown on such applications, signalling a firm stance against the misuse of AI for generating non-consensual and potentially harmful content.

The move comes in the wake of alarming reports highlighting the ease with which these apps can manipulate photos to create realistic yet fabricated nude images of individuals. Reports have surfaced about apps like ‘DeepNude’ and its clones, which can strip clothes from images of women to produce highly realistic nude photos. Another report detailed the widespread availability of apps that could generate deepfake videos, leading to significant privacy invasions and the potential for harassment and blackmail.

Apps offering AI features must be ‘rigorously tested’ to safeguard against prompts that generate restricted content, and must give users a way to report offending output. Google strongly suggests that developers document the recommended tests before launch, as it may ask to review them in the future. Additionally, developers can’t advertise that their app breaks any of Google Play’s rules, at the risk of being banned from the app store. The company is also publishing other resources and best practices, such as its People + AI Guidebook, which aims to support developers building AI apps.

Why does it matter?

The proliferation of AI-driven deepfake apps on platforms like Google Play undermines personal privacy and consent, allowing anyone to generate highly realistic and often explicit content depicting individuals without their knowledge or approval. Such misuse can lead to severe reputational damage, harassment, and even extortion, affecting private individuals and public figures alike.