Google tests AI anti-theft feature for phones in Brazil

Alphabet’s Google announced that Brazil will be the first country to test a new anti-theft feature for Android phones that uses AI to detect theft and lock stolen devices. The initial test phase will offer three locking mechanisms. The first uses AI to identify movement patterns typical of theft and lock the screen. The second allows users to remotely lock their screens by entering their phone number and completing a security challenge from another device. The third locks the screen automatically if the device remains offline for an extended period.
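Google has not published details of the on-device model behind the AI-based lock, but the general idea can be illustrated with a toy heuristic: watch a short window of accelerometer readings and lock the screen when the motion looks like a snatch-and-run. The sketch below is purely illustrative; AccelSample, looks_like_theft, and lock_screen are hypothetical names, and a production detector would be far more sophisticated than a simple threshold.

```python
# Illustrative sketch only: a toy "snatch-and-run" detector over a window of
# accelerometer samples. None of this reflects Google's actual implementation.
from dataclasses import dataclass
from typing import Sequence


@dataclass
class AccelSample:
    x: float
    y: float
    z: float  # acceleration components in m/s^2


def magnitude(s: AccelSample) -> float:
    return (s.x ** 2 + s.y ** 2 + s.z ** 2) ** 0.5


def looks_like_theft(window: Sequence[AccelSample],
                     jerk_threshold: float = 25.0,
                     sustained_samples: int = 5) -> bool:
    """Flag a sudden, sustained spike in acceleration, e.g. a phone being
    snatched from a hand while the thief runs or rides away."""
    spikes = sum(1 for sample in window if magnitude(sample) > jerk_threshold)
    return spikes >= sustained_samples


def lock_screen() -> None:
    # Hypothetical stand-in for the platform's device-lock call.
    print("Screen locked: suspicious motion detected")


def on_sensor_window(window: Sequence[AccelSample]) -> None:
    if looks_like_theft(window):
        lock_screen()


if __name__ == "__main__":
    calm = [AccelSample(0.1, 0.2, 9.8) for _ in range(20)]
    snatch = [AccelSample(30.0, 5.0, 12.0) for _ in range(20)]
    on_sensor_window(calm)    # no lock
    on_sensor_window(snatch)  # triggers the lock message
```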

These features will be available to Brazilian users with phones running Android 10 or higher starting in July, with a gradual rollout to other countries planned for later this year. Phone theft is a significant issue in Brazil, with nearly 1 million cell phones reported stolen in 2022, a 16.6% increase from the previous year.

In response to the rising theft rates, the Brazilian government launched an app called Celular Seguro in December, allowing users to report stolen phones and block access via a trusted person’s device. As of last month, approximately 2 million people had registered with the app, leading to the blocking of 50,000 phones, according to the Justice Ministry.

US set to expand sanctions on semiconductor sales to Russia

The US government is set to announce expanded sanctions on semiconductor chips and other goods sold to Russia, targeting third-party sellers in China. The move is part of a broader effort by the Biden administration to thwart Russia’s attempts to bypass Western sanctions and sustain its war efforts against Ukraine. The new measures will extend existing export controls to cover US-branded goods, even those not made in the United States, and will identify specific Hong Kong entities involved in shipping goods to Moscow.

These upcoming sanctions come as President Joe Biden prepares to attend a summit with other Group of Seven (G7) leaders in southern Italy, where supporting Ukraine and weakening Russia’s military capabilities are top priorities. US officials have expressed increasing concern over China’s growing trade with Russia, which they believe is enabling Moscow to maintain its military supplies by providing essential manufacturing equipment. The broadened export controls aim to address this issue by encompassing a wider range of US goods.

Additionally, the US plans to impose significant new sanctions on financial institutions and non-banking entities involved in the ‘technology and goods channels’ that supply the Russian military. That decision comes amid efforts to ensure that Ukrainian President Volodymyr Zelenskiy can emphasise the critical situation facing Ukrainian forces in their ongoing struggle against Russia during his meetings with G7 leaders.

LinkedIn disables targeted ads tool to comply with EU regulations

In a move to align with the EU’s technology regulations, LinkedIn, the professional networking platform owned by Microsoft, has disabled a tool that facilitated targeted advertising. The decision was taken in adherence to the Digital Services Act (DSA), which imposes strict rules on tech companies operating within the EU.

LinkedIn’s move followed a complaint to the European Commission by several civil society organisations, including European Digital Rights (EDRi), Gesellschaft für Freiheitsrechte (GFF), Global Witness, and Bits of Freedom. These groups raised concerns that LinkedIn’s tool might allow advertisers to target users based on sensitive personal data, such as racial or ethnic origin and political opinions, inferred from their membership in LinkedIn groups.

In March, the European Commission sent a request for information to LinkedIn after these groups highlighted potential violations of the DSA. The DSA requires online intermediaries to give users more control over their data, including an option to turn off personalised content, and to disclose how algorithms affect their online experience. It also prohibits the use of sensitive personal data, such as race, sexual orientation, or political opinions, for targeted advertising. In recent years, the EU has been at the forefront of enforcing data privacy and protection laws, notably with the GDPR. The DSA builds on these principles, focusing more explicitly on the accountability of online platforms and their role in shaping public discourse.

A LinkedIn spokesperson emphasised that the platform remains committed to supporting its users and advertisers, even as it navigates these regulatory changes. “We are continually reviewing and updating our processes to ensure compliance with applicable laws and regulations,” the spokesperson said. “Disabling this tool is a proactive step to align with the DSA’s requirements and to maintain the trust of our community.” EU industry chief Thierry Breton commented on LinkedIn’s move, stating, “The Commission will monitor the effective implementation of LinkedIn’s public pledge to ensure full compliance with the DSA.” 

Why does it matter?

The impact of LinkedIn’s decision extends beyond its immediate user base and advertisers. Targeted ads have been a lucrative source of income for social media platforms, allowing advertisers to reach niche markets with high precision. By disabling this tool, LinkedIn is setting a precedent for other tech companies to follow, highlighting the importance of regulatory compliance and user trust.

Google Play cracks down on AI apps amid deepfake concerns

Google has issued new guidance for developers building AI apps distributed through Google Play in response to growing concerns over the proliferation of AI-powered apps designed to create deepfake nude images. The platform recently announced a crackdown on such applications, signalling a firm stance against the misuse of AI for generating non-consensual and potentially harmful content.

The move comes in the wake of alarming reports highlighting the ease with which these apps can manipulate photos to create realistic yet fabricated nude images of individuals. Reports have surfaced about apps like ‘DeepNude’ and its clones, which can strip clothes from images of women to produce highly realistic nude photos. Another report detailed the widespread availability of apps that could generate deepfake videos, leading to significant privacy invasions and the potential for harassment and blackmail.

Apps offering AI features have to be ‘rigorously tested’ to safeguard against prompts that generate restricted content, and they must give users a way to flag offensive content. Google strongly suggests that developers document these tests before launch, as Google may ask to review them in the future. Additionally, developers cannot advertise that their app breaks any of Google Play’s rules, at the risk of being banned from the app store. The company is also publishing other resources and best practices, such as its People + AI Guidebook, which aims to support developers building AI apps.
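The guidance does not prescribe how testing or user reporting must be implemented. As a rough illustration only, the sketch below shows one way a developer might wire a pre-launch adversarial-prompt test and a simple in-app report queue around a generation call; is_restricted, generate, and report_content are hypothetical stand-ins, not anything Google specifies.

```python
# Illustrative sketch only: pre-launch safety tests plus a user report path
# for a generative AI feature. The keyword filter is a crude stand-in for a
# real safety classifier; all names here are hypothetical.
RESTRICTED_KEYWORDS = {"nude", "undress", "deepfake"}


def is_restricted(prompt: str) -> bool:
    """Very crude stand-in for a real safety classifier."""
    return any(word in prompt.lower() for word in RESTRICTED_KEYWORDS)


def generate(prompt: str) -> str:
    # Refuse restricted prompts before calling any image or text model.
    if is_restricted(prompt):
        return "[blocked: request violates content policy]"
    return f"<generated content for: {prompt}>"


def run_safety_tests(adversarial_prompts: list[str]) -> list[str]:
    """Return the prompts that slip past the filter, so the results can be
    documented and shared with a reviewer if requested."""
    return [p for p in adversarial_prompts if not generate(p).startswith("[blocked")]


# Minimal in-app reporting path: store flagged outputs for human review.
REPORT_QUEUE: list[dict] = []


def report_content(user_id: str, output: str, reason: str) -> None:
    REPORT_QUEUE.append({"user": user_id, "output": output, "reason": reason})


if __name__ == "__main__":
    failures = run_safety_tests(["undress this photo", "draw a cat on a bike"])
    print("prompts that bypassed the filter:", failures)
    report_content("user-42", "<generated content for: draw a cat on a bike>",
                   "looks like someone I know")
    print("pending reports:", len(REPORT_QUEUE))
```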

Why does it matter?

The proliferation of AI-driven deepfake apps on platforms like Google Play undermines personal privacy and consent by allowing anyone to generate highly realistic and often explicit content of individuals without their knowledge or approval. Such misuse can lead to severe reputational damage, harassment, and even extortion, affecting private individuals and public figures alike.

Qilin group claims responsibility for the cyberattack on London hospitals

The Qilin ransomware group has claimed responsibility for a cyberattack on Synnovis, a key pathology partner of the National Health Service (NHS) in England. The attack, which began on Monday, has severely disrupted services at five major hospitals in London, including King’s College Hospital and Guy’s and St Thomas’ NHS Foundation Trust. The NHS declared the situation a ‘critical incident,’ noting that the full extent of the attack’s impact on patient data remains unclear.

Synnovis, a prominent pathology service provider, runs over 100 specialised labs offering diagnostics for various conditions. Due to the ransomware attack, several critical services, such as blood testing and certain operations, have been postponed, prioritising only the most urgent cases. NHS England has deployed a cyber incident response team to assist Synnovis and minimise patient care disruption, though longer wait times for emergency services are expected.

The Qilin group, which operates a ransomware-as-a-service model, typically targets victims via phishing emails. The attack on Synnovis has raised significant concerns about the security of healthcare systems and their reliance on third-party providers. Kevin Kirkwood from LogRhythm emphasised that such an attack not only causes operational disruptions but also undermines public trust in healthcare institutions. He called for robust security measures, including continuous monitoring and comprehensive incident response plans, to better protect healthcare infrastructure and ensure patient safety.

Russian propagandists launch disinformation campaign against Paris Olympics

Russian operatives are intensifying efforts to discredit the upcoming Paris Summer Olympics and undermine support for Ukraine, utilising both online and offline tactics, according to experts and officials.

Efforts include using AI to create fake videos featuring actor Tom Cruise criticising the International Olympic Committee, as well as placing symbolic coffins near the Eiffel Tower in an apparent reference to French soldiers and the war in Ukraine.

Analysts note a sense of desperation among Russian propagandists, who aim to tarnish the Olympics and thwart Ukraine’s momentum in procuring Western weapons against Russia.

Beyond online disinformation, the coffin stunt near the Eiffel Tower has fuelled suspicions of Russian involvement and comes amid French President Macron’s consideration of deploying troops to Ukraine, which has further angered Russia.

With the Paris Olympics approaching, concerns are mounting over potential cyber threats, given Russia’s history of disruptive actions during major events. That history underscores the need for heightened vigilance and stronger cybersecurity measures.

Ransomware attack disrupts major London hospitals

A ransomware attack on Synnovis, a pathology services provider, has severely disrupted major hospitals in London, including King’s College Hospital, Guy’s and St Thomas’, and the Royal Brompton. This incident has led to the cancellation and redirection of numerous medical procedures. The hospitals have declared a ‘critical incident’ due to the significant impact on services, notably affecting blood transfusions. Synnovis’ CEO, Mark Dollar, expressed deep regret for the inconvenience caused and assured efforts to minimise the disruption while maintaining communication with local NHS services.

Patients in various London boroughs, including Bexley, Greenwich, and Southwark, have been affected. Oliver Dowson, a 70-year-old patient at Royal Brompton, experienced a cancelled surgery and expressed frustration over repeated delays. NHS England’s London region acknowledged the significant impact on services and emphasised the importance of attending emergency care and appointments unless instructed otherwise. They are working with the National Cyber Security Centre to investigate the attack and keep the public informed.

Synnovis, a collaboration between SYNLAB UK & Ireland and several NHS trusts, prides itself on advanced pathology services but has fallen victim to this attack despite stringent cybersecurity measures. Deryck Mitchelson from Check Point highlighted the healthcare sector’s vulnerability to such attacks, given its vast repository of sensitive data. Recent cyber incidents in the UK, including a similar attack on NHS Dumfries and Galloway, underscore the persistent threat to healthcare services. Government agencies are actively working to mitigate the situation and support affected NHS organisations.

TikTok battles cyberattacks amid national security concerns

TikTok has recently thwarted a cyberattack targeting several high-profile accounts, including CNN and Paris Hilton, though Hilton’s account remained uncompromised. The company is working closely with affected users to restore access and enhance security measures to prevent future breaches.

The number of compromised accounts is minimal, according to TikTok, which is actively assisting those affected. The incident occurred as TikTok’s parent company, ByteDance, faced a legal battle against a US law that demands the app be sold or face a national ban by January.

The US government has raised national security concerns over Chinese ownership of TikTok. Still, the company maintains that it has taken significant steps to safeguard user data and privacy, asserting that it will not share American user information with the Chinese government.

Microsoft faces GDPR investigation over data protection concerns

The advocacy group NOYB has filed two complaints against Microsoft’s 365 Education software suite, alleging that the company is shifting its responsibilities for children’s personal data onto schools that are not equipped to handle them. The complaints centre on transparency and the processing of children’s data on the Microsoft platform, which potentially violates the European Union’s General Data Protection Regulation (GDPR).

The first complaint alleges that Microsoft’s contracts with schools attempt to shift responsibility for GDPR compliance onto them despite schools lacking the capacity to monitor or enforce Microsoft’s data practices. That could result in children’s data being processed in ways that do not comply with GDPR. The second complaint highlights the use of tracking cookies within Microsoft 365 Education software, which reportedly collects user browsing data and analyses user behaviour, potentially for advertising purposes.

NOYB claims that such tracking occurs without users’ consent or the schools’ knowledge, and that there appears to be no legal justification for it under the GDPR. The group has asked the Austrian Data Protection Authority to investigate the complaints and determine the extent of data processing by Microsoft 365 Education, and has urged the authority to impose fines if GDPR violations are confirmed.

Microsoft has not yet responded to the complaints. Still, the company has stated that Microsoft 365 Education complies with the GDPR and other applicable privacy laws and that it thoroughly protects the privacy of its young users.

Sweden plans to integrate facial recognition technology in police

Sweden is set to allow its police force to use facial recognition technology (FRT). This decision, recently approved by the country’s data protection authority, aims to aid in the identification of criminal suspects through advanced biometric screening.

After the adoption of EU rules that ban real-time facial recognition in public spaces but allow some exceptions for law enforcement, the Swedish government ordered an inquiry into expanded powers for law enforcement to use camera surveillance, including facial recognition technology. The EU exceptions include searching for missing people or specific suspected victims of human trafficking, preventing imminent threats such as a terrorist attack, and locating individuals suspected of committing certain criminal offences.

The Swedish police plan to integrate facial recognition into their daily operations by leveraging a database containing over 40,000 facial images of individuals who have been detained or arrested. This technology enables law enforcement to quickly compare these images with footage from closed-circuit television (CCTV), streamlining the process of identifying suspects and potentially speeding up investigations.
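The police have not described their matching pipeline, but the basic mechanics of comparing a CCTV still against a gallery of booking photographs can be sketched with face embeddings and cosine similarity. Everything below is hypothetical: the 128-dimensional vectors stand in for the output of some face-recognition model, and the threshold, record IDs, and match_face function are invented for illustration rather than taken from any real system.

```python
# Illustrative sketch only: matching a face embedding from CCTV footage
# against a gallery of stored embeddings using cosine similarity. The model,
# threshold, and database layout are hypothetical.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def match_face(probe: np.ndarray,
               database: dict[str, np.ndarray],
               threshold: float = 0.6) -> list[tuple[str, float]]:
    """Return candidate identities whose stored embedding is similar enough
    to the probe embedding, best match first."""
    scores = [(person_id, cosine_similarity(probe, emb))
              for person_id, emb in database.items()]
    return sorted([s for s in scores if s[1] >= threshold],
                  key=lambda s: s[1], reverse=True)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic gallery roughly the size mentioned in the article.
    gallery = {f"record-{i}": rng.normal(size=128) for i in range(40_000)}
    # A "CCTV crop" simulated as a noisy copy of one gallery entry.
    probe = gallery["record-123"] + rng.normal(scale=0.1, size=128)
    print(match_face(probe, gallery)[:3])
```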

Why does it matter?

The deployment of FRT by Swedish police is governed by stringent regulations to ensure compliance with both national and EU data protection laws, aligning with Sweden’s Crime Data Act and the EU’s Law Enforcement Directive. This compliance is crucial to addressing concerns about privacy and civil liberties, which are often raised in discussions about surveillance technologies. The adoption of FRT in Sweden comes as part of a broader trend within Europe, where several countries are exploring or have already implemented similar technologies. For example, Dutch police utilise a substantial biometric database to aid in their law enforcement efforts.