
DW Weekly #124 – 21 August 2023


Dear readers,

The already fragile relationship between the USA and China is being further complicated by new restrictions and measures affecting the semiconductor industry. On the AI regulation front, nothing much has happened, but we can’t say the same for data protection and privacy issues.

Let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

USA to restrict investment in China in key tech sectors

The US government announced plans to prohibit or restrict US investments in China in areas deemed critical for a country’s military, intelligence, and surveillance capabilities across 3 industry sectors – semiconductors, quantum technologies, and (certain) AI systems. The decision stems from an executive order signed by US President Joe Biden on 9 August 2023, which grants authorisation to the US Treasury Secretary to impose restrictions on US investments in designated ‘countries of concern’ – with an initial list that includes China, Hong Kong, and Macau.

The executive order serves a dual purpose: to preempt potential national security risks and to regulate investments in sectors that could empower China with military and intelligence advantages. While the US already enforces export restrictions on various technologies bound for China, the new executive order extends its scope to restrict investment flows that could support China’s domestic capabilities.

Semiconductors. The intent to impose restrictions on semiconductors – now a critical strategic asset due to their integration into so many industries – is particularly significant. It comes at a time when the semiconductor landscape is increasingly intertwined with geopolitical considerations of market dominance, self-sufficiency, and national security. A move on one geopolitical side usually triggers repercussions on the other, as history has confirmed time and again.

Delayed countermeasures? So far, this hasn’t been the case. China’s reaction has been a mix of caution and concern, with no actual countermeasures announced yet, which raises the question of whether Beijing will react more cautiously than usual. Although Chinese authorities have expressed disappointment, Beijing has so far said only that China is carrying out a comprehensive assessment of the US executive order’s impact and will respond accordingly.

Too early. There are several reasons that could explain this reaction. The restrictions won’t come into effect before next year (and even then, they won’t apply retroactively). It might therefore be too early to gauge what the order and the US Treasury’s implementing regulations will mean for China.

Antitrust arsenal. China may also opt to hit back through other means, as it has been doing with merger approvals involving US companies. Its withholding of approval for Intel’s acquisition of Israel’s Tower Semiconductor is a tough blow. (More coverage below.)

Reactions from US allies. Beijing may also be waiting for more concrete reactions from other countries. Both the EU and the UK have signalled their intent to adopt similar strategies: the European Commission said it was analysing the executive order closely and will continue its cooperation with the USA on this issue, while UK Prime Minister Rishi Sunak is consulting UK businesses on the matter.

Neither the EU nor the UK is expected to follow the USA immediately, however. For China, the USA is a confrontational open book; the EU is diplomatically less so.


Digital policy roundup (7–21 August)
// SEMICONDUCTORS //

China blocks Intel’s acquisition of Tower Semiconductor

Intel has abandoned its plans to acquire Israeli chipmaker Tower Semiconductor after Chinese regulators failed to approve the deal. The acquisition was central to Intel’s efforts to build up its semiconductor business and better compete with industry giant Taiwan Semiconductor Manufacturing Company (TSMC).

Acquisitions involving multinational companies typically require regulatory approval in several jurisdictions, due to the companies’ operations and market impact in those countries. China’s antitrust regulations require that a deal be reviewed if the two merging companies have a combined annual revenue from China of more than USD117 million.

Why is it relevant? The failure of the deal shows how China can disrupt the strategic plans of US companies in the semiconductor industry. In Intel’s case, it complicates plans to expand the production of chips for other companies alongside its own products.


// AI GOVERNANCE //

Canada opens consultation on guardrails for generative AI 

The Canadian government has just launched a draft code of practice for regulating generative AI, and is seeking public input. The code consists of six elements:

1. Safety: Generative AI systems must be safe, and ways to identify potential malicious or harmful use must be established.

2. Fairness: The system’s output must be fair and equitable. Datasets are to be assessed and curated, and measures to assess and mitigate biased output are to be in place.

3. Transparency: The system must be transparent.

4. Human supervision: Deployment and operations of the system must be supervised by humans, and a mechanism to identify and report adverse impacts must be established.

5. Validity and robustness: The system’s validity and robustness must be ensured by employing testing methods and appropriate cybersecurity measures.

6. Accountability: Multiple lines of defence must be in place, and roles and responsibilities have to be clearly defined to ensure the accountability of the system.

Why is it relevant? First, it’s a voluntary code that aims to provide legal clarity ahead of the implementation of Canada’s Artificial Intelligence and Data Act (AIDA), part of Bill C-27, which is still undergoing parliamentary review. Second, it is reminiscent of the European Commission’s approach of developing voluntary AI guardrails ahead of binding AI legislation.


// DATA PROTECTION //

Meta seeks to block Norwegian authority’s daily fine for privacy breaches

The Norwegian Data Protection Authority has imposed daily fines of one million kroner (USD98,500) on Meta, starting from 14 August 2023. These penalties are a consequence of Meta’s non-compliance with a ban on behaviour-based marketing carried out by Facebook and Instagram. In response, Meta has sought a temporary injunction from the Oslo District Court to halt the ban. The court will review the case this week (22–23 August).

The Norwegian watchdog believes Meta’s behaviour-based marketing – which involves the excessive monitoring of users for targeted ads – is illegal. The watchdog’s ban does not prohibit the use of Facebook or Instagram in Norway.

Why is it relevant? The GDPR, the EU’s data protection regulation, offers companies six legal bases for gathering and processing people’s data, depending on the context. Meta attempted to rely on two of them (the ones where users do not need to consent specifically). But European data protection authorities deemed Meta’s use of these bases for its behaviour-based marketing practices illegal. On 1 August, Meta announced that it would finally switch to asking users for specific consent, but it has not yet done so.


Google fails to block USD5 billion consumer privacy lawsuit

A US district judge has rejected Google’s bid to dismiss a lawsuit claiming it invaded the privacy of millions of people by secretly tracking their internet use. The reason? Users did not consent to letting Google collect information about what they viewed online, because the company never explicitly told them it would. The case will therefore continue.

Why is it relevant? Many people believe that using a browser’s ‘private’ or ‘incognito’ mode ensures their online activities remain untracked. However, according to the plaintiffs, Google continues to track and gather browsing data in real time. 

Probable outcomes: Google’s explanation of how private browsing functions states that data won’t be stored on devices, yet websites might still collect user data. This suggests that the problem might boil down to two aspects: Google’s representation of its privacy settings (the fact that user data is still collected renders the setting neither private nor incognito), and the necessity of seeking user consent regardless.

Case details: Brown et al v Google LLC et al, US District Court, Northern District of California, No. 20-03664




// NEWS MEDIA //

Canadian PM criticises Meta for putting profits before safety

Canadian Prime Minister Justin Trudeau has criticised Meta for banning domestic news from its platforms as wildfires ravage parts of Canada. Up-to-date information during a crisis is crucial, he told a news conference. ‘Facebook is putting corporate profits ahead of people’s safety.’

Meanwhile, Canadian news industry groups have asked the country’s antitrust regulator to investigate Meta’s decision to block news on its platforms in the country, accusing the Facebook parent of abusing its dominant position.

Why is it relevant? The fight is turning into both a safety and an antitrust issue. Plus, Meta may not be doing itself any favours by telling Canadian users that they can still access timely information from other reputable sources, and by directing them to its Safety Check feature, which allows users to let their Facebook friends know they are safe.


// TIKTOK //

TikTok adapts practices to EU rules, allowing users to opt out of personalised feeds…

TikTok will allow European users to opt out from receiving a personalised algorithm-based feed. This change is in response to the EU’s Digital Services Act (DSA), which imposes more onerous obligations on very large platforms such as TikTok. 

The new law also prohibits companies from targeting children with advertising. The DSA’s deadline for very large platforms to implement these changes is 25 August.

Why is it relevant? With TikTok’s connections to China and the ensuing security concerns, the company has been trying very hard to convince European policymakers of its commitment to data protection and the implementation of robust safety measures. A few weeks ago, for instance, it willingly subjected itself to a stress test (which pleased European Commissioner for the Internal Market Thierry Breton very much). Compliance with the DSA could also help improve the company’s standing in Europe.

…but is banned in New York City

New York City has implemented a TikTok ban on government-owned devices due to security and privacy concerns. The ban requires NYC agencies to remove TikTok within 30 days, and employees are barred from downloading or using the app on any city-owned devices and networks. The ban brings NYC in line with the federal government.

Why is it relevant? TikTok has faced bans around the world, but perhaps the toughest measures (including draft laws proposing further restrictions) in the USA. And yet, generative AI seems to have displaced the legislative momentum for imposing more restrictions on TikTok.


A heavy police presence dominated the scene on Oxford Street, London.

TikTok, Snapchat videos encourage looting

There were several arrests and a heavy police presence on Oxford Street, London, on 9 August, after videos encouraging people to steal from shops made the rounds on TikTok and Snapchat. A photo circulating on social media with the time and location of the planned looting said: ‘Last year was lit, we know this years gonna be 10x better’ (the message has since been taken down). Meanwhile, former Metropolitan Police Chief Superintendent Dal Babu has criticised politicians for their reluctance to confront technology firms. Similar grab-and-go flash-mob shoplifting has occurred in the USA. Photo credit: Sky News


The week ahead (21–28 August)

21 August–1 September: The Ad Hoc Committee on Cybercrime meets in New York for its 6th session

25 August: Very Large Online Platforms and search engines must comply with the DSA’s obligations


#ReadingCorner

Rise in criminals’ use of generative AI, but impact is limited so far: study

Cybercriminals have shown interest in using AI for malicious activities since 2019, but its adoption remains limited, according to researchers at Mandiant, a cybersecurity company owned by Google. The malicious use of generative AI is mainly linked to social engineering, a practice in which fraudsters impersonate a trusted entity to trick users into providing confidential information. What techniques are criminals using? The researchers say that criminals are increasingly using imagery and video in their campaigns, which are more deceptive than text-based or audio messages. Access the full report.

Fake! Screenshot from an AI-generated deepfake video of Ukrainian President Volodymyr Zelenskyy stating that Ukraine would surrender to Russia. Source: Mandiant.com

Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation

Virginia Paque – Editor
Senior editor – Digital Policy, DiploFoundation
