UN First Committee adopts draft resolution on lethal autonomous weapons

On 1 November 2023, the First Committee (Disarmament and International Security) of the UN General Assembly approved a draft resolution on lethal autonomous weapons systems (LAWS). The resolution expresses concern about the possible negative consequences and impact of autonomous weapons systems on global security and on regional and international stability, and stresses the urgent need for the international community to address the challenges and concerns raised by such systems.

The resolution, once endorsed by the General Assembly, would require the UN Secretary-General to seek the views of Member States and observer States on LAWS and on ways to address the challenges and concerns they raise from humanitarian, legal, security, technological, and ethical perspectives, and to submit a report to the General Assembly at its seventy-ninth session. The Assembly would also request the Secretary-General to invite the views of international and regional organizations, the International Committee of the Red Cross, civil society, the scientific community and industry, and to include them in an annex to the report.

The ongoing work of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (GGE on LAWS) – created under the Convention on Certain Conventional Weapons – is acknowledged in the resolution.

Within the First Committee, the draft resolution was adopted by a vote of 164 in favour to 5 against (Belarus, India, Mali, Niger, Russian Federation), with 8 abstentions (China, Democratic People’s Republic of Korea, Iran, Israel, Saudi Arabia, Syria, Türkiye, United Arab Emirates). In addition, 11 separate votes were recorded on individual provisions of the resolution.

Some of the points raised by member states during the debates include:

  • Egypt noted that algorithms must not be in full control of decisions that involve harming or killing humans. Human responsibility and accountability for the use of lethal force must be preserved.
  • The Russian Federation expressed concern that the resolution seeks to undermine the work of the GGE on LAWS, which it considers the only appropriate forum to discuss LAWS. The country also argued that the resolution does not acknowledge that autonomous weapons systems can play an important role in defence and in fighting terrorism, and that international law fully applies to these systems.
  • Iran noted that the definition and scope of the term ‘lethal autonomous weapons’ are not clearly defined, and that GGE on LAWS should focus on states parties.
  • Türkiye also raised the issue of a lack of agreement on the definition of autonomous weapons systems and noted that the absence of shared terminology increases ‘question marks’ on the way forward. The country also added that international law and international humanitarian law should be sufficient to alleviate concerns regarding the use of such weapons systems.
  • The USA stated that it does not support the creation of a parallel process on LAWS or any other efforts that would undermine the centrality of the GGE on LAWS in making progress on this issue. Poland also noted that the GGE is the forum to make progress on identifying challenges and opportunities related to LAWS, and that other international forums are not equally fit, as they often lack technical and diplomatic capacity and do not address the significant balance between humanitarian aspects and military necessity.
  • Israel called on member states not to undermine the work done in the Convention through the creation of a parallel forum. It also outlined the importance of the full application of international humanitarian law to LAWS.
  • Australia called for the report to be prepared by the UN Secretary-General to be balanced and inclusive of the views of all UN member states. South Africa expressed concern about the resolution’s provisions, noting that the integrity of the process under way in the GGE on LAWS should be respected and that states parties have already made their views known on the issue. Brazil argued that the GGE might benefit from the fresher views of a wider audience.

Russian hackers ramp up attacks on Ukrainian authorities investigating war crimes

Russian hackers are reportedly intensifying their cyberattacks on Ukraine’s law enforcement agencies, focusing on uncovering information related to investigations of war crimes allegedly committed by Russian soldiers.

According to a report by the SSSCIP (Ukraine’s State Service of Special Communications and Information Protection), the Russian objective appears to be to identify war crime suspects, potentially aiding them in evading prosecution and facilitating their return to Russia. Additionally, the hackers are likely keen to ascertain the identities of elite soldiers and officers captured in Ukraine for possible exchange.

Ukrainian cybersecurity officials have voiced concerns over these espionage campaigns, which have targeted entities such as the prosecutor general’s office, courts, and other bodies investigating war crimes.

In a development that may be related, Karim Khan, the lead prosecutor of the International Criminal Court (ICC), announced that the court intends to investigate cyberattacks as possible war crimes. Russia’s cyber assaults on Ukraine’s essential civilian infrastructure could be among the first cases examined under this new interpretation.

Not long after this announcement, the ICC decided to establish a field office in Kyiv in charge of investigating Russian war crimes. The ICC then reported a breach of its computer systems without divulging further details regarding the severity or attribution of the attack.

Japan to build cyber defense grid for the Indo-Pacific

Japan is developing a counter-cyber attack grid for the Indo-Pacific region to protect its interests and allies from cyber threats. The grid will consist of a cyber defence network that covers Pacific islands and enhances cybersecurity cooperation with regional countries.

This project is aligned with Japan’s goal of creating a free and open Indo-Pacific region, where it can balance the rising power of Russia, North Korea, and especially China. Japan wants to build this grid to prevent future cyberattacks and protect its national security and stability.

To strengthen cyber capabilities in the Indo-Pacific, the Japanese Foreign Ministry has earmarked an investment plan of around $75 billion to deepen ties with South and Southeast Asian nations and promote peace, connectivity, and security in the region. The allocated funds will be used for various initiatives, including installing necessary cybersecurity equipment, while capacity building will be pursued through joint training sessions. The World Bank will also offer a dedicated fund to support the development of cybersecurity human resources in these nations.

Why does it matter?

The move comes amid growing concerns over China’s alleged involvement in cyberattacks against Japan. Around 200 Japanese organizations, including the Japan Aerospace Exploration Agency, are believed to have been targeted by Chinese hackers. Reports suggest that Chinese military hackers have also accessed Japanese defence secrets.

AI’s right to forget – Machine unlearning

Machine unlearning is a growing field within AI that aims to address the challenge of forgetting outdated, incorrect, or private data in machine learning (ML) models. ML models struggle to forget information, which has significant implications for privacy, security, and ethics. This has led to the development of machine unlearning techniques.

When issues arise with a dataset, it is possible to modify or delete the dataset. However, if the data has been used to train an ML model, it becomes difficult to remove the impact of a problematic dataset. ML models are often considered black boxes, making it challenging to understand how specific datasets influenced the model and undo their effects.

OpenAI has faced criticism for the data used to train their models, and generative AI art tools are involved in legal battles regarding their training data. This highlights concerns about privacy and the potential disclosure of information about individuals whose data was used to train the models.

Machine unlearning aims to erase the influence of specific datasets on ML systems. This involves identifying problematic datasets and excluding them from the model or retraining the entire model from scratch. However, the latter approach is costly and time-consuming.

Efficient machine unlearning algorithms are needed to remove datasets without compromising utility. Some promising approaches include incremental updates to ML systems, limiting the influence of data points, and scrubbing network weights to remove information about specific training data.
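One of these directions can be illustrated with a toy sketch. The following is a hypothetical, heavily simplified illustration of shard-based exact unlearning (in the spirit of the SISA approach): the training data is split into shards, one sub-model is trained per shard, and forgetting a data point only requires retraining the single shard that contained it. The per-shard "models" here are just means, standing in for real ML training.

```python
# Hypothetical sketch of shard-based exact unlearning (SISA-style).
# Removing a data point retrains only the shard that held it, which is
# far cheaper than retraining on the full dataset from scratch.

from statistics import mean

def train_submodel(shard):
    # Stand-in "training": the mean of the shard's values.
    return mean(shard) if shard else 0.0

class ShardedModel:
    def __init__(self, data, n_shards=4):
        # Assign each point to a shard round-robin.
        self.shards = [data[i::n_shards] for i in range(n_shards)]
        self.submodels = [train_submodel(s) for s in self.shards]

    def predict(self):
        # Aggregate sub-model outputs (here: a simple average).
        return mean(self.submodels)

    def unlearn(self, value):
        # Find the point, remove it, and retrain only its shard.
        for i, shard in enumerate(self.shards):
            if value in shard:
                shard.remove(value)
                self.submodels[i] = train_submodel(shard)
                return True
        return False

model = ShardedModel([1.0, 2.0, 3.0, 4.0, 100.0])  # 100.0 is the "bad" point
model.unlearn(100.0)  # only one of the four shards is retrained
```

After unlearning, the ensemble behaves exactly as if it had been trained without the removed point, at a fraction of the retraining cost; the trade-off is the storage and bookkeeping for the per-shard sub-models.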

However, machine unlearning faces challenges, including efficiency, standardization of evaluation metrics, validation of efficacy, privacy preservation, compatibility with existing ML models, and scalability to handle large datasets.

To address these challenges, interdisciplinary collaboration between AI experts, data privacy lawyers, and ethicists is required. Google has launched a machine unlearning challenge to unify evaluation metrics and foster innovative solutions.

Looking ahead, advancements in hardware and infrastructure will support the computational demands of machine unlearning. Collaborative efforts between legal professionals, ethicists, and AI researchers can align unlearning algorithms with ethical and legal standards. Increased public awareness and potential policy and regulatory changes will also shape the development and application of machine unlearning.

Businesses using large datasets are advised to understand and adopt machine unlearning strategies to proactively manage data privacy concerns. This includes monitoring research, implementing data handling rules, considering interdisciplinary teams, and preparing for retraining costs.

Machine unlearning is crucial for responsible AI, improving data handling capabilities while maintaining model quality. Although challenges remain, progress is being made in developing efficient unlearning algorithms. Businesses should embrace machine unlearning to manage data privacy issues responsibly and stay up-to-date with advancements in the field.


Digital technologies in UN Secretary-General’s Policy Brief on a New Agenda for Peace

As part of the process leading to the Summit of the Future in 2024, the UN Secretary-General has issued a new Policy Brief – the ninth in its series – outlining proposals for a New Agenda for Peace. The Policy Brief also addresses digital technologies and the challenges they pose for peace and security.

The document highlights the perils of weaponising new and emerging technologies, such as the proliferation of armed uncrewed aerial systems, the ease of access to powerful tools that facilitate the spread of misinformation, disinformation, and hate speech, and the misuse of digital technology by terrorist groups. 

Among the 12 sets of recommendations detailed in the Policy Brief as steps towards achieving more effective multilateral action for peace and security, one is dedicated to ‘preventing the weaponisation of emerging domains and promote responsible innovation’. Here, the Secretary-General calls for:

  • The development of governance frameworks, at the international and national levels, to minimise harms and address the cross-cutting risks posed by converging technologies.
  • The establishment of an independent multilateral accountability mechanism for malicious use of cyberspace by states, to reduce incentives for such conduct. Such a mechanism, the Secretary-General argues, could enhance compliance with agreed norms and principles of responsible state behaviour. 
  • The conclusion, by 2026, of a legally binding instrument to prohibit lethal autonomous weapon systems that function without human control or oversight, and which cannot be used in compliance with international humanitarian law, and to regulate all other types of autonomous weapons systems.
  • The development of frameworks to mitigate risks relating to AI-enabled systems in the peace and security domain. The Secretary-General specifically mentions the International Atomic Energy Agency, the International Civil Aviation Organization and the Intergovernmental Panel on Climate Change as governance approaches that member states could seek inspiration from. He also invites member states to consider the creation of a new global body to mitigate the peace and security risks of AI while harnessing its benefits to accelerate sustainable development.
  • The development of norms, rules and principles around the design, development, and use of military applications of AI through a multilateral process, with the engagement of stakeholders from industry, academia, civil society and other sectors. 
  • The development of a global framework regulating and strengthening oversight mechanisms for the use of data-driven technology, including AI, for counter-terrorism purposes.
  • The development of measures to address the risks involved in biotechnology and human enhancement technologies applied in the military domain. 

Employees at Fortune 1000 telecom companies are among the most exposed on the dark web, researchers report

A recent report by threat intelligence firm SpyCloud has shed light on the alarming vulnerability of employees at Fortune 1000 telecommunications companies on dark web sites. The report reveals that researchers have uncovered approximately 6.34 million pairs of credentials, including corporate email addresses and passwords, which are likely associated with employees in the telecommunications sector.

The report highlights this as an ‘extreme’ rate of exposure compared to other sectors. In comparison, SpyCloud’s findings uncovered 7.52 million pairs of credentials belonging to employees in the tech sector, but this encompassed a significantly larger pool of 167 Fortune 1000 companies.

Media reports note that these findings underscore the heightened risk faced by employees within the telecommunications industry, as their credentials are more readily available on dark web platforms. The compromised credentials pose a significant threat to the affected individuals and their respective companies, as cybercriminals can exploit them for various malicious activities such as unauthorized access, data breaches, and targeted attacks.

Western Digital confirms that hackers stole customer data

Western Digital, a technology company, has notified its customers about the March 2023 data breach and confirmed that customer data was stolen.

In a press release, the company mentioned it worked with external forensic experts and determined that the hackers obtained a copy of a database which contained limited personal information of online store customers. The exact number of affected customers has not been disclosed. The company has notified affected customers and advised them to remain vigilant against potential phishing attempts.

The March data breach had previously been reported in early April, when the company disclosed it had suffered a cyberattack. TechCrunch reported that an ‘unnamed’ hacking group breached Western Digital, claiming to have stolen ten terabytes of data.

The hackers subsequently published some of the stolen data and threatened to release more if their demands were not met. Western Digital has restored the majority of its impacted systems and services and continues to investigate the incident.

Ransomware criminal group leaks MSI’s private code signing keys on the dark web

The ransomware gang responsible for targeting Taiwanese PC manufacturer MSI has published the company’s private code signing keys on its dark web leak site. The attack, orchestrated by the group known as Money Message, was announced in early April: the group revealed that it had successfully breached the systems of MSI, a multinational IT corporation renowned for its production and distribution of motherboards and graphics cards worldwide, including in the USA and Canada. MSI is headquartered in Taipei, Taiwan.

It is reported that the criminal group initially demanded a ransom from MSI, threatening to publish the stolen files if its demands were not met by a specified deadline. However, the group eventually exposed MSI’s private code signing keys on its dark web leak site. These keys are of significant importance, as they are used to authenticate the legitimacy and integrity of software and firmware updates released by the company. Malicious actors could potentially misuse them to distribute malware or carry out other malicious activities, putting MSI’s customers at risk. The company now faces the daunting task of mitigating the potential fallout from this exposure and bolstering its cybersecurity measures to prevent further unauthorized access.

ICANN launches project to look at what drives malicious domain name registrations

The Internet Corporation for Assigned Names and Numbers (ICANN) has launched a project to explore the practices and choices of malicious actors when they decide to use the domain names of certain registrars over others. The project, called Inferential Analysis of Maliciously Registered Domains (INFERMAL), will systematically analyse the preferences of cyberattackers and possible measures to mitigate malicious activities across top-level domains (TLDs). It is funded as part of ICANN’s Domain Name System (DNS) Security Threat Mitigation Program, which aims to reduce the prevalence of DNS security threats across the Internet.

The team leading the project intends to collect and analyse a comprehensive list of domain name registration policies pertinent to would-be attackers, and then use statistical modelling to identify the registration factors preferred by attackers. It is expected that the findings of the project could help registrars and registries identify relevant DNS anti-abuse practices, strengthen the self-regulation of the overall domain name industry, and reduce the costs associated with domain regulations. The project would also help increase the security levels of domain names and, thus, the trust of end-users.
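As a rough illustration of the kind of analysis described (the field names and data below are invented for illustration, not taken from the INFERMAL project): one could tabulate the abuse rate among registrations sharing a given policy attribute, such as free registration or access to bulk-registration APIs, to see which attributes attackers appear to favour.

```python
# Toy sketch: compare the share of maliciously registered domains
# across registration-policy attributes. The records and attribute
# names are hypothetical.

registrations = [
    {"free": True,  "bulk_api": True,  "malicious": True},
    {"free": True,  "bulk_api": False, "malicious": True},
    {"free": False, "bulk_api": True,  "malicious": False},
    {"free": False, "bulk_api": False, "malicious": False},
    {"free": True,  "bulk_api": True,  "malicious": True},
    {"free": False, "bulk_api": False, "malicious": False},
]

def abuse_rate(records, feature, value):
    # Share of malicious registrations among records where feature == value.
    subset = [r for r in records if r[feature] == value]
    if not subset:
        return 0.0
    return sum(r["malicious"] for r in subset) / len(subset)

for feature in ("free", "bulk_api"):
    with_f = abuse_rate(registrations, feature, True)
    without = abuse_rate(registrations, feature, False)
    print(f"{feature}: abuse rate {with_f:.0%} with vs {without:.0%} without")
```

A real study would of course control for confounders with statistical modelling (e.g. regression) over many more attributes, but the underlying question is the same: which registration policies correlate with malicious use.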

Data poisoning – a new type of cyberattack against AI systems

Data poisoning is a new type of cyberattack aimed at misleading AI systems. AI is developed by processing huge amounts of data, and the quality of that data determines the quality of the resulting AI. Data poisoning is the intentional supply of wrong or misleading data to degrade an AI system’s quality. It is becoming particularly risky with the development of large language models (LLMs) such as ChatGPT.

Researchers from the Swiss Federal Institute of Technology (ETH) in Zurich, Google, NVIDIA and Robust Intelligence have recently published a preprint paper investigating the feasibility of data poisoning attacks against machine learning (ML) models used in artificial intelligence (AI). They injected corrupted data into an existing training dataset in order to influence the behaviour of an AI algorithm being trained on it, which impaired the functionality of the resulting AI systems.
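The paper itself targets large web-scale datasets, but the basic mechanism can be shown with a toy example (invented here for illustration, not the researchers’ experiment): injecting a few mislabelled training points shifts a simple nearest-centroid classifier’s decision for inputs near the class boundary.

```python
# Toy data poisoning demo: a few mislabelled injected points drag one
# class centroid across the boundary and flip the model's prediction.

from statistics import mean

def train_centroids(points, labels):
    # Compute one centroid (mean value) per class label.
    by_class = {}
    for x, y in zip(points, labels):
        by_class.setdefault(y, []).append(x)
    return {y: mean(xs) for y, xs in by_class.items()}

def predict(centroids, x):
    # Assign x to the class with the nearest centroid.
    return min(centroids, key=lambda y: abs(centroids[y] - x))

clean_points = [1.0, 2.0, 3.0, 7.0, 8.0, 9.0]
clean_labels = ["low", "low", "low", "high", "high", "high"]

# Clean model: centroid("low") = 2.0, centroid("high") = 8.0.
clean_model = train_centroids(clean_points, clean_labels)
print(predict(clean_model, 4.0))   # "low"

# Poisoned training set: the attacker injects points mislabelled "high",
# dragging the "high" centroid (now 4.25) into the "low" region.
poisoned_points = clean_points + [0.0, 0.5, 1.0]
poisoned_labels = clean_labels + ["high", "high", "high"]
poisoned_model = train_centroids(poisoned_points, poisoned_labels)
print(predict(poisoned_model, 4.0))  # now "high"
```

The attacker never touches the model itself, only the training data, which is what makes the attack hard to detect once the data is already baked into the trained weights.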

As AI systems become more complex and massive, detecting data poisoning attacks will only grow more difficult. The main risks lie in politically charged topics, where poisoned data could skew a model’s outputs.