Israeli military’s AI use in Gaza raises concerns

An investigation by Israeli publications +972 Magazine and Local Call suggests that Israel’s military has been using AI to select bombing targets in Gaza, resulting in thousands of civilian casualties.


According to an investigation by Israeli publications +972 Magazine and Local Call, the Israeli military has reportedly used an AI system known as Lavender to help select bombing targets in Gaza. The system was allegedly deployed at scale following the Hamas attacks of 7 October. Lavender is said to have flagged around 37,000 Palestinians in Gaza as suspected ‘Hamas militants’ for potential assassination.

Based on interviews with Israeli intelligence officers, the Lavender system was reportedly used without thorough independent verification of the identified targets before airstrikes. The system analysed data associated with known Hamas operatives and ranked other individuals in Gaza on a 1–100 scale by similarity. However, concerns were raised about the loose definition of ‘Hamas operative’ used during the system’s training, which led to errors in target identification.

The AI-driven targeting strategy reportedly permitted significant collateral damage, with intelligence officers given wide latitude regarding civilian casualties. During the conflict, officers were purportedly authorised to cause civilian deaths as part of strikes on suspected Hamas operatives. Additionally, a system called ‘Where’s Daddy?’ was allegedly used to track targets to their homes, resulting in bombings that sometimes killed entire families, even when the targeted individual was not present.

Why does it matter?

The use of AI technologies like Lavender and facial recognition programs in Gaza has raised ethical concerns, especially as they contribute to civilian casualties. Mona Shtaya, a non-resident fellow, noted that these technologies extend Israel’s regional surveillance efforts. Reports of civilian deaths from AI-targeted strikes underscore the need for careful scrutiny of the ethical implications of deploying advanced technologies in conflict zones.