Israel-Palestine conflict: AI warfare in action?

In the Gaza conflict, reports say, the IDF is deploying artificial intelligence for surveillance and targeted airstrikes

Representative image of an eye attached to circuitry, casting a spotlight on human silhouettes; legend reads 'Automating Genocide: Israel's Use of AI-driven Warfare' (photo courtesy @visualizing_palestine/Instagram)

NH Political Bureau

Artificial intelligence (AI) and machine learning have revolutionised various aspects of modern life, from chess-playing computers to self-driving cars. But there are murkier aspects that have raised worldwide concern, from copyright violations to... the alarming, Terminator-esque domain of AI warfare.

In the case of the Israel-Palestine conflict, it's not just the vaunted Iron Dome.

Reports have spoken of AI being employed by the Israel Defense Forces (IDF) to surveil Palestinians, generate targets for airstrikes and to streamline military logistics—enabling the nation to escalate the conflict with less active effort, lower risk and far fewer human resources than would otherwise be possible.

Ethical concerns are rife, as are humanitarian misgivings. The Visualizing Palestine collective, in a recent Instagram post, spoke of Israel's use of AI having created a 'mass assassination factory'. These sophisticated AI systems include:

  • the Alchemist, used for real-time visual detection of possible targets;

  • the Gospel, used for target selection and recommendation for airstrikes; and

  • the Fire Factory, which Israel uses for organising its wartime logistics, such as calculating payloads, assigning targets to drones and other aircraft, and creating air raid schedules for maximum impact.

These systems, developed both by the Israeli military itself and its private defense contractors, have drastically accelerated its decision-making processes, allowing 'tasks that previously took hours to be completed within minutes'. For instance, the 'Gospel' analyses and selects targets 50 times faster than a human operator could.

The consequences of Israel's AI-driven warfare have been devastating for Palestinian civilians: since 7 October 2023, at least 29,313 Palestinians, including more than 12,000 children, have been killed in Gaza. By 7 January 2024, Israel had bombarded Gaza with 65,000 tonnes of explosives. It is chilling, surely, to wonder how much of this was calculated for maximum damage with the precision of a highly advanced machine.

One of the key criticisms is that the Israeli military's AI systems, while purportedly designed for precision targeting, have often resulted in civilian casualties because of their indiscriminate nature, as a Guardian report pointed out last year.

As per the Guardian, multiple reports in Israeli media talk about the precision of strikes recommended by an 'AI target bank'.


Another issue, as Visualizing Palestine points out, is that AI guidance removes some of the decision-making, and hence the culpability, from humans. On the other hand, it may be argued that the human moral responsibility is all the greater when a system both reports the likelihood of collateral damage and seeks out 'high-value' 'power targets' in terms of population density (think high-rises, public buildings, educational institutions...). Some 1,329 such power targets were felled in the first five days of Israel's attack on Gaza after the Hamas assault of 7 October.

A former senior Israeli military source was quoted by the Guardian as saying that operatives use a 'very accurate' measurement of the rate of civilians evacuating a building shortly before a strike. 'We use an algorithm to evaluate how many civilians are remaining. It gives us a green, yellow, red, like a traffic signal.'

As per the Guardian report, experts in AI and armed conflict said they were skeptical of assertions that AI-based systems reduced civilian harm by encouraging more accurate targeting.

Israel's AI-powered warfare strategy has also been criticised, per reports, for its deliberate targeting of civilian infrastructure, including residential buildings and public facilities. This approach, often referred to as the 'Dahiya Doctrine', aims to disrupt the basic elements of Palestinian society and civic life.

While AI has the potential to enhance military capabilities and efficiency, its use in warfare, as in Israel's case, raises complex ethical questions, including those of accountability and transparency. Sooner rather than later, it will become imperative for international bodies, such as the International Court of Justice (ICJ), currently evaluating whether the situation in Gaza constitutes a genocide, to address these ethical challenges.
