Israel’s AI Systems Have Designed an Industrial-Scale Killing Machine
How to design industrial genocide in the 2020s
Israel’s war on Gaza has revealed a chilling evolution in the use of artificial intelligence in warfare. Behind the devastation lies a network of military algorithms, chief among them ‘Lavender’ and ‘Gospel’, developed by Israel’s elite cyber-intelligence arm, Unit 8200. These systems are invisible, fast, and built for mass-scale targeting.
In past conflicts, Israel’s targeting apparatus was slower, not out of humanitarian restraint but because the occupation’s intelligence-gathering infrastructure relied on human surveillance, informants, and the laborious cross-checking of phone intercepts and movement patterns.
Analysts would still approve strikes that devastated civilian life, but the process, from surveillance passes to so-called proportionality debates, moved at a human pace, often stretched over weeks or months. Today, that process has been compressed into minutes. Instead of dossiers and corroborated intelligence, analysts now receive lists generated by algorithms, often with only the most cursory human check before approval. Whistleblowers from within the IDF describe the process as “rubber-stamping” algorithmic outputs, with some analysts expected to clear 50 or more strike recommendations per shift. One former operator recalled: “You’re not asking if the target is correct anymore. You’re asking if the system thinks they are.”
Lavender functions as a machine-learning database that assigns a score to every individual in Gaza based on suspected affiliation with Hamas or Islamic Jihad. An individual is flagged for a strike list once their score passes a set threshold, one the IDF refuses to disclose. Factors may include call records, associations, online activity, and even patterns of movement that align with those of other flagged individuals.
In the early months of the carpet bombing of Gaza, Lavender reportedly flagged up to 37,000 people. Gospel works in parallel, scanning drone feeds, intercepted communications, and geolocation data to pinpoint structures, vehicles, and people. One officer described how Gospel could propose a hundred new targets daily, each passed to operators for rapid approval. Once cleared, strike orders can be executed within hours, sometimes minutes. Former intelligence staff say these systems were explicitly designed to maintain a constant “pipeline” of targets, keeping bombing campaigns continuous and relentless.
This automation has not led to surgical precision; it has led to scale. Entire apartment blocks have been flattened because one person on a floor appeared on Lavender’s list. Civilian homes, hospitals, and even schools have been obliterated by decisions rooted in algorithmic suspicion. The United Nations and human rights groups, including Israel’s own B’Tselem, have accused the government of acts consistent with genocide.
In January 2024, South Africa’s case at the International Court of Justice argued that Israel’s campaign, enabled by AI-assisted mass targeting, demonstrates intent to destroy Palestinians in Gaza as a group, citing the indiscriminate nature of strikes, attacks on essential infrastructure, and the scale of civilian casualties. South Africa’s legal filings point to the chilling efficiency of these AI systems as evidence that mass killing is not collateral; it is systemic.
The opacity of these systems deepens the ethical crisis. The public has no access to their training data, error rates, or decision thresholds. Even within the IDF, only a select cadre understands the algorithms’ mechanics. This mirrors historical moments when new military technologies, from chemical weapons to drone strikes, obscured lines of accountability, enabling mass killing without individual responsibility. Once integrated into the military machine, these tools take on a life of their own, normalising practices that would once have been politically unthinkable.
Unit 8200 has long been Israel’s incubator for cutting-edge surveillance and cyber tools. Many veterans now work in Silicon Valley firms or Israel’s thriving start-up scene, bringing military-grade analytics into civilian and security industries worldwide. Predictive policing systems, border surveillance, and facial recognition tools used in other countries share DNA with the targeting systems tested in Gaza. Civil liberties experts warn that the same methods used to select bombing targets could be adapted for domestic repression or authoritarian control.
Corporate complicity is undeniable. Tech giants like Microsoft, Google, and Amazon have supplied cloud and AI infrastructure to the Israeli military through contracts such as Project Nimbus. Internal protests and whistleblower complaints at these companies cite fears that their services are enabling war crimes. Project Nimbus in particular grants Israel scalable AI computing power, real-time analytics, and image-recognition capabilities that integrate directly into its military intelligence workflow. Former employees describe a culture of silencing dissent, where engineers questioning the project’s ethics are marginalised or pushed out.
What has emerged in Gaza is not a vision of the future; it is here, and it is lethal. A fusion of military power, corporate technology, and algorithmic speed has created an industrial-scale killing machine. The risk is that Gaza becomes not a warning but a blueprint, replicated in future conflicts from Eastern Europe to East Asia. Without urgent global action to regulate or ban such systems, the mechanisation of mass killing could spread far beyond the borders of Palestine, leaving the world with a new, automated architecture of atrocity.