AI Weapons
Autonomous Killing Machines With No Accountability
In Gaza, an AI system called Lavender generated a list of 37,000 Palestinians marked for assassination. Human “review” of each target took approximately 20 seconds. The acceptable civilian casualty ratio: 15–20 dead civilians per suspected low-ranking militant. In Libya, a Turkish-made autonomous drone may have killed a human being without any human giving the order — possibly the first time in history that a machine independently decided to end a human life. In Silicon Valley, OpenAI quietly dropped its ban on military use. The future of war is here, and it has no conscience, no due process, and no accountability.
37,000
Palestinians Flagged by Lavender AI
For potential assassination
20 sec
Human Review Per Target
Rubber-stamping machine decisions
$1.8B+
Pentagon AI Spending (FY2024)
Official figure; classified programs push the real total higher
0
International Laws on AI Weapons
No treaty, no convention, no rules
AI Weapons Systems
Project Maven
Pentagon / Google (2017), then Palantir, Microsoft, Amazon
Active and expanding
Officially the "Algorithmic Warfare Cross-Functional Team," Project Maven uses AI to analyze drone surveillance footage — identifying people, vehicles, buildings, and patterns of behavior. It was the Pentagon's first major AI initiative and the template for military AI integration.
Controversy
In 2018, Google employees discovered their company was working on Project Maven. Over 4,000 Google employees signed a petition demanding the company withdraw. A dozen engineers resigned. Google allowed the contract to expire — but the Pentagon simply moved to other contractors. The AI continued; only the logo changed.
Implications: Maven demonstrated that AI could process surveillance data faster and more comprehensively than humans. It also showed the Pentagon that Silicon Valley's ethical objections were a speed bump, not a roadblock.
Lavender
Israel Defense Forces
Used in Gaza (2023–present)
An AI targeting system that generates kill lists of suspected Hamas and Palestinian Islamic Jihad operatives. According to +972 Magazine and Local Call investigations based on Israeli intelligence sources, Lavender marked approximately 37,000 Palestinians as suspected militants for potential assassination.
Controversy
Sources revealed that the IDF accepted a casualty ratio of 15–20 civilian deaths per suspected low-ranking militant target. For senior commanders, the acceptable ratio was even higher — potentially 100+ civilians. The AI system generated targets; human review was perfunctory, typically under 20 seconds per target.
Implications: Lavender represents the automation of mass killing. A machine generates a kill list. A human rubber-stamps it in seconds. Bombs flatten apartment buildings. The AI produces a new list. The cycle continues.
Gospel (Habsora)
Israel Defense Forces
Used in Gaza (2023–present)
A companion system to Lavender, Gospel generates recommendations for physical targets — buildings, structures, and infrastructure associated with suspected militants. It dramatically accelerated the pace of target generation, producing targets faster than any previous system.
Controversy
Former Israeli intelligence officials told +972 Magazine that Gospel was a "mass assassination factory." The system produced 100 targets per day — compared to roughly 50 per year in previous operations. This unprecedented pace of targeting contributed to the destruction of over 70% of northern Gaza's residential buildings.
Implications: Gospel transformed war from a series of deliberate decisions into an industrial process. The bottleneck was no longer intelligence analysis but bomb supply and aircraft availability.
Kargu-2 Autonomous Drone
STM (Turkish defense company)
Deployed in Libya (2020), combat use confirmed
A small rotary-wing drone designed to autonomously locate and attack targets using facial recognition and AI. Weighing just 7 kg, it can be launched by hand, fly autonomously to a target area, identify targets matching pre-programmed parameters, and attack without human intervention.
Controversy
A March 2021 UN Panel of Experts report on Libya documented that Kargu-2 drones "were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true 'fire, forget and find' capability." If confirmed, this represents the first known instance of an autonomous weapon killing without human input.
Implications: The Kargu-2 incident may have crossed the Rubicon. A machine may have independently decided to kill a human being. No international law governs this. No accountability framework exists.
Replicator Initiative
US Department of Defense
Launched August 2023, deployment ongoing
Deputy Secretary of Defense Kathleen Hicks announced the Replicator initiative to field "multiple thousands" of small, cheap, autonomous drones and unmanned systems within 18–24 months to counter China's military mass. The goal: overwhelm adversaries with swarms of AI-enabled disposable systems.
Controversy
Replicator explicitly embraces quantity over quality — flooding battlefields with cheap autonomous systems that can overwhelm air defenses and military formations through sheer numbers. The program envisions AI-coordinated drone swarms acting as a unified force with minimal human oversight.
Implications: Replicator represents the industrialization of autonomous warfare. Thousands of AI-enabled kill vehicles, produced cheaply and deployed en masse. The logical endpoint is a world where lethal autonomous weapons are as ubiquitous and uncontrollable as small arms.
DARPA AI Programs
Defense Advanced Research Projects Agency
Multiple active programs
DARPA funds dozens of AI-related military programs: OFFSET (swarm tactics for 250+ drones), ACE (AI-controlled fighter jets — successfully flew an AI F-16 in combat maneuvers, 2024), ALIAS (autonomous aircraft systems), MCS (AI-assisted military decision-making), and many classified programs.
Controversy
In April 2024, DARPA revealed that its X-62A VISTA (a modified F-16) had flown AI-controlled dogfighting maneuvers against a human-piloted jet — the first known AI-versus-human dogfight in real aircraft. Weeks later, Air Force Secretary Frank Kendall flew in the AI-piloted jet to demonstrate confidence in the system.
Implications: DARPA is systematically removing humans from every aspect of warfare — targeting, piloting, decision-making, logistics. The vision is clear: AI-driven warfare where machines fight machines, with humans increasingly marginalized from the loop.
Lavender: The AI That Chose Who Dies in Gaza
In April 2024, +972 Magazine and Local Call published a devastating investigation based on interviews with six Israeli intelligence officers who worked with the Lavender system during the Gaza war. Their revelations are among the most disturbing documents in the history of automated warfare.
How Lavender works: The system analyzes surveillance data — phone records, social media activity, location data, communication patterns, group affiliations — and assigns each person in Gaza a numerical score from 1 to 100 indicating the probability they are a militant. Anyone above a certain threshold is automatically added to the kill list.
The human rubber stamp: Sources described the “human review” process as a formality. An officer would verify that the target was male and that the system hadn't flagged any obvious errors. This process took approximately 20 seconds per target. One source said: “I would invest 20 seconds for each target at this stage, and do dozens of them every day.”
The civilian cost: Intelligence sources revealed the IDF accepted that for every low-ranking Hamas operative killed, up to 15–20 civilians could be killed as “collateral damage.” For senior commanders, the ratio was even higher. One source said the policy was to bomb suspected militants in their homes — at night, when their families were present — because it was easier to locate them there.
The error rate: Sources estimated Lavender had an error rate of approximately 10% — meaning roughly 1 in 10 people flagged as militants were misidentified civilians. With 37,000 targets, that means approximately 3,700 people were marked for death by mistake. The system's errors were considered acceptable.
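The arithmetic behind that estimate is worth checking directly; the only inputs are the two figures from the reporting above (37,000 flagged, ~10% error), so this is a back-of-the-envelope verification, not new data:

```python
# Back-of-the-envelope check of the reported Lavender figures.
flagged = 37_000    # people marked by the system (per +972 reporting)
error_rate = 0.10   # estimated misidentification rate (~10%, per sources)

misidentified = flagged * error_rate
print(f"Estimated misidentifications: {misidentified:,.0f}")  # 3,700
```

At any plausible rounding of the reported error rate, the number of people wrongly marked runs into the thousands — which is the point the sources were making.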
The Accountability Gap: When a Machine Kills, Who Is Guilty?
When a soldier kills a civilian, there is a chain of accountability: the soldier, their commanding officer, the rules of engagement, the political leadership. When an AI system selects a target and a drone autonomously executes the strike, who is responsible?
The programmer? They wrote the code but didn't choose the target. The commander? They authorized the system but didn't make the specific decision. The AI? It has no moral agency, no capacity for guilt, no understanding of what it means to kill.
This is the “accountability gap” — and it is by design. Autonomous weapons diffuse responsibility across so many actors and algorithms that no single person can be held accountable. The military calls this “human in the loop,” but when “the loop” consists of 20 seconds of rubber-stamping machine decisions, the human is a fig leaf, not a safeguard.
The International Committee of the Red Cross has warned that autonomous weapons create an “accountability vacuum” incompatible with international humanitarian law. Under the laws of war, someone must be responsible for every use of lethal force. If no one is responsible, then the killing is by definition unlawful. But no court has ever prosecuted an AI-enabled killing. No government has acknowledged the problem. The machines keep killing.
The LAWS Debate: Why There Are No Rules
Since 2014, the UN Convention on Certain Conventional Weapons (CCW) has been debating Lethal Autonomous Weapons Systems (LAWS). After a decade of discussion, the result has been nothing. No treaty. No binding regulations. No agreed-upon definitions. Not even a shared understanding of what “autonomous” means.
The reason is simple: the countries developing AI weapons don't want rules. The United States, Russia, Israel, China, the UK, South Korea, and Turkey have all blocked meaningful progress on LAWS regulation. The US position, articulated in DOD Directive 3000.09, requires “appropriate levels of human judgment” in autonomous weapons — but doesn't define what “appropriate” means, and the directive is policy, not law.
Over 100 countries have called for some form of regulation. Nobel Peace laureates, AI researchers (including Geoffrey Hinton, Yoshua Bengio, and Stuart Russell), and organizations like the International Committee of the Red Cross have warned of the dangers. In 2023, the Campaign to Stop Killer Robots — a coalition of 250+ organizations in 70 countries — called for a preemptive ban on autonomous weapons. The major military powers dismissed it.
The historical parallel is chilling. Before World War I, some argued that machine guns were too terrible to use. Before WWII, some argued that bombing cities was beyond the pale. Before the atomic age, some argued that nuclear weapons would never be built. In each case, the weapons were built, deployed, and normalized before meaningful regulation was attempted. The same pattern is playing out with AI weapons — and this time, the weapons improve themselves.
Silicon Valley's Military Turn
The tech industry's brief flirtation with ethical boundaries is over. After Google's Project Maven controversy in 2018, the industry appeared to be drawing lines. Five years later, those lines have been erased. Every major tech company is now competing for Pentagon contracts, and a new generation of “defense tech” startups has emerged with no ethical qualms whatsoever.
Palantir Technologies
Founded by Peter Thiel
$50B+ (2024)
Gotham platform used by CIA, NSA, ICE, military. Provides AI-driven intelligence analysis, targeting support, and surveillance. CEO Alex Karp has explicitly embraced the military mission: "We believe the most effective way to protect civil liberties is to make sure the good guys win."
Anduril Industries
Founded by Palmer Luckey (Oculus VR)
$14B+ (2024)
AI-powered autonomous drones (Altius, Ghost), surveillance towers (Lattice AI), counter-drone systems. Explicitly founded to be the "next great defense company." Lattice AI creates autonomous mesh networks of sensors and weapons.
Shield AI
Founded by Brandon Tseng (Navy SEAL)
$2.7B+ (2023)
Autonomous military drones that can fly and fight without GPS or communications. V-BAT (autonomous vertical takeoff drone) and Hivemind (AI pilot). Demonstrated autonomous building-clearing operations.
Scale AI
Founded by Alexandr Wang
$14B+ (2024)
Provides AI training data for Pentagon programs. The "picks and shovels" of military AI — training the algorithms that power autonomous weapons. Major contracts with DOD, intelligence community.
OpenAI
Founded by Sam Altman
$157B+ (2025)
Quietly dropped its ban on military use in January 2024. Now works with the Pentagon and defense contractors. The company that once warned AI could be "an existential threat to humanity" is now helping build military AI systems.
Microsoft
Founded by Bill Gates
$3T+
IVAS (Integrated Visual Augmentation System) for Army: $21.9B contract for AR combat goggles. Azure Government cloud for classified workloads. Nuance AI for military communications. HoloLens combat systems.
💡 Did You Know: OpenAI's Military Ban Reversal
OpenAI was founded in 2015 with an explicit mission to ensure AI benefits “all of humanity.” Its usage policies originally prohibited military applications. Then, quietly, in January 2024, OpenAI updated its terms of service to remove the ban on “military and warfare” use.
The company now works with the Pentagon and defense contractors. An OpenAI spokesperson clarified that the company still prohibits using its tools to “develop or use weapons” or “harm others” — but military applications like intelligence analysis, logistics, and communications support are now permitted.
The irony is staggering: the company whose CEO, Sam Altman, has testified before Congress about the “existential risk” of AI and called for government regulation is now providing AI tools to the world's most powerful military. The company that warned AI could destroy humanity is helping build military AI systems. The profit motive won.
Drone Swarms: The Future of War Is Cheap and Disposable
The Pentagon's Replicator initiative represents a paradigm shift in military thinking. Instead of expensive, exquisite platforms like $100M F-35s and $13B aircraft carriers, the future of war may be thousands of cheap, AI-enabled, disposable autonomous systems — drone swarms that overwhelm defenses through sheer numbers.
The concept is simple: a $2 billion aircraft carrier can be sunk by a swarm of 1,000 drones costing $10,000 each ($10 million total). An air defense system designed to track and engage dozens of targets is overwhelmed by hundreds of autonomous drones attacking simultaneously from different directions. The economics of defense collapse.
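The cost-exchange arithmetic can be sketched in a few lines. The dollar figures are the article's illustrative round numbers, not real procurement data, and actual engagements depend on interception rates this sketch ignores:

```python
# Cost-exchange sketch of the swarm argument, using the text's
# illustrative figures (not actual procurement costs).
carrier_cost = 2_000_000_000   # hypothetical $2B aircraft carrier
drone_cost = 10_000            # per cheap autonomous drone
swarm_size = 1_000

swarm_cost = drone_cost * swarm_size
print(f"Swarm cost: ${swarm_cost:,}")                    # $10,000,000
print(f"Cost asymmetry: {carrier_cost // swarm_cost}:1") # 200:1
```

Even if defenses destroyed 99% of the swarm, the attacker could re-run the exchange many times before approaching the defender's loss — which is why the text calls this a collapse of defense economics.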
China is leading in drone swarm technology. In 2022, the PLA demonstrated a swarm of 200+ drones operating autonomously — sharing targeting data, coordinating attacks, and adapting to losses without human intervention. The US Replicator initiative is explicitly designed to counter this capability with American swarms.
The result is an autonomous weapons arms race with no rules, no treaties, and no brakes. Both sides are developing AI-controlled swarms of lethal autonomous weapons, each driving the other to remove human oversight in the name of speed. In a swarm-vs-swarm engagement, the side with a “human in the loop” loses because human decision-making is too slow. The logic of competition drives both sides toward full autonomy — machines fighting machines, at machine speed, with human life as collateral.
The Ideology of Autonomous Killing
The new defense tech industry is driven by a specific ideology — one that views AI-powered warfare as not just inevitable but desirable. Peter Thiel, whose Palantir provides AI surveillance tools to the CIA and military, has argued that technology companies have a moral obligation to work with the defense establishment. Palmer Luckey, founder of Anduril, explicitly set out to create the “next great defense company” because he believed Silicon Valley's reluctance to work with the military was naive.
The argument goes: if the US doesn't develop AI weapons, China will, and a world where Chinese AI dominates is worse than a world where American AI dominates. This is the exact logic of every arms race in history — and it always leads to the same place: proliferation, escalation, and catastrophe.
The defense tech founders present themselves as patriotic disruptors. But their companies are building tools of unprecedented lethality with minimal oversight, selling them to governments with poor human rights records, and generating enormous profits in the process. Anduril's surveillance towers, deployed on the US-Mexico border, use the same AI technology sold to foreign militaries. Palantir's Gotham platform, used by ICE for immigration enforcement, uses the same algorithms used for military targeting. The line between domestic surveillance and foreign killing is becoming invisible.
The Libertarian Case: No Algorithm Has Due Process
The Fifth Amendment to the United States Constitution states: “No person shall be... deprived of life, liberty, or property, without due process of law.” An AI system that generates kill lists based on phone records and behavioral patterns is the antithesis of due process. There is no trial, no defense, no jury, no appeal. An algorithm assigns a number, a human rubber-stamps it in 20 seconds, and a bomb falls.
The government claims these protections don't apply to foreign nationals in war zones. But the US killed American citizens Anwar al-Awlaki and, two weeks later, his 16-year-old son Abdulrahman in separate drone strikes, with no trial for either. The legal reasoning — that “due process” doesn't require “judicial process” — is an Orwellian destruction of the concept. If the executive branch can unilaterally decide that due process has been satisfied, then the right is meaningless.
AI weapons accelerate this erosion. When a human decided to kill someone, there was at least the possibility of moral reflection, doubt, mercy. When a machine generates a target list of 37,000 people and a human spends 20 seconds “reviewing” each one, the decision to kill has been outsourced to an algorithm. The human is decoration.
The libertarian principle is clear: no government should have the power to kill people based on algorithmic predictions. Not foreign governments, and certainly not our own. If a person cannot be arrested, charged, tried, and convicted through a transparent legal process, they should not be killed. Full stop. There is no exception in the Bill of Rights for “but the computer said so.”
Furthermore, the concentration of AI weapons development in a handful of private companies — funded by the Pentagon, insulated from competition by classified contracts, and subject to minimal oversight — is the military-industrial complex on steroids. Eisenhower warned of the “unwarranted influence” of the defense industry. He could not have imagined a world where private companies build autonomous killing machines that governments deploy with no accountability to anyone.
What Comes Next
The trajectory is clear and accelerating. Within a decade, autonomous weapons will be ubiquitous on the battlefield. AI will select targets, plan operations, coordinate attacks, and make life-or-death decisions at speeds no human can match. The “human in the loop” will become a “human on the loop” (monitoring but not controlling) and eventually a “human out of the loop” entirely.
The risks are existential. An AI arms race creates pressure to remove safeguards in the name of speed. Autonomous weapons lower the barrier to conflict — it's easier to start a war when your own soldiers don't die. Proliferation is inevitable — today's cutting-edge military AI will be tomorrow's consumer drone with a payload. And the concentration of lethal AI in the hands of governments and corporations creates a power asymmetry that threatens the foundations of individual liberty.
The choice is not between AI weapons and no AI weapons — that ship has sailed. The choice is between a world where autonomous killing is regulated, transparent, and accountable, and a world where it is secret, unaccountable, and unlimited. Right now, we are sprinting toward the latter.
Sources
- +972 Magazine / Local Call, “'Lavender': The AI Machine Directing Israel's Bombing Spree in Gaza” (April 2024)
- UN Panel of Experts on Libya, Final Report S/2021/229 (March 2021)
- DOD Directive 3000.09, “Autonomy in Weapon Systems” (updated 2023)
- Congressional Research Service, “Defense Primer: US Policy on Lethal Autonomous Weapon Systems” (2024)
- International Committee of the Red Cross, “Autonomous Weapon Systems: Technical, Military, Legal and Humanitarian Aspects” (2014)
- Paul Scharre, *Army of None: Autonomous Weapons and the Future of War* (2018)
- Paul Scharre, *Four Battlegrounds: Power in the Age of Artificial Intelligence* (2023)
- Campaign to Stop Killer Robots reports and briefings
- DARPA program announcements and budget justifications
- OpenAI usage policy changes (January 2024)
- Deputy Secretary of Defense Kathleen Hicks, Replicator Initiative announcement (August 2023)