In-Depth Analysis

AI Weapons

Autonomous Killing Machines With No Accountability

In Gaza, an AI system called Lavender generated a list of 37,000 Palestinians marked for assassination. Human “review” of each target took approximately 20 seconds. The acceptable civilian casualty ratio: 15–20 dead civilians per suspected low-ranking militant. In Libya, a Turkish-made autonomous drone may have killed a human being without any human giving the order — if so, the first time in history a machine independently decided to end a human life. In Silicon Valley, OpenAI quietly dropped its ban on military use. The future of war is here, and it has no conscience, no due process, and no accountability.

37,000

Palestinians Flagged by Lavender AI

For potential assassination

20 sec

Human Review Per Target

Rubber-stamping machine decisions

$1.8B+

Pentagon AI Spending (FY2024)

Official — real number higher

0

International Laws on AI Weapons

No treaty, no convention, no rules

AI Weapons Systems

Project Maven

Pentagon / Google (2017), then Palantir, Microsoft, Amazon · Active and expanding

Officially the "Algorithmic Warfare Cross-Functional Team," Project Maven uses AI to analyze drone surveillance footage — identifying people, vehicles, buildings, and patterns of behavior. It was the Pentagon's first major AI initiative and the template for military AI integration.

Controversy

In 2018, Google employees discovered their company was working on Project Maven. Over 4,000 Google employees signed a petition demanding the company withdraw. A dozen engineers resigned. Google allowed the contract to expire — but the Pentagon simply moved to other contractors. The AI continued; only the logo changed.

Implications: Maven demonstrated that AI could process surveillance data faster and more comprehensively than humans. It also showed the Pentagon that Silicon Valley's ethical objections were a speed bump, not a roadblock.

Lavender

Israel Defense Forces · Used in Gaza (2023–present)

An AI targeting system that generates kill lists of suspected Hamas and Palestinian Islamic Jihad operatives. According to +972 Magazine and Local Call investigations based on Israeli intelligence sources, Lavender marked approximately 37,000 Palestinians as suspected militants for potential assassination.

Controversy

Sources revealed that the IDF accepted a casualty ratio of 15–20 civilian deaths per suspected low-ranking militant target. For senior commanders, the acceptable ratio was even higher — potentially 100+ civilians. The AI system generated targets; human review was perfunctory, typically under 20 seconds per target.

Implications: Lavender represents the automation of mass killing. A machine generates a kill list. A human rubber-stamps it in seconds. Bombs flatten apartment buildings. The AI produces a new list. The cycle continues.

Gospel (Habsora)

Israel Defense Forces · Used in Gaza (2023–present)

A companion system to Lavender, Gospel generates recommendations for physical targets — buildings, structures, and infrastructure associated with suspected militants. It dramatically accelerated the pace of target generation, producing targets faster than any previous system.

Controversy

Former Israeli intelligence officials told +972 Magazine that Gospel was a "mass assassination factory." The system produced 100 targets per day — compared to roughly 50 per year in previous operations. This unprecedented pace of targeting contributed to the destruction of over 70% of northern Gaza's residential buildings.

Implications: Gospel transformed war from a series of deliberate decisions into an industrial process. The bottleneck was no longer intelligence analysis but bomb supply and aircraft availability.

Kargu-2 Autonomous Drone

STM (Turkish defense company) · Deployed in Libya (2020), combat use confirmed

A small rotary-wing drone designed to autonomously locate and attack targets using facial recognition and AI. Weighing just 7 kg, it can be launched by hand, fly autonomously to a target area, identify targets matching pre-programmed parameters, and attack without human intervention.

Controversy

A March 2021 UN Panel of Experts report on Libya documented that Kargu-2 drones "were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true 'fire, forget and find' capability." If confirmed, this represents the first known instance of an autonomous weapon killing without human input.

Implications: The Kargu-2 incident may have crossed the Rubicon. A machine may have independently decided to kill a human being. No international law governs this. No accountability framework exists.

Replicator Initiative

US Department of Defense · Launched August 2023, deployment ongoing

Deputy Secretary of Defense Kathleen Hicks announced the Replicator initiative to field "multiple thousands" of small, cheap, autonomous drones and unmanned systems within 18–24 months to counter China's military mass. The goal: overwhelm adversaries with swarms of AI-enabled disposable systems.

Controversy

Replicator explicitly embraces quantity over quality — flooding battlefields with cheap autonomous systems that can overwhelm air defenses and military formations through sheer numbers. The program envisions AI-coordinated drone swarms acting as a unified force with minimal human oversight.

Implications: Replicator represents the industrialization of autonomous warfare. Thousands of AI-enabled kill vehicles, produced cheaply and deployed en masse. The logical endpoint is a world where lethal autonomous weapons are as ubiquitous and uncontrollable as small arms.

DARPA AI Programs

Defense Advanced Research Projects Agency · Multiple active programs

DARPA funds dozens of AI-related military programs: OFFSET (swarm tactics for 250+ drones), ACE (AI-controlled fighter jets — successfully flew an AI F-16 in combat maneuvers, 2024), ALIAS (autonomous aircraft systems), MCS (AI-assisted military decision-making), and many classified programs.

Controversy

In test flights conducted in late 2023 and revealed in 2024, DARPA's X-62A VISTA (a modified F-16) flew AI-controlled dogfighting maneuvers against a human pilot — the first known AI-versus-human dogfight in a real tactical aircraft. Air Force Secretary Frank Kendall later flew in the AI-piloted jet to demonstrate confidence in the system.

Implications: DARPA is systematically removing humans from every aspect of warfare — targeting, piloting, decision-making, logistics. The vision is clear: AI-driven warfare where machines fight machines, with humans increasingly marginalized from the loop.

Lavender: The AI That Chose Who Dies in Gaza

In April 2024, +972 Magazine and Local Call published a devastating investigation based on interviews with six Israeli intelligence officers who worked with the Lavender system during the Gaza war. Their revelations are among the most disturbing documents in the history of automated warfare.

How Lavender works: The system analyzes surveillance data — phone records, social media activity, location data, communication patterns, group affiliations — and assigns each person in Gaza a numerical score from 1 to 100 indicating the probability they are a militant. Anyone above a certain threshold is automatically added to the kill list.
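
To make the mechanics concrete, here is a minimal sketch of that threshold logic, assuming a scored-profile data structure. The identifiers, scores, and cutoff value below are illustrative inventions, not details from the +972 reporting.

```python
# Minimal sketch of score-threshold targeting as described in the reporting.
# All identifiers, scores, and the cutoff are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Profile:
    person_id: str
    score: int  # model-assigned 1-100 rating: "probability of being a militant"

def build_target_list(profiles: list[Profile], threshold: int = 80) -> list[Profile]:
    """Everyone at or above the cutoff is flagged automatically;
    no human judgment enters this step."""
    return [p for p in profiles if p.score >= threshold]

population = [Profile("A-001", 91), Profile("A-002", 42), Profile("A-003", 83)]
print([p.person_id for p in build_target_list(population)])  # ['A-001', 'A-003']
```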

The human rubber stamp: Sources described the “human review” process as a formality. An officer would verify that the target was male and that the system hadn't flagged any obvious errors. This process took approximately 20 seconds per target. One source said: “I would invest 20 seconds for each target at this stage, and do dozens of them every day.”

The civilian cost: Intelligence sources revealed the IDF accepted that for every low-ranking Hamas operative killed, up to 15–20 civilians could be killed as “collateral damage.” For senior commanders, the ratio was even higher. One source said the policy was to bomb suspected militants in their homes — at night, when their families were present — because it was easier to locate them there.

The error rate: Sources estimated Lavender had an error rate of approximately 10% — meaning roughly 1 in 10 people flagged as militants were misidentified civilians. With 37,000 targets, that means approximately 3,700 people were marked for death by mistake. The system's errors were considered acceptable.

How AI Killing Machines Actually Work

AI weapons systems are not science fiction — they are operational today and built on commercially available technologies. Understanding how they work reveals both their capabilities and their dangers.

Step 1: Data Collection (Surveillance Layer)

Sources: Satellite imagery, drone cameras, cell phone intercepts, social media scraping, financial transaction monitoring, facial recognition databases, vehicle tracking, communication metadata

Volume: The US intelligence community processes 20+ petabytes of data daily — equivalent to the entire Library of Congress every 14.4 seconds

This creates what military analysts call "the tyranny of data" — so much information that human analysis is impossible, necessitating AI filtering and prioritization

Step 2: Pattern Recognition (Analysis Layer)

Behavioral Analysis: AI identifies "patterns of life" — when someone wakes up, where they go, who they meet, how they communicate. Deviations from normal patterns trigger alerts.

Association Networks: Machine learning algorithms map relationships between people based on communication, co-location, financial transactions. If you talk to a militant, the AI flags you.

The Lavender system reportedly analyzed features like: being in a WhatsApp group with known militants, changing phones frequently, changing sleeping locations
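
As a sketch of how association-network flagging works in general (the contact graph and names below are invented; real systems fuse phone metadata, co-location, and financial links):

```python
# Illustrative guilt-by-association flagging: anyone within one hop of a
# seed identity in a contact graph gets flagged. The graph is invented.
from collections import defaultdict

contacts: defaultdict[str, set[str]] = defaultdict(set)
for a, b in [("seed_suspect", "neighbor"), ("neighbor", "cousin"),
             ("seed_suspect", "shopkeeper")]:
    contacts[a].add(b)
    contacts[b].add(a)  # contact is symmetric

def flag_one_hop(seeds: set[str]) -> set[str]:
    """Flag every direct contact of a seed -- talking to the wrong
    person is enough to be flagged."""
    flagged: set[str] = set()
    for seed in seeds:
        flagged |= contacts[seed]
    return flagged - seeds

print(sorted(flag_one_hop({"seed_suspect"})))  # ['neighbor', 'shopkeeper']
```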

Step 3: Scoring and Ranking (Targeting Layer)

Risk Scoring: AI assigns each person a numerical probability score (1–100) indicating the likelihood they are a combatant. Scores above a threshold automatically generate targeting recommendations.

Priority Ranking: System ranks targets by strategic value, accessibility, and collateral damage estimates. High-value targets get priority regardless of civilian casualties.

Sources told +972 that human reviewers became "quality assurance" rather than decision-makers — verifying the target was male and the system hadn't made obvious errors

Step 4: Autonomous Execution (Weapons Layer)

Platform Selection: AI chooses optimal weapons platform (drone, missile, artillery) based on target location, weather, air defenses, and civilian density.

Timing Algorithm: System determines optimal strike time, often selecting moments when target is home with family to ensure positive identification and eliminate escape.

The Gospel system reportedly recommended striking targets in their homes at night, when entire families would be present, because it was easier to locate them

The Error Cascade Problem

Each layer in the AI kill chain amplifies errors from the previous layer. Bad surveillance data leads to bad pattern recognition. Bad patterns lead to bad targeting. Bad targeting leads to dead civilians. A 10% error rate at each of the four stages compounds to a roughly 34% chance of error in the final targeting decision. With 37,000 targets, that means over 12,000 potential misidentifications.
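
The compounding arithmetic is easy to verify (treating stage errors as independent is a simplifying assumption):

```python
# Compound error across four independent stages, each with a 10% error rate.
stages = 4
per_stage_error = 0.10
compound_error = 1 - (1 - per_stage_error) ** stages

print(f"{compound_error:.1%}")         # 34.4%
print(round(37_000 * compound_error))  # 12724 potential misidentifications
```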

The Global AI Weapons Race

While the US and Israel dominate headlines, AI weapons development is a global phenomenon. At least 30 countries are developing some form of lethal autonomous weapons, and the technology is proliferating rapidly.

China: The Swarm Leader

Programs: Sharp Sword stealth combat drone, WZ-8 hypersonic reconnaissance drone, swarm systems demonstrated with 200+ coordinated drones

Strategy: "Intelligentized warfare" doctrine envisions AI-controlled systems operating at speeds no human can match. Focus on overwhelming US defenses with cheap, expendable autonomous systems.

Exports: China is the world's largest drone exporter, selling AI-enabled systems to Pakistan, Nigeria, Saudi Arabia, UAE, and others. No human rights restrictions on sales.

In 2019, China's Ziyan Blowfish A3 demonstrated autonomous target identification and tracking. The promotional video showed it hunting and killing without human input.

Russia: The Kalashnikov of Killer Robots

Programs: Poseidon autonomous nuclear torpedo, Lancet loitering munitions (extensively used in Ukraine), S-70 Okhotnik stealth combat drone

Combat Use: Lancet drones in Ukraine operate with increasing autonomy, using AI to identify and attack Ukrainian armor with minimal human oversight.

Exports: Russia sells autonomous systems to allies including Iran, Syria, and Wagner mercenaries. No export controls or ethical guidelines.

Kalashnikov Concern, maker of the AK-47, now manufactures AI-enabled weapons. Their motto: "Protecting peace through strength." The AK-47 killed more people than any weapon in history.

South Korea: The DMZ Testing Ground

Programs: SGR-A1 autonomous sentry guns deployed along DMZ, KAIST AI combat systems, KAI KUS-FC autonomous fighter concept

Rationale: Facing North Korea's 1.2 million-man army, South Korea sees AI weapons as force multipliers. Autonomous systems can respond faster than human soldiers.

The SGR-A1 sentry guns can autonomously detect, track, and engage human targets up to 3km away. They are reportedly configured for "human in the loop" operation — but the loop can be removed.

India: The Border AI War

Programs: Autonomous Border Surveillance System along Pakistan/China borders, DRDO swarm drone development, AI-enabled missile defense systems

Deployment: AI surveillance systems along the Line of Actual Control with China automatically detect troop movements and recommend responses.

India's border AI systems have triggered multiple false alarms, nearly escalating tensions with nuclear-armed neighbors. Machines don't understand diplomacy.

Turkey: Export Success Story

Programs: Bayraktar TB2 combat drones (semi-autonomous), Kargu loitering munitions, ALPAGU micro attack drones

Combat Record: Turkish drones devastated Syrian/Russian forces (2020), Armenian forces in Nagorno-Karabakh (2020), and have been used in Libya, Ethiopia, and Ukraine.

Business Model: Turkey sells advanced drones to countries excluded from US/European arms markets. No human rights conditions, competitive prices, proven effectiveness.

The Kargu-2 used in Libya may be the first confirmed kill by a fully autonomous weapon. Turkey neither confirms nor denies this historic moment.

The Economics of AI Death

Autonomous weapons represent the ultimate economic disruption: they make killing cheaper, faster, and scalable. This economic logic drives adoption regardless of ethical concerns.

Cost Comparison: Human vs. Machine

US soldier (full lifecycle cost): $4.2M
Chinese Ziyan Blowfish drone: $15,000
Turkish Kargu-2 loitering munition: $70,000
Ratio advantage: 280:1

One soldier's lifecycle cost could buy 280 Blowfish-class autonomous weapons. The math is compelling for defense ministers.

Speed Comparison: Analysis to Strike

Traditional targeting cycle: Hours–Days
Human-supervised AI: Minutes
Fully autonomous system: Seconds
Speed advantage: 1,000x+

In high-intensity conflicts, speed equals survival. Humans are too slow for modern warfare.

Market Size: The AI Death Economy

Global Military AI Market: $18.8B (2024) → $45.5B (2030)

Growth Rate: 15.8% CAGR

Autonomous Weapons Segment: $4.1B (2024) → $18.7B (2030)

Growth Rate: 29.1% CAGR

Key Players: Northrop Grumman, Lockheed Martin, BAE Systems, Raytheon, Thales

New Entrants: Palantir, Anduril, Shield AI, etc.

Stock analysts describe autonomous weapons as the defense industry's "next growth vector." Death is profitable, especially when it's scalable.
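
The projections above are internally consistent; a quick check compounds each 2024 base at its stated CAGR for six years:

```python
# Sanity check: compound the 2024 market size at the stated CAGR to 2030.
def project(base_billions: float, cagr: float, years: int = 6) -> float:
    return base_billions * (1 + cagr) ** years

print(f"${project(18.8, 0.158):.1f}B")  # ~$45.3B vs. the stated $45.5B
print(f"${project(4.1, 0.291):.1f}B")   # ~$19.0B vs. the stated $18.7B
```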

The Moral Calculus of Machine Killing

The introduction of autonomous weapons forces us to confront fundamental questions about the nature of killing, moral agency, and human dignity that philosophy has grappled with for centuries.

The Kantian Objection: Instrumentalization

Immanuel Kant's categorical imperative holds that humans must never be treated merely as means to an end, but always as ends in themselves. When an AI system reduces human life to a probability score and decides to kill based on algorithmic calculations, it treats humans as objects to be processed rather than moral agents deserving respect.

A human soldier, even in killing, recognizes the humanity of their target through the act of moral choice. A machine has no capacity for such recognition.

The Utilitarian Defense: Precision Killing

Defenders argue AI weapons could reduce civilian casualties by making more precise targeting decisions than emotional, stressed, or biased human soldiers. If an AI can distinguish combatants from civilians better than humans, utilitarian logic suggests autonomous weapons could save lives.

This argument collapses when confronted with actual deployment: Lavender's 10% error rate and the acceptance of 15–20 civilian deaths per militant target show that precision is subordinated to operational tempo.

The Just War Problem: Discrimination and Proportionality

Just War Theory requires combatants to distinguish between legitimate military targets and protected persons (civilians). It also requires proportionality — that military advantage gained must outweigh harm to civilians. AI systems like Lavender systematically violate both principles by accepting high error rates and predetermined casualty ratios.

Traditional soldiers can show mercy, recognize surrender, or abort attacks when civilians are present. Current AI systems lack this capacity for moral judgment.

The Rights-Based Objection: Right to Life

If there is a fundamental human right to life, does that right include being killed only by another human being capable of moral judgment? Many philosophers and legal scholars argue that the right to life includes the right not to be killed by a machine programmed by someone else.

This principle is recognized in domestic law — no country allows robots to execute criminals — but ignored in warfare where human rights protections are weaker.

The Accountability Gap: When a Machine Kills, Who Is Guilty?

When a soldier kills a civilian, there is a chain of accountability: the soldier, their commanding officer, the rules of engagement, the political leadership. When an AI system selects a target and a drone autonomously executes the strike, who is responsible?

The programmer? They wrote the code but didn't choose the target. The commander? They authorized the system but didn't make the specific decision. The AI? It has no moral agency, no capacity for guilt, no understanding of what it means to kill.

This is the “accountability gap” — and it is by design. Autonomous weapons diffuse responsibility across so many actors and algorithms that no single person can be held accountable. The military calls this “human in the loop,” but when “the loop” consists of 20 seconds of rubber-stamping machine decisions, the human is a fig leaf, not a safeguard.

The Accountability Chain Breakdown

AI Developer: "We built a tool, we didn't choose targets"
Military Commander: "We authorized the system, not specific strikes"
Human Operator: "I followed the AI's recommendations"
Political Leader: "We authorized force, not individual decisions"
Legal System: "No clear jurisdiction or precedent for AI killings"

Result: Everyone is responsible, therefore no one is responsible.

The International Committee of the Red Cross has warned that autonomous weapons create an “accountability vacuum” incompatible with international humanitarian law. Under the laws of war, someone must be responsible for every use of lethal force. If no one is responsible, then the killing is by definition unlawful. But no court has ever prosecuted an AI-enabled killing. No government has acknowledged the problem. The machines keep killing.

The Proliferation Problem: When Everyone Has Killer Robots

The most dangerous aspect of AI weapons isn't their current capabilities — it's their inevitable proliferation. Unlike nuclear weapons, which require rare materials and complex infrastructure, AI weapons are built from commercial technologies that are rapidly becoming ubiquitous.

Scenario 1: The Terrorist Swarm (2027-2030)

A terrorist organization purchases 100 commercial drones ($50,000 total), adds facial recognition software (open source), explosive payloads ($500 each), and AI coordination systems (commercially available). The swarm attacks a music festival, using facial recognition to target specific ethnic groups.

Prevention: Nearly impossible. All components are legal and commercially available. Detection requires sophisticated counter-AI systems that don't exist.

Scenario 2: The Proxy War Escalation (2025-2028)

China provides autonomous weapons to Iran, which deploys them against US forces in Iraq. The US retaliates by giving autonomous systems to Taiwan. Each side escalates with more advanced AI weapons until human control is completely removed from the conflict cycle.

Risk: Autonomous systems fighting at machine speed can escalate conflicts faster than humans can de-escalate them.

Scenario 3: The Authoritarian Control System (2030+)

Governments deploy autonomous weapons domestically for "security." AI systems continuously monitor populations, automatically identifying "dissidents" based on behavioral patterns, and deploying lethal force without human authorization. Protest becomes impossible.

Precedent: China's Xinjiang surveillance system already uses AI to monitor Uyghurs. Adding weapons is a software upgrade.

Scenario 4: The Flash War (2030+)

Two nations' AI defense systems misinterpret each other's actions as hostile. Autonomous weapons engage automatically to "defend" against perceived threats. A full-scale war begins and ends before human leaders can intervene — decided entirely by machines in minutes.

Historical parallel: In 1995, Russia nearly launched nuclear weapons after mistaking a scientific rocket for a US submarine-launched ballistic missile. Only human hesitation prevented war.

The LAWS Debate: Why There Are No Rules

Since 2014, the UN Convention on Certain Conventional Weapons (CCW) has been debating Lethal Autonomous Weapons Systems (LAWS). After a decade of discussion, the result has been nothing. No treaty. No binding regulations. No agreed-upon definitions. Not even a shared understanding of what “autonomous” means.

The reason is simple: the countries developing AI weapons don't want rules. The United States, Russia, Israel, China, the UK, South Korea, and Turkey have all blocked meaningful progress on LAWS regulation. The US position, articulated in DOD Directive 3000.09, requires “appropriate levels of human judgment” in autonomous weapons — but doesn't define what “appropriate” means, and the directive is policy, not law.

Over 100 countries have called for some form of regulation. Nobel Peace laureates, AI researchers (including Geoffrey Hinton, Yoshua Bengio, and Stuart Russell), and organizations like the International Committee of the Red Cross have warned of the dangers. In 2023, the Campaign to Stop Killer Robots — a coalition of 250+ organizations in 70 countries — called for a preemptive ban on autonomous weapons. The major military powers dismissed it.

The historical parallel is chilling. Before World War I, some argued that machine guns were too terrible to use. Before WWII, some argued that bombing cities was beyond the pale. Before the atomic age, some argued that nuclear weapons would never be built. In each case, the weapons were built, deployed, and normalized before meaningful regulation was attempted. The same pattern is playing out with AI weapons — and this time, the weapons improve themselves.

Silicon Valley's Military Turn

The tech industry's brief flirtation with ethical boundaries is over. After Google's Project Maven controversy in 2018, the industry appeared to be drawing lines. Five years later, those lines have been erased. Every major tech company is now competing for Pentagon contracts, and a new generation of “defense tech” startups has emerged with no ethical qualms whatsoever.

Palantir Technologies

Founded by Peter Thiel · $50B+ (2024)

Gotham platform used by CIA, NSA, ICE, military. Provides AI-driven intelligence analysis, targeting support, and surveillance. CEO Alex Karp has explicitly embraced the military mission: "We believe the most effective way to protect civil liberties is to make sure the good guys win."

Anduril Industries

Founded by Palmer Luckey (Oculus VR) · $14B+ (2024)

AI-powered autonomous drones (Altius, Ghost), surveillance towers (Lattice AI), counter-drone systems. Explicitly founded to be the "next great defense company." Lattice AI creates autonomous mesh networks of sensors and weapons.

Shield AI

Founded by Brandon Tseng (Navy SEAL) · $2.7B+ (2023)

Autonomous military drones that can fly and fight without GPS or communications. V-BAT (autonomous vertical takeoff drone) and Hivemind (AI pilot). Demonstrated autonomous building-clearing operations.

Scale AI

Founded by Alexandr Wang · $14B+ (2024)

Provides AI training data for Pentagon programs. The "picks and shovels" of military AI — training the algorithms that power autonomous weapons. Major contracts with DOD, intelligence community.

OpenAI

Founded by Sam Altman · $157B+ (2025)

Quietly dropped its ban on military use in January 2024. Now works with the Pentagon and defense contractors. The company that once warned AI could be "an existential threat to humanity" is now helping build military AI systems.

Microsoft

Founded by Bill Gates · $3T+

IVAS (Integrated Visual Augmentation System) for Army: $21.9B contract for AR combat goggles. Azure Government cloud for classified workloads. Nuance AI for military communications. HoloLens combat systems.

💡 Did You Know: OpenAI's Military Ban Reversal

OpenAI was founded in 2015 with an explicit mission to ensure AI benefits “all of humanity.” Its usage policies originally prohibited military applications. Then, quietly, in January 2024, OpenAI updated its terms of service to remove the ban on “military and warfare” use.

The company now works with the Pentagon and defense contractors. An OpenAI spokesperson clarified that the company still prohibits using its tools to “develop or use weapons” or “harm others” — but military applications like intelligence analysis, logistics, and communications support are now permitted.

The irony is staggering: the company whose CEO, Sam Altman, has testified before Congress about the “existential risk” of AI and called for government regulation is now providing AI tools to the world's most powerful military. The company that warned AI could destroy humanity is helping build military AI systems. The profit motive won.

Drone Swarms: The Future of War Is Cheap and Disposable

The Pentagon's Replicator initiative represents a paradigm shift in military thinking. Instead of expensive, exquisite platforms like $100M F-35s and $13B aircraft carriers, the future of war may be thousands of cheap, AI-enabled, disposable autonomous systems — drone swarms that overwhelm defenses through sheer numbers.

The concept is simple: a $2 billion aircraft carrier can be sunk by a swarm of 1,000 drones costing $10,000 each ($10 million total). An air defense system designed to track and engage dozens of targets is overwhelmed by hundreds of autonomous drones attacking simultaneously from different directions. The economics of defense collapse.
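
The cost-exchange arithmetic from the paragraph above, made explicit (using the article's own round figures):

```python
# Cost-exchange ratio: a $2B carrier vs. a 1,000-drone swarm at $10k each.
carrier_cost = 2_000_000_000
swarm_cost = 1_000 * 10_000

print(f"Swarm cost: ${swarm_cost:,}")                     # $10,000,000
print(f"Exchange ratio: {carrier_cost // swarm_cost}:1")  # 200:1, attacker's favor
```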

China is leading in drone swarm technology. In 2022, the PLA demonstrated a swarm of 200+ drones operating autonomously — sharing targeting data, coordinating attacks, and adapting to losses without human intervention. The US Replicator initiative is explicitly designed to counter this capability with American swarms.

The result is an autonomous weapons arms race with no rules, no treaties, and no brakes. Both sides are developing AI-controlled swarms of lethal autonomous weapons, each driving the other to remove human oversight in the name of speed. In a swarm-vs-swarm engagement, the side with a “human in the loop” loses because human decision-making is too slow. The logic of competition drives both sides toward full autonomy — machines fighting machines, at machine speed, with human life as collateral.

The Ideology of Autonomous Killing

The new defense tech industry is driven by a specific ideology — one that views AI-powered warfare as not just inevitable but desirable. Peter Thiel, whose Palantir provides AI surveillance tools to the CIA and military, has argued that technology companies have a moral obligation to work with the defense establishment. Palmer Luckey, founder of Anduril, explicitly set out to create the “next great defense company” because he believed Silicon Valley's reluctance to work with the military was naive.

The argument goes: if the US doesn't develop AI weapons, China will, and a world where Chinese AI dominates is worse than a world where American AI dominates. This is the exact logic of every arms race in history — and it always leads to the same place: proliferation, escalation, and catastrophe.

The defense tech founders present themselves as patriotic disruptors. But their companies are building tools of unprecedented lethality with minimal oversight, selling them to governments with poor human rights records, and generating enormous profits in the process. Anduril's surveillance towers, deployed on the US-Mexico border, use the same AI technology sold to foreign militaries. Palantir's Gotham platform, used by ICE for immigration enforcement, uses the same algorithms used for military targeting. The line between domestic surveillance and foreign killing is becoming invisible.

The Libertarian Case: No Algorithm Has Due Process

The Fifth Amendment to the United States Constitution states: “No person shall be... deprived of life, liberty, or property, without due process of law.” An AI system that generates kill lists based on phone records and behavioral patterns is the antithesis of due process. There is no trial, no defense, no jury, no appeal. An algorithm assigns a number, a human rubber-stamps it in 20 seconds, and a bomb falls.

The government claims these protections don't apply to foreign nationals in war zones. But the US killed American citizens Anwar al-Awlaki and, two weeks later, his 16-year-old son in drone strikes with no trial. The legal reasoning — that “due process” doesn't require “judicial process” — is an Orwellian destruction of the concept. If the executive branch can unilaterally decide that due process has been satisfied, then the right is meaningless.

AI weapons accelerate this erosion. When a human decided to kill someone, there was at least the possibility of moral reflection, doubt, mercy. When a machine generates a target list of 37,000 people and a human spends 20 seconds “reviewing” each one, the decision to kill has been outsourced to an algorithm. The human is decoration.

The libertarian principle is clear: no government should have the power to kill people based on algorithmic predictions. Not foreign governments, and certainly not our own. If a person cannot be arrested, charged, tried, and convicted through a transparent legal process, they should not be killed. Full stop. There is no exception in the Bill of Rights for “but the computer said so.”

Furthermore, the concentration of AI weapons development in a handful of private companies — funded by the Pentagon, insulated from competition by classified contracts, and subject to minimal oversight — is the military-industrial complex on steroids. Eisenhower warned of the “unwarranted influence” of the defense industry. He could not have imagined a world where private companies build autonomous killing machines that governments deploy with no accountability to anyone.

What Comes Next

The trajectory is clear and accelerating. Within a decade, autonomous weapons will be ubiquitous on the battlefield. AI will select targets, plan operations, coordinate attacks, and make life-or-death decisions at speeds no human can match. The “human in the loop” will become a “human on the loop” (monitoring but not controlling) and eventually a “human out of the loop” entirely.

The risks are existential. An AI arms race creates pressure to remove safeguards in the name of speed. Autonomous weapons lower the barrier to conflict — it's easier to start a war when your own soldiers don't die. Proliferation is inevitable — today's cutting-edge military AI will be tomorrow's consumer drone with a payload. And the concentration of lethal AI in the hands of governments and corporations creates a power asymmetry that threatens the foundations of individual liberty.

The choice is not between AI weapons and no AI weapons — that ship has sailed. The choice is between a world where autonomous killing is regulated, transparent, and accountable, and a world where it is secret, unaccountable, and unlimited. Right now, we are sprinting toward the latter.

Sources

  • +972 Magazine / Local Call, “'Lavender': The AI Machine Directing Israel's Bombing Spree in Gaza” (April 2024)
  • UN Panel of Experts on Libya, Final Report S/2021/229 (March 2021)
  • DOD Directive 3000.09, “Autonomy in Weapon Systems” (updated 2023)
  • Congressional Research Service, “Defense Primer: US Policy on Lethal Autonomous Weapon Systems” (2024)
  • International Committee of the Red Cross, “Autonomous Weapon Systems: Technical, Military, Legal and Humanitarian Aspects” (2014)
  • Paul Scharre, Army of None: Autonomous Weapons and the Future of War (2018)
  • Paul Scharre, Four Battlegrounds: Power in the Age of Artificial Intelligence (2023)
  • Campaign to Stop Killer Robots reports and briefings
  • DARPA program announcements and budget justifications
  • OpenAI usage policy changes (January 2024)
  • Deputy Secretary of Defense Kathleen Hicks, Replicator Initiative announcement (August 2023)