We must fight the invasion of the killer robots


“Killer robots” are taking over. Also known as autonomous weapons, these devices can, once activated, select and engage targets without further human intervention.

The technology has been with us for decades. The US Navy’s Phalanx Close-In Weapon System, in service since 1980, is an autonomous defense system that can detect and attack anti-ship missiles, helicopters and similar threats. In 2014, Russia announced that killer robots would guard five of its ballistic missile installations. Israel’s Harpy, deployed since the 1990s, is an autonomous weapon that can stay airborne for nine hours, identifying and picking off enemy targets at long range. In 2017, China introduced its own Harpy-type weapon.

But with the US planning to launch drones based on the X-47B in 2023, the invasion of killer robots is reaching a new level. These stealthy, jet-powered autonomous aircraft can refuel in midair and penetrate deep into well-defended territory to gather intelligence and strike enemy targets, making them a more aggressively lethal tool than anything we’ve seen before.

Is it ethical to deploy “killer robots”? The International Human Rights Clinic at Harvard Law School says no, arguing that artificially intelligent weapons fail to comply with the “principles of humanity” and the “dictates of public conscience” cited in the Geneva Conventions.

Aware of the resistance to killer robots, the US Department of Defense issued Directive 3000.09, which requires that weapons “be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” In practice, “appropriate” means a human operator must be either “in the loop” (i.e., controlling the weapon) or “on the loop” (i.e., supervising it), with the final say in taking human lives. As a result, the Navy currently operates the X-47B prototype only in a semiautonomous mode, always keeping a human operator involved.
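
The difference between these two oversight modes is easy to state in code. What follows is a minimal sketch in Python, purely for illustration: every name in it (EngagementRequest, Operator, fire) is invented for this example, and it models no real weapon-control system or API. It simply contrasts in-the-loop control, where nothing fires without explicit human approval, with on-the-loop supervision, where the system proceeds by default unless a human vetoes it within a time window.

```python
# A purely illustrative sketch of the two human-oversight modes described
# in Directive 3000.09. Every name here (EngagementRequest, Operator, fire)
# is invented for this example; no real weapon-control system or API is
# being depicted.

from dataclasses import dataclass
import time


@dataclass
class EngagementRequest:
    target_id: str
    threat_score: float  # 0.0 to 1.0, produced by the targeting system


class Operator:
    def authorize(self, request: EngagementRequest) -> bool:
        """In-the-loop: a human must explicitly approve each engagement."""
        answer = input(f"Engage {request.target_id} "
                       f"(threat {request.threat_score:.2f})? [y/N] ")
        return answer.strip().lower() == "y"

    def vetoed_within(self, request: EngagementRequest, seconds: float) -> bool:
        """On-the-loop: the system proceeds unless the supervising human
        intervenes within the window. (Toy stand-in: we only wait here;
        a real console would poll for an abort command.)"""
        print(f"Engaging {request.target_id} in {seconds:.0f}s unless vetoed...")
        time.sleep(seconds)
        return False  # no veto received in this sketch


def fire(request: EngagementRequest) -> None:
    print(f"Engaging {request.target_id}")


def engage_in_the_loop(request: EngagementRequest, operator: Operator) -> None:
    # Human controls the weapon: nothing fires without a positive decision.
    if operator.authorize(request):
        fire(request)


def engage_on_the_loop(request: EngagementRequest, operator: Operator,
                       veto_window: float = 5.0) -> None:
    # Human supervises the weapon: it fires by default, but the operator
    # retains the final say through the veto window.
    if not operator.vetoed_within(request, veto_window):
        fire(request)


if __name__ == "__main__":
    request = EngagementRequest(target_id="track-042", threat_score=0.91)
    engage_in_the_loop(request, Operator())
```

The crucial design point is the default. In the loop, human inaction means no engagement; on the loop, human inaction means engagement proceeds. That inversion is why the directive insists the human retain the final say.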

The pace of warfare is accelerating, driven by the growing use of computer technology. As the arms race continues, the potential for an unintended conflict rises with it. During the Cold War, the US and the Soviet Union came dangerously close to nuclear war on several occasions; only human judgment averted all-out Armageddon.

So, where does this leave us?

As I outlined in my book “Genius Weapons,” there are only three ways to ensure killer robots are kept in check:

1. Focus autonomous weapons on defense, not offense. In a defensive role, autonomous weapon systems could lower the probability of conflict. For example, if the United States deployed autonomous weapons capable of destroying any missile aimed at the US or its allies, a potential adversary would judge such an attack futile and avoid conflict.

