You might suppose that Hollywood is good at predicting the future. Indeed, Robert Wallace, head of the CIA’s Office of Technical Service and the U.S. equivalent of MI6’s fictional Q, has recounted how Russian spies would watch the latest Bond movie to see what technologies might be coming their way. This now applies to killer robots.
Hollywood’s continuing obsession with killer robots might therefore be of significant concern. The newest such movie is Apple TV’s forthcoming sex robot courtroom drama, Dolly. I never thought I’d write the phrase “sex robot courtroom drama,” but there you go. Based on a 2011 short story by Elizabeth Bear, the plot concerns a billionaire killed by a sex robot that asks for a lawyer to defend its murderous actions.
The real killer robots
Dolly is the latest in a long line of movies featuring killer robots – including HAL in Kubrick’s 2001: A Space Odyssey and Arnold Schwarzenegger’s T-800 robot in the Terminator series. Indeed, the conflict between robots and humans was central to the first feature-length science fiction film, Fritz Lang’s 1927 classic Metropolis. But almost all these movies get it wrong. Killer robots won’t be sentient humanoid robots with evil intent.
This might make for a dramatic storyline and a box office success, but such technologies are many decades, if not centuries, away. Indeed, contrary to recent fears, robots may never be sentient. It’s a much simpler technology we should be worrying about. And these technologies are starting to turn up on the battlefield today in places like Ukraine and Nagorno-Karabakh.
A war transformed
Movies that feature much simpler armed drones, like Angel Has Fallen (2019) and Eye in the Sky (2015), paint perhaps the most accurate picture of the real future of killer robots. On the nightly TV news, we see how modern warfare is being transformed by autonomous drones, tanks, ships, and submarines. These robots are, of course, a little more sophisticated than those you can buy in your local hobby store. But increasingly, the decisions to identify, track, and destroy targets are being handed over to their algorithms.
This is taking the world to a dangerous place, with a host of moral, legal, and technical problems. Such weapons will, for example, further upset our troubled geopolitical situation. We already see Turkey emerging as a significant drone power. And such weapons cross a moral red line into a terrible and terrifying world where unaccountable machines decide who lives and dies. Robot manufacturers are, however, starting to push back against this future.
A pledge not to weaponize
Last week, six leading robotics companies pledged they would never weaponize their robot platforms. They include Boston Dynamics, maker of the Atlas humanoid robot, which can perform an impressive backflip, and the Spot robot dog, which looks straight out of the Black Mirror TV series. This isn’t the first time robotics companies have spoken out about this worrying future. Five years ago, I organized an open letter signed by Elon Musk and more than 100 founders of other A.I. and robot companies calling for the United Nations to regulate the use of killer robots.
The letter even knocked the Pope into third place for a global disarmament award. However, the fact that leading robotics companies are pledging not to weaponize their robot platforms is more virtue signaling than anything else. We have, for example, already seen third parties mount guns on clones of Boston Dynamics’ Spot robot dog. And such modified robots have proven effective in action. For example, Iran’s top nuclear scientist was assassinated by Israeli agents using a robot machine gun in 2020.
Collective action to safeguard our future
We can only safeguard against this terrifying future if nations collectively take action, as they have with chemical, biological, and nuclear weapons. Such regulation won’t be perfect, just as the regulation of chemical weapons isn’t perfect. But it will prevent arms companies from openly selling such weapons and thus limit their proliferation. More important than any corporate pledge, therefore, is the fact that the UN Human Rights Council has recently and unanimously decided to explore the human rights implications of new and emerging technologies like autonomous weapons.
Several dozen nations have already called for the UN to regulate killer robots. The European Parliament, the African Union, the UN Secretary-General, Nobel peace laureates, church leaders, politicians, and thousands of A.I. and robotics researchers like myself have all called for regulation. Unfortunately, Australia has not, so far, supported these calls. But if you want to avoid this Hollywood future, you may want to take it up with your political representative next time you see them.
Provided by Toby Walsh, Professor of A.I. at UNSW, Research Group Leader, UNSW Sydney [Note: Materials may be edited for content and length.]