For centuries, technology has been an object of human fascination and fear. Today, we live in an age in which technology is inseparable from our daily lives.

As such, humans commonly imagine a world where technology dominates every facet of our lives: microchips in our brains giving us the processing power and speed of computers, robots doing our menial tasks and so on.

Movies like “I, Robot” and “The Terminator” and video games like “Mass Effect” depict futuristic societies in which autonomous robots gain self-awareness and consequently present some threat to humanity. Luckily for us, this will all remain science fiction for the time being.

A recent policy directive signed by the U.S. Department of Defense officially bars autonomous and semi-autonomous robots from making life-or-death decisions.

The directive comes after pressure from Human Rights Watch for the government to ban what it has termed “killer robots.”

In short, any robot on the battlefield will carry technology requiring human authorization before it can kill a human.

To some, this decision by the DOD may seem superfluous.

Why should some agency sign policy statements about something that isn’t even an issue yet?

The truth is that robots on the battlefield are no longer science fiction.

The United States’ wars in the Middle East have seen an unprecedented increase in the use of robotic weapons systems on land, in the air, and at sea.

The most visible of these robots are, ironically, the White House’s “secret” drones, which are used to assassinate suspected terrorists. The effectiveness of these drones is questionable and controversial, but that’s an issue for another editorial.

The point is that the U.S. military is increasingly spending billions of dollars researching and manufacturing robots. We have the technology, and robots are only getting better, faster, and stronger.

Human Rights Watch warns that the technology for “killer robots” could be available within 20 years. And if we think about it, this prospect represents everyone’s worst fear: a literal killing machine with no regard for human life working at the behest of the government.

Thus, the statement by the DOD promising never to leave the question of life and death up to a robot is good news for humans everywhere.

The reason the Nazis were so terrible is that they were “only following orders”; that is, they did what others told them to do, unquestioningly.

It is not unreasonable to call them robotic, and that is what makes their actions all the more reprehensible: Why didn’t they disregard their orders? Where was their humanity?

Clearly humans make mistakes. They kill other humans of their own accord or on the orders of other humans. But humans always possess the capability to say no.

Robots, on the other hand, never make mistakes in that sense: they execute their orders exactly as programmed, and they never say no. This is the difference between human killers and robotic killers.

Try as we might, we can never program a human sense of morality. The Department of Defense’s decision represents a rare showing of its own sense of morality, and, thanks to this directive, “I, Robot” will forever remain in the realm of science fiction.

William Hupp is a College Sophomore from Little Rock, Arkansas.

