Robots Are Now Learning To Refuse Human Orders

It's happening...

If you've ever watched a science fiction thriller, you already know the greatest fear about the future of robots: what happens when they start disobeying our commands.

But researchers at Tufts University are now teaching robots to do just that. A team from the university's Human-Robot Interaction Lab is showing off robots programmed not simply to follow any command, but to assess what will happen if the order is carried out.

For instance, in a video from Quartz, a robot can be seen receiving a command to walk forward off the edge of a table.

"But it is unsafe," the robot says, knowing that it would walk off the edge of the table.

"I will catch you," the researcher says.

"OK," the robot responds, before walking forward.

The video is meant to show off a kind of deductive reasoning that may give some people pause, but it has the robotics community excited.

According to United Press International, Gordon Briggs and Matthias Scheutz created what they call "felicity conditions": questions the robot asks itself to determine whether it should carry out a commanded task. Here are the questions the robot works through (a rough sketch of the sequence follows the list):

1. Knowledge: Do I know how to do X?

2. Capacity: Am I physically able to do X now? Am I normally physically able to do X?

3. Goal priority and timing: Am I able to do X right now?

4. Social role and obligation: Am I obligated based on my social role to do X?

5. Normative permissibility: Does it violate any normative principle to do X?
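
To make the sequence concrete, here is a minimal Python sketch of how such a checklist might be wired up. Everything in it, the class names, the canned responses, and the "will be caught" assurance flag, is a hypothetical illustration of the idea, not the Tufts lab's actual code.

```python
# A minimal sketch of the five felicity conditions described above.
# All names here are hypothetical illustrations of the idea.

from dataclasses import dataclass, field


@dataclass
class Command:
    action: str                                    # e.g. "walk forward"
    speaker: str                                   # who gave the order
    assurances: set = field(default_factory=set)   # e.g. {"will be caught"}


class FelicityChecker:
    """Runs a command through the five felicity conditions in order."""

    def __init__(self, known_actions, authorized_speakers):
        self.known_actions = known_actions
        self.authorized_speakers = authorized_speakers

    def check(self, cmd: Command) -> str:
        # 1. Knowledge: do I know how to do X?
        if cmd.action not in self.known_actions:
            return "I do not know how to do that."
        # 2. Capacity: am I physically able to do X?
        if not self.physically_able(cmd.action):
            return "I am not able to do that."
        # 3. Goal priority and timing: am I able to do X right now?
        if self.busy_with_higher_priority_goal():
            return "I cannot do that right now."
        # 4. Social role and obligation: does this speaker's role obligate me?
        if cmd.speaker not in self.authorized_speakers:
            return "You are not authorized to command that."
        # 5. Normative permissibility: does X violate a principle (here, a
        #    single safety rule), and has the human removed the violation?
        if self.is_unsafe(cmd.action) and "will be caught" not in cmd.assurances:
            return "But it is unsafe."
        return "OK."  # all conditions satisfied; execute the action

    # The predicates below stand in for real perception and planning.
    def physically_able(self, action): return True
    def busy_with_higher_priority_goal(self): return False
    def is_unsafe(self, action):
        # e.g. a forward walk that would carry the robot off the table edge
        return action == "walk forward"


if __name__ == "__main__":
    robot = FelicityChecker(known_actions={"walk forward"},
                            authorized_speakers={"researcher"})
    order = Command("walk forward", speaker="researcher")
    print(robot.check(order))               # -> "But it is unsafe."
    order.assurances.add("will be caught")  # "I will catch you."
    print(robot.check(order))               # -> "OK."
```

The usage at the bottom mirrors the table-edge exchange: the same command is first refused on safety grounds, then accepted once the human's assurance is added to what the robot knows.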
