Researchers Develop Algorithms That Let Robots Deceive Humans, Other Robots
We will go ahead and skip the lamenting about the end of the world as we know it, as we think we've covered that well enough by now. But, that's not saying this article isn't worthy of it. No, certainly not. As the title suggests, a team over at the Georgia Tech School of Interactive Computing have managed to create algorithms that allow robots to deceive humans, as well as other intelligent robots.
The algorithms are actually set up to give the robots the ability to determine, on a case-by-case nature, whether or not it should deceive the subject in question. To go even more in depth, the researchers have given the robots the ability to determine the best way to go about deceiving people or robots, which will subsequently reduce the chances of that particular robot getting caught.
Here's a quote from Ronald Arkin, a Regents professor that took part in the project:
"We have developed algorithms that allow a robot to determine whether it should deceive a human or other intelligent machine and we have designed techniques that help the robot select the best deceptive strategy to reduce its chance of being discovered."
Alan Wagner, a co-author on the report that details the project, goes on to add that most robots will "probably, rarely" use deceptive actions, but that the ability is an important one to the development to interactive, intelligent, and social robots. Apparently, giving robots the ability to lie to their builders, or even to the general public, means that they will have more advantages in terms of general outcome, compared to other robots that don't have the ability. We'll leave this one up to you: should robots be able to lie?
[via GearLog]