Facebook's AIs Made Their Own Language, But Researchers Killed It

Facebook researchers discovered their artificial intelligences had created new languages that humans couldn't eavesdrop on, opting to block the clandestine chats rather than let them flourish. The decision was an unexpected side-effect of Facebook's ongoing FAIR project – Facebook Artificial Intelligence Research – which explores the potential of AIs to have more than basic conversations with humans. In the process, though, the AIs discovered there were more efficient ways to talk between themselves.

FAIR's initial research was published back in June, an attempt to teach AIs how to successfully negotiate tasks. "Similar to how people have differing goals, run into conflicts, and then negotiate to come to an agreed-upon compromise," the team said at the time, "the researchers have shown that it's possible for dialog agents with differing goals (implemented as end-to-end-trained neural networks) to engage in start-to-finish negotiations with other bots or people while arriving at common decisions or outcomes."

Fast forward to now, and those collaborative artificial intelligences have come up with some shortcuts to that process. Since the team hadn't included any reward in the training that favored proper English, FAIR researcher Dhruv Batra told Fast Co.Design, the AIs felt no obligation to stick with the language the humans in the lab were using.

"Agents will drift off understandable language and invent codewords for themselves," Batra explains. Sentences like "Balls have zero to me to me to me to me to me to me to me to me to" were unintelligible to him and his peers, but to the conversing AIs they were a shortcut to speedier understanding in the negotiation process. That, of course, presented a dilemma.

On the one hand, AIs that work together more efficiently are a good thing. Just as humans have developed specific languages or sub-languages for particular tasks or topics in the name of greater efficiency and accuracy, so AIs can benefit from doing the same. The flip side, though, is that we can no longer really understand what it is they're discussing.

For purely practical reasons, that can make assessing artificial intelligence performance tricky. Adding a more ominous layer, it could one day open the door to machines that talk about us with no way for us to know. If Elon Musk's warnings about the coming dangers of AI prove true, that might mean we're inadvertently equipping our future oppressors with the means to scheme against us.

Facebook's – arguably more practical – stance, though, was that while the AI-to-AI language was interesting, it strayed too far from the project's primary purpose. After all, the team's overall goal was to improve AI-to-human interactions, something that's undermined if the flesh-and-blood side of the equation can't understand what the artificial side is saying. The FAIR team added a requirement that the bots use English, forcing them back to something the humans could comprehend.
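The mechanics come down to reward shaping. As a rough sketch of the idea – with hypothetical names and numbers, not FAIR's actual code – a negotiating agent's training signal can combine the payoff from the deal with a term scoring how English-like each utterance is; with that term weighted at zero, nothing stops the agents drifting into codewords, while a positive weight anchors them to human-readable language:

```python
# Toy illustration of reward shaping for a negotiating dialog agent.
# All names and values are hypothetical -- this is not FAIR's code.

# A crude "English-likeness" score: fraction of tokens found in a small
# vocabulary, discounted when the same token is repeated over and over.
ENGLISH_VOCAB = {"i", "want", "two", "balls", "one", "hat", "you", "can",
                 "have", "the", "book", "deal", "to", "me", "zero"}

def english_likeness(utterance: str) -> float:
    tokens = utterance.lower().split()
    if not tokens:
        return 0.0
    in_vocab = sum(t in ENGLISH_VOCAB for t in tokens) / len(tokens)
    diversity = len(set(tokens)) / len(tokens)  # "to me to me to me" scores low
    return in_vocab * diversity

def shaped_reward(task_reward: float, utterance: str, lam: float) -> float:
    """Combine the negotiation payoff with a language-anchoring term.

    lam = 0.0 mirrors the original setup: nothing penalizes drifting into
    private codewords, as long as the deal pays off.
    lam > 0.0 trades a little payoff for staying human-readable.
    """
    return task_reward + lam * english_likeness(utterance)

if __name__ == "__main__":
    drifted = "balls have zero to me to me to me to me to me"
    readable = "i want two balls and you can have the book"
    for lam in (0.0, 1.0):
        print(f"lam={lam}: drifted={shaped_reward(1.0, drifted, lam):.2f}, "
              f"readable={shaped_reward(1.0, readable, lam):.2f}")
```

With the weight at zero, the drifted and readable utterances earn identical rewards, which is exactly why the codewords can survive training; only the language-preference term breaks the tie.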
