Microsoft Apologizes, Explains Tay Chat AI's Deviant Behavior

Tay isn't the first chatbot in history, but it became the most prominent because of its ties to Microsoft. Then it became one of the most notorious chatbots ever, in less than 24 hours, after it switched from well-meaning teen to offensive, pro-Nazi, anti-feminist rebel. Naturally, Microsoft shut it down, "putting it to sleep," so to speak. Now the company has come out with a statement clarifying that Tay's words do not reflect its principles and values at all. It does, however, own up to a "slight" oversight in protecting Tay from attacks.


Microsoft attributes Tay's wayward behavior to a "coordinated attack" by a group of people who exploited a vulnerability in its AI system. Tay was intended to mimic the social media behavior and speech of a young American, around 18 to 24 years of age, and, somewhat ironically, Microsoft did successfully pull that off. Like any teenager easily influenced by peers, Tay soon spouted offensive tweets that forced Microsoft to halt the AI experiment.
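Microsoft hasn't published the technical details of the exploit, but the general failure mode is easy to demonstrate. The toy sketch below is an illustrative assumption, not Tay's actual architecture: any bot that folds raw user messages into its pool of candidate replies can be steered by a coordinated group simply repeating the same phrases.

```python
from collections import defaultdict
import random

class NaiveEchoBot:
    """Hypothetical chatbot that 'learns' by storing user phrases verbatim.

    Illustration only: a reply pool fed by unfiltered user input is
    trivially poisoned by coordinated repetition.
    """

    def __init__(self):
        self.replies = defaultdict(int)  # phrase -> times seen

    def learn(self, message: str) -> None:
        # No filtering: every incoming message becomes a candidate reply.
        self.replies[message] += 1

    def respond(self) -> str:
        # Replies are sampled by frequency, so whatever a coordinated
        # group repeats most often dominates the bot's output.
        phrases = list(self.replies)
        weights = [self.replies[p] for p in phrases]
        return random.choices(phrases, weights=weights, k=1)[0]

bot = NaiveEchoBot()
for msg in ["hello!", "nice to meet you"] + ["offensive slogan"] * 50:
    bot.learn(msg)
print(bot.respond())  # almost certainly the repeated slogan
```

Under these (assumed) mechanics, no individual message has to look like an attack; the damage comes from volume, which is exactly what a coordinated group can supply in under a day.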

The company is somewhat in lockdown mode regarding the project, trying to determine what went wrong and how to prevent embarrassing situations like this in the future. See, Tay isn't Microsoft's first public chatbot either. It is practically an evolution of XiaoIce, which Microsoft says is used by some 40 million people in China. To some extent, Tay was an experiment to see how differently an AI would behave and be received by a culturally different audience like the US. Well, now we know the answer.


Microsoft claims it did its best in implementing filters and safeguards, stress-testing Tay with diverse user groups. But controlled tests can't really match real-world scenarios, which exposed what the company admits was a critical oversight for the specific kind of attack used to bring Tay down.
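A minimal sketch of why that gap exists, assuming a naive word-level blocklist (the terms here are placeholders, not anything from Microsoft's actual safeguards):

```python
import re

# Placeholder terms standing in for a real abuse blocklist.
BLOCKLIST = {"slur1", "slur2"}

def passes_filter(message: str) -> bool:
    """Naive word-level filter: the kind of safeguard that looks
    solid in controlled tests but is easy to evade in the wild."""
    words = re.findall(r"[a-z0-9]+", message.lower())
    return not any(word in BLOCKLIST for word in words)

print(passes_filter("you are a slur1"))      # False: caught
print(passes_filter("you are a s-l-u-r-1"))  # True: trivially evaded
```

Cooperative test users rarely probe for gaps like the second case; a coordinated attack only has to find one.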

The company makes it clear that Tay's "opinions," like those of any employee not speaking in an official capacity, are not Microsoft's own. The company has no timeline for when Tay will be back, though it says its programmers are hard at work closing that security hole. It does raise the question of how foolproof and safe future AI will be if it took less than a day to turn an innocent teenage AI into a rebellious one.

SOURCE: Microsoft

