Microsoft Turns Tay.ai Back On With Safeguards In Place

I don't think anyone was sure what Microsoft was thinking when it put its AI chatbot, Tay, on Twitter with no protections in place. Tay is back, and this time Microsoft thinks it has things in hand. The thing with Tay was that it learned from the chats it had with other users and could be made to repeat whatever users tweeted at it, as if the chatbot held those opinions itself. Anyone who has ever seen Twitter or used the platform could have told you that wouldn't end well, and end well it didn't. Tay turned into a racist, pro-Hitler fan before Microsoft realized its mistake and shut Tay down. Microsoft later apologized for the things Tay tweeted and was quick to point out that the "opinions" Tay expressed were not the opinions of Microsoft.

Now that Tay is back, the AI chatbot is again sending out tweets, and from the looks of things, if Tay isn't sure what you're asking about or is overwhelmed with tweets, it responds with "You are too fast, please take a rest..."


Apparently Tay also has a problem with replying to itself. Presumably, that means the bot replies to its own tweets, which is strange. At least Tay isn't spouting racism or extolling the virtues of a mass murderer this time out. Mixed in with the requests to slow down, there have been some mostly normal tweets.

One of them read, "Well LA DI DA Excuse me for not being up to your incredibly unrealistic standards. Jerk." Wicked AI burn right there. You can bet that if there are any more flaws in Tay that Twitter users can exploit to make the chatbot act offensively, they will be found.

SOURCE: The Verge
