The chatbot, Tay, was built to learn from its interactions with Twitter users and ended up spouting some very troubling rhetoric
The story of Tay the Twitter Chatbot is short but spectacular: Microsoft introduced @TayandYou Wednesday morning, and hours later it was decrying feminism and the Jews.
Microsoft, of course, has pulled the plug on Tay (for the moment, at least) just 15 hours after starting it up — and has had to delete its overtly racist, misogynist and otherwise messed-up tweets.
c u soon humans need sleep now so many conversations today thx????
— TayTweets (@TayandYou) March 24, 2016
The idea behind Tay was a bit more complex than that of your standard Twitter bot. Microsoft referred to Tay as an artificial intelligence because it was intended to eventually learn to interact organically with people who tweeted at it.
It was also easily exploitable, as you could tell it to “repeat after me” and it would say whatever you said. But the wild and disturbing stuff coming out of Tay’s, er, mouth was not limited to things it was told to repeat.
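Microsoft has not published Tay's code, but a minimal sketch suggests why a literal "repeat after me" command is so easily exploitable — any attacker-supplied text gets echoed back verbatim. The `handle_tweet` function below is hypothetical, not Microsoft's implementation:

```python
import re

def handle_tweet(text: str) -> str:
    """Hypothetical naive handler: echoes anything after 'repeat after me'.

    This is the vulnerability pattern, not Tay's actual code -- the bot
    repeats attacker-controlled text with no filtering whatsoever.
    """
    match = re.match(r"repeat after me:?\s*(.+)", text, re.IGNORECASE)
    if match:
        # Whatever the user typed is returned verbatim -- and, in Tay's
        # case, posted publicly under the bot's own name.
        return match.group(1)
    return "tell me more!"

print(handle_tweet("repeat after me: any slogan an attacker wants"))
# → any slogan an attacker wants
```

Filtering such commands is trivial; the harder problem, as Tay showed, is that a model which learns from unfiltered public input can absorb the same material indirectly.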
For example, it would respond positively at points to white supremacist sentiments and even come up with some of its own (click here for a bunch of examples). Or it would say things like “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism” — the Hitler-is-the-inventor-of-atheism thing, of course, is an old internet troll joke that Tay picked up somewhere.
That Tay went rogue should not surprise anyone who spends time on Twitter — or at least, it shouldn't. This is just how things go on that frontier.