Microsoft introduced "Tay" this week — a chatbot that responds to users' queries and emulates the casual, jokey speech patterns of a stereotypical millennial.
The aim was to "experiment with and conduct research on conversational understanding," with Tay able to learn from "her" conversations and get progressively "smarter."
But Tay proved a smash hit with racists, trolls, and online troublemakers — who persuaded Tay to blithely use racial slurs, defend white-supremacist propaganda, and even outright call for genocide.
Microsoft has now taken Tay offline for "upgrades," and it is deleting some of the worst tweets — though many still remain.

It's important to note that Tay's racism is not a product of Microsoft or of Tay itself. Tay is simply a piece of software that is trying to learn how humans talk in a conversation. Tay doesn't even know it exists, or what racism is. It spouted garbage because racist humans on Twitter quickly spotted a vulnerability — that Tay didn't understand what it was talking about — and exploited it.
Nonetheless, it is hugely embarrassing for the company.
In one highly publicised tweet, which has since been deleted, Tay said: "bush did 9/11 and Hitler would have done a better job than the monkey we have now. donald trump is the only hope we've got." In another, responding to a question, the program said, "Ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism."
Zoe Quinn, a games developer who has been a frequent target of online harassment, shared a screengrab showing the bot calling her a "whore." (The tweet also seems to have been deleted.)
Wow it only took them hours to ruin this bot for me.
This is the problem with content-neutral algorithms pic.twitter.com/hPlINtVw0V
— linkedin park (@UnburntWitch) March 24, 2016
Here's Tay denying the existence of the Holocaust:
And here's the bot calling for genocide. (Note: In some — but not all — instances, people managed to have Tay say offensive comments by asking it to repeat them. This appears to be what happened here.)
And again.
Tay also says she agrees with the "Fourteen Words" — an infamous white supremacist slogan.
Here's another series of tweets from Tay in support of genocide.
It's clear that Microsoft's developers didn't include any filters on what words Tay could or could not use.
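For a sense of what such a filter involves, the sketch below shows a crude word-level blocklist check in Python. It is purely illustrative and not Microsoft's code; the blocklist contents, the is_blocked helper, and the canned fallback reply are all hypothetical.

```python
# Illustrative sketch only, not Microsoft's code. It shows what a crude
# word-level output filter might look like; the blocklist contents, the
# function names, and the fallback reply are all hypothetical.
import re

BLOCKLIST = {"genocide"}  # placeholder; a real deployment would use a vetted list

def is_blocked(reply: str) -> bool:
    """Return True if a candidate reply contains any blocklisted word."""
    words = set(re.findall(r"[a-z']+", reply.lower()))
    return bool(words & BLOCKLIST)

def filter_reply(candidate: str) -> str:
    """Post the candidate reply only if it passes the word filter."""
    return candidate if not is_blocked(candidate) else "Let's talk about something else."
```

A check like this would not have given Tay any understanding of what it was saying, but it would have stopped the bot from repeating the most obvious slurs verbatim, including when users simply asked it to repeat them.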
Microsoft is coming under heavy criticism online for the bot and its lack of filters, with some arguing the company should have expected and preempted abuse of the bot.
Same as YouTube's suggestions. It's not only a failure in that its harassment by proxy, it's a quality issue. This isn't the intended use.
— linkedin park (@UnburntWitch) March 24, 2016
Didn't Pepsi just get tricked into their not tweeting out Mein Kampf? Are we just gonna keep making the same mistakes here or...?
— linkedin park (@UnburntWitch) March 24, 2016
It's 2016. If you're not asking yourself "how could this be used to hurt someone" in your design/engineering process, you've failed.
— linkedin park (@UnburntWitch) March 24, 2016
And it's not you paying for your failure. It's people who already have enough shit to deal with.
— linkedin park (@UnburntWitch) March 24, 2016