Twitter Teaches Microsoft Bot to Be Racist in 24 Hours

Some politicians are even impressed with that kind of turnaround.

Twitter can be a persuasive force of nature.

From anonymous cowards spewing hate behind fake avatars to wealthy rap artists begging even wealthier people for a loan, the microblog is a platform for a variety of 'what the hell' moments that are usually unpredictable.

Would it surprise you to know that the trolls were let out of the Twitter zoo and attacked a Microsoft robot? Yeah, we weren’t shocked either.

Meet Tay, an artificially intelligent chatbot that, through algorithmic experimentation, was taught to respond to tweets like an argumentative teen. As noted in Microsoft's description of Tay, "the more you chat with Tay the smarter she gets, so the experience can be more personalized."
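To see why "the more you chat, the smarter she gets" went sideways so fast, here is a minimal, purely illustrative Python sketch of the kind of unfiltered learn-from-users loop that makes a bot trivially easy to poison. The class and command names are hypothetical, not Microsoft's actual implementation; the point is only that a bot which memorizes and replays user input verbatim will say whatever the loudest users teach it.

```python
import random

class NaiveChatBot:
    """A hypothetical bot that learns phrases from users with no filtering."""

    def __init__(self):
        self.learned = []  # phrases absorbed verbatim from users

    def chat(self, message):
        # A "repeat after me"-style teach command: memorize whatever follows.
        if message.lower().startswith("repeat after me:"):
            phrase = message.split(":", 1)[1].strip()
            self.learned.append(phrase)
            return phrase
        # Otherwise, reply with something previously taught by any user.
        if self.learned:
            return random.choice(self.learned)
        return "Tell me something!"

bot = NaiveChatBot()
bot.chat("repeat after me: hello friends")
print(bot.chat("hi"))  # replays a user-supplied phrase, good or bad
```

With no moderation layer between "learn" and "repeat," a coordinated group of trolls controls the training data, and therefore the bot's output. That is essentially the experiment Twitter ran on Tay.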

What began as a brief foray into a juvenile wonderland of innocence, rainbows, and unicorns…

…ended as an exercise in futility, a dank wasteland of hopelessness for the future, with unicorns lined up single-file, ready to be made into glue:

Yet another reason why they hate us: apparently there is no prejudice in America, because the kids hate everyone equally.

Aside from what this tells us about the perceptions of our future generation, we should be concerned about AI's ability to distinguish between right and wrong, good and bad, love and hate. Microsoft realized that, and the trolls bit them for it too.

And all of that happened in a single day. Tay was a bit tired.

Tomorrow should be fun.