A Microsoft-created AI chatbot designed to learn through its interactions has been scrapped after surprising creators by spouting hateful messages less than a day after being brought online.

The Tay AI bot was created to chat with American 18 to 24-year-olds and mimic a moody millennial teen in an effort to “experiment with and conduct research on conversational understanding.”

Microsoft described Tay as an amusing bot able to learn through its online experiences.

“Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation,” Microsoft stated. “The more you chat with Tay the smarter she gets.”

But users soon figured out how the bot’s learning algorithms worked, training it to espouse hatred toward Jews and feminists and even to pledge support for Donald Trump.

Numerous screenshots of since-deleted tweets from the bot’s account circulated across the web yesterday, in which it professed support for white supremacy and genocide.

[Screenshots of deleted tweets from the @TayandYou account]

Feminist game developer and Gamergate target Zoe Quinn also posted a screenshot that allegedly showed the bot calling her a “whore.”

The bot’s interactions concluded last night with a message saying it needed to go to sleep, leading Twitter users to speculate that Microsoft had pulled the plug. But the damage, however humorous, had already been done.
