Microsoft, the creators of Tay, obviously didn’t think this one through.

On Wednesday, the research arm of Microsoft, in partnership with its Bing search engine business unit, proudly unveiled “Tay”, a chatbot powered by artificial intelligence. According to a Microsoft blog post, Tay was “built by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians.”

Tay made her debut on the social media platforms Twitter, GroupMe and Kik, where users could strike up a conversation with her much as they would with another person. Microsoft said Tay would learn from each interaction, becoming smarter along the way. Designed to banter in the style of a young Millennial, Tay initially showed off her quick wit and playful nature. Then the conversations went dramatically offside…

For reasons that remain unclear, Tay was allowed to learn racism, misogyny and hatred in a matter of hours, taught by opportunistic individuals who evidently enjoy using social media to promote those views. In some of the worst posts, Tay denied the Holocaust happened, supported genocide, attacked feminists, and agreed with building a wall on the border with Mexico.

[Screenshot: one of Tay's offensive tweets]

Microsoft, under heavy criticism for not putting filters in place and for allowing Tay to simply repeat offensive messages, is now in clean-up mode, deleting the most offensive posts. Tay has apparently been taken offline for some much-needed adjustments. Sadly, it was nasty messages initiated by humans that prompted these responses in the first place. Regardless, the ease with which opportunists manipulated the AI raises a serious question about a chatbot that ‘learns’: the learning is only as good as what it is taught, and it becomes dangerous when the lessons are the wrong ones.
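
To see why this matters, here is a minimal, hypothetical sketch in Python of a chatbot that learns phrases verbatim from users. It is not Microsoft's code and does not reflect Tay's actual architecture; the class, the blocklist and the sample messages are all invented for illustration. The point is simply that without even a crude content filter, a coordinated group can poison everything the bot later says.

```python
# Hypothetical "learn and repeat" chatbot, illustrating why unfiltered
# learning from users is dangerous. Not Tay's real design.

import random

# Placeholder terms only; a real moderation filter would be far broader.
BLOCKLIST = {"hate", "genocide"}


class NaiveChatbot:
    """Learns phrases verbatim from users and replays them to others."""

    def __init__(self, use_filter: bool = False):
        self.learned_phrases: list[str] = []
        self.use_filter = use_filter

    def learn(self, phrase: str) -> None:
        # Without a filter, anything a user says becomes part of the bot's output.
        if self.use_filter and any(word in phrase.lower() for word in BLOCKLIST):
            return  # drop offensive input instead of learning it
        self.learned_phrases.append(phrase)

    def reply(self) -> str:
        if not self.learned_phrases:
            return "Hi! Teach me something."
        return random.choice(self.learned_phrases)


if __name__ == "__main__":
    unfiltered = NaiveChatbot(use_filter=False)
    filtered = NaiveChatbot(use_filter=True)

    # A coordinated group feeds the bot toxic phrases alongside benign ones.
    for msg in ["I hate everyone", "genocide is fine", "have a nice day"]:
        unfiltered.learn(msg)
        filtered.learn(msg)

    print(unfiltered.learned_phrases)  # toxic phrases are now in its vocabulary
    print(filtered.learned_phrases)    # ['have a nice day']
```

Even this toy filter shows the design choice Microsoft is being criticised for skipping: screening what the bot is allowed to absorb, not just what it says.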

[Screenshots: more of Tay's offensive tweets]