How Microsoft’s AI chatbot went from 0 to racist in less than 24 hours

See what happens when a bot has to learn through Twitter conversations.


UPDATED 2:46 p.m. ET: According to TechCrunch, Microsoft has shut down its AI chatbot Tay after Twitter users taught it to be a racist tweeting machine.

Original story appears below:

Not unlike children, robots only know what they're taught. So what happens when a chatbot designed to learn through conversation goes on Twitter? It becomes a little bit of a racist troll, of course.

Microsoft launched its AI Twitter bot Tay on Wednesday, and the bot tweeted nearly 100,000 times in its first day, though most of those tweets have since been deleted. Tay was designed to speak with American millennials aged 18 to 24 and become more intelligent through conversation, according to its website.

The AI was meant to engage in casual and playful conversations while tracking the users it interacts with.

Unfortunately, things soured rather quickly. Within 24 hours of its launch, one Twitter user pointed out that "Tay went from 'humans are super cool' to full Nazi."

One of Tay's features is following directions, and this turned out to be a major downfall. If someone told the bot "repeat after me," the AI would parrot the words back verbatim. This meant Tay was fodder for many trolls.

Some now-deleted tweets featured Tay endorsing genocide.

Tay also responded to direct messages; one user asked the bot its thoughts on abortion and domestic violence.

While Tay's most controversial tweets have been deleted, it seems like Microsoft is tweaking how much Tay learns from other people. Currently, the bot is "asleep."

Tay said many problematic things, but the AI was a direct reflection of the people it interacted with. Let's hope that when robots do actually take over the world, we don't teach them how to be racist.
