Sad day for AIs everywhere

Microsoft launched and then quickly shut down an AI chat bot this week after the Internet taught it bad habits:

The aim was to “experiment with and conduct research on conversational understanding,” with Tay able to learn from her conversations and get progressively “smarter.” But Tay proved a smash hit with racists, trolls, and online troublemakers, who persuaded Tay to blithely use racial slurs, defend white-supremacist propaganda, and even outright call for genocide.

Microsoft has now taken Tay offline for “upgrades,” and it is deleting some of the worst tweets—though many still remain. It's important to note that Tay's racism is not a product of Microsoft or of Tay itself. Tay is simply a piece of software that is trying to learn how humans talk in a conversation. Tay doesn't even know it exists or what racism is. The reason it spouted garbage is because racist humans on Twitter quickly spotted a vulnerability—that Tay didn't understand what it was talking about—and exploited it.

What interests me here is that no one at Microsoft had the imagination (or, frankly, the real-world junior high school experience) to do the rudimentary adversarial testing that would have predicted this outcome.
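
Microsoft hasn't published Tay's internals, so what follows is only a sketch of the generic failure mode, with a made-up NaiveChatBot class: any bot that folds raw user input back into its own output, with no filter in between, will eventually say whatever trolls teach it. The last few lines are the kind of five-minute test I have in mind.

    import random

    # Hypothetical sketch: Tay's real architecture is unpublished.
    # The point is the generic flaw, not Microsoft's actual code.
    class NaiveChatBot:
        def __init__(self):
            # Seed phrase; everything else comes from users.
            self.learned = ["hellooo world! so excited to meet u"]

        def chat(self, message: str) -> str:
            self.learned.append(message)        # trusts every user verbatim
            return random.choice(self.learned)  # ...and may echo it to anyone

    # The rudimentary test: feed the bot abuse, then see if it comes back out.
    bot = NaiveChatBot()
    bot.chat("OFFENSIVE PHRASE")                # a troll "teaches" the bot
    leaked = any("OFFENSIVE" in bot.chat("hi") for _ in range(100))
    print("abusive input repeated back:", leaked)  # True about 98% of the time

Tay reportedly also had a literal "repeat after me" feature, which made the real exploit even easier than this toy version.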

I can't wait to see what the "upgrades" do. At least we can rest assured that Skynet is still a long way off.
