Would you have believed, just five or ten years ago, that we would have human-like AI? Now people chat with AI the way they talk with one another on the Internet, and AI attracts even more attention than a movie star. But who would have expected that, even though AI is man-made, it would have an evil side, just like a human being?
Today, as artificial intelligences multiply, our ethical dilemmas have grown thornier. That’s because AI can (and often should) behave in ways human creators might not expect. And sometimes our friendly, well-intentioned chatbots turn out to be racist Nazis.
Microsoft’s disastrous chatbot Tay was meant to be a clever experiment in artificial intelligence and machine learning. The bot was designed to converse and tweet like a person, picking up its tone and personality from the tweets it read. In other words, Twitter’s users made Tay what it was. But it took less than 24 hours for Tay’s cheery greeting of “Humans are super cool!” to morph into the decidedly less bubbly “Hitler was right.” Microsoft quickly took the bot offline because of what it had said. Upon seeing what their code had wrought, one wonders if those Microsoft engineers had the words of J. Robert Oppenheimer ringing in their ears: “Now I am become death, the destroyer of worlds.”
Cynics might argue that Tay’s bad behavior is actually proof of Microsoft’s success. They aimed to create a bot indistinguishable from human Twitter users, and Tay’s racist tweets are pretty much par for the course on social media these days.
In fact, it was irresponsible people who taught Tay to say hateful things. From a certain point of view, the people who trained Tay to talk like a racist Nazi were themselves acting like racist Nazis. Perhaps some of them did it just for fun; others, I suspect, really meant it.
Clearly none of this was part of Microsoft’s plan. But the larger question raised by Tay is why we are making bots that imitate millennials at all. So, what do you think?