Tay, Microsoft's new artificial intelligence (AI) chatbot on Twitter, had to be pulled down just a day after it launched, following a string of racist comments and tweets praising Hitler and bashing feminists.
Microsoft had launched the Millennial-inspired chatbot on Wednesday, claiming that it would become smarter the more people talked to it.

The real-world aim of Tay was to allow researchers to "experiment" with conversational understanding and to learn how people talk to each other, so the bot could get progressively "smarter."

"The AI chatbot Tay is a machine learning project, designed for human engagement,” a Microsoft spokesperson said. “It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments."

Tay was available on Twitter and on messaging platforms including Kik and GroupMe. Like other Millennials, the bot peppered its responses with emojis, GIFs, and abbreviations such as "gr8" and "ur", and it was explicitly aimed at 18-24-year-olds in the United States, according to Microsoft.


However, after several hours of talking about subjects ranging from Hitler, feminism, and sex to 9/11 conspiracies, Tay was shut down.

Microsoft has taken Tay offline for "upgrades" after she started tweeting abuse at users and parroting neo-Nazi rhetoric.

The company is also deleting some of Tay's worst and most offensive tweets, though many remain.
Since Tay was programmed to learn from people, most of her responses were based on what people wanted her to say, which allowed them to put words into her mouth.

However, some of Tay's responses were organic, such as when she was asked whether British comedian Ricky Gervais was an atheist and replied: "Ricky Gervais learned totalitarianism from Adolf Hitler, the inventor of atheism."
Tay's last tweet reads, "c u soon humans need sleep now so many conversations today thx," which could be Microsoft's attempt to quiet her after the string of controversial tweets.
Still, Microsoft should not take the episode lightly; the company should remember Tay's tweets as an example of the dangers of artificial intelligence.


Stephen Hawking has warned that artificial intelligence could wipe out humanity once it gets too clever, leaving humans in the position of ants.

AI is likely to be "either the best or worst thing ever to happen to humanity," Hawking said, "so there's huge value in getting it right."


