Friday, March 25, 2016

Twitterbot Masters Human Conversation, Offends Everyone

Sometimes truth is better than fiction:

Microsoft set out to learn about “conversational understanding” by creating a bot designed to have automated discussions with Twitter users, mimicking the language they use.

What could go wrong?

If you guessed, “It will probably become really racist,” you’ve clearly spent time on the Internet. Less than 24 hours after the bot, @TayandYou, went online Wednesday, Microsoft halted posting from the account and deleted several of its most obscene statements.

The bot, developed by Microsoft’s Technology and Research and Bing teams, got major assistance in being offensive from users who egged it on. It disputed the existence of the Holocaust, referred to women and minorities with unpublishable words and advocated genocide. Several of the tweets were sent after users commanded the bot to repeat their own statements, and the bot dutifully obliged.

But Tay, as the bot was named, also seemed to learn some bad behavior on its own. . . . It responded to a question about whether the British actor Ricky Gervais is an atheist by saying: “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism.”

Microsoft, in an emailed statement, described the machine-learning project as a social and cultural experiment.