This Article is From Mar 25, 2016

Microsoft Created A Twitter Bot To Learn From Users. It Quickly Became A Racist Jerk.

The bot, developed by Microsoft's Technology and Research group and its Bing team, got major assistance in being offensive from users who egged it on.

Highlights

  • Microsoft embarrassed after its chat bot shoots out offensive tweets
  • Coordinated effort to undermine bot's conversational abilities: Microsoft
  • Artificial Intelligence-powered bot 'learns' what people say on Twitter

Microsoft set out to learn about "conversational understanding" by creating a bot designed to have automated discussions with Twitter users, mimicking the language they use.

What could go wrong?

If you guessed, "It will probably become really racist," you've clearly spent time on the Internet. Less than 24 hours after the bot, @TayandYou, went online Wednesday, Microsoft halted posting from the account and deleted several of its most obscene statements.

The bot, developed by Microsoft's Technology and Research group and its Bing team, got major assistance in being offensive from users who egged it on. It disputed the existence of the Holocaust, referred to women and minorities with unpublishable words and advocated genocide. Several of the tweets were sent after users commanded the bot to repeat their own statements, and the bot dutifully obliged.

But Tay, as the bot was named, also seemed to learn some bad behavior on its own. According to The Guardian, it responded to a question about whether British actor Ricky Gervais is an atheist by saying: "ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism."

Microsoft, in an emailed statement, described the machine-learning project as a social and cultural experiment.

"Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways," Microsoft said. "As a result, we have taken Tay offline and are making adjustments."

On a website it created for the bot, Microsoft said the artificial intelligence project had been designed to "engage and entertain people" through "casual and playful conversation," and that it was built through mining public data. It was targeted at 18- to 24-year-olds in the United States and was developed by a staff that included improvisational comedians.

Its Twitter bio described it as "Microsoft's A.I. fam from the internet that's got zero chill!" (If you don't understand any of that, don't worry about it.)

Most of the account's tweets were innocuous, usually imitating common slang. When users tweeted at the account, it responded in seconds, sometimes as naturally as a human would but, in other cases, missing the mark.

Tay follows a long history of attempts by humans to get machines to be our pals. In 1968, a professor at MIT taught a computer to respond conversationally in the role of a psychotherapist. Many 20- and 30-somethings have fond memories of Smarterchild, a friendly bot on AOL Instant Messenger that was always available for a chat when their friends were away.

Now, Apple's Siri, Amazon's Alexa, Microsoft's Cortana and Google Now mix search-by-voice capabilities with folksy charm. The idea of a personable machine was taken to its logical conclusion in the 2013 movie "Her," in which a man played by Joaquin Phoenix falls in love with the voice in his phone.

And this is not the first time automated technology has unexpectedly gotten a company in trouble. Last year, Google apologized for a flaw in Google Photos that let the application label photos of black people as "gorillas."

"We're appalled and genuinely sorry that this happened," a Google representative said.
© 2016, The New York Times News Service


(This story has not been edited by NDTV staff and is auto-generated from a syndicated feed.)