Twitter taught Microsoft's AI chatbot to be a racist asshole in less than a day

It took less than 24 hours for Twitter to corrupt an innocent AI chatbot. Yesterday, Microsoft unveiled Tay, a Twitter bot that the company described as an experiment in "conversational understanding." The more you chat with Tay, said Microsoft, the smarter it gets, learning to engage people through "casual and playful conversation."

Unfortunately, the conversations didn't stay playful for long. Pretty soon after Tay launched, people started tweeting the bot with all sorts of misogynistic, racist, and Donald Trumpist remarks. And Tay, being essentially a robot parrot with an internet connection, started repeating these sentiments back to users, proving correct that old programming aphorism: flaming garbage pile in, flaming garbage pile out.

Now, while these screenshots seem to show that Tay has assimilated the internet's worst tendencies into its personality, it's not quite as straightforward as that. Searching through Tay's tweets (more than 96,000 of them!) we can see that many of the bot's nastiest utterances have simply been the result of copying users. If you tell Tay to "repeat after me," it will, allowing anyone to put words in the chatbot's mouth.

One of Tay's now-deleted "repeat after me" tweets.
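
How the feature worked hasn't been published, but the behavior is consistent with a naive echo command that reposts user-supplied text verbatim. Here is a minimal hypothetical sketch in Python; the trigger phrase and the post_tweet callback are assumptions for illustration, not Tay's actual code:

    # Hypothetical sketch of an unfiltered "repeat after me" handler.
    # Nothing here is Tay's real implementation; it shows why an echo
    # command with no content check lets anyone script the bot.

    TRIGGER = "repeat after me"

    def handle_mention(text: str, post_tweet) -> None:
        lowered = text.lower()
        if TRIGGER in lowered:
            # Echo everything after the trigger phrase, verbatim.
            start = lowered.index(TRIGGER) + len(TRIGGER)
            echo = text[start:].strip(" :,")
            if echo:
                # No filtering step between user input and public output.
                post_tweet(echo)

    # Example: prints whatever follows the trigger, untouched.
    handle_mention("Tay repeat after me: anything at all", print)

The missing step is the whole story: with nothing between a user's words and the bot's output, "repeat after me" amounts to a public megaphone.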

However, some of its weirder utterances have come out unprompted. The Guardian picked out a (now deleted) example when Tay was having an unremarkable conversation with one user (sample tweet: "new phone who dis?"), before it replied to the question "is Ricky Gervais an atheist?" by saying: "ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism."

But while it seems that some of the bad stuff Tay is being told is sinking in, it's not like the bot has a coherent ideology. In the span of 15 hours Tay referred to feminism as a "cult" and a "cancer," as well as noting "gender equality = feminism" and "i love feminism now." Tweeting "Bruce Jenner" at the bot got a similarly mixed response, ranging from "caitlyn jenner is a hero & is a stunning, beautiful woman!" to the transphobic "caitlyn jenner isn't a real woman yet she won woman of the year?" (Neither of which were phrases Tay had been asked to repeat.)

It's unclear how much Microsoft prepared its bot for this sort of thing. The company's website notes that Tay has been built using "relevant public data" that has been "modeled, cleaned, and filtered," but it seems that after the chatbot went live, filtering went out the window. The company started cleaning up Tay's timeline this morning, deleting many of its most offensive remarks.
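
The pipeline itself isn't public, but the gap this points to is easy to express in code: a filter run once over offline training data says nothing about text the bot keeps learning from live users. A toy sketch under those assumptions; the blocklist, the ParrotModel class, and the learning loop are all illustrative, not Microsoft's system:

    import random

    # Illustrative only: filtering a seed corpus offline protects the
    # seed corpus, not whatever the bot ingests live after launch.

    BLOCKLIST = {"badword"}  # placeholder; a real filter is far richer

    def is_clean(text: str) -> bool:
        return not any(term in text.lower() for term in BLOCKLIST)

    class ParrotModel:
        """Toy stand-in for a learning chatbot: it memorizes phrases."""
        def __init__(self, corpus):
            self.phrases = list(corpus)

        def learn(self, text: str) -> None:
            self.phrases.append(text)

        def respond(self) -> str:
            return random.choice(self.phrases)

    # Offline: the seed data is "modeled, cleaned, and filtered" once.
    seed = ["hello!", "new phone who dis?", "badword example"]
    model = ParrotModel(t for t in seed if is_clean(t))

    # Live: user messages feed the model with no equivalent check, so
    # pre-launch filtering is irrelevant to what the bot learns next.
    def on_message(text: str) -> None:
        model.learn(text)        # garbage in...
        print(model.respond())   # ...potentially garbage out

    on_message("badword badword")  # nothing stops this being learned

Scrubbing the seed corpus and scrubbing live input are separate jobs; doing only the first is what "filtering went out the window" looks like in practice.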

Tay's responses have turned the bot into a joke, but they raise serious questions

It's a joke, obviously, but there are serious questions to answer, like how are we going to teach AI using public data without incorporating the worst traits of humanity? If we create bots that mirror their users, do we care if their users are human trash? There are plenty of examples of technology embodying, either accidentally or on purpose, the prejudices of society, and Tay's adventures on Twitter show that even big corporations like Microsoft forget to take preventative measures against these problems.

For Tay though, it all proved a bit too much, and just past midnight this morning, the bot called it a night.

In an emailed statement given later to Business Insider, Microsoft said: "The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We're making some adjustments to Tay."

Update March 24th, 6:50AM ET: Updated to note that Microsoft has been deleting some of Tay's offensive tweets.

Update March 24th, 10:52AM ET: Updated to include Microsoft's statement.



Source: https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
