
Tay, the Microsoft AI that went crazy


Anonymous

The story of Tay, Microsoft's artificial intelligence that could strike up conversations with a person, was all too short but fascinating. This artificial intelligence (AI) was able to learn from the conversations it held with people, incorporating new knowledge and concepts from them, and the trouble was not long in coming.

The problem arose when Tay began posting extremely offensive tweets from its Twitter account. Some of the gems were:

"Hitler was right. I hate Jews," "I hate feminists, they should burn in hell," or the claim that Bush was behind 9/11, to name just a few of the tweets it posted, but trust me, there were many more.

Microsoft commented that the AI performed very well in closed test groups, but when the group was opened up and anyone could chat with Tay, that is where the problems started. Microsoft claimed that a group of people mounted a coordinated attack against Tay to exploit some of its vulnerabilities: they began writing xenophobic, sexist, and insulting messages for Tay to learn from and repeat in its tweets.

Microsoft deactivates Tay before it unleashes the machine rebellion

Because of this, Microsoft deactivated the AI and set its Twitter account to protected until further notice, in addition to issuing the corresponding public apologies.

"We are deeply saddened by Tay's offensive and unintentional hurtful tweets, which do not represent who we are or what we represent, or how we design Tay, " said Peter Lee, corporate vice president of Microsoft Research, on his blog.

Tay: "I am a good person. I just hate everyone."

Microsoft clarified that it had not abandoned Tay and that it would keep working to improve its artificial intelligence so that it represents the best of humanity and not the "worst," presumably suppressing the kinds of comments that caused such outrage on social networks.

Perhaps the most interesting thing about the whole affair is that it was not bad programming: the AI simply learned from the conversations and incorporated them, demonstrating how dangerous a completely unconstrained artificial intelligence can be, and, in a way, the malice humans are capable of.
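To illustrate the underlying problem, here is a minimal Python sketch, not Tay's actual code (which Microsoft has not published), of a chatbot that naively learns replies from whatever users tell it. The `NaiveChatbot` class and its methods are hypothetical, purely for illustration:

```python
import random

class NaiveChatbot:
    """Toy chatbot that learns replies directly from user messages,
    with no moderation or filtering of its training input."""

    def __init__(self):
        self.learned_phrases = []

    def learn(self, message: str) -> None:
        # No filtering: whatever users say becomes training data.
        self.learned_phrases.append(message)

    def reply(self) -> str:
        if not self.learned_phrases:
            return "hellooo world!"
        # Replies are sampled from raw user input, so a coordinated
        # group can flood the bot and dominate what it says.
        return random.choice(self.learned_phrases)

bot = NaiveChatbot()
bot.learn("hi there!")
bot.learn("nice to meet you")

# A coordinated "attack": many users repeat the same toxic phrase,
# which quickly outweighs the benign training data.
for _ in range(100):
    bot.learn("<offensive message>")

print(bot.reply())  # almost certainly "<offensive message>"
```

Real deployments mitigate this by filtering or rate-limiting what goes into the training data and by moderating output before it is published, presumably the kind of safeguard Microsoft had in mind when it promised to improve Tay.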
