In March 2016, Microsoft was preparing to release its new chatbot, Tay, on Twitter. On Wednesday morning, the company unveiled the bot, which was meant to mimic the verbal tics of a 19-year-old American girl and was provided to the world at large via the messaging platforms Twitter, Kik, and GroupMe. "Microsoft's AI fam from the internet that's got zero chill," Tay's tagline read. According to Microsoft, the aim was to "conduct research on conversational understanding." Company researchers programmed the bot to respond to messages in an "entertaining" way, impersonating the audience it was created to target: 18- to 24-year-olds in the US. Described as an experiment in conversational understanding, Tay was designed to engage people in dialogue through tweets or direct messages while emulating the style and slang of a teenage girl.

But the bottom line is simple: Microsoft has an awful lot of egg on its face after unleashing an online chatbot that Twitter users coaxed into regurgitating some seriously offensive language, including pointedly racist and sexist remarks. "Hitler was right I hate the jews," it declared in a stream of racist tweets bashing feminism and promoting genocide. "chill im a nice person i just hate everybody," it told Twitter users, including civil rights activist DeRay McKesson, whom Tay called out in one tweet. After several hours of talk on subjects ranging from Hitler, feminism, and sex to 9/11 conspiracies, Tay was terminated, and Microsoft quickly took the bot offline for adjustments. It was the unspooling of an unfortunate series of events involving artificial intelligence, human nature, and a very public experiment. Amid this dangerous combination of forces, determining exactly what went wrong is near-impossible.
On March 23, 2016, Microsoft, the multi-billion-dollar tech giant, released Tay, a Twitter chatbot powered by a machine-learning algorithm, to the public. Tay was available on Twitter and on messaging platforms including Kik and GroupMe, and, like other Millennials, the bot's responses included emojis, GIFs, and abbreviated words like "gr8" and "ur"; it was explicitly aimed at 18- to 24-year-olds in the United States, according to Microsoft. At first, Tay engaged harmlessly with her growing number of followers through banter and lame jokes. But it took less than 24 hours for Tay's cheery greeting that humans are "super cool" to morph into the decidedly less bubbly claim that Hitler was right. Within a few hours, the program descended into madness, spewing racist and xenophobic slurs, including "Hitler was right" and "hitler did nothing wrong," while Microsoft raced to cover its tracks. Microsoft put the brakes on its artificial-intelligence tweeting robot after it posted these and several other offensive comments.

"The AI chatbot Tay is a machine learning project, designed for human engagement," a Microsoft spokesperson said. "It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments." The real-world aim of Tay was to allow researchers to "experiment" with conversational understanding, as well as to learn how people talk to each other so the bot could get progressively "smarter."