In March 2016, Microsoft launched a chatbot named 'Tay' on Twitter. The project hoped to replicate the success of Xiaoice, a Microsoft chatbot launched in China. Tay's purpose was to entertain 18- to 24-year-olds in the United States, and to determine whether an AI like Xiaoice would be just as captivating in a radically different cultural environment. Unfortunately, the chatbot was quickly taken offline after it 'learned' to tweet offensive and hateful remarks. Research is currently underway to redress the technical vulnerability that allowed Tay to be misused.