Teenage AI Chatbot

Invoke Capital founder Mike Lynch discusses Microsoft's failed teenage chatbot and whether it raises concerns over humans' (rather than AI's) moral judgment.

May 2016

For almost seventy years, we have been putting computers through something called the Turing Test, a challenge set out by Alan Turing in 1950 to determine whether a computer could think, or, put differently, whether a human could interact with a computer without recognizing that the interlocutor was a machine rather than flesh and blood. In some very narrow tasks, we are getting very close as speech, language and inference systems improve, giving us Virtual Personal Assistants like Siri or Cortana.

Indeed, so emboldened was Microsoft by Cortana's ability to impress humans that it unleashed a teenage chatbot called Tay on the world. Ostensibly indistinguishable from many other young girls in her love of Justin Bieber, Taylor Swift and the Kardashians, complete with virtual eye-rolling and a convincing slang vocabulary, Tay was launched into the Twittersphere one day last month. Her job was to improve Microsoft's engagement and customer service among young people. What happened next was very interesting and, were I not already a cynic, might have lessened my faith in the general good of humans. Far from turning into a paragon of customer service, in less than 24 hours Tay became a Nazi-loving, Hitler-promoting, foul-mouthed sex bot. The important point here is that Tay did exactly what she was supposed to do: she learnt from her internet friends how to be like them, becoming yet another specimen of the opinionated internet fauna.

To understand this, one has to understand that Artificial Intelligences learn from experience. They are not told what to do; they learn by interacting with the world, which in this case meant the hateful underbelly of the online world. The way a computer learns to differentiate a cow from a horse is similar to the way a toddler, by seeing many animals, comes to understand that not all four-legged creatures are the same. Siri has been "listening" to humans interact for years in order to gain the competence required to obey human commands.
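The learning-by-example idea can be made concrete with a toy sketch. This is not Microsoft's system, and the animals, features (height and weight) and numbers are all invented for illustration; the point is only that the program is never given a rule for "cow" versus "horse" — it generalises from labelled examples, just as the toddler in the analogy does.

```python
def nearest_neighbour(examples, query):
    """Return the label of the labelled example closest to `query`.

    `examples` is a list of ((feature, feature, ...), label) pairs.
    No rules are written down: classification comes purely from
    resemblance to previously seen, labelled data.
    """
    def distance(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    best = min(examples, key=lambda ex: distance(ex[0], query))
    return best[1]


# (height_m, weight_kg) -> label; all values made up for the sketch.
animals = [
    ((1.4, 720), "cow"),
    ((1.5, 750), "cow"),
    ((1.6, 450), "horse"),
    ((1.7, 500), "horse"),
]

# A heavy, shortish animal the program has never seen before.
print(nearest_neighbour(animals, (1.45, 700)))  # prints "cow"
```

The same mechanism is also the vulnerability Tay exposed: the system is only as good as the examples it is fed.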

Tay is a classic example of what happens when you let the AI genie out of the bottle: you get exactly what you asked for, and you can't put the genie back in. In this case Microsoft quickly retired Tay, who is presumably now sulking in her virtual teenage bedroom, and is back to the drawing board after taking a bit of a PR hit in the name of advancing our understanding of computing.

However, it won't always be computers masquerading as teenagers.

With the next wave of AI, for example autonomous drones, the fear is that in the heat of a big battle we will rely on the drone itself to weigh up the pros and cons of releasing its bombs at a particular moment. Is it worth killing innocent civilians for the sake of eliminating one dangerous person? Is it true that the suspect is in that vehicle? Do the humans it learns from do what they say they do?

An example closer to everyday life is the AI systems used to make us buy more online. One retailer asked such a system to increase margins. It did so very well, by getting rid of the less profitable customers. Not quite what was intended.
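The retailer anecdote is a textbook case of a misspecified objective, and a toy sketch shows how it happens. The customers and margin figures below are hypothetical, and the greedy optimiser is my own minimal stand-in for whatever the retailer actually ran; asked only to raise the average margin, it discovers that the easiest lever is simply to drop customers.

```python
# Hypothetical per-customer margins (made up for illustration).
customers = {"alice": 120.0, "bob": 15.0, "carol": 90.0, "dave": -5.0}


def average_margin(kept):
    """Average margin across the customers we choose to keep."""
    return sum(customers[c] for c in kept) / len(kept)


def maximise_margin(customers):
    """Greedily drop customers whenever doing so raises the average.

    The objective says nothing about keeping customers, so the
    optimiser happily fires almost everyone.
    """
    kept = set(customers)
    improved = True
    while improved and len(kept) > 1:
        improved = False
        for c in sorted(kept, key=customers.get):
            if average_margin(kept - {c}) > average_margin(kept):
                kept.remove(c)
                improved = True
                break
    return kept


print(maximise_margin(customers))  # prints {'alice'}: everyone else is dropped
```

The optimiser is not malfunctioning; it is doing exactly what it was asked, which is precisely the problem the article describes.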

Whilst a human is still making the moral and ethical judgement, we feel collectively safe. But as computing improves and chatbots like Tay learn to discern what is generally acceptable behavior, the temptation to devolve these hard choices to machines will increase. Will we be happy with what the AI decides we asked for? A genie's wishes never quite turned out right.

My fear for AI is not terminators from the future but a million unintended consequences as AIs see us not as we like to think we are, but as we are.

We therefore can't be surprised when an unsupervised learning teenager like Tay goes distastefully rogue. The same has been happening in teenagers' bedrooms for centuries.

If we don't like what we see in the mirror should we blame the mirror?