The AI revolution will stall. Many investments will fail.

Celebrated technology entrepreneur Mike Lynch explores the recent stunning advances in AI, but warns of some of the misunderstood difficulties that come with these advanced technologies.

August, 2018

Advances in mathematics and computing have meant that in recent years we have made huge progress in machine learning, and a number of products that incorporate self-learning algorithms are widely available to companies and consumers alike. The most obvious examples are the AI-enabled assistants lying dormant in our pockets, such as Siri and Alexa. Indeed, the prevalence of these apps, and the fact that they can be convincing enough that an interlocutor is sometimes unaware of talking to a bot, means that we have quickly begun to debate the ethics of AI and to worry about when, not whether, robots will take over the world.

As a technologist, I have watched this revolution unfold over the years, and I am excited by the recent leaps we have made. There have been some advances in AI that are truly stunning and that enable us to be more efficient and productive. Computers have been taught to detect cancer tumours more accurately than a trained human eye. They can also read and understand thousands of near-identical contracts in a data room more accurately and far faster than lawyers, because they don't fall asleep or have a beer at lunchtime.

Every pitch I come across these days has AI in it. Many products are being reinvented with AI at their core, and investors are leaping in with great enthusiasm. Undoubtedly, there will be some real AI winners, particularly in narrow applications such as cybersecurity or genomics. However, the current level of investor interest in AI hides the fact that, while AI techniques are applicable to certain problems, they are not yet applicable to all problems, and adding an off-the-shelf whizzy algorithm to an old piece of software isn't going to change things significantly.

In fact, one of the biggest difficulties lies not so much in solving a problem as in defining the problem in the first place, and in handling the exceptions. Take driverless cars, for example. In theory, traffic on our roads should be flowing freely, with vehicles that communicate with each other and with traffic management systems to optimise every journey. In reality, we are more than a few trials in a garden city away from this automotive utopia. On a recent trip to Amalfi, I saw two coaches navigate an impossibly narrow bend on a road that was obviously made for a single horse and cart. With hot sunshine glinting off wing mirrors, seagulls swooping overhead and a long line of cars behind each coach, the drivers somehow, amidst much gesticulating, angry words and frantic hand waving, managed to manoeuvre past each other without falling into the sea. I am not confident that an AI can replace the medieval battle of wills those two local coach drivers displayed, and yet it is exactly this sort of exceptional circumstance that we need to train AIs on. Recreate this scene against a snowy backdrop, or in a hailstorm, or with a calm driver rather than a frenzied one, and you see just how hard it is to turn a theoretically viable product into a commercially successful one.

Many companies have raised money to take their product out of the lab and onto the street. A few will be successful; many others will not navigate the coastal roads of Amalfi so smoothly. Investor enthusiasm will wane with the first infamous failures, but this won't be the end of AI. There will be a reckoning while the industry regroups and redefines the problems, and then there will be a resurgence, when powerful methods emerge to address those misunderstood problems of the past and create solutions that work in real-world circumstances.