Why wearables are the future

April 2014

Dr Mike Lynch OBE on why wearable technology is the future, and the developments in hardware, computer vision and augmented reality we need to get us there.

This year we have seen unprecedented hype around the wearable computing market. Of course, the big headline was Facebook's $2bn acquisition of Oculus VR, maker of the Rift virtual-reality headset, which has earned a cult following amongst gamers (despite not actually selling a consumer product as yet). Meanwhile, Google Glass is receiving constant press attention, and "smart watches" from the likes of Samsung and Sony are hitting the market now, so it's surely only a matter of time before Apple makes good on all the rumours and enters the fray too.

They may seem futuristic, but these products aren't just science fiction: wearable devices are already entering the mainstream. Driving the "quantified self" movement – gathering data about all aspects of your life in order to make quantifiable improvements – we have seen the recent success of Nike's FuelBand, Fitbit and the Jawbone UP. Now, users can measure everything from heart rate and fitness levels to sleep patterns and how many hours they've spent sitting on the sofa watching Netflix. With big brands putting their marketing power behind these devices, we're seeing the beginning of the wearables revolution.

Admittedly, the first iterations of wearable devices are generally clunky and large – useful, perhaps, but not something you would actually want to wear. But the current state of wearables is like looking at television in the 1940s: technological change will make them so refined and sophisticated that in 30 years' time, we will barely recognise today's "first generation" of wearables.

Indeed, the wearable devices released so far are only scratching the surface of what's possible. One of the most powerful ways we'll use wearables in the future will be by taking advantage of recent advances in the field of computer vision – where computers can actually recognise and understand the environments they are seeing, just as a human would.

Let's imagine a world where I am wearing some kind of device that understands everything I am looking at – perhaps this will take the form of glasses with a tiny camera in them, or maybe a chip in specially designed contact lenses. So, let's say I'm about to install the new router in my house. Now, as I look at the router, the computer recognises what it is seeing and automatically overlays useful information into my field of view – this is known as "augmented reality". As each part of the scene is recognised, the information is shown in context – so I see where to plug in each wire, and where to connect my phone line. Or perhaps I am cooking – a rare event, so I need a recipe book to help me out. Instead of continually looking back at the recipe book, I can now see a video placed in my line of sight, showing me step by step what to do, based on what I'm doing at each stage of the cooking process.

These ideas are certainly helpful in a home setting, but the possibilities are endless and will be transformational across many markets. Imagine automatic translations of foreign text, completed as soon as you look at the script. Historic information could appear as soon as you look at a landmark, or suggestions of what else you might like to visit. In healthcare, surgeons in operating rooms will be able to see augmented blood vessel overlays on the organs that they are operating on.

At the moment, wearables have a long way to go to overtake the smartphone market. But, for a number of reasons, it's clear that wearable computing will be the future. Whilst you might treat your mobile as an extension of your body right now, it introduces an unnatural division between us and the real world. Think about pulling your phone out of your pocket every time you want to look something up, or trying to enter text-based searches on a small screen and facing the wrath of autocorrect: this just isn't how we're going to interact with information in the future. With wearables, we have the promise of information delivered to us instantaneously, overlaid directly into our field of vision without the need for manual searching – all without the restrictions that a hand-held screen places on us.

The beginnings of the wearables revolution have not been without their critics. Some complaints centre on privacy: if you are constantly wearing a lens, what is to stop you recording anyone and everyone you meet? We've already seen reactions against this – in Japan and South Korea, any device is required to make a shutter sound when it takes a picture, alerting everyone in the vicinity. However, I feel that privacy is an ever-changing norm: for example, in the late nineteenth century, when the camera first became widely available to consumers, some governments responded by banning photography in public places – an abhorrent thought to today's selfie generation.

Whilst the software for wearables is advanced, hardware issues are currently slowing adoption. Not only are wearables still fairly expensive to own, but no one has produced a truly compelling design: most wearable devices today are still a bit bulky and unwieldy. Yet looking to the future, Moore's law will apply here as it does across technology: further development of microelectronics will allow the next generation of devices to be smaller and more powerful – products that can truly target the mass market.

As the barriers to the mass adoption of wearable devices erode, we will see these products shift away from being aimed at early tech adopters and towards a broader base of users. Here they will become ingrained in everyday life in much the same way as the smartphone is now, continuing to shake up our lives for years to come.

A version of this article appeared in Cambridge News.