AI needs humans for the “last mile”

AI makes predictions from models trained on vast datasets. It can be unreliable when a problem is new, unknown or unpredictable, which is dangerous wherever the impact of failure is unacceptable. AI that is 99.9% reliable is more dangerous than AI that is 70% reliable, because when reliability is high humans stop monitoring the process and let the skills needed to intervene lapse.
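To make the arithmetic concrete, here is a minimal sketch in Python under one loud assumption that is ours, not an established result: human vigilance decays as AI reliability rises, so the share of AI errors the human supervisor actually catches collapses. The specific catch rates are illustrative only.

```python
# A toy model of the reliability paradox. Assumption (ours, for
# illustration): human vigilance decays as AI reliability rises, so the
# share of AI errors the human catches falls from ~90% to ~5%.

def uncaught_per_task(ai_reliability: float, human_catch_rate: float) -> float:
    """Probability that a single task ends in a failure nobody catches."""
    return (1.0 - ai_reliability) * (1.0 - human_catch_rate)

for reliability, catch_rate in [(0.70, 0.90), (0.999, 0.05)]:
    print(f"AI reliability {reliability:.1%}: "
          f"{uncaught_per_task(reliability, catch_rate):.4%} of tasks fail silently; "
          f"{1 - catch_rate:.0%} of AI errors slip past the human")
```

Per task, the more reliable system still fails less often. But when it does err, the human backstop almost never works: 95% of its errors slip through, versus 10% for the system with an engaged supervisor. Where a single failure is unacceptable, that collapsing backstop is the danger.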

To design an AI system that replaces humans in routine, predictable tasks while still relying on them for critical ones, designers need to understand how new knowledge is created in the system. The chain of discovery starts on the frontlines, where humans are uniquely positioned to notice an anomaly or a persistent problem. Machines can now recognize these situations when the data are clear enough, but humans are best suited to navigate the mess and make sense of the world.

Where there is limited environmental control, say, on a highway, a human driver will be needed in a self-driving car for a long time yet. But if the human driver isn’t actually driving most of the time, what do they do? How do they remain attentive, skilled and at the ready when there is usually nothing to do? The better the self-driving capability gets, the less they do and the less they pay attention, yet, paradoxically, the more their skills will be needed in any emergency handover. Many industries have faced similar paradoxes: pilots and industrial control operators, for example.

It turns out that people are not good at sitting passively and watching for irregularities. People are motivated to fill gaps in information; curiosity is one of the greatest human cognitive assets. Designing AI systems that keep humans engaged and attentive to the system state is a core challenge, and one that will require significant research and design experimentation as self-driving cars enter the market.
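One design response, sketched below as a hypothetical (none of these names, thresholds or behaviours come from any real vehicle system), is to stop treating the human as a passive watcher: probe for attention regularly and escalate when confirmation lapses.

```python
# A hypothetical engagement monitor: rather than asking the driver to watch
# passively, the system requires periodic confirmation of attention and
# escalates when confirmation lapses. All thresholds are illustrative.

def escalation_level(seconds_since_attention: float) -> str:
    """Map time since the driver last confirmed attention to an action."""
    if seconds_since_attention < 30:
        return "monitor"    # driver recently engaged; do nothing
    if seconds_since_attention < 45:
        return "prompt"     # visual nudge, e.g. "touch the steering wheel"
    if seconds_since_attention < 60:
        return "alarm"      # audible alert and gradual speed reduction
    return "safe_stop"      # treat the driver as unresponsive; pull over
```

The escalation ladder matters more than the exact numbers: each rung hands the driver a small, active task, working with the curiosity described above rather than against it.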

AI becomes dangerous when humans lack the agency to guide it and to intervene in a commonsense, intuitive way. The trick is to understand how AI fails and how humans fail, and to design for a natural complementarity.
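A minimal sketch of what that complementarity might look like as a routing policy, assuming (our assumptions, not the article's) a calibrated model confidence score and an out-of-distribution "novelty" score: novel or uncertain cases go to humans, and a small slice of routine cases is deliberately kept manual so intervention skills don't lapse.

```python
import random

def route_case(confidence: float, novelty: float,
               practice_rate: float = 0.02) -> str:
    """Decide who handles a case. confidence and novelty are assumed to be
    scores in [0, 1] supplied by the model and a drift detector; the names
    and thresholds are illustrative, not a standard API."""
    if novelty > 0.8:
        return "human"          # new or unknown territory, where AI is least reliable
    if confidence < 0.9:
        return "human_review"   # AI proposes, human decides
    if random.random() < practice_rate:
        return "human"          # routine case reserved for skill maintenance
    return "ai"                 # routine and predictable: safe to automate
```

The deliberate 2% of routine cases routed to humans is the design choice that addresses skill atrophy directly, rather than hoping attention survives on its own.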
