People frequently ask for advice and then ignore it, dislike being bored yet are happily lazy, hate to lose more than they love to win, and expect a degree of intelligence from AI that consistently lets them down. Empathy evolved to help humans learn from others’ mistakes. Symbolic and metaphorical thinking evolved so recently that our ability to distinguish between the literal and the metaphorical is poor. People see faces in clouds and attribute human characteristics to non-human objects. As Cassie Kozyrkov, Chief Decision Scientist at Google, says, “if I sew two buttons onto a sock, I might end up talking to it.”
For product managers and designers using AI tools, these features of human psychology are especially important because of the learning nature of modern AI systems. People come to an AI product with a mental model of how the AI works: how it gathers its information, how it is supposed to interact, and how truly human-like it is going to be. Important real-world examples can tell us what to be aware of when designing AI-enabled products so that users are able to get the best from the AI.
Algorithms often outperform humans when it comes to estimating anything from the popularity of songs to romantic matches to geopolitical and economic forecasts. A widespread conclusion is that people distrust algorithms, but this has been questioned by research suggesting that people readily rely on algorithmic advice. Not only are people more likely to choose algorithmic advice over the advice of other people, they are likely to choose it over their own judgment. Fascinatingly, though, this effect is not uniform: less numerate people and experts are more likely to ignore algorithmic advice, and the accuracy of their judgments suffers accordingly.
This is critical knowledge for product designers. For example, in the case of an AI-enabled product for medical specialists, the AI must work in tandem with the professional’s desire to draw on their years of diagnostic experience, and sit comfortably with their intuition rather than go up against it. For products aimed at less numerate users, choice architecture – nudging – needs to proceed in small steps and prioritize feedback. Design requires a solid understanding of the mental model the user is likely to apply to the AI, and the ability to explain it in the simplest terms possible when users first encounter the AI.