Data scientists can over-trust explainability tools and put AI in production too soon

AI explainability and transparency are rated as top concerns by regulators. Bias and fairness are increasingly top-of-mind as well, which raises the stakes for AI developers, who need to be able to interrogate and understand their models.

A host of tools has been developed for this – IBM’s AI Explainability 360, Google’s What-If Tool and LIME (Local Interpretable Model-Agnostic Explanations) from the University of Washington, for example. These tools are designed to help data scientists understand the underlying model: which features matter most and how it makes its predictions.
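
To make the category concrete, here is a minimal sketch of how a tool like LIME is typically used to explain a single prediction. It assumes the open-source `lime` package and scikit-learn are installed; the dataset and model are illustrative placeholders, not anything from the research discussed below.

```python
# A minimal, illustrative sketch of explaining one prediction with LIME.
# Assumes the open-source `lime` package and scikit-learn are installed;
# the dataset and model are placeholders, not from the study described here.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# Any black-box classifier will do; LIME only needs its predict_proba function.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Fit a simple local surrogate around one test instance and report the
# five features that most pushed its predicted probability up or down.
explanation = explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)

for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The output is a short, tidy list of the features that pushed this particular prediction up or down – and, as the research below suggests, it is precisely this kind of tidy output that is easy to accept without asking whether it really reflects how the model behaves.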

New research raises concerns about how these tools are being used in practice. Researchers from the University of Michigan and Microsoft undertook a series of studies – both in-depth, in-person tests and surveys – and found predictable, yet complex, failures in how explainability tools are used.

Explainability tools are built for purposes specific to AI, yet if data scientists don’t fully understand a tool’s capabilities, or hold an inaccurate mental model of what it can do, they can over-trust the model. Accurate and realistic expectations tend to produce more principled evaluations and more careful decision making, and can make up for differences in experience between individual data scientists. Without an accurate mental model, even the most experienced data scientists can miss red flags.

Visualization tools also turned out to be problematic. Visualization is powerful for highlighting errors and understanding performance, but visualizations can also be distracting, provide false comfort and stop people from digging deeper. The researchers point out that visualizations can provoke users to “think fast,” relying too easily on intuition and “system 1.” This refers to Kahneman’s model of human cognition, in which system 1 makes quick, automatic decisions while system 2 reasons more deliberately and engages more deeply before deciding. The faster, more intuitive decision making that visualizations encourage proved error prone when people didn’t accurately understand the tool or when the model itself was complex.

Finally, social context is important. Many explainability tools are open source, which lends them a level of social acceptance and a sense that they are less fallible and their results require less scrutiny. The ultimate outcome was that data scientists were biased towards putting AI into production too early, with less critique and more trust than was warranted.

These results might at first seem counterintuitive; they are certainly counter to the intent of the tools’ designers. But in many ways they are what we should expect. Humans have complex motivations, and there are often tensions between social, technical and cognitive factors. What’s interesting about this research is how significant a role social expectations and design played in people’s willingness to question manipulated (and absurd) outputs. The data scientists’ experience offered little protection against inaccurate expectations and whizzy visuals.

This research highlights how excellence in human-centered AI requires attending to many different aspects of the AI development process, including helping data scientists manage complex and competing objectives and tasks.
