Why all AI design should include power mapping


AI is frequently used by people in positions of power on people who have less power, and as AI diffuses across more and more users, that imbalance may become even more concentrated. An effective way to understand how power is enacted in an AI system is to start with a power mapping exercise - and to think about "studying up": looking up at those who have the most agency and autonomy.
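
A power map can start as something very simple: a list of everyone who touches the system, how much agency each one has, and which decisions they control. The sketch below is a hypothetical illustration in Python, assuming a workplace performance-scoring system; the stakeholders, agency scores, and decisions are assumptions made for the example, not part of any formal power-mapping method.

```python
# A minimal sketch of a power mapping exercise, assuming a workplace
# performance-scoring system. The stakeholders, agency scores, and
# decisions below are illustrative assumptions, not a formal methodology.
from dataclasses import dataclass, field


@dataclass
class Stakeholder:
    name: str
    agency: int                       # 0 = acted upon, 10 = sets the rules
    decisions_controlled: list[str] = field(default_factory=list)


stakeholders = [
    Stakeholder("Executive sponsor", 9, ["whether the system is deployed at all"]),
    Stakeholder("Manager", 7, ["the score each employee receives"]),
    Stakeholder("Data science team", 6, ["what gets datafied as 'performance'"]),
    Stakeholder("Employee being scored", 2, []),
]

# "Studying up": start the analysis with whoever holds the most agency.
for s in sorted(stakeholders, key=lambda s: s.agency, reverse=True):
    decides = ", ".join(s.decisions_controlled) or "little in this system"
    print(f"{s.name} (agency {s.agency}): decides {decides}")
```

Sorting by agency and reading from the top is one concrete way to "study up" before any modeling work begins.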

Not only is this good ethical practice, but it can also be used to improve the accuracy of an AI's predictions. This is because of what researchers call a "theory of agency," which holds that prediction accuracy is a by-product of agency.

We can use performance management scoring as an example. Any proxy for performance is most directly a by-product of a manager's decision, rather than of the employee's actual performance: it is the manager's decision that gets datafied first, ahead of other measures of employee performance. This means that an AI built to predict how a manager will score an employee will...


