How to Think About Personalization and Not Over-personalize


Personalization is a valuable promise of AI but there is a core tension at play – how to personalize without reducing people’s ability to decide for themselves.

Think of a customer walking into a store. Imagine that, behind the scenes, an AI gives the employee a personalized prediction about that customer.

Predictions are at the heart of AI. Understanding how predictions drive customer experience is vital: predictions are sometimes simply wrong because of how the math works, and when we try to personalize them they can fail for many other reasons too.

In AI design we worry a lot about incorrect predictions. Incorrect predictions are bad and they are going to happen, so the whole design process pivots around them. The stakes are high when working with human emotions, so personalization is a particularly important area to design from an ethical perspective. An incorrect prediction can make people feel broken. Why don’t I feel like that? What’s wrong with me?

Let’s run through how true and false predictions might play out once AI is “in the wild.” 

The customer experience could be seamless, efficient and easy. But how good does it have to be the first time for the whole idea not to be abandoned? That likely depends on how expectations are set and on people’s mental models. If the experience were built around a voice technology rather than a human, expectations would likely be very high. One failure can be enough that people abandon the idea and never return. You’ve lost your chance to show them more.

How about the 1000th use? How many failures has the employee been through, with odd suggestions and the need to override the AI? What biases have been revealed, and what biases have been eliminated? Has the AI adapted, as the humans surely will, turning a frictionless experience into a detached one? What happens when that very predictability makes everything passive?

Predictions that feel creepy are predictions that break our intuition about what we know about ourselves and what others know about us. This can happen for several reasons: we lose the obscurity we rely on (an obscurity lurch), data about us appears in a place we never intended it to be (context collapse), or we sense that someone has figured us out in a way that feels just wrong.

Creepiness is a privacy invasion. Our digital selves are inferences built from data about us as well as others’ data. We no longer need to care as much about what we volunteer as data as about the outputs: how do others see us?

What about when a prediction is so correct that the customer didn’t even know it herself? A moment of delight, indeed: an unknown preference, a shift in our perception of the world, a moment of serendipity or cultural curation that humans delight in. Pleasure, novelty, learning. This is good, right? It’s the goal of AI in a consumer-centric world: being able to predict what we like makes money, but making us like what can be predicted makes even more.

How is this so? Because AI can know things we can’t. A lot of data extraction is passive, unavoidable and undetectable. AI picks up on things just outside our awareness, or below the level of consciousness, or that prey on our weak wills, capitalizing on the delta between our current preferences and the preferences of our future selves. AI can even know things we have no human ability to know.

Over time, a human with agency should come to question this. A human with agency doesn’t take what is given to them and experience it the same as someone else. What one person loves, the other hates. As individuals we make sense of what happens to us, create our own meaning, have our own experiences. A human who never does this, who never reflects, who never actively chooses, who experiences the same thing as someone else, is a human who is more machine than human. 

This is the core tension with personalization: the seduction of ease, where we slide into technology overuse and dependency. We evolved for social problem solving. As much as we don’t like it, unpredictability is what fosters connection. We reflexively learn and adjust our mental models, knowledge and preferences. We work with others to deal with the unplanned, and we leverage trust in them as a shortcut to forming our own beliefs.

Personalization is one of the core promises of AI, but it is in constant tension with our agency as individuals. There are ways to draw the distinctions. What is the system prioritizing: human learning or machine learning? Does the system preserve privacy and reveal its inferences in a meaningful way, or is there no transparency? Are users’ actions treated as distinct from their inner thoughts, or does the system exploit our bias toward perceiving the familiar as safe? Is the user’s future their own, or is it guided by the system’s approximation of their past decisions?
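One way to make these questions operational is to treat them as a design-review rubric that a team answers for each personalization feature. The sketch below is purely illustrative: the field names, the pass/fail rule, and the `PersonalizationReview` class are assumptions for the sake of the example, not an established framework.

```python
from dataclasses import dataclass, fields

@dataclass
class PersonalizationReview:
    """Hypothetical rubric for the agency questions above."""
    prioritizes_human_learning: bool      # or is the system optimized only for machine learning?
    reveals_inferences: bool              # are inferences about the user surfaced transparently?
    separates_action_from_identity: bool  # are actions treated as signals, not inner thoughts?
    supports_user_divergence: bool        # can the user's future diverge from their predicted past?

    def preserves_agency(self) -> bool:
        """Pass only if every question is answered in the user's favor."""
        return all(getattr(self, f.name) for f in fields(self))

# Example: a design that conflates what users do with who they are fails the review.
review = PersonalizationReview(
    prioritizes_human_learning=True,
    reveals_inferences=True,
    separates_action_from_identity=False,
    supports_user_divergence=True,
)
print(review.preserves_agency())  # False
```

The all-or-nothing rule is a deliberate design choice here: trading away any one of these dimensions undermines the others, so a weighted score would hide exactly the failures this checklist exists to catch.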

As AI designers, these distinctions are ethical considerations. They speak to the true goal of the system. Preserving human agency while offering the benefits of personalization is the only human-centric approach.


At Sonder Scheme, we help humans win in the age of AI, with practical services, training and tools. We combine deep expertise in AI with decades of experience working with leading organizations including Adobe, Apple, Cisco, Fonterra, Google, Morgan Stanley, National Headstart Association, the New York Times, Quartz, Transpower NZ, US Department of Energy and the World Green Building Council.
