AI is operating under the wrong paradigm, according to Tristan Harris, the director and a co-founder of the Center for Humane Technology. Harris, who is well known for his earlier work as a design ethicist at Google, where he studied the ethics of human persuasion, is passionately advocating for systemic change in the tech industry.
Harris understands design and is eloquent in explaining how humans are degraded by technology design that flows from the tech giants' current dominant paradigm.
He outlined this paradigm in a recent talk at Reinvent, where he detailed the links between the technology business model, current design approaches, and the impact on individuals and society.
The dominant paradigm today is:
- Give users what they “want”
- Disrupt everything
- Technology is neutral
- Who are we to choose?
- Growth at all costs
- Design to convert users
- Obsess over metrics
- Capture attention
This enables the tech platforms to minimize their own responsibility and, at the same time, turn human attention into a commodity which can be bundled, bought and sold.
Technology design that supports this paradigm manipulates and exploits human psychological traits, leading to:
- Information overload because designs aim to hit the limits of human cognition
- Addictive use via exploitation of dopamine pathways
- Mass narcissism by encouraging endless social validation
- Fake news because tech exploits human confirmation bias
- Increasing polarization because outrage reinforces outrage
- Bots and deep fakes that destroy trust
The end result is that technology today exploits human weaknesses and overwhelms human strengths, which Harris calls “human downgrading.” Humans are left with short attention spans, social isolation, teen depression and suicide, the breakdown of sensemaking, conspiracies and extremism, and a post-truth world.
This human downgrading makes it impossible for humans to work together to solve the biggest problems, in particular the climate crisis. Because human attention is commodified, traded and exploited in ways that play to our “Paleolithic brain,” we are unable to effectively use our higher-order social and cognitive skills – collaboration, truth-finding, evidence-based discovery, sensemaking.
Harris proposes a new paradigm for technology design:
- See all design choices in terms of human vulnerability
- Find and strengthen human brilliance
- Recognize that we are constructing the social world
- Deal with the fact that choosing is inevitable
- Bind growth with responsibility
- Design to enable wise choices
- Obsess over what really matters
- Nurture awareness
Harris proposes that designers look at current problems through this lens. Take, for example, the problem of information overload. Designing an AI with a mindset of human upgrading rather than human downgrading would produce a completely different set of creative solutions. Instead of treating it as desirable for technology to push us to our cognitive limits, designers would find ways for AI to make us less vulnerable to information overload and better able to make good choices as individuals.
He’s optimistic that this is doable, because fundamental change would only require about 1,000 people in Silicon Valley to change their ideas about technology.
But since the real problem is the business model of big tech, Harris acknowledges that regulation, along with external and internal pressure for change, also has a role to play.