Ethics and governance in AI at the House of Beautiful Business, Lisbon 2019


Here I’ve published my workshop opener from the House of Beautiful Business’s fabulous 2019 event in Lisbon.

We have a problem in AI. On one hand, we have engineers, AI scientists and researchers who understand and drive the technology. This gives us a techno-centric view of the solutions: they wrestle with the math behind various measures of fairness, or with how to develop a technical workaround for accountability.

On the other hand, we have ethicists who are expert at assessing ethical issues but don’t necessarily have practical guidance for what to do in a given circumstance.

So there’s a huge gap: neither group has a clear line of sight to the on-the-ground issues faced by the people affected by the technology. This is an issue of governance. It is an issue of how companies and individuals step up to the challenge of managing thinking machines and machine employees.

Privacy is disrupted

Privacy is fundamentally disrupted in the age of AI. Privacy regimes are designed around inputs – what information we offer up to companies. AI forces us to think about outputs – what inferences a company makes about us. These inferences aren’t made just from our own data; they are made from the data of others as well. What are our rights to know how we are seen?

And what about surveillance? In the US there’s a move to get facial recognition banned. It’s truly a unique technology. We are used to walking around in public and being obscure, because we know that others don’t remember us, or that even if our faces are caught on camera, it’s too time consuming to track us. Facial recognition and machine surveillance represent a fundamental shift in the power balance, and it’s seemingly happened overnight.

But the biggest disruption in privacy is to our right to grow up, to change, to explore, to make mistakes. Machines never, ever forget, so if we, or our kids, or someone else who looks like us screws up, the machine remembers. And because it learns from historical data, this can become what it says about our future too.

Machines think in alien ways – patterns in our mouse clicks or eye movements, undetectable to us, can give away our doubts and hesitations. This opens up the opportunity to prompt us with a nudge that shifts our inner, unstated preferences to where the AI wants us to go. So what we potentially lose in the age of machines is our autonomy.

Bias is perpetual

A lot has been written about bias so I won’t belabor it here. The simple truth is that data is biased. It reflects digitized human experience, which favors those who have been digitized longer and, say, labeled as successful. Bias is difficult, but not impossible, to deal with. It can even be useful: maybe we want to bias towards a certain good outcome, an aspirational outcome, rather than simply reflect the “truth in the data.”

The more difficult issue with bias is discrimination and proxy discrimination. Powerful AI will find correlations that humans can’t find; this is part of its value. But if an AI isn’t allowed to use certain data because it’s illegal to use – say race or gender – and that data is predictive of a certain outcome, then the AI will naturally find proxies without a human knowing. As the proxies it finds become less and less intuitive, the AI will not be able to disassociate its predictions from the attributes that shouldn’t be used. And if an AI can’t, then a human most certainly won’t be able to.
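To make proxy discrimination concrete, here is a minimal sketch in Python on synthetic data (every name and number below is invented for illustration, not drawn from any real system). The model never sees the protected attribute, yet its predictions still track it through a correlated proxy feature:

```python
# Minimal sketch (illustrative, synthetic data): even when a protected
# attribute is excluded from training, a model can reconstruct it
# through a correlated proxy feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute (e.g., group membership), never
# shown to the model.
group = rng.integers(0, 2, n)

# A proxy feature (think zip code or shopping pattern) that is highly
# correlated with group membership.
proxy = group + rng.normal(0, 0.3, n)

# An innocuous feature with no group correlation.
other = rng.normal(0, 1, n)

# Historical outcome that (unfairly) depended on group.
y = (group + rng.normal(0, 0.5, n) > 0.5).astype(int)

# Train WITHOUT the protected attribute.
X = np.column_stack([proxy, other])
model = LogisticRegression().fit(X, y)
preds = model.predict(X)

# The model's predictions still track the protected attribute.
print("positive rate, group 0:", preds[group == 0].mean())
print("positive rate, group 1:", preds[group == 1].mean())
```

The point of the toy: removing a protected column from the training data does not remove its influence when proxies remain, which is why auditing outputs matters as much as auditing inputs.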

Ethics has many layers

Fairness is math. Well, sort of. There are mathematical ways to define fairness – upwards of 20 of them – but many are mutually incompatible: when groups differ in base rates, satisfying one definition forces you to violate another. So fairness quickly becomes judgment.
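A toy calculation shows the incompatibility (the numbers are invented for illustration). Two groups have different base rates of being qualified; if we enforce equal selection rates across groups (demographic parity), the groups’ error rates necessarily diverge, even in the best case:

```python
# Minimal sketch (toy numbers): when base rates differ, demographic
# parity and equal true-positive rates cannot both hold.
qualified = {"A": 60, "B": 30}   # out of 100 people in each group
selected  = {"A": 50, "B": 50}   # demographic parity: equal selection

# Best case for each group: select qualified people first.
for g in ("A", "B"):
    tp = min(selected[g], qualified[g])
    tpr = tp / qualified[g]                      # true-positive rate
    fpr = (selected[g] - tp) / (100 - qualified[g])  # false-positive rate
    print(f"group {g}: TPR {tpr:.2f}, FPR {fpr:.2f}")
# group A: TPR 0.83, FPR 0.00
# group B: TPR 1.00, FPR 0.29
```

No threshold tweaking escapes this: with unequal base rates, equal selection rates and equal error rates cannot hold at once, so someone has to choose which notion of fairness to sacrifice.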

AI is the arch manipulator. We are all by now very familiar with the problems of addictive tech. But we are less aware of, for example, the new science of nudging. I worry about this one. Then again, maybe I shouldn’t, because humans are really good at resisting nudges: our psychological reactance tends to make us reject others’ morals and judgments. But then I worry again, because we may be heading to a world where good intentions of improving us just become the new micro robot overlord.

There’s a really important component of ethics that we need to be alert to: where the science doesn’t support what the AI is claimed to do. Divining inner emotional state from facial expression is the latest example of social science that doesn’t stand up to scientific rigor, which makes using it for, say, recruitment questionable in my mind.

Accountability isn’t a one-shot deal

Accountability is a constant challenge. An organization can say that humans are accountable, but then the human defers: well, the technology, the data, the decision is “right.” There are so many flavors of this that it quickly becomes less black and white than it sounds.

Competition is under scrutiny

As the US catches up, in its own way, to Europe, antitrust is the new black. Does your pricing bot collude with competitors’ pricing bots? Does Alexa reduce choice because voice can only give you two, maybe four, offers, and they go to the highest bidder? Or is the problem more that it’s another way to exploit the overlap between convenience and laziness?

The superplatforms are at the center of this. Everyone talks about their data, but I worry more about their infrastructure. After all, it’s not the data that inherently gives them an advantage; it’s the infrastructure that enables the optimization of the algorithms that influence our choices. And as differential privacy and synthetic data technology mature – making the data itself easier to replace or anonymize – infrastructure matters even more.
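For readers unfamiliar with differential privacy, here is a minimal sketch of one of its standard building blocks, the Laplace mechanism, which releases an aggregate statistic with noise calibrated to hide any individual’s contribution. This is an illustrative toy, not any platform’s actual implementation:

```python
# Minimal sketch of the Laplace mechanism, one building block of
# differential privacy: publish a count with calibrated noise so no
# individual's presence can be confidently inferred from the output.
import numpy as np

rng = np.random.default_rng(42)

def private_count(true_count: int, epsilon: float) -> float:
    # The sensitivity of a counting query is 1: adding or removing one
    # person changes the count by at most 1, so noise scales as 1/epsilon.
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

print(private_count(1_000, epsilon=0.1))   # more noise, more private
print(private_count(1_000, epsilon=1.0))   # less noise, less private
```

The design choice is the privacy budget epsilon: smaller epsilon means more noise and stronger privacy, and the raw count never needs to leave the infrastructure that holds it – which is exactly why the infrastructure, not the data, is the moat.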

The real issue in competition is the winner-take-all aspect of AI. AI amplifies a digital advantage: the earlier the advantage arrives and the faster a company can act on it, the bigger the advantage compounds.

Equality

Or rather inequality. Super companies, super cities, super employees, super citizens. You’re either in the club of owning and running the machines or you’re not. 

Employment – or lack of it – drives pessimism

We’ve studied this area for years and I’ve come to the conclusion that it’s fundamentally impossible to accurately predict what happens with jobs. It’s useful to analyze skills and value but as jobs are divided up into automatable tasks, new jobs arise and new problems emerge. There’s an information asymmetry between what we know we might lose and what we don’t know we might gain.

What is predictable is that we should be mindful of what people want – at the level of society and at the level of the individual.

At the level of the individual, people like to do certain things. It’s complex to know exactly what these are, but they are generally social: doing a favor that makes someone feel good, keeping a promise, mastering a skill. So augmenting that human with AI is probably better than replacing them. There are also plenty of great examples where AI plus human – a hybrid approach – outperforms either alone.

At the level of society, there are also many problems we need AI for – problems where we can’t see the patterns or relationships ourselves. But if there is fear and pessimism, this can turn an entire society against tech.

So what’s the balance? I expect it’s more a random walk than a planned path. There is no question in my mind that we are heading into an era where ethics in AI and governance of machines will give rise to a whole new set of processes and skills inside organizations. People will need first to understand how machines learn, then to understand how best to design, manage and mitigate risk from machine employees.

