Facebook’s AI chief isn’t worried about AI dominance because AI doesn’t have testosterone

Sign up for our newsletter, Artificiality, to get our latest AI insights delivered directly to your email inbox.

Last week, Yann LeCun, the chief AI scientist at Facebook and a professor at NYU, held a webinar lecture on the future of deep learning. LeCun is one of the world’s uber AI researchers. He is, quite literally, designing the future of intelligent machines, and because he is at Facebook, intelligent social machines. At the very end of this lecture, he presented his views of the future of artificial general intelligence.

Here’s the slide.

[Slide from LeCun's lecture on the future of artificial general intelligence]

In LeCun’s view, super-intelligent artificial general intelligence will serve humans. It will not overtake us, it will not replace us, it will not degrade human experience.

The big caveat has to be: only if it's done right. Which is where his comment on testosterone is weird at best, dangerous at worst. LeCun believes we do not have to worry about AI taking over the world because AI doesn't have testosterone.

Testosterone’s effect on human behavior is perhaps the most oversimplified, meme-driven, misunderstood myth in human neurological science. Testosterone is not a one-shot route to domination. A plethora of recent research converges on the conclusion that testosterone’s influence on behavior is highly dependent on social context.

Testosterone does subtle things to behavior, and its effects are highly dependent on all sorts of other things that might be going on. One of the world’s leading experts on the biological basis of human behavior, Robert Sapolsky, professor of biology and neurology at Stanford University and author of Behave: The Biology of Humans at Our Best and Worst, as well as The Trouble with Testosterone, explains this beautifully. Quoting directly from Behave:

“Everything can be interpreted every which way. Testosterone increases anxiety – you feel threatened and become more reactively aggressive. Testosterone decreases anxiety – you feel cocky and overconfident, become more preemptively aggressive. Testosterone increases risk taking – “Hey, let’s gamble and invade.” Testosterone increases risk taking -“Hey, let’s gamble and make a peace offer.” Testosterone makes you feel good – “Let’s start another fight, since the last one went swell.” Testosterone makes you feel good – “Let’s all hold hands.” It’s a crucially unifying concept that testosterone’s effects are hugely context dependent.”

It gets even more intriguing. Sapolsky says that how we think testosterone will affect people influences behavior:

“Testosterone doesn’t necessarily make you behave in a crappy manner, but believing that it does and that you’re drowning in the stuff makes you behave in a more crappy manner.”

And a final word that reinforces how much of testosterone’s action is social, not biological:

“Testosterone makes us more willing to do what it takes to attain and maintain status. And the key point is what it takes. Engineer social circumstances right, and boosting testosterone levels during a challenge would make people compete like crazy to do the most random acts of kindness. In our world riddled with male violence, the problem isn’t that testosterone can increase levels of aggression. The problem is the frequency with which we reward aggression.”

It’s not possible to know whether LeCun’s outdated view of testosterone’s role in male aggression is somehow embedded in Facebook’s AI. But it’s probably safe to assume that Facebook’s AI design has overlooked testosterone’s potential to promote pro-social behavior in men.

We know that Facebook influences behavior. We know that Facebook’s AI governs what people see and that no one fully understands how this works, because AI is beyond human intuition. We know that AI requires a lot of human supervision and involvement, especially early in model development. We know that these tweaks and creative choices are indeed choices; made on the front lines of development, primarily by AI researchers and software engineers who have beliefs and ideas of their own. We know that we cannot know what these choices are, how they seed an AI’s early learning or how they may influence an AI’s predictions “in the wild.” We know that divisive content drives user engagement and that user engagement is all that matters to Facebook’s business model. We know that even when users flag violent content, they’ve typically engaged with it first, so this signals to the AI that the content is engaging. We know that this incentivizes creators to upload more of it.
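The feedback loop described above can be sketched as a toy model. This is a hypothetical illustration, not Facebook's actual ranking code: the event types, weights, and flag penalty are all assumptions. It shows how, when every interaction counts as engagement, a flag that arrives only after a user has already viewed and shared a post never fully cancels the engagement signal that post generated.

```python
# Toy model of an engagement-driven feed ranker (hypothetical; the
# weights and penalty below are illustrative assumptions, not any
# real platform's values).

ENGAGEMENT_WEIGHTS = {"view": 1.0, "comment": 2.0, "share": 3.0}
FLAG_PENALTY = 2.0  # applied only after the fact, once a user flags

def rank_score(events):
    """Score a post from a chronological list of event-type strings."""
    score = 0.0
    for event in events:
        if event == "flag":
            score -= FLAG_PENALTY
        else:
            score += ENGAGEMENT_WEIGHTS.get(event, 0.0)
    return score

# A user has to view divisive content before flagging it, so the flag
# arrives on top of engagement that has already been counted.
divisive = ["view", "share", "view", "flag"]  # flagged, but engaging
benign = ["view"]                             # quietly ignored

print(rank_score(divisive))  # 5.0 - 2.0 = 3.0
print(rank_score(benign))    # 1.0
```

Under these assumed weights, the flagged post still outranks the benign one, which is the incentive problem the paragraph describes: the flag is a weaker signal than the engagement that preceded it.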

We do know that we can question whether Facebook has a blind spot, because the AI has been built by people who made decisions about labeling complex social behaviors that are easily oversimplified, in this case that the desire for dominance is predetermined by biology. If the AI was initialized to understand male behavior as impervious to social context, we can question whether Facebook’s AI has been given a fair chance to promote equally status-seeking, pro-social male behavior over male aggression. We can question whether Facebook is more encouraging of violence than it needs to be, and would be, if the AI were more aware of the true nature of the human desire for dominance. We can question whether we are missing out on the powerful amplification effect that Facebook may be able to have on behaviors such as kindness, generosity and empathy. We can also question whether it’s possible to structure a social media feed to, as Sapolsky might say, reduce the frequency of rewarding aggression.


Facebook has over-trusted its AI to handle violent content. Inside the practical reality of Facebook development teams, shortcuts and heuristics abound, only to be caught long after the early design decision is made. In the case of adjustments to the newsfeed made in 2017: “To define news, the engineers pulled a classification system left over from a previous project—one that pegged the category as stories involving ‘politics, crime, or tragedy.’” This means that for quite some time, “news” didn’t include sports, science, health, business or any other category. Nobody at the company noticed, highlighting again how AI is not intuitive to humans.

At every level of code, at every decision point in the numerous science, engineering and design tradeoffs happening in the field of AI, a handful of people are making deep and important decisions about intelligence. They are almost all trained in only a narrow set of fields. They can’t be expected to understand everything. However, they often aren’t good at making space for others who do, on those others’ terms: psychologists, physicians, philosophers, biologists, poets, artists, historians, engineers from other disciplines who, when brought into the design stages, invariably point out that things are more complex than the AI designers would want them to be. Maybe it’s just because they feel that they have to move fast. There’s simply no time to be correct on the details. There’s simply no time to include a broader set of perspectives and backgrounds, and, sometimes, even truth.

Perhaps the most troubling aspect of LeCun’s opinion, buried deep in a highly technical webinar, is that he seems to be saying that AI isn’t dangerous as a force against humanity because it has no desire to dominate. To say that AI can’t be existentially dangerous because it has no biology is bizarre. We see dangerous AI every day—failures, bias, discrimination, invasions of privacy, deep fakes, surveillance. So far, what people do with AI is much of what makes AI dangerous. And the testosterone in the predominantly male AI community is influencing them to achieve the status targets presented to them: commercial success and dominance. How would testosterone influence the AI community’s work if its status target was, instead, social good?

AI is complex and the science is beyond the reach of most people. But the effect of AI will be too significant to leave decisions to only those with technical expertise. It’s far too easy for people with narrow academic backgrounds to make incorrect assumptions about the needs, desires and influences of an entire population. Perhaps the only protection we have is to make sure that all AI is developed with many diverse perspectives, which means a diversity of knowledge, education, genders, races, experiences and world views.

At Sonder Scheme, we help humans win in the age of AI, with practical services, training and tools. We combine deep expertise in AI with decades of experience working with leading organizations including Adobe, Apple, Cisco, Fonterra, Google, Morgan Stanley, National Headstart Association, the New York Times, Quartz, Transpower NZ, US Department of Energy and the World Green Building Council.

