The science of emotions isn’t keeping up with AI, so this is what you can do


Just as a perfect storm of pushback against face-reading technology gathers, and Amazon adds the capability to its facial recognition software, a significant new review of the research concludes there is no scientific justification for using facial expressions to determine someone’s emotional state.

There’s a common view that we can infer how someone feels from how they look. AI that automates this inference and applies it to tasks – such as understanding how customers react to ads or how well-matched an applicant is for a job – is offered as a feature in products from a range of companies, including Amazon, Microsoft and Google. But now we know that use of the technology is way ahead of the science.

There’s no such thing as a facial “fingerprint” of emotions

It’s tempting to believe there is a basic set of emotions, each producing the same expression no matter who we are or what context we are in. However, the researchers are clear about the state of the science: facial movements are not reliable expressions of emotion, so there is no fingerprint of emotional state that can be read from an expression.

Facial movements are tied to immediate context rather than to an inner state, which means it’s vital to know someone’s goal. For instance, if the goal of being angry is to overcome an obstacle, it may be more useful to scowl, smile or laugh than to furrow the brows, widen the eyes and press the lips together – the configuration often presented as one standard for anger.

We don’t know enough about emotions

Emotion categories can’t be reduced to a single shared set of muscle movements. One recent study mined 7 million images from the internet and identified multiple facial configurations associated with the same emotion category label and its synonyms: 17 distinct facial configurations were associated with the word happiness, 5 with anger, 4 with sadness, 4 with surprise, 2 with fear and 1 with disgust. The different configurations were more than just variations on a universal expression – they were distinctive sets of facial movements.

The researchers stress just how little we actually know about emotion – so much so that users of these systems shouldn’t treat errors as random noise, the kind of variation a well-designed AI system can absorb. Even brain imaging studies that seek to link neuronal activity with dynamic changes in the autonomic nervous system find patterns only on an individual basis and do not replicate. Our current biological measures are not sensitive or comprehensive enough, and we should assume that the variation is unexplained rather than random.

Culture is critical

In an era of growing awareness of and concern about AI-derived bias, the findings get even more troubling. There is no basis for generalizability, because context and culture are not accounted for and likely matter more than what the muscles on people’s faces do. People outside of western cultures do not always infer internal psychological states from facial movements and are more likely to assume that other people’s minds are not accessible to them, a phenomenon known as “opacity of mind.” Instead, facial movements are perceived as actions that predict future actions in certain situations: a wide-eyed gasping face, for example, is labeled as “looking.”

Finally, the researchers highlight that most studies are methodologically flawed. Many use a limited set of emotion labels, which has the effect of showing more consensus than actually exists. Studies that rely on posed expressions by actors, which are often exaggerated, are basically bunk.

This is what you can do if you use emotional analysis AI

We’ve put together some questions you can ask and practical things you can do to improve performance and to prepare for customer or employee questions.

  1. What other data are possible? Facial expressions convey rich information, but the more you can gather about the context the person is in, the better. Consider deeper information about that context: what is the person’s goal in the moment, their inner state (say, their metabolic state or past experience) and their external setting (work, school, car, home)? These factors are likely to be even more important than facial expression when it comes to assessing someone’s internal emotional state.

  2. How is the AI trained, and specifically, how diverse is the dataset? If training has been done on “posed” expressions, assume any predictions relating facial configuration to emotional state are invalid. Metrics on repeatability are vital, but be realistic; even biological studies fail to replicate. If our current biological measures are not sensitive or comprehensive enough, it is dangerous to assume AI does better. It is inherently harder to manage AI failure when false positives and false negatives are unexplained rather than random, so consider tuning the precision/recall tradeoff toward more conservative predictions, even at the expense of overall performance (see the sketch after this list).

  3. What tests are possible? Query and test the system to get a transparent, explainable account of how the emotion AI reaches its conclusions, and look for ways to supplement or extend the analysis beyond facial expression data alone.

  4. Is there a human who’s responsible at all times? When the AI is used to replace humans in a high-stakes situation, ensure there is a human-in-the-loop, a dynamic approach, or some ability to give feedback or opt out.
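
To make the precision/recall point in item 2 concrete, here is a minimal, generic sketch of tuning a classifier’s decision threshold so that an emotion label is only reported when the model is highly confident. The synthetic data, the logistic regression model and the 0.95 precision target are all placeholders for illustration; they are not taken from the review or from any vendor’s product.

```python
# Sketch: pick a decision threshold that favors precision over recall,
# so the system reports an emotion label only when it is very confident.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in data: feature vectors per observation, binary label for one emotion.
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0.7).astype(int)

X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
scores = model.predict_proba(X_val)[:, 1]

# Sweep thresholds and take the lowest one that reaches the target precision,
# accepting the recall (overall "performance") that is lost along the way.
precision, recall, thresholds = precision_recall_curve(y_val, scores)
target_precision = 0.95
ok = np.where(precision[:-1] >= target_precision)[0]
if ok.size:
    i = ok[0]
    print(f"threshold={thresholds[i]:.2f} "
          f"precision={precision[i]:.2f} recall={recall[i]:.2f}")
else:
    print("No threshold reaches the target precision; treat predictions as unreliable.")
```

The same idea applies to any deployed classifier: raise the bar for reporting a label, accept that fewer cases get labeled, and route the rest to a human or to “no determination.”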

Most of the emotion AI deployed today is based on one-way interactions, in which the AI receives limited or no feedback from the human being observed. In this regard, the researchers are optimistic about the potential for future research using digital humans. Digital humans are “vivid” and elicit genuine emotional engagement, which makes them especially promising for high-dimensional sampling of emotional events. There are a host of patterns and ensembles that humans track which machines currently do not. This is a potential research area for unsupervised learning, where an AI could help untangle some of this complexity and categorize emotional observations in a more sophisticated way.
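
One way to picture that unsupervised approach: cluster combined facial and context features and let the number of categories emerge from the data, rather than mapping everything onto a handful of predefined emotion words. The sketch below uses synthetic placeholder data and off-the-shelf clustering tools; it illustrates the general idea and is not the researchers’ method.

```python
# Sketch: let emotion categories emerge from data via clustering,
# instead of assuming a fixed list of "basic" emotions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Stand-in observations: e.g. facial action-unit intensities concatenated
# with context signals (voice pitch, heart rate, task, setting) per moment.
observations = rng.normal(size=(500, 12))
X = StandardScaler().fit_transform(observations)

# Compare several cluster counts rather than assuming six universal emotions.
best_k, best_score = None, -1.0
for k in range(2, 12):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)
    if score > best_score:
        best_k, best_score = k, score

print(f"best number of clusters: {best_k} (silhouette={best_score:.2f})")
```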

One company, Soul Machines, uses two-way interaction between real and digital humans to build a genuine level of emotional engagement—rather than simply measuring the human response. According to Greg Cross, Chief Business Officer, the company has found that humans prefer to interact face-to-face with digital humans and that business metrics such as customer satisfaction scores and closure rates are 25% higher with two-way emotional interaction than with chatbots or other forms of voice assistant.


Building ways to make machines relatable and trustworthy in the age of AI is going to be critical, and it will be up to AI companies and their users to make this technology ethical and effective while the science of emotions catches up.

Photo by Ayo Ogunseinde on Unsplash

At Sonder Scheme, we help humans win in the age of AI, with practical services, training and tools. We combine deep expertise in AI with decades of experience working with leading organizations including Adobe, Apple, Cisco, Fonterra, Google, Morgan Stanley, National Headstart Association, the New York Times, Quartz, Transpower NZ, US Department of Energy and the World Green Building Council.

