New research from the University of North Carolina at Chapel Hill and the University of Maryland, College Park claims to have developed an AI capable of detecting deception from gait and gesture. Using a combination of psychology-based gait and gesture features, together with deep features discovered in the data, the team detected deceptive walking with an accuracy of 93.4%, an improvement of more than 16% over the previous state of the art.
The researchers believe this is the first AI of its kind. They have also released the first public dataset of labeled gaits and gestures covering both deceptive and natural walks. Cues included smaller hand movements and the velocity of the hands and feet.
The results seem a bit stereotypical. Deceivers were more likely to keep their hands in their pockets, to look around as if checking whether anyone was watching, and to touch their hair more often. Interestingly, deceivers and non-deceivers looked at their phones at the same rate.
It’s easy to see the applications for this: in public spaces, as part of the Ring doorbell system, in stores to anticipate and detect shoplifters. It is potentially a goldmine for the AI surveillance community. But we can also see the danger. Specifically, how the seeds of bad AI can be sown very early in development.
While the researchers had participants perform an elaborate series of walks designed to induce deceptive behavior, the experiment is fairly small and limited. From 88 participants, the researchers gathered 314 walking videos, which yielded 1,144 gaits and gestures and their associated deception labels. That’s a small amount of data that could, in theory, be picked up by any number of interested parties, who then add to the dataset while propagating its existing weaknesses. In this case, the bias is more a data gap than a data skew: the walking styles of 88 people shouldn’t seed how AI sees the walking styles of, potentially, everyone in the world.
AI can work in mysterious ways. There are well-known cases where AI engineers thought a model was learning, for example, to identify a melanoma, yet subsequent work revealed that it had in fact learned to identify a ruler. It turns out that doctors are more likely to measure a suspicious mole, so an image containing a suspicious feature is more likely to include a ruler. In this research, the AI could well be learning to identify pulled-up hoods and sunglasses as deceptive behavior.
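The ruler problem can be illustrated with a toy simulation. The numbers and the "ruler" feature below are hypothetical, not drawn from the melanoma study; the sketch only shows how a classifier that latches onto a confound can look accurate on confounded data and collapse when the confound disappears.

```python
# Hypothetical confounded training data: rulers appear in 90% of
# malignant images and 10% of benign ones, so "ruler present" alone
# looks highly predictive. Each item is (true label, ruler_present).
train = ([("malignant", 1)] * 90 + [("malignant", 0)] * 10 +
         [("benign", 0)] * 90 + [("benign", 1)] * 10)

def ruler_classifier(ruler_present):
    # The shortcut a model can learn: ruler => malignant.
    return "malignant" if ruler_present else "benign"

train_acc = sum(ruler_classifier(r) == y for y, r in train) / len(train)

# Deployment data without the confound: rulers appear at the same
# rate regardless of the true label.
deploy = ([("malignant", 1)] * 50 + [("malignant", 0)] * 50 +
          [("benign", 1)] * 50 + [("benign", 0)] * 50)
deploy_acc = sum(ruler_classifier(r) == y for y, r in deploy) / len(deploy)

print(train_acc)   # 0.9 on the confounded set
print(deploy_acc)  # 0.5, no better than chance once the shortcut breaks
```

The same mechanism could apply here: if deceptive walkers in the dataset happened to wear hoods or sunglasses more often, a model could score well in the lab while learning nothing about deception itself.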
The researchers point to a large body of literature on how people behave when they are intentionally trying to mislead others. Body movements are more implicit and less likely to be under someone’s conscious control. But there is no conclusive, single movement that can be universally applied. In reality, we should be very cautious about any claim to read someone’s intent and inner state of mind from their physical appearance, given what we’ve learned about the state of the science in inferring emotion from facial expressions.
Finally, there’s an issue with the intent of the research. This is really interesting work for grad students or AI researchers as an educational pursuit, but it shouldn’t be scaled up and used across potentially millions of people. There’s nothing to indicate that this is the intention here, but given how rapidly AI moves from lab to scale when people are motivated and unconstrained by ethical oversight, it’s a genuine risk.
It is not the prediction that is the problem; it is the correlation and inference. The limitations lie primarily in the underlying correlative assumption, but also in the training set and in highly variable physiological function, disabilities, etc. I would venture a guess that there are as many different gaits as there are shade gradients of skin tone, if not more, because of variations in physiology. Moreover, I can tell you from personal experience: I hobble more on some days than others, depending on how my RA is affecting my hip joints. The model would totally fall apart on any of the systemic arthritides, simply because of physiological, not psychological, correlations. This is yet another reason why it is important to clearly understand the underlying data and biology when applying statistical modeling. People can generate cool math and code models that have no practical applications. – Poland
The most public failure of “suspicious walking” is the NYPD’s stop-and-frisk policy. In her book Biased: Uncovering the Hidden Prejudice That Shapes What We See, Think, and Do, Jennifer Eberhardt points out that half of the nearly 1.3 million pedestrian stops were based on “furtive movement,” yet there are no definitive descriptors for the movements that count as furtive or suspicious. Instead, the pattern reflected racial bias: 54% of those stopped were black in a city where 23% of people are black. One could therefore make the case that using AI to scan for deception could reduce human bias.
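The scale of that over-representation can be made concrete as a simple disparity ratio, using only the two percentages quoted above:

```python
# Disparity ratio for the stop-and-frisk figures cited in the text:
# 54% of those stopped were black vs. 23% of the city's population.
stopped_share = 0.54
population_share = 0.23

# How over-represented the group is among stops relative to its
# share of the population (1.0 would mean no disparity).
disparity = stopped_share / population_share
print(round(disparity, 2))  # roughly 2.35
```

In other words, black pedestrians were stopped at well over twice the rate their share of the population would predict, which is the human baseline any AI replacement would need to beat, not merely match.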
The takeaway from this research is that companies deploying AI need to be hyper-vigilant about their AI’s “origin story.” While there are no calls yet for bias and fairness audits to reach all the way back to the original research, data archeology and AI forensics will be an important part of being able to trust AI.