AI and the Future of Lie Detection

"We live in a world now where we know how to lie. With advances in AI, it is very likely that we will soon live in a world where we know how to detect truth. The potential scope of this technology is vast — the question is how should we use it?"

Some people are naturally good liars, and others are naturally good lie detectors. Those in the latter group can often sense lies intuitively, picking up on fluctuations in pupil dilation, blushing, and a range of micro-expressions and body movements that reveal what is going on in someone else's head. They can do so because, for the vast majority of us who are not trained deceivers, our bodies tend to give us away when we lie, or lie by omission.

For most of us, however, second-guessing often overtakes intuition about whether someone is lying. Even if we are aware of the signals that may indicate a lie, we cannot observe and process them all in real time, leaving us, ultimately, to guess whether we are hearing the truth.

Now suppose we did not have to be good lie detectors, because the data needed to tell whether someone was lying were readily available. Suppose that, with this data, we could determine with near-certainty the veracity of someone's claims. We live in a world now where we know how to lie. With advances in AI, it is very likely that we will soon live in a world where we know how to detect truth. The potential scope of this technology is vast. The question is: how should we use it?

The Future of AI Lie Detection  

Imagine anyone could collect not just wearable data showing someone's (or their own) heartbeat, but also continuous data on facial expressions drawn from video footage. Imagine you could use that data, with a bit of training, to analyze conversations and interactions from your daily life, replaying the ones you found suspicious with a more watchful gaze. Furthermore, those around you could do the same: imagine a friend, or a company, could use your past data to reliably distinguish between your truths and untruths, between matters of import and things about which you could not care less.

This means a whole new toolkit for investigators, for advertisers, for the cautious, for the paranoid, for vigilantes, for anyone with internet access. Each of us will have to learn how to manage and navigate this new data-driven public record of our responses.

The issue for the coming years is not whether lying will be erased (of course it will not) but how these new tools should be wielded in the pursuit of truth. Moreover, given the variety of ways these technologies could be misread and misused, in what contexts should they be made available, or promoted?

The Truth About Knowing the Truth

Movies often quip about the desire to have a window into someone else's brain: to feel assured that what they say describes what they feel, that what they feel describes what they will do, and that what they will do reveals what it all means to them. Of course, we all know the world is not so neat, and many of us fall prey to searching for advice online instead. What happens when such advice is further entrenched in a wave of newly available, but poorly understood, data?

What will happen, for example, when this new data is used in the hiring process, with candidates weeded out by software dedicated to assessing whether, and about what, they have lied during an interview? What will happen when the same process is used for school selection, jury selection, and other varieties of interviews, or when the results are passed along to potential employers? As the number of such scenarios grows, the question we have to ask is: when is our heartbeat private information?

Is knowledge of our internal reactions itself private, simply because until now only a small segment of perceptive people could tell what was happening? Communities often organize around paths of least resistance, and these tools could create a new divide between those who understand and can navigate this new digital record and those who cannot.

Imagine therapists actively recording cognitive dissonance, news shows identifying in real time whether a guest believes what they are saying, companies reframing interviews around active facial analysis, or border security agents running quick-fire questioning. The expanding scope of sensors is pushing us away from a post-truth era and toward an age of post-lying, or rather, toward an end to our comfort with the ways in which we currently lie. As with everything, the benefits will not be felt equally.

We might even imagine lie detection evolving towards brain-computer interfaces, at which point the right to privacy must be debated in light of when we can consider our thoughts private.

In courtrooms, if we can reliably tell the difference between reactions during a lie and reactions during the truth, do witnesses have a right to keep that information private? Should all testimony be given in absolute anonymity? Researchers at the University of Maryland developed DARE, the Deception Analysis and Reasoning Engine, which they expect to be only a few years away from near-perfect deception identification.

How, then, should we think about the Fifth Amendment of the US Constitution, and how should we approach the right not to incriminate oneself? With the advent of these technologies, perhaps the very nature of the courtroom should change. Witnesses are not given a polygraph on the stand for good reason: it is unreliable. Yet there may be little stopping someone with a portable analytics system from reading their vitals or analyzing a video feed from a distance, then publishing the results for the court of public opinion. How should our past behavior be recorded and understood?

How we design nudges, how we design public spaces, how we navigate social situations, job offers, and personal relationships all depend on a balance of social conventions by which we allow ourselves, and others, to hide information. Yet what should we do with a technology that promises to expose this hidden information? Is a world built on full truths preferable to the one we have now? Will we have a chance to decide?

Advances in AI and the democratization of data science are making the hypothetical problem of what kind of world we prefer an all-too-real discussion, and one we need to have soon. Otherwise, we will have no say in determining what lies ahead.

This article is also featured on The Decision Lab and was written by Danny Goh, Josh Entsminger, Mark Esposito, and Terence Tse.

Danny Goh

Serial entrepreneur and early-stage investor; co-founder and CEO of Nexus FrontierTech, with a portfolio of more than 20 early-stage start-up investments; currently serves as an entrepreneurship expert with the Entrepreneurship Centre at Said Business School, University of Oxford.

Josh Entsminger

Serves as a senior fellow at Ecole des Ponts Business School's Center for Policy and Competitiveness, a research associate at IE Business School's Social Innovation Initiative, and a research contributor to the World Economic Forum's Future of Production initiative.

Mark Esposito

Professor of business and economics at Hult International Business School and at Thunderbird School of Global Management at Arizona State University; a faculty member at Harvard University since 2011; a socio-economic strategist researching the Fourth Industrial Revolution and global shifts.

Terence Tse

Professor at ESCP Business School and a co-founder and executive director of Nexus FrontierTech, an AI company. He has worked with more than thirty corporate clients and intergovernmental organisations in advisory and training capacities.
