AI Lie Detecting Systems Used At Checkpoints – Making Tradecraft More Difficult

Using AI Lie Detectors at Borders Poses Unique Problem for Intel Community


The government of Kenya has joined the ever-growing list of countries (including the U.S.) that are using a form of lie detector powered by artificial intelligence (AI) at their borders. It is unclear at this time whether Kenya is using the same system as the U.S. (described below) or a different one in a parallel implementation. What we do know is that the use of AI-driven lie detection as a screening tool is increasing.

This is not good news for U.S. military, law enforcement, or intelligence operatives who may be required to travel abroad in alias personas and pass through those foreign checkpoints. James Olson, a retired CIA officer, former chief of CIA counterintelligence, and author of To Catch a Spy: The Art of Counterintelligence, noted in a recent interview that digital technologies are making the “traditional tradecraft” of espionage used by intelligence officers in the field less effective.

The U.S. system was developed by the Department of Homeland Security (DHS) as part of its Borders Research Project, and it is now commercially available. The DHS AVATAR (Automated Virtual Agent for Truth Assessment in Real-Time) was created by the University of Arizona and then spun out into a startup, Discern Science International, Inc.

In August 2018, the AVATAR was described by its creators as follows:

As users answer interview questions posed to them by an interactive electronic interviewer, the system records facial expressions in high-definition video. At the same time, its many sensors measure and record thousands of signals from the subject’s voice, body and eyes. All of this information is routed through a complex analytical algorithm, and the results are produced almost instantly: Green means the subject is clear to pass, yellow means there are some issues to be investigated, and red means there are serious issues that require deeper investigation.
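The decision logic described above — fusing many per-channel signals into a composite score and mapping it onto green/yellow/red bands — can be sketched in a few lines. The feature names, weights, and thresholds below are purely illustrative assumptions for the sake of the sketch, not details of the actual AVATAR system or its proprietary algorithm.

```python
# Hypothetical sketch of traffic-light triage over fused sensor scores.
# Channel names, weights, and thresholds are assumptions, not AVATAR's.

def triage(feature_scores, weights, yellow=0.4, red=0.7):
    """Fuse per-channel anomaly scores (each 0.0-1.0) into a weighted
    composite and map the result onto green/yellow/red bands."""
    total_weight = sum(weights[name] for name in feature_scores)
    composite = sum(
        weights[name] * score for name, score in feature_scores.items()
    ) / total_weight
    if composite >= red:
        return "red", composite       # serious issues; deeper investigation
    if composite >= yellow:
        return "yellow", composite    # some issues to be investigated
    return "green", composite         # clear to pass

# Example: anomaly scores from hypothetical voice, eye, and posture channels.
scores = {"voice": 0.2, "eyes": 0.3, "posture": 0.1}
weights = {"voice": 0.5, "eyes": 0.3, "posture": 0.2}
band, composite = triage(scores, weights)
```

With these example inputs the weighted composite is 0.21, which falls below the yellow threshold and yields a green result.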

A recent Financial Times piece notes that the current iteration of the AVATAR system has an accuracy rate of 70-92%, compared with a 54% rate of deception detection by a human interviewer. The difference lies in the fact that humans are “prone to bias.”

The widespread use of this technology will certainly help those whose mission is to detect deception.

Those whose operational security depends on being able to operate successfully in alias, however, have another hurdle to clear.