Computer and behavioral scientists at the University at Buffalo say they are working on a system that computes a numerical score estimating the likelihood that someone is about to commit a terrorist act. Their technology will track faces, voices and other biometrics against scientifically tested behavioral indicators to produce that score for an individual.
“The goal is to identify the perpetrator in a security setting before he or she has the chance to carry out the attack,” said Venu Govindaraju, Ph.D., professor of computer science and engineering at the University at Buffalo School of Engineering and Applied Sciences. Govindaraju is co-principal investigator on the project with Mark G. Frank, Ph.D., associate professor of communication.
The project will focus on developing real-time, person-specific indicators for use during extended interrogations, but the technology is also intended for faster, routine security screenings.
“We are developing a prototype that examines a video in a number of different security settings, automatically producing a single, integrated score of malfeasance likelihood,” he said.
They say the key advantage of their system will be that it incorporates machine learning capabilities, allowing it to “learn” from its subjects during the course of a 20-minute interview.
That’s critical, Govindaraju said, because behavioral science research has repeatedly demonstrated that many behavioral clues to deceit are person-specific.
“As soon as a new person comes in for an interrogation, our program will start tracking his or her behaviors, and start computing a baseline for that individual ‘on the fly’,” he said.
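To make the idea of an on-the-fly baseline concrete, the sketch below illustrates one way such scoring could work. The cue names, weights and the z-score combination are illustrative assumptions, not details of the Buffalo team's actual system: each new measurement updates a per-subject running mean and variance, and later measurements are scored by how far they deviate from that personal baseline, combined into a single number.

```python
# A minimal sketch of per-person baselining and integrated scoring.
# Assumptions: cue names, window of "neutral" observations, and weights
# are hypothetical; a real system would learn these from data.

from dataclasses import dataclass, field
from math import sqrt


@dataclass
class RunningStats:
    """Welford's online algorithm for running mean and variance."""
    n: int = 0
    mean: float = 0.0
    m2: float = 0.0

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def std(self) -> float:
        return sqrt(self.m2 / (self.n - 1)) if self.n > 1 else 0.0


@dataclass
class BaselineScorer:
    """Builds a per-subject baseline and scores deviations from it."""
    # Hypothetical cue weights; chosen only for illustration.
    weights: dict = field(default_factory=lambda: {
        "blink_rate": 1.0, "voice_pitch": 1.0, "gaze_aversion": 1.5,
    })
    stats: dict = field(default_factory=dict)

    def observe(self, cues: dict) -> None:
        """Fold one time-step of measurements into the subject's baseline."""
        for name, value in cues.items():
            self.stats.setdefault(name, RunningStats()).update(value)

    def score(self, cues: dict) -> float:
        """Weighted average of absolute z-scores against the baseline."""
        total, weight_sum = 0.0, 0.0
        for name, value in cues.items():
            st = self.stats.get(name)
            if st is None or st.std() == 0.0:
                continue
            z = abs(value - st.mean) / st.std()
            w = self.weights.get(name, 1.0)
            total += w * z
            weight_sum += w
        return total / weight_sum if weight_sum else 0.0


if __name__ == "__main__":
    scorer = BaselineScorer()
    # Early, neutral interview questions establish the personal baseline...
    for frame in [{"blink_rate": 18, "voice_pitch": 120, "gaze_aversion": 0.10},
                  {"blink_rate": 20, "voice_pitch": 118, "gaze_aversion": 0.20},
                  {"blink_rate": 19, "voice_pitch": 122, "gaze_aversion": 0.15}]:
        scorer.observe(frame)
    # ...then later responses are scored against that baseline.
    print(scorer.score({"blink_rate": 32, "voice_pitch": 140, "gaze_aversion": 0.6}))
```

In this sketch the "integrated score" is simply a weighted average of how unusual each cue is for that particular subject, which mirrors the researchers' point that clues to deceit are person-specific rather than universal.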
They caution that no technology, no matter how precise, is a substitute for human judgment, but say it can help security personnel decide whom to watch more closely.
“Random screening is fair, but is it effective?” asked Frank. “The question is, what do you base your decision on -- a random selection, your gut reaction or science? We believe science is a better basis and we hope our system will provide that edge to security personnel.”
Their technology, they note, would also never suffer from bias, fatigue or moods, something even the most well-intentioned humans have difficulty avoiding.
They expect to have a working prototype of the full system within a few years.
Funding is provided by the National Science Foundation, the Department of Defense and the University at Buffalo Office of the Vice President for Research.