April 14, 2021

Edward Delp is the director of the Video and Image Processing Laboratory and researches ways to detect manipulated media.

Doctoral students and faculty in Purdue’s School of Electrical and Computer Engineering are developing technology that detects and exposes deepfakes.

Edward Delp, a professor in ECE, is the director of the Video and Image Processing Laboratory on campus. The laboratory focuses on developing tools for image analysis and detection, among other areas of digital-imaging research. That work includes deepfake detection.

Deepfakes, manipulations of existing images or videos created with machine learning or artificial intelligence, are a rapidly growing problem, Delp said.

“I think you’re going to see more and more of these manipulated videos appearing in our society in all kinds of scenarios,” Delp said. “Not just in sort of the political scenario, which everybody was worried about with the last election, but I think you’re going to see them being used for all kinds of things.”

He offered a few examples of the areas deepfake technology can reach: falsified medical information, doctored research papers, manipulated videos of public figures and revenge pornography. Delp and his team have analyzed real-life instances of deepfakes in these areas and have worked to improve their ability to detect not only whether an image was manipulated, but how, with what tools and by whom.

Delp said the team uses an assortment of tools to identify deepfakes. They feed examples of doctored media to a machine-learning system, which then analyzes new images for inconsistencies or other cues that indicate tampering.

“For the face-type (deepfakes), you can look at blinking or maybe skin tone or the way the person moves their head or how the background looks,” Delp said.
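In broad strokes, that kind of detector is a supervised classifier: it is shown frames labeled real or manipulated and learns to score new ones. The sketch below, written in PyTorch with a toy architecture and random stand-in data, illustrates only the general shape of such a pipeline; the model, file names and training setup are assumptions for illustration, not the VIPER Lab’s actual code.

```python
# Illustrative sketch: a tiny CNN scored on "real vs. manipulated" face crops.
# Architecture and data are placeholders, not the lab's models or datasets.
import torch
import torch.nn as nn

class FrameDetector(nn.Module):
    """Tiny CNN that scores a face crop as real (0) or manipulated (1)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)  # raw logit; > 0 leans "manipulated"

model = FrameDetector()
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in batch: 8 random 64x64 RGB "face crops" with real/fake labels.
frames = torch.rand(8, 3, 64, 64)
labels = torch.randint(0, 2, (8, 1)).float()

for _ in range(5):  # a few toy training steps
    optimizer.zero_grad()
    loss = loss_fn(model(frames), labels)
    loss.backward()
    optimizer.step()

print(torch.sigmoid(model(frames[:1])))  # probability the first crop is fake
```

In practice the cues Delp mentions, such as blink rate, skin tone and head motion, are captured either implicitly by a network like this or explicitly as engineered features fed to a classifier.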

The team also analyzes metadata to help track and identify deepfakes. Alain Chen, a doctoral student in the VIPER Lab on campus, said metadata analysis provides basic information about an image or video: the author, the date it was created and the date it was modified.

“We realized it’s a pretty strong evidence to indicate whether a video is being illegally manipulated or not,” Chen said.
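As a rough illustration of what a metadata check involves, the snippet below uses the Pillow library to read an image’s EXIF tags. The file name is a placeholder and the fields shown are common ones worth inspecting; this is not a description of the lab’s tooling.

```python
# Minimal sketch of metadata inspection with Pillow: read EXIF tags such as
# the editing software and timestamps. The path below is a placeholder.
from PIL import Image, ExifTags

def read_basic_metadata(path):
    """Return a dict of human-readable EXIF fields for one image."""
    exif = Image.open(path).getexif()
    return {ExifTags.TAGS.get(tag_id, tag_id): value
            for tag_id, value in exif.items()}

meta = read_basic_metadata("suspect_frame.jpg")

# Fields worth checking: "Software" (editing tool), "DateTime" (last saved),
# "Artist" (author). An editor's name, or a save time long after capture,
# can hint that the file was modified.
for key in ("Software", "DateTime", "Artist"):
    print(key, "->", meta.get(key))
```

Metadata can be stripped or forged, which is why the team treats it as one signal among several rather than proof on its own.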

Another tool the team uses to detect deepfakes is something that isn’t restricted to a laboratory — their eyes.

VIPER Lab doctoral student Hanxiang Hao said that certain discrepancies in images or videos, like the viral Tom Cruise deepfake on TikTok, can be spotted by the human eye. The smoothness of a face compared to its background or a flickering of lights between frames can, to an attentive viewer, blow a deepfake’s cover, Hao said.

Manually detecting manipulated media will become more difficult as deepfake technology improves, Hao said. Deepfakes will eventually reach a point at which the human eye cannot discern what’s real and what’s not.

For this reason, machine-learning detection will become increasingly crucial, he said.

Delp said another issue, beyond the limits of what humans can physically see, is the psychology behind what they choose to believe. People scrutinize videos less when they agree with the message being delivered.

“The problem you’re going to run into is something called confirmation bias,” Delp said. “In other words, if the video confirms what I believe, it may not be necessary for it to be that good.”

This concept was at play in a distortion of a video of Speaker of the House Nancy Pelosi from August, said Emily Bartusiak, a doctoral candidate in the VIPER Lab. The video, in which Pelosi’s speech is slowed down so that she sounds slurred, is an example of a ‘cheapfake’ — a manipulated form of media that doesn’t use AI.

“It doesn’t even require machine learning, but it did have a very big effect in the media,” Bartusiak said. “Once that was disproven, then it was kind of like, ‘Oh no!’ But it had already done its damage.”

Doctored videos and audio don’t work in isolation, either. VIPER Lab student Sriram Baireddy said manipulated media across a variety of platforms can create a convincing story when taken together.

“Say you read an article that maybe misleads you a little bit,” Baireddy said, “but then if you have associated audio or associated video that goes along with it, it goes a lot further in convincing whoever’s consuming that media that, ‘Oh, it could be a valid piece of news.’”

Both Bartusiak and Baireddy said social media companies like Facebook and Twitter have already begun initiatives to monitor their platforms for deepfakes and other manipulated media.

But Delp wants more proactive measures, built directly into the platforms.

“I’m hoping that many of the social network companies will eventually put tools in their system,” Delp said. “So when somebody uploads a video, it can be flagged as potentially being falsified.”
