The challenge of discerning real news from fake news is coming to a head.
“Deepfakes” — videos or pictures modified by a computer that are intended to trick their viewers — are a growing threat across news and social media platforms.
Edward Delp, the Charles William Harrison Distinguished Professor of Electrical and Computer Engineering at Purdue, alongside doctoral student David Güera, developed a computer program to help detect these new threats. The software works by “teaching” an algorithm what a deepfake looks like using examples, which then allows the program to identify misleading content in the future.
“The program is a computer algorithm. The algorithm works by a form of machine learning called deep learning. You train the algorithm,” Delp said. “You train it by showing it a lot of fake videos, and then it detects and learns. Once the algorithm learns by looking at examples, it can detect fakes afterwards.”
Delp said many deepfake videos are simple video modifications, such as slowing down the playback speed or voice overlays, like what was seen recently in the well-known fake of Speaker of the House Nancy Pelosi. More complex, higher quality deepfakes may feature face swapping and overlays that can only be done by sophisticated software and computers, out of reach of the common user.
Delp’s Video and Image Processing Laboratory team won a Defense Advanced Research Projects Agency contest in media forensics for the algorithm that helps detect deepfakes and video manipulation. The technology to detect the videos is constantly evolving. According to Delp, it takes up to 100 hours for their algorithm to process and detect the fake videos.
Every day, thousands of videos are posted online, which means that any potential filtering program would need to be able to quickly and efficiently detect fake content. Even then, such media would likely be removed only after being published, meaning that the video would likely have already had an impact.
For these reasons, deepfakes are expected to play a role in the upcoming 2020 election in the United States, likely serving as a vehicle for spreading fake news.
However, Delp believes deepfakes pose a threat to more than just election cycles.
“Deepfakes pose a threat to more than just political videos,” he said. “They can be used in making fake child pornography, revenge porn and financial bribes. Just general criminal behavior.”
Social media platforms are attempting to control the spread of fake media, with most major companies having policies forbidding disinformation and intentionally misleading content. However, enforcement of these rules is often slow and comes only after the content has already been published and viewed. Nonetheless, Delp said prominent media platforms could combat the spread of fake news both before and after it’s posted.
“One way to decrease it is for content providers to put information in their content so that they can know immediately if it has been manipulated,” he said. “Major media producers could implement these algorithmic tools to detect fake videos before they’re uploaded, or label them as manipulated if they are published.”