How Data Scientists Fight Deepfakes in Cyberspace

Data scientists and security professionals are using advanced technologies like artificial intelligence and blockchain to protect against fake videos.

By Jennifer Goforth Gregory | February 23, 2023

Tom Cruise has one of the most familiar faces in America. From his trademark tapered haircut to his piercing emerald eyes to his unmistakably mischievous grin, he’s got a leading-man mug that audiences recognize anywhere. So when goofy videos of Cruise began surfacing on TikTok in early 2021, showing him impersonating a snapping turtle, speaking Japanese and dancing in a bathrobe, viewers were convinced it was him. After all, seeing is believing.

“He looks very good for his age,” one person commented. “This is 100% the real Tom Cruise,” said another.

But it wasn’t the real Tom Cruise. Instead, it was a digital doppelganger created by visual effects artist Chris Umé with the help of Cruise lookalike Miles Fisher.

“I like to mesmerize people,” Umé told CNN in an interview.

Umé’s work is an example of a deepfake: artificial intelligence-generated audio or video that impersonates real humans.


And Cruise isn’t the only celebrity subject. In 2019, AI startup Dessa created a fake recording of podcaster Joe Rogan that perfectly mimicked his voice. And in 2022, a YouTube user known as DesiFakes uploaded an iconic scene from the movie Pulp Fiction that appears to star comedian Jerry Seinfeld instead of the original actor, Alexis Arquette. Indeed, pretty much any celebrity is fair game for the deepfake treatment.

While celebrity deepfakes tend to be fun and entertaining, the ability to create such convincing content raises significant questions and concerns that are no laughing matter. Because deepfakes can be virtually indistinguishable from the real thing, critics wonder what the impact might be if the wrong person were depicted saying or doing the wrong thing — a businessperson sharing false news with shareholders, for example, or a politician making a false declaration of war.

The potential to spread mis- and disinformation is real. In fact, a deepfake video of FTX founder Sam Bankman-Fried, the former crypto executive accused of fraud, was recently shared on Twitter, offering users “compensation” in a bid to steal their money. For data scientists and security professionals, it’s therefore critical to develop cloud-based detection tools that can spot deepfakes and distinguish them from authentic content.

Fun Hobby or Future Social Ill?

People turn to the internet and social media for all manner of information. But whether you go online looking to digest the daily news, research a homework assignment or self-diagnose a medical malady, one piece of advice has always held true: Look for trusted sources.

In a world with deepfakes, that advice isn’t always easy to follow.

Still, consumers and data scientists alike are trying to make sense of them. So much so that internet searches for “deep fake” have increased by 533% over the past five years, according to the website Exploding Topics. Meanwhile, deepfake audio and video appear on several lists of data science trends to watch in 2023 and beyond.


Federico Ast, Ph.D., is among those keeping tabs on deepfakes. Currently, deepfakes represent only a tiny portion of internet content because they are expensive to create, he said. As costs fall, however, he predicts the amount of deepfake content online will grow, making it easier to fool the public, engineer AI-driven scams and set in motion myriad “Black Mirror” scenarios ranging from simple fraud to large-scale social mayhem.

“Imagine that your mother, friend or partner gets a video call from someone who looks and sounds exactly like you saying you’re in trouble and need money,” said Ast, CEO at Kleros.io, which created Proof of Humanity, a blockchain-based verification protocol and registry that could help combat deepfakes.

“Or your bank gets a call from ‘you’ asking to make a wire transfer. Or a video of a fake CEO of a big bank saying the organization is facing a liquidity crisis, and this triggers a run on the bank.”

The potential scenarios are as endless as they are concerning.

Fighting AI with AI

Technology is both the cause of deepfakes and a likely solution to them. In fact, organizations are already creating AI-based tools that use deep learning to spot deepfakes and curb their spread.

Tech companies like Intel are leading the pack. In November 2022, the company released FakeCatcher, a cloud-based tool that it claims can accurately detect fake videos 96% of the time. FakeCatcher uses AI to analyze the blood flow of the humans in videos and can run up to 72 detection streams simultaneously.

“Most deep learning-based detectors look at raw data to try to find signs of inauthenticity and identify what is wrong with a video. In contrast, FakeCatcher looks for authentic clues in real videos, by assessing what makes us human — subtle ‘blood flow’ in the pixels of a video,” Intel said in a press release.

“When our hearts pump blood, our veins change color. These blood flow signals are collected from all over the face and algorithms translate these signals into spatiotemporal maps. Then, using deep learning, we can instantly detect whether a video is real or fake.”
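For intuition, here is a minimal Python sketch of the remote-photoplethysmography idea Intel describes. This is not FakeCatcher’s code; the function names, the 0.7-4.0 Hz band and the 3x peak threshold are illustrative assumptions. The premise: a real face shows a faint periodic color change at the heart rate, while a synthesized face usually does not.

```python
import numpy as np

def pulse_signal(face_frames: np.ndarray) -> np.ndarray:
    """Average green-channel intensity over a face crop, one value per frame.

    face_frames: (num_frames, height, width, 3) RGB array.
    """
    signal = face_frames[..., 1].astype(np.float64).mean(axis=(1, 2))
    return signal - signal.mean()  # drop the DC offset

def looks_alive(signal: np.ndarray, fps: float) -> bool:
    """Check for a dominant spectral peak in the heart-rate band."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)   # roughly 42-240 beats per minute
    in_band = spectrum[band].max()
    out_band = spectrum[(freqs > 0) & ~band].mean()
    return in_band > 3.0 * out_band          # illustrative threshold
```

About 10 seconds of 30 fps video (300 frames) gives the FFT enough resolution to separate a pulse near 1 Hz from background noise.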


The University at Buffalo’s Media Forensic Lab uses a similar approach. Instead of looking for blood flow, however, it analyzes biometric features like the eyes. Because many deepfakes do not reproduce realistic eye movements, its algorithm detects them by scrutinizing the location and shape of the eyes as well as gestures like blinking, said Siwei Lyu, the lab’s director. It takes only about 500 images or 10 seconds of video to create a realistic deepfake, he told USA Today.
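As a hedged illustration of the blinking cue (not the lab’s actual detector), a common way to quantify blinks from facial landmarks is the eye aspect ratio, which dips sharply whenever the eyelids close:

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of landmarks around one eye, ordered corner,
    upper lid x2, corner, lower lid x2 (the common p1..p6 convention)."""
    v1 = np.linalg.norm(eye[1] - eye[5])   # vertical lid distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal corner distance
    return (v1 + v2) / (2.0 * h)

def blinks_per_minute(ears: list[float], fps: float, thresh: float = 0.2) -> float:
    """Count downward crossings of the threshold, scaled to one minute."""
    below = [e < thresh for e in ears]
    blinks = sum(cur and not prev for prev, cur in zip(below, below[1:]))
    return blinks * 60.0 * fps / len(ears)
```

People typically blink around 15 to 20 times per minute, so a face whose ratio almost never dips below the threshold is a red flag.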

In addition to building detection tools, the Media Forensic Lab is working to make deepfakes harder to create in the first place. To produce their content, deepfake creators use algorithms that analyze videos and extract face images from them. Lyu has therefore devised a way to thwart this pipeline: adding noise to videos before they are uploaded to social media, which contaminates the data deepfake algorithms train on and slows down the creation process.
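The sketch below shows the general idea only, under the simplifying assumption of plain random noise; defenses published in this vein typically use carefully targeted adversarial perturbations instead, but the pipeline is similar: touch every frame slightly before it is uploaded.

```python
import numpy as np

def perturb_frame(frame: np.ndarray, strength: float = 2.0,
                  seed: int | None = None) -> np.ndarray:
    """Return a copy of an (H, W, 3) uint8 frame with low-amplitude noise.

    The change is nearly invisible to viewers, but it degrades the clean
    face data a deepfake model would otherwise extract for training.
    """
    rng = np.random.default_rng(seed)
    noised = frame.astype(np.float64) + rng.normal(0.0, strength, frame.shape)
    return np.clip(noised, 0, 255).astype(np.uint8)

# Applied frame by frame before upload:
# protected = [perturb_frame(f) for f in video_frames]
```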

Proof of Humanity takes a different approach. Instead of looking for fake videos, it verifies real people, using a blockchain-based registry whose goal is “social validation.”


“Proof of Humanity … [maintains] a decentralized list of humans based on video submissions and social validation,” it explained on its website.

“When applying to the list, users need to provide a name, a short description, a photo and a short video which allows others to verify that the user is indeed human. Then, he or she must be vouched for by an already confirmed identity.”

Social media platforms could reduce deepfakes by using Proof of Humanity as a universal log-in.
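For a sense of the mechanics, here is a hypothetical Python sketch of that registration-and-vouching flow. The names are illustrative, and the real protocol runs as Ethereum smart contracts with deposits, challenge periods and dispute resolution that this toy omits.

```python
from dataclasses import dataclass, field

@dataclass
class Submission:
    name: str
    description: str
    photo_uri: str
    video_uri: str
    vouchers: set[str] = field(default_factory=set)
    confirmed: bool = False

class HumanRegistry:
    """Toy model: a profile is confirmed once enough confirmed humans vouch."""

    def __init__(self, required_vouches: int = 1):
        self.required = required_vouches
        self.people: dict[str, Submission] = {}

    def apply(self, address: str, submission: Submission) -> None:
        self.people[address] = submission

    def vouch(self, voucher: str, applicant: str) -> None:
        backer = self.people.get(voucher)
        if backer is None or not backer.confirmed:
            raise PermissionError("only a confirmed identity may vouch")
        entry = self.people[applicant]
        entry.vouchers.add(voucher)
        entry.confirmed = len(entry.vouchers) >= self.required
```

Note the bootstrap problem: some first set of identities has to be seeded as confirmed before vouching can begin, which is one reason the real system layers on economic incentives and disputes.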

The Future of Deepfake Detection

While Ast does not think any single tool or approach will eliminate deepfakes entirely, he expects detection to become an important part of the infrastructure for containing them. With the right people and the right technology working together, he said, deepfakes can be kept from escalating beyond the realm of entertainment.

“I don’t think we will eliminate deepfakes only with technology. There will be room for human detectors,” Ast said. “Reducing deepfakes will be achieved by expert humans working with the help of technology.”

Jennifer Goforth Gregory is a contributing writer. Find her on Twitter @byJenGregory.

© 2023 Nutanix, Inc. All rights reserved.