How to Detect Deep Fakes? A Deep Dose of Skepticism (2024)

If you’re a regular on social media, you’ve probably seen an uptick in videos showing your favorite celebrities singing along to a song or heartwarming videos of old photos being brought to life. Yet as technology becomes increasingly available, affordable, and accurate in manipulating media, the implications may take on a more sinister tone.

“Deepfakes” rely on easy-to-use manipulations based on artificial intelligence that allow anyone to swap two identities in a single image or, more often, a video. Facial modification algorithms allow substituting the attributes of one image (e.g., face, skin tone, gender) with those of another image. Voices can also be deepfaked using “voice skins” or “voice clones” that can replicate the exact tone, emotion, inflection and cadence of someone’s voice.

This practice is also booming in the advertising world in so-called synthetic advertising. The results are pretty amazing. The technology can bring actors back to life, as it did for Peter Cushing in Rogue One: A Star Wars Story, or make them look younger, as with Carrie Fisher's Princess Leia in the same film.

With these deepfake tools, you can end up sounding like any celebrity! Or you can make a celebrity or other public figure say something they never did.

That’s where the trouble starts. As fun as deepfakes can be when created in jest, their potential for deception is worrying many, from social media platforms worried about the accuracy of the videos people post all the way to the U.S. Congress, which is concerned about protecting consumers from “Manipulation and Deception in the Digital Age.” Deepfakes are also becoming more and more prevalent. According to one report, at the beginning of 2019 there were 7,964 deepfake videos online, and just nine months later, that number jumped to 14,678. Just as the images are being manipulated, so are we.

Principles of Deepfakes and How to Avoid Falling for One

If there are two things I've learned over and over in my persuasion-related research over the past 20 years, they are these two principles:

1) It's easy to fool people.

People are gullible. Even when they know it’s an advertisement, they tend to believe what they see! Advertisers have long capitalized on this gullibility: Ads often rely on tricks, special effects, or artificial intelligence tools like deepfakes to create make-believe sounds, images, or videos that people actually believe. In a research project on greenwashing, we found that consumers naturally conclude that a car is better for the environment if its website features nature-evoking images. Just as kids believe Hot Wheels can really fly, adults easily believe that skin creams can make us look younger.

Every time I run a psychology experiment, I am amazed at how easily people are swayed by what they see or read. For instance, in a research project focused on how different types of celebrity endorsements affect consumers, I experimentally tested consumer reactions to a visual showing Sarah Jessica Parker holding a new brand of energy drink. Even with my novice photo-editing skills, I was able to insert an image of that bottled drink into the original picture to use in my study.

Merely presenting this doctored picture as being from a "forthcoming ad for the energy drink" (that is, as a traditional celebrity endorsement), as a "scene from a forthcoming movie" (that is, as a product placement), or as a "photo taken by a passerby" (that is, depicting SJP as a genuine user of the brand) generated totally different reactions to the energy drink brand, and even different willingness to pay for it.

2) It's also easy to learn not to get fooled.

Fortunately, consumers can be vigilant enough to assess when they’re getting fooled and to learn not to get fooled. The key is to detect that someone or some message is trying to persuade us and to access what researchers call “persuasion knowledge,” a set of defensive tools against getting tricked into believing a message, a claim, or a sales attempt.

Many studies, including my own, have shown that persuasion knowledge allows consumers to resist influence attempts. For instance, in research on how to keep TV viewers from simply believing what they see in TV series, we found that merely reminding them that what they see in TV series is fictional, i.e., "not real," is enough to reduce the influence.


Step one is to detect a persuasion attempt. While some deepfakes are easy to spot (for instance, because the lip-syncing or eye contact is off, or glitches give away the manipulation), advances in technology are making it increasingly difficult to know what is real and what is fake. Still, even high-quality deepfakes may show flickering between frames that reveals the original image was manipulated. The issue is that most people are not trained to watch out for deepfakes: while deepfakes are certainly becoming more prevalent, they are not yet mainstream enough to be top of mind for most people. By being aware that deepfakes are something to watch out for, we can become more alert and be on the lookout.
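One of the cues above, flicker between consecutive frames, can be sketched as a simple frame-difference check. This is only an illustrative heuristic under stated assumptions, not a production deepfake detector: the frames are toy grayscale grids, and the `flicker_frames` helper and its threshold are hypothetical names chosen for this example.

```python
# Illustrative sketch: flag abrupt brightness jumps between consecutive
# video frames, one crude proxy for the frame-to-frame "flicker" that can
# betray a frame-by-frame face swap. Frames here are plain 2D lists of
# grayscale pixel values (0-255); a real detector would decode actual video.

def mean_abs_diff(frame_a, frame_b):
    """Average absolute per-pixel difference between two same-sized frames."""
    total, count = 0, 0
    for row_a, row_b in zip(frame_a, frame_b):
        for pa, pb in zip(row_a, row_b):
            total += abs(pa - pb)
            count += 1
    return total / count

def flicker_frames(frames, threshold=30.0):
    """Return indices of frames whose change from the previous frame
    exceeds `threshold` (a hypothetical tuning parameter)."""
    return [i for i in range(1, len(frames))
            if mean_abs_diff(frames[i - 1], frames[i]) > threshold]

# Toy usage: three nearly identical frames with one abrupt jump in the middle.
steady = [[100] * 4 for _ in range(4)]
jump = [[180] * 4 for _ in range(4)]
print(flicker_frames([steady, steady, jump, steady]))  # prints [2, 3]
```

Real detection systems compare learned facial features rather than raw pixels, but the intuition is the same: authentic footage changes smoothly, while spliced-in faces can jitter from frame to frame.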

Once detected, figure out the source of and the motivation behind the deepfake. Early deepfakes were often used as "revenge porn," mapping the faces of female celebrities onto porn stars. Their goal was to shame the celebrity in question. Although some deepfakes are meant for satire or humor, many unfortunately aim solely to embarrass a Hollywood celebrity, a political figure, or even regular people. Knowing why someone would manipulate a face or voice helps assess whether the manipulation is in jest, for fun or satire, or for less noble purposes.

One caveat: it’s easier to detect a persuasion appeal when it looks like a normal commercial than when it is as sophisticated as a deepfake. Deepfakes have in fact been called out for their high potential for misinformation and disinformation for the very reason that they’re hard to detect. For instance, a State Farm commercial that aired on ESPN in 2020 was in fact a deepfake: 1998 video footage of former ESPN SportsCenter anchor Kenny Mayne was manipulated to make it appear that he had made shockingly accurate predictions about the year 2020. Amusing, yes, but troublesome nonetheless.

Because it might be too difficult for regular consumers to detect deepfakes, social media platforms like Facebook and Instagram are investing in AI-powered deepfake detection so they can remove them if they violate their platform policies or warn Facebook users that the original image or video was manipulated.

But I would argue that consumers can combat even higher levels of falsity by digging into even higher levels of awareness and trusting their gut when questioning the authenticity of a suspicious-looking media clip. In other words, combating deepfakes requires a deep dose of skepticism. Don’t just trust everything you see. Assess the source and the motivations that might be at play, and continue to approach our ever-evolving tech world with a healthy dose of caution.

References

Campbell, C., K. Plangger, S. Sands and J. Kietzmann (2021), "Preparing for an Era of Deepfakes and AI-Generated Ads: A Framework for Understanding Responses to Manipulated Advertising," Journal of Advertising, in press.

Kietzmann, Jan, Jeannette Paschen and Emily Treen (2018), "Artificial intelligence in advertising: How marketers can leverage artificial intelligence along the consumer journey," Journal of Advertising Research, 58 (3), 263-267.

Parguel, Béatrice, Florence Benoît-Moreau and Cristel A. Russell (2015), “Can Nature-Evoking Elements in Advertising Greenwash Consumers? The Power of ‘Executional Greenwashing’,” International Journal of Advertising, 34 (1), 107-134.

Russell, C. A. and D. Rasolofoarison (2017), “Uncovering the Power of Natural Endorsements: A Comparison With Celebrity-Endorsed Advertising and Product Placements,” International Journal of Advertising, 36 (5), 761-778.

Russell, C. A., D. W. Russell, E. McQuarrie and J. Grube (2017), “Alcohol Storylines in Television Episodes: The Preventive Effect of Countering Epilogues,” Journal of Health Communication, 22 (8), 657-665.
