The Potentially Deadly Danger of DeepFakes

DeepFakes are Deep Trouble

By Debbie Burke

Have you seen the YouTube video of The Dancing Queen, where Queen Elizabeth boogies down? Or the shots of the Pope wearing a white satin Balenciaga puffer coat? Or the re-creation of the horror film The Shining where Jack Nicholson’s scary face is replaced by comedian Jim Carrey’s mug?

They are deepfakes, created by using artificial intelligence (AI) programs to swap faces, bodies, and voices of real people. In the past few years, entertaining and satiric videos like these have gone viral on YouTube, Twitter, TikTok and other social media sites.

But deepfakes also have a destructive side and can cause disruption and public panic. For instance, in March 2022, a video was widely posted of Ukrainian President Volodymyr Zelenskyy telling his troops to lay down their weapons and surrender to the Russians. It was soon exposed as a deepfake, but the repercussions could have been grave if more people had believed it.

In March 2023, a series of fake photos of former President Trump being arrested appeared on Twitter. Despite their clumsy execution, showing the former President with three legs, the photos received five million hits before being debunked.

In May 2023, the stock market experienced shock waves after a deepfake photo showed an explosion at the Pentagon that didn’t happen.

Experts already anticipate that deepfakes will be used to disrupt the 2024 elections.

Deepfakes have been the subject of recent novels, including the bestselling Monroe Doctrine series by James Rosone and Miranda Watson. In that fictional story, deepfake videos lead to nuclear strikes and global war.

Where did the term “deepfake” come from? In 2017, a Reddit user with the handle “deepfakes” posted pornographic videos of female celebrities whose faces had been superimposed on other women’s bodies. Such falsified images came to be known as “deepfakes.” The technology is often described as “Photoshop on steroids.”

More than 90% of deepfake images are estimated to be pornographic, including so-called “revenge porn.” Countless victims have had their images altered without their knowledge or permission and posted online publicly, causing embarrassment and damage to their reputations. But individuals aren’t the only victims of misused technology.

Deepfakes can create events that never happened, potentially changing world history.

Ironically, the original purpose of deepfake technology was to preserve history.

In 2017-18, computer scientist Supasorn Suwajanakorn wanted to combine photos and recordings of Holocaust survivors to create a multidimensional effect that wasn’t possible from simply reading texts, listening to tapes, and looking at photos.

He developed software to take two-dimensional photos and model them into 3D images. He ran many photos through computers that studied the faces and gestures to recreate the shapes, contours, expressions, and movements to match the actual person. Computers also listened to hours of audio to learn to replicate voice, tone, and diction.

The results were blended into virtual replicas of actual people with their unique mannerisms, telling their stories, even though they were no longer alive.

The revolutionary software quickly caught on and advanced at lightning speed as giant tech companies recognized the profit potential of deepfakes. Dr. Suwajanakorn grew concerned about possible misuse. In a 2018 TED talk, he said, “Our goal was to build an accurate model of a person, not misrepresent them.”

In a further ironic twist, he wound up “fighting against my own work” as he developed countermeasures at the AI Foundation to recognize and prevent abuse of deepfake technology.

How are deepfakes created? Without getting too mired in “geek speak” jargon, two machine learning algorithms are pitted against each other: one studies original images and generates copies, while the other tries to tell the copies from the real thing. The pairing is known as a generative adversarial network, or GAN. They’re in competition, each striving to make, or catch, the most realistic, authentic-appearing replication. The image is refined thousands of times until the human eye can no longer detect differences between the fake and the original.
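For readers who like to peek under the hood, that back-and-forth can be sketched in miniature. The toy program below is an invented illustration, not a real image model: the “generator” produces a single number instead of a picture, the “discriminator” simply measures how far it is from the real data, and the loop repeats until the gap drops below what our stand-in “eye” could notice.

```python
# Toy sketch of the adversarial loop behind deepfakes.
# All values here are invented for illustration; real systems train
# deep neural networks on huge collections of images.

TARGET = 4.0        # stands in for the real footage being imitated
TOLERANCE = 0.01    # below this gap, our "eye" can't tell the difference

def discriminate(sample):
    """Discriminator: score how obviously fake this sample is."""
    return abs(sample - TARGET)

def refine(sample):
    """Generator: nudge the fake to shrink the discriminator's score."""
    gap = discriminate(sample)
    return sample + 0.1 * gap if sample < TARGET else sample - 0.1 * gap

sample = 0.0        # the generator's first, crude attempt
rounds = 0
while discriminate(sample) > TOLERANCE:
    sample = refine(sample)
    rounds += 1

print(f"indistinguishable after {rounds} refinements: {sample:.3f}")
```

Each pass shaves the remaining error by the same fraction, so after a few dozen rounds the fake is within the tolerance. A real GAN plays the same game with millions of pixels and far cleverer players on both sides.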

Early versions of the software required sophisticated engineering knowledge and expertise, as well as massive hard drive capacities for the enormous amount of data generated.

However, all that changed beginning in late 2022, when ChatGPT launched and programs such as Midjourney, Bard, DALL-E, and other AI tools became widely available.

Suddenly, anyone and their dog could download free or low-cost generative AI programs to create deepfakes. Within five days of its release, the free version of ChatGPT attracted a million users. After two months, 100 million people were using it. As of April 2023, the estimate was 173 million users.

Scammers are quick to jump on cheap new technology. Cryptocurrency trading and investment schemes use the new tool to target a growing number of victims.

Impersonation scams proliferate. Employees get demands from what sounds exactly like the boss to make money transfers or reveal login credentials; the voice is a deepfake. Grandparents receive desperate calls for money from grandchildren whose voices have been cloned.

The FBI, Department of Defense, Department of Homeland Security, and other government agencies try to fend off constant cyberattacks by foreign countries including Russia and China. Enemy nations flood social media with false information wrapped in deepfake cloaks.

Distrust of news sources is already high. Adding deepfakes into the mix leads to growing doubt because you can no longer believe what you see with your own eyes.

Deepfakes impact the justice system. Increasingly, surveillance videos are used as evidence in court. A 2020 article by John P. LaMonica in the American University Law Review says:

“Deepfakes bring the possibility of unprecedented levels of distrust in the government and other public institutions if videos emerge featuring public figures saying or doing things that never happened. Among the challenges specific to trust in public institutions is that which courtrooms will face in light of the current standards used to admit digital photography and video as evidence.”

Tech giants such as Microsoft and Google, along with universities, offer rewards for the most effective detection tools. Some news agencies have adopted digital watermarking to verify the authenticity of videos they publish. But every day, 3.7 million videos are uploaded to YouTube and 34 million to TikTok. There is no practical way to sift genuine videos from harmless pranks and malicious deepfakes.

In March 2023, more than 1,000 computer scientists, engineers, and tech leaders, including Elon Musk, Steve Wozniak, and Andrew Yang, published an open letter asking for a six-month pause on new AI development. (Signatures of support now number more than 33,000.) Quoted from the letter:
“As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out of control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.”

The sound you didn’t hear was developers screeching to a halt. Why would they? ChatGPT’s estimated revenue is $200 million by the end of 2023 and $1 billion in 2024.

Digital forgers with malicious intentions are on technology’s cutting edge and constantly discover new tricks to advance their interests. Engineers working to catch bad actors are always on the defensive, playing Whack-a-Mole.

Even if Congress enacts laws controlling AI, criminals aren’t likely to pay attention, nor are bad actors around the globe.

Can deepfakes be detected?

Intel recently introduced software called “FakeCatcher” that detects the subtle signs of blood flow beneath the skin that appear in genuine video of a face but not in fakes. Intel claims accuracy in more than 90% of samples.

Researchers at the University of California, Santa Barbara are working on “PhaseForensics,” which examines whether lip and mouth movements match the accompanying sound frequencies.

But as of now, there’s no detection software that’s accessible and affordable to regular consumers. Until there is, what can we do? Develop a healthy skepticism about photos and videos. Look for other trusted sources to verify if they are real or bogus. Use common sense.

The late comedian Redd Foxx used to joke about being caught in a compromising situation: “Who you gonna believe? Me or your lying eyes?” He could have been predicting deepfakes.

No longer can we take what we see at face value, because the face may have been changed.

Debbie Burke’s new thriller, Deep Fake Double Down, explores what happens when a deepfake video is used as evidence against an innocent person. Available at online booksellers, or ask your favorite independent bookstore to order it.


Subscribe to the Montana Senior News

Sign up to receive the Montana Senior News at home for just $15 per year.