Deepfakes and the Evolving Misinformation Ecosystem | ALA Midwinter 2021

[Image: an example of a meme created with the Reface app]

Deepfakes, a portmanteau of “deep learning” artificial intelligence (AI) and “fake media,” are becoming more common, and whether they’re used for entertainment or deception, a better understanding of what they are and how they work “is vital in the current information landscape,” said John Mack Freeman, Suwanee branch manager for Gwinnett County Public Library, GA, in an hour-long presentation that was part of this year’s Core Top Tech Trends panel at the American Library Association’s Midwinter Virtual Meeting. In a change from prior years, all Top Tech Trends panelists focused on a common theme: the potential dangers of new technologies that libraries may be using now or in the future.

Deepfakes “are media that take a person in an existing image or video and replace them with someone else’s likeness using artificial neural networks,” Freeman explained. “The term is often used interchangeably with phrases like AI-generated media, generative media, and personalized media. It can include any or all of audio, images, video, or speech synthesis to create a realistic, artificial result.”
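Freeman did not dig into the underlying machinery, but the canonical face-swap technique behind most deepfake videos is an autoencoder with one shared encoder and two person-specific decoders: the encoder learns a general representation of a face, each decoder learns to reconstruct one particular person, and the swap happens when a frame of person A is decoded with person B’s decoder. The sketch below is illustrative rather than drawn from Freeman’s presentation; the FaceSwapAE class, the layer sizes, and the random tensors standing in for aligned face crops are all assumptions, and production tools add face detection, alignment, and adversarial losses on top of this skeleton.

```python
# Minimal, illustrative sketch of the shared-encoder / two-decoder autoencoder
# used in classic face-swap deepfakes. Not drawn from Freeman's presentation;
# layer sizes and the FaceSwapAE/make_decoder names are assumptions.
import torch
import torch.nn as nn

IMG = 64  # illustrative 64x64 RGB face crops

def make_decoder():
    # Upsample a 256-dim latent code back to a 64x64x3 face.
    return nn.Sequential(
        nn.Linear(256, 128 * 8 * 8),
        nn.Unflatten(1, (128, 8, 8)),
        nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16x16
        nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 32x32
        nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 64x64
    )

class FaceSwapAE(nn.Module):
    def __init__(self):
        super().__init__()
        # One encoder learns a person-agnostic representation of a face...
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16x16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 8x8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 256),
        )
        # ...while each decoder learns to reconstruct one specific person.
        self.decoder_a = make_decoder()
        self.decoder_b = make_decoder()

    def forward(self, x, person):
        z = self.encoder(x)
        return self.decoder_a(z) if person == "a" else self.decoder_b(z)

model = FaceSwapAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Training: reconstruct each person's own faces through their own decoder.
faces_a = torch.rand(8, 3, IMG, IMG)  # stand-ins for aligned face crops of person A
faces_b = torch.rand(8, 3, IMG, IMG)  # stand-ins for aligned face crops of person B
loss = loss_fn(model(faces_a, "a"), faces_a) + loss_fn(model(faces_b, "b"), faces_b)
opt.zero_grad()
loss.backward()
opt.step()

# The "swap": encode frames of person A, decode with B's decoder, so B's
# likeness is rendered in A's pose and expression.
with torch.no_grad():
    swapped = model(faces_a, "b")
```

The key design choice is that the two decoders never see each other’s training data; because they share one encoder, the latent code ends up capturing pose and expression that either decoder can render in its own person’s likeness.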

Freeman went on to describe a “spectrum of synthetic audio-visual media.” On one end are deepfakes: very convincing productions developed using AI, virtual performance mapping, voice synthesis, and face swapping, which currently require a high degree of technical skill to create and are time-consuming and expensive to produce. “Common parlance” has broadened the term to include rotoscope face swapping, face-altering filters on social media platforms, and generally deceptive editing, but those techniques require much less skill to use. On the other end of the spectrum, these “cheapfakes” are “easier to see as synthetic, and much less likely to pass muster upon investigation.”

While there is generally a steep learning curve, much of the software used to produce deepfakes is open source and freely available. “Deepfakes of various quality are certainly not beyond the abilities of a devoted amateur,” Freeman said. In addition, there are online portals offering deepfake creation and voice cloning for nominal fees.

Freeman described the technology used to create deepfakes as inherently neutral, noting that “one of their most prevalent uses…is simply to entertain,” with many enthusiasts creating memes for social media. At the higher end, movie studios employ the technology to make older (or even deceased) actors look younger in flashback scenes, and advertising companies have used deepfakes to put a new spin or humorous twist on older commercials. Last year, India’s Bharatiya Janata Party used deepfake technology to overcome language barriers, making a politician appear to speak in different languages and dialects in a video ad. And the ALS Association’s Project Revoice is a nonprofit voice cloning initiative that helps people with the neurodegenerative disease preserve their unique voices for use with augmentative and alternative communication (AAC) devices when they lose the ability to speak.

However, “while the technology has certainly been put to some positive uses, the concerns and interest around deepfakes primarily come from the issues where it’s causing the biggest problems,” Freeman said. The most widespread problematic use—thus far—has been involuntary pornography, in which the likenesses of female celebrities are substituted onto the bodies of pornographic actresses. “The victims are mostly high-profile women in the entertainment industry from the United States, South Korea, the United Kingdom, Canada, and India,” Freeman explained.

The potential for deepfakes to deceive is already being recognized by other kinds of criminals. “According to an August 2019 report in the Wall Street Journal, a criminal used deepfake software to impersonate an executive’s voice,” Freeman said. “The CEO of a UK energy company thought he was speaking to the CEO of its German parent company, who asked him to send funds to a Hungarian supplier.” Within an hour, $243,000 had been transferred to the criminal’s account. As the technology becomes more widespread, it is likely that such phishing attempts against individuals will become more sophisticated and easier to generate.

In the current era of conspiracy theories, misinformation, and hyper-partisanship, it doesn’t require much imagination to consider ways the technology could be used to deceive and smear politicians, celebrities, or even ordinary people on social media platforms. As an example, Freeman mentioned a 2018 fake interview between Allie Beth Stuckey of Blaze TV and U.S. Representative Alexandria Ocasio-Cortez. The video wasn’t technically a deepfake, but it used deceptive editing to make it appear that Ocasio-Cortez was stumped by Stuckey’s questions. The post received over a million views before a New York Times reporter pointed out that there was no disclaimer describing the video as satire. The network added a disclaimer, and Stuckey responded on Twitter that the video’s satirical intent should have been obvious to viewers even without one.

“There is nothing stopping anyone from publishing a piece of malicious content that is intended to deceive and then, once found out, claiming that it is merely satire, and that those who are opposed to it are against artistic expression, don’t get the joke, or are pro censorship,” Freeman said.

Librarians should be monitoring this trend, because while the majority of deceptive videos would currently fall under the “cheapfake” umbrella, this won’t always be the case. And “if deepfakes proliferate, then authority and information [will] continue to decay,” Freeman said. Viewer confidence in the veracity of video content could erode, and politicians and others may begin leveraging that lack of confidence to disparage accurate content. “There have been repeated instances of politicians claiming unflattering videos were deepfakes, and proving those claims correct—or not—has proven impossible,” Freeman noted.

Synthetic media also present a new challenge for information literacy education. “Many of us are involved in teaching information literacy in [K–12] schools, colleges, universities, and to the public, and as you all know, the topic is already big enough and hard enough to teach without something else gumming up the works,” Freeman said. “In my opinion, our current information literacy tools are not up to the challenge of widespread synthetic media, and it presents issues to tomorrow’s students and critical thinkers.” Freeman’s presentation includes an extensive bibliography of resources for those interested in learning more about deepfakes and the emerging challenges posed by synthetic media.

Freeman was joined on the three-hour Top Tech Trends panel by Jeanie Austin from the University of Illinois at Urbana-Champaign; Callan Bignoli, library director for the Olin College of Engineering in Needham, MA; Thomas Ferren, program officer for ALA's new Core division; and TJ Lamanna, emerging technologies librarian at the Cherry Hill Public Library, NJ. The presentations are available on demand for registered attendees, and Freeman’s presentation opens the session following a brief introduction. Check back for additional coverage of Top Tech Trends.

Matt Enis

menis@mediasourceinc.com

@MatthewEnis

Matt Enis (matthewenis.com) is Senior Editor, Technology for Library Journal.
