With Deepfakes, Prevention Is A Fool’s Errand — Here’s How You Can Prepare And Respond
By Phil Singer, CEO
Doctored videos and audio clips have circulated online for years, but until recently they were patently unrealistic, the currency of comedians looking to parody political or cultural figures. Now, thanks to advances in an artificial intelligence technique called deep learning (the source of the "deepfake" name), manipulated or fabricated footage is becoming more convincing — and the growing risk for businesses, politicians and others is no joke.
Deepfakes, as these AI-generated videos and audio recordings are called, can be deployed by bad actors to spread misinformation, entangle a company or campaign in controversy, or completely destroy a person's reputation. Fed a few key inputs, such as images, video and audio, deepfake software can make a person realistically appear to do or say almost anything.
The result could easily be mistaken for evidence of an offensive remark, unlawful or unethical behavior, or any number of other compromising situations. And the fallout could be swift and severe.
As this technology becomes more sophisticated, accessible and easy to use, trying to prevent an attack is an exercise in futility. But the difference between a full-blown crisis — a stock plummeting, a campaign scrambling or any number of other worst-case scenarios — and a brief, contained interruption will be strategic preparation and the ability to act quickly and decisively when an attack happens.
The key is to have a single, dedicated team focused on the challenge, rather than a dispersed and disjointed network. From trained investigators to seasoned communications and media experts, everybody needs to be working together and speaking the same language from the get-go.
During the preparation phase, both investigative research and media expertise are critical. The first step is a comprehensive audit to identify and map out any internal or external threats or vulnerabilities. For example, if a known adversary has been active on social media around a specific issue, that individual might attempt to derail a company or campaign announcement by releasing a deepfake. A skilled research team can identify these threats and assess the attendant level of risk.
For organizations and campaigns looking for investigative support, a combination of technical expertise and relevant experience is the ideal package: An investigator must know how to comb through dense public records such as litigation histories and business documents, but they must also have experience with the relevant industry or political environment to be able to connect the dots between seemingly unrelated pieces of information.
Armed with a clear view of the threat landscape, it becomes possible to explore various crisis scenarios and develop tight response plans and protocols. Each scenario should be described in as much detail as possible, including sketching a potential chronology of events. Based on what each scenario entails, there should be a clearly defined messaging framework to inform all public statements, as well as responses to anticipated external inquiries, whether they come from reporters, customers, campaign supporters or any other stakeholder.
Additionally, it is critical to draw up a response playbook for each situation that minimizes the risk of rash decision-making when things go wrong. One scenario might necessitate an immediate press release upon news of the deepfake attack, while another might be best addressed through an executive statement posted on Twitter. Meticulous attention to these and other consequential details is the only way to avoid being caught flat-footed.
Close collaboration between technical experts and media pros is also essential in the aftermath of an attack. Since no reliable technology exists to automatically detect deepfakes, forensic analysis — examining the footage for signs of manipulation and tracing IP addresses and other identifiers to find the likely source — is the only way to quickly verify that a clip is, in fact, fake. But verification is useless without the ability to clearly communicate highly technical findings in understandable and accurate language.
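One small piece of that verification work can be prepared in advance. If an organization archives cryptographic hashes of its official footage at publication time, a circulating clip can be checked against that record in seconds. The sketch below is a minimal illustration, not a full forensic workflow: the function names are hypothetical, and a hash mismatch proves only that the copy differs from the original (ordinary re-encoding also changes the hash), while a match rules out tampering with that particular copy. It uses only Python's standard library.

```python
import hashlib


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large video files never load fully into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def matches_archived_original(suspect_path: str, archived_hash: str) -> bool:
    """True if the circulating clip is byte-identical to the archived original.

    A match rules out tampering with this copy; a mismatch is only a signal
    that deeper frame-level analysis is needed, not proof of a deepfake.
    """
    return sha256_of(suspect_path) == archived_hash
```

In practice, the archived hashes would be recorded in a tamper-evident log at the moment footage is published, so the comparison can be run the instant a suspicious clip surfaces.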
Moving beyond an attack, the same team that helped prepare and respond is best positioned to help perform a postmortem while also actively repairing any damage done. Technical experts can get to work building an internal report of the attack, including a thorough, minute-by-minute analysis of what happened and recommendations for how to get ahead of possible copycat attacks.
At the same time, it is important for campaigns, organizations or individuals to reclaim their voice in the media, and go from defense to offense. For example, if an attack derailed the announcement of a major business acquisition, it is important to develop and execute a media plan that generates the kinds of positive headlines that might have been lost. Strategic media engagements can help make a deepfake scandal a small footnote in an organization’s story rather than a lengthy chapter.
The reality is that the deepfake problem is not around the corner; it is already here. According to research, deepfake videos online doubled over a nine-month period in 2019. And while only a small percentage have featured prominent corporate figures, deepfake audio recordings have already been used to extort hundreds of thousands of dollars out of an energy company.
So serious is the risk that U.S. intelligence officials included a warning in the annual Worldwide Threat Assessment: “Adversaries and strategic competitors probably will attempt to use deepfakes or similar machine-learning technologies to create convincing—but false—image, audio, and video files to augment influence campaigns directed against the United States and our allies and partners.”
Now is the time for campaigns, companies and individuals to start building out their defenses before it's too late. Because without the right preparations, reputations, balance sheets and even elections are vulnerable.