By David DiMolfetta

The era of shapeshifting is here.

Using algorithms already widely available online, anyone can transform a person’s entire facial structure with deepfake technology, which digitally alters an individual’s face to synthetically replace it with another likeness.

For some, deepfakes provide comedic relief, like this YouTube video that alters actor Bill Hader’s face to look like actor Tom Cruise when Hader impersonates him. The gimmick creates an uncanny effect that amplifies Hader’s talent as an impressionist.

For others, this technology leads to trauma and damage on all scales.

Last October, the BBC reported that a deepfake bot on Telegram had spread more than 100,000 fake nude images of women, some of whom were depicted as underage in the photos. The real danger lay in how the bot worked: it took genuine photos of women and instantaneously generated a nude version of the victim pictured.

“It’s devastating, for victims of fake porn. It can completely upend their life because they feel violated and humiliated,” Nina Schick, author of the book Deep Fakes and the Infocalypse, told the BBC at the time.

The Telegram incident arguably fell under the umbrella of revenge porn, a form of online sexual harassment in which sexually explicit images or videos of individuals are spread without consent. While most U.S. states have laws against revenge porn and non-consensual photography, deepfake regulation exists in just a few, including Virginia and California.

Only in January of this year did lawmakers take up broad regulation of this deceptive technology. The 2021 National Defense Authorization Act (NDAA) directs the Department of Homeland Security to produce assessments of deepfakes, how they are disseminated, and how they cause harm or harassment. It also directs the Pentagon to assess the harms deepfakes pose to members of the U.S. military.

The act is not pure deepfake regulation, but it can lead to further guardrails down the line, Matthew Ferraro, a former CIA officer and a current term member of the Council on Foreign Relations, told CyberScoop.

“As the dark and deeply disturbing events of January 6 show, disinformation can have very real consequences,” he said. “This is quickly becoming a regulated space where there’s going to be sufficiently good laws — state and federal — that are going to impact people who create, use or disseminate deepfakes.”

On a small scale, misuse of deepfake technology can enable consumer identity theft by spoofing facial or voice recognition systems. At the policy level, a well-produced deepfake video can harm or misinform citizens on a mass scale.

A deepfake of Delhi Bharatiya Janata Party (BJP) president Manoj Tiwari criticizing the incumbent Delhi government of Arvind Kejriwal went viral on WhatsApp during Delhi’s legislative assembly elections last February. The event marked the debut of deepfakes in Indian election campaigns.

Ironically, the use of deepfakes in U.S. politics first gained public attention in 2018 with this doctored video of former President Barack Obama talking about the dangers of deepfakes. The matter raised scrutiny of whether the general public can trust addresses from their leaders; some supporters of former President Donald Trump believed his election concession speech in January was a deepfake.

Harvard University’s Belfer Center for Science and International Affairs said in a report that forensic experts at the FBI and other branches of the U.S. justice system have the tools to distinguish such visuals from reality.

Sensity, an Amsterdam-based platform for detecting deepfakes, reported on Twitter that the number of deepfake videos spreading online roughly doubles every six months. Approximately 8,000 deepfake videos had been identified online as of December 2018; by December 2020, some 85,000 were known to exist, a more than tenfold increase in two years that is roughly consistent with that doubling rate.

Deepfake technology has mainly targeted individuals – particularly women – but as such tools become more widely available, there is a growing expectation that organized crime will soon turn them against businesses, according to a report from Westpac Institutional Bank.

“The rapid advancement of such technologies in the past year or two means we now live in a world where technologies to create highly convincing manipulated media – images, audio, and video – are increasingly available to anyone, anywhere, surprisingly cheaply,” the report said.

In a time rife with division and mistrust, consumers are especially vulnerable. As technology hands bad actors increasingly advanced tools, careful regulation is needed so that people can prosper knowing their identities are safe, their businesses can operate, and their democracies remain intact.