Deepfake Democracy: AI Technology Complicates Election Security

Recent events, including an artificial intelligence (AI)-generated deepfake robocall impersonating President Biden urging New Hampshire voters to abstain from the primary, serve as a stark reminder that malicious actors increasingly view modern generative AI (GenAI) platforms as a potent weapon for targeting US elections.

Platforms like ChatGPT, Google's Gemini (formerly Bard), or any number of purpose-built Dark Web large language models (LLMs) could play a role in disrupting the democratic process, with attacks encompassing mass influence campaigns, automated trolling, and the proliferation of deepfake content.

In fact, FBI Director Christopher Wray recently voiced concerns about ongoing information warfare using deepfakes that could sow disinformation during the upcoming presidential campaign, as state-backed actors attempt to sway geopolitical balances.

GenAI could also automate the rise of "coordinated inauthentic behavior" networks that attempt to develop audiences for their disinformation campaigns through fake news outlets, convincing social media profiles, and other avenues — with the goal of sowing discord and undermining public trust in the electoral process.
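On the detection side, one crude signal of coordination is many accounts posting near-identical text within a short window. The sketch below groups posts that way; the text normalization, the five-minute window, and the three-account threshold are illustrative assumptions, not a production detector.

```python
# Minimal sketch: flag clusters of near-identical posts from distinct accounts
# inside a short time window, a crude coordinated-behavior signal. The
# normalization, window size, and account threshold are illustrative assumptions.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Post:
    account: str
    text: str
    ts: float  # Unix timestamp, seconds

def normalize(text: str) -> str:
    """Collapse case and whitespace so trivially edited copies still match."""
    return " ".join(text.lower().split())

def coordinated_clusters(
    posts: list[Post], window_s: float = 300, min_accounts: int = 3
) -> list[list[Post]]:
    """Return groups of posts that share normalized text and were posted by at
    least min_accounts distinct accounts within window_s seconds of each other."""
    by_text: dict[str, list[Post]] = defaultdict(list)
    for p in posts:
        by_text[normalize(p.text)].append(p)
    clusters = []
    for group in by_text.values():
        group.sort(key=lambda p: p.ts)
        if (
            len({p.account for p in group}) >= min_accounts
            and group[-1].ts - group[0].ts <= window_s
        ):
            clusters.append(group)
    return clusters
```

Real platforms layer in many more signals, such as account age, network structure, and posting cadence, but even this toy heuristic shows why automation cuts both ways.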

Election Influence: Substantial Risks & Nightmare Scenarios

From the perspective of Padraic O'Reilly, chief innovation officer for CyberSaint, the risk is "substantial" because the technology is evolving so quickly.

"It promises to be interesting and perhaps a bit alarming, too, as we see new variants of disinformation leveraging deepfake technology," he says.

Specifically, O'Reilly says, the "nightmare scenario" is that microtargeting with AI-generated content will proliferate on social media platforms. That's a familiar tactic from the Cambridge Analytica scandal, where the company amassed psychological profile data on 230 million US voters in order to serve up highly tailored messaging via Facebook to individuals in an attempt to influence their beliefs — and votes. But GenAI could automate that process at scale, creating highly convincing content with few, if any, of the "bot" characteristics that can turn people off.

"Stolen targeting data [personality snapshots of who a user is and their interests] merged with AI-generated content is a real risk," he explains. "The Russian disinformation campaigns of 2013–2017 are suggestive of what else could and will occur, and we know of deepfakes generated by US citizens [like the one] featuring Biden, and Elizabeth Warren."

The mix of social media and readily available deepfake tech could be a doomsday weapon for polarization of US citizens in an already deeply divided country, he adds.

"Democracy is predicated upon certain shared traditions and information, and the danger here is increased balkanization among citizens, leading to what the Stanford researcher Renée DiResta called 'bespoke realities,'" O'Reilly says, aka people believing in "alternative facts."

The platforms that threat actors use to sow division will likely be of little help. For instance, he adds, the social media platform X, formerly known as Twitter, has gutted its quality assurance (QA) for content.

"The other platforms have provided boilerplate assurances that they will address disinformation, but free speech protections and lack of regulation still leave the field wide open for bad actors," he cautions.

AI Amplifies Existing Phishing TTPs

GenAI is already being used to craft more believable, targeted phishing campaigns at scale — but in the context of election security, that phenomenon is even more concerning, according to Scott Small, director of cyber threat intelligence at Tidal Cyber.

"We expect to see cyber adversaries adopting generative AI to make phishing and social engineering attacks — the leading forms of election-related attacks in terms of consistent volume over many years — more convincing, making it more likely that targets will interact with malicious content," he explains.

Small says AI adoption also lowers the barrier to entry for launching such attacks, a factor that is likely to increase the volume of operations this year that try to infiltrate election campaigns or take over candidate accounts for impersonation purposes, among other potential attacks.

"Criminal and nation-state adversaries regularly adapt phishing and social engineering lures to current events and popular themes, and these actors will almost certainly try to capitalize on the boom in election-related digital content being distributed generally this year, to try to deliver malicious content to unsuspecting users," he says.

Defending Against AI Election Threats

To defend against these threats, election officials and campaigns must understand GenAI-powered risks and how to counter them.

"Election officials and candidates are constantly giving interviews and press conferences that threat actors can pull sound bites from for AI-based deepfakes," says James Turgal, vice president of cyber-risk at Optiv. "Therefore, it is incumbent upon them to make sure they have a person or team in place responsible for ensuring control over content."

Officials also must make sure volunteers and workers are trained on AI-powered threats like enhanced social engineering, on the threat actors behind them, and on how to respond to suspicious activity.

To that end, staff should participate in social engineering and deepfake video training that covers all forms and attack vectors, including electronic (email, text, and social media platforms), in-person, and telephone-based attempts.

"This is so important — especially with volunteers — because not everyone has good cyber hygiene," Turgal says.

Additionally, campaign and election volunteers must be trained on how to safely provide information online and to outside entities, including in social media posts, and to use caution when doing so.

"Cyber threat actors can gather this information to tailor socially engineered lures to specific targets," he cautions.

Longer term, O'Reilly says, regulation that includes watermarking for audio and video deepfakes will be instrumental, and he notes that the federal government is working with the owners of LLMs to put protections in place.
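Production watermarking schemes embed signals in the media itself, but the verification logic can be sketched at the level of a signed provenance manifest that binds a content hash to its claimed origin. In the hedged example below, an HMAC over the manifest stands in for the public-key signatures a real provenance standard such as C2PA would use, and the manifest fields are illustrative assumptions.

```python
# Minimal sketch of manifest-style provenance checking: the publisher signs a
# manifest binding a content hash to origin metadata, and a verifier recomputes
# the MAC. HMAC with a shared key stands in for real public-key signatures, and
# the manifest fields are illustrative assumptions.
import hashlib
import hmac
import json

def sign_manifest(content: bytes, origin: str, key: bytes) -> dict:
    """Build a manifest binding the content hash to its origin, then MAC it."""
    manifest = {"origin": origin, "sha256": hashlib.sha256(content).hexdigest()}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["mac"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(content: bytes, manifest: dict, key: bytes) -> bool:
    """Check both the MAC and that the content matches the claimed hash."""
    claimed = {k: v for k, v in manifest.items() if k != "mac"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest.get("mac", ""))
        and claimed["sha256"] == hashlib.sha256(content).hexdigest()
    )
```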

In fact, the Federal Communications Commission (FCC) just declared AI-generated voice calls to be "artificial" under the Telephone Consumer Protection Act (TCPA), making the use of voice-cloning technology in robocalls illegal and providing state attorneys general nationwide with new tools to combat such fraudulent activities.

"AI is moving so fast that there is an inherent danger that any proposed rules may become ineffective as the tech advances, potentially missing the target," O'Reilly says. "In some ways, it is the Wild West, and AI is coming to market with very little in the way of safeguards."
