UN adviser says AI could have ‘massive’ impact on voters: 2024 will be a ‘deepfake election’

Artificial intelligence (AI)-generated deepfakes are likely to have a ‘massive’ impact on voters in the upcoming election, and there’s not much we can do right now to stop them, says a United Nations (UN) AI adviser.

Speaking to Fox News Digital, Neil Sahota said his sources have warned that the growing use of deepfake ads could very well be “the greatest threat to democracy”.

“A lot of people, and I think the media as well, are calling the 2024 election a ‘deepfake election’ that’s probably going to be marred by tons and tons of deepfakes,” Sahota said. “There’s not much we can do right now to stop all of this.”

While the UN and various other organizations and companies are working quickly to deploy software capable of detecting deepfakes, Sahota noted that common verification tools, such as watermarks, are relatively easy to circumvent in their current versions.



The ChatGPT logo and the words Artificial Intelligence AI are seen in this illustration taken on May 4, 2023. (Reuters/Dado Ruvic/Illustration)

Additionally, the chances of successfully detecting AI-generated content vary greatly depending on the medium. For example, deepfake videos often leave multiple identifying tags.

An analyst can watch the body language of the person in the video. They can determine whether the audio syncs correctly with the individual’s mouth and monitor changes in lighting and shadows, as well as potential artifacts in each still image. Unfortunately, this analysis takes time and resources at a time when things can go viral overnight.

“If someone posts a very damaging fake video two days before the election, that may not be enough to counter that video, prove it and get people to believe it,” Sahota said.

Deepfakes have already affected politics around the world. In April, the Republican National Committee (RNC) released the first fully AI-generated political ad, targeting the Biden administration on China and crime. Sahota said the Democratic National Committee (DNC) has declined to say whether it has created similar AI content.

AI has also had an impact on recent elections in Turkey, where Sahota said more than 150 deepfake videos were identified and debunked on social media.



This illustration photo, taken on January 30, 2023, shows a phone screen displaying a statement from Meta’s head of security policy, with a fake video of Ukrainian President Volodymyr Zelenskyy calling on his soldiers to lay down their arms shown in the background, in Washington, DC. (Photo by OLIVIER DOULIERY/AFP via Getty Images)

“People need information to be informed voters. If you don’t know what you can trust, and you have these AI systems that, well, know you better than your best friend and can send you very specific, targeted fake advertising, what do you do?” he added.

For years, various organizations and individuals have worked to train AI in psychology, behavioral science and linguistics. These AI systems learn an individual’s opinions, hobbies and interests. Sahota said they even know which words will influence, connect with and persuade you.

While many researchers are still watching for big deepfake “home runs,” like Volodymyr Zelenskyy telling Ukrainian troops to surrender, bad actors are also “micro-targeting” people to influence certain subsets of the population.

A recent Hillary Clinton deepfake showed the former presidential candidate saying she loved Florida Governor Ron DeSantis and would support him if he ran for president. Sahota said these videos manipulate people’s decisions on a smaller scale, which is often overlooked.



The hyper-realistic image of Bruce Willis is actually a deepfake created by a Russian company using artificial neural networks. (Deep Cake via Reuters)

While AI videos are a legitimate concern, Sahota said the use of psychological AI tools has already been “honed” in marketing, where they can create a sort of “echo chamber” effect.

If a person subscribes to someone’s newsletter or sees something in their feed, the AI algorithm reinforces that content over time. This raises the question of whether a person chooses to vote for someone because it is their own idea, or because it has been planted in their mind.

“It’s like the movie Inception,” Sahota said. “Somebody actually planted this in your mind. And the best way to get buy-in is for you to think it’s your own idea. And that’s why a lot of these AI tools are unfortunately used.”

Perhaps the most important concern for the UN is what happens when someone claims to have been the victim of a deepfake in an attempt to discredit a legitimate video, photo or audio recording.



Journalists look at an AI-generated news anchor named “Fedha,” which the Kuwait News service recently launched. (YASSER AL-ZAYYAT/AFP via Getty Images)

“Someone says something a little wrong, okay, we can sort of understand that. But someone who actually said something now tries to pass it off as a deepfake. How do you prove the negative? There’s no real analysis you can do for that. And even if you do the real analysis, some people will remain suspicious of the results,” Sahota said.

According to Sahota, the Federal Election Commission (FEC) is at an impasse over what to do about deepfakes, as it is unclear whether regulating machine-generated content falls within its purview. With misinformation and misleading claims reaching “epic proportions,” Sahota said a change in mindset may be needed.


“This kind of cultural change takes time, and it’s a big change,” he added. “There’s going to be a lot of resistance to that. And unfortunately, this spin game of ‘oh, I didn’t really say that’ is, for sure, going to happen. We’ve already, well, we’ve already seen over 100 years of spin in American politics. So that’s the biggest challenge.”
