AI deepfakes put democracy at risk. Here are 4 ways to fight back



With the recent explosion of AI, stunning images, videos, audio files and text can now be easily generated by anyone with just a few simple inputs. While this technology offers many amazing benefits, it also presents significant dangers.

One of the most pernicious of these is the creation of deepfakes – highly realistic but manipulated or fabricated content that falsely depicts real people doing or saying things they never did. Our ability to distinguish fact from fiction, as well as democracy itself, are in the crosshairs.

In recent months, deepfakes have become commonplace like never before. In February, ads on Facebook and Instagram were discovered using AI videos falsely depicting Piers Morgan, Oprah Winfrey and other celebrities endorsing pseudo-scientific self-help courses.

In January, Taylor Swift became the victim of deepfake pornography, as fake explicit images of the pop star flooded Twitter/X, garnering millions of views.

AI-GENERATED PORN, INCLUDING FAKE CELEBRITY NUDES, PERSISTS ON ETSY AS DEEPFAKE LAWS 'LAG BEHIND'


Celebrities are far from the only victims. Ordinary people, especially women and girls, are increasingly being targeted. A study reported by MIT Technology Review found that 96% of deepfake videos online were pornographic and nonconsensual, nearly all of them targeting women.

NBC News recently reported that middle school students in Beverly Hills, California, were caught creating and circulating fake nude photos of their classmates. Similar incidents are occurring in high schools across the country.

As this technology rapidly improves, Oren Etzioni, a computer science professor at the University of Washington who studies deepfake detection, warns: "We're going to see a tsunami of these explicit AI-generated images."

Deepfakes put democracy itself at risk. The United States, the United Kingdom and about 70 countries representing nearly half the world's population are holding national elections this year.

AMERICANS WORRY THAT "SCARY" DEEPFAKES ARE MANIPULATING VOTERS IN THE 2024 ELECTIONS

These will be the first elections in which sophisticated deepfake technology will be easily accessible not only to government entities and nefarious actors, but to anyone in the world with a phone or laptop.

We have already seen glimpses of deepfake interference in the political arena. A viral 2022 deepfake video falsely showed Ukrainian President Volodymyr Zelenskyy announcing his surrender. In 2023, AI-generated videos promoting the Chinese Communist Party were shared by pro-China bot accounts on Facebook and Twitter/X.

Here in the United States, the presidential election has already been disrupted. In January, AI-generated images shared on social media falsely showed former President Trump with young girls on Jeffrey Epstein's plane.

Last month, days before the New Hampshire presidential primary, thousands of robocalls urged recipients not to vote, with the message: "Your vote makes a difference in November, not this Tuesday." The message, which imitated President Biden's voice, was AI-generated. The creator of the fake audio said it took him only 20 minutes and cost $1.

HOW DEEPFAKES ARE ON THE VERGE OF DESTROYING POLITICAL ACCOUNTABILITY

Nina Jankowicz, former executive director of the Department of Homeland Security's Disinformation Governance Board, recently warned that the Russian government has used doctored pornography to smear female politicians in Ukraine and Georgia, and cautioned that similar tactics will likely be deployed against women leaders in the West.

Deepfakes also undermine trust in authentic media. Renée DiResta of the Stanford Internet Observatory highlighted how claims about AI are now being used to discredit legitimate content, citing denials of actual videos of the Hamas attacks on October 7 as an example.

So far, remedies are either nonexistent or ineffective. Several states have passed laws mandating the labeling of deepfakes or banning those that misrepresent candidates. A few federal bills have been proposed, but none has passed.

Social media giants including Meta and YouTube have implemented rules against deliberately misleading manipulated media, including deepfakes. But enforcement is far from perfect: by the time deepfakes are reported, they have often already reached millions of users. In early February, Meta's Oversight Board criticized the company's manipulated-media policy as "incoherent."

WISCONSIN LEGISLATURE PASSES LAWS RESTRICTING AI-PRODUCED DEEPFAKE CAMPAIGN MATERIALS

The rise of AI deepfakes comes at a time when many social media companies, most notably Twitter/X, are scaling back efforts to moderate controversial content, particularly around politics. Katie Harbath, Facebook's former public policy director, has noted that companies increasingly want to avoid controversy over heavy-handed moderation.

What can we do now? Where do we start?

First, helpful AI can be used as a tool against harmful AI. Detection is itself a machine-learning problem: models can be trained as deepfake detectors, identifying subtle artifacts in generated media that humans would not notice.

Recently, some major social media and AI companies, including Meta, Google, and OpenAI, have started partnering to watermark and label AI-generated content. This type of cross-platform collaboration is essential and must be strengthened and expanded.
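The provenance-labeling approach these companies are piloting (in the spirit of the C2PA content-credentials effort) can be illustrated with a minimal sketch: the platform attaches a signed manifest recording that a piece of content is AI-generated, so any later tampering with the content or the label is detectable. The key, function names, and manifest fields below are illustrative assumptions, not any company's actual API.

```python
import hashlib
import hmac
import json

# Hypothetical platform signing key; real systems use managed key
# infrastructure, not a hardcoded secret.
SECRET_KEY = b"platform-signing-key"

def label_content(content: bytes, generator: str) -> dict:
    """Build a provenance manifest for AI-generated content and sign it."""
    manifest = {
        "ai_generated": True,
        "generator": generator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"manifest": manifest, "signature": signature}

def verify_label(content: bytes, label: dict) -> bool:
    """Check that the content matches its manifest and the signature is valid."""
    manifest = label["manifest"]
    # Content altered after labeling? The stored hash will no longer match.
    if hashlib.sha256(content).hexdigest() != manifest["content_sha256"]:
        return False
    payload = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, label["signature"])
```

The cross-platform part is what makes this hard in practice: every participating platform must agree on the manifest format and be able to verify labels produced by the others, which is why an industry standard matters more than any one company's scheme.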

PENTAGON TURNS TO AI TO HELP DETECT DEEPFAKES

Second, the prospect of legal sanctions and fines for the dissemination of deepfakes would have a significant deterrent effect. The vast majority of Americans favor federal measures, with 84% supporting legislation that would ban nonconsensual deepfake pornography, according to the Artificial Intelligence Policy Institute.

Strict federal laws must be established to explicitly protect victims of deepfakes. Even preliminary federal rules and fines could significantly reduce their spread.

Third, social media companies must be held accountable for promoting deepfake content. Currently, Section 230 protects web companies from liability for user-generated content.

CLICK HERE FOR MORE FOX NEWS OPINION

Effective reform would hold companies liable for deepfake content they play an active role in disseminating, whether through targeted advertising or algorithmic boosting that serves content to users who would not otherwise have seen it.

Currently, major social networks profit when controversial and deepfake content goes viral: it boosts user engagement and advertising revenue, which encourages laissez-faire enforcement.

Fourth, there is an urgent need to improve media literacy. Studies show that education on these topics can significantly decrease the overall persuasive power of deepfakes and other online manipulations.

CLICK HERE TO GET THE FOX NEWS APP

The MIT Center for Advanced Virtuality, which offers online media literacy courses for students and teachers, is a good example. Similar initiatives can help build media literacy and critical thinking skills among middle and high school students.

A united front combining technological, legislative and educational efforts is necessary. This involves increased collaboration between social media and AI companies, policymakers, educators and users. The stakes couldn't be higher.

CLICK HERE TO READ MORE FROM MARK WEINSTEIN
