Algorithms Are Pushing AI-Generated Falsehoods at an Alarming Rate. How Do We Stop This?

Generative artificial intelligence (AI) tools are supercharging the problem of misinformation, disinformation, and fake news. (Image: Tero Vesalainen via Shutterstock)

Generative artificial intelligence (AI) tools are supercharging the problem of misinformation, disinformation, and fake news. OpenAI’s ChatGPT, Google’s Gemini, and various image, voice, and video generators have made it easier than ever to produce content, while making it harder to tell what is factual or real.

Malicious actors looking to spread disinformation can use AI tools to largely automate the generation of convincing and misleading text.

This raises pressing questions: How much of the content we consume online is true, and how can we determine its authenticity? And can anyone stop this?

It’s not an idle concern. Organizations seeking to covertly influence public opinion or sway elections can now scale their operations with AI to unprecedented levels. And their content is being widely disseminated by search engines and social media.

AI-generated fakes are everywhere

Earlier this year, a German study on search engine content quality noted “a trend toward simplified, repetitive, and potentially AI-generated content” on Google, Bing, and DuckDuckGo.

Traditionally, readers of news media could rely on editorial control to uphold journalistic standards and verify facts. But generative AI is rapidly changing this space.

Traditionally, readers of news media could rely on editorial control to uphold journalistic standards and verify facts. (Image: Image Point Fr via Shutterstock)

In a report published this week, the internet trust organization NewsGuard identified 725 unreliable websites that publish AI-generated news and information “with little to no human oversight.”

Last month, Google released an experimental AI tool to a select group of independent publishers in the United States. Using generative AI, these publishers can summarize articles pulled from a list of external websites that produce news and content relevant to their audience. As a condition of the trial, participating publishers must post three such articles per day.

Platforms that both host content and develop generative AI blur the traditional lines that underpin trust in online content.

Can the government step in?

Australia has already seen tussles between government and online platforms over the display and moderation of news and content.

In 2019, the Australian government amended the criminal code to mandate the swift removal of “abhorrent violent material” by social media platforms.

The Australian Competition and Consumer Commission’s (ACCC) inquiry into power imbalances between Australian news media and digital platforms led to the 2021 implementation of a bargaining code that forced platforms to pay media for their news content.

While these might be considered partial successes, they also demonstrate the scale of the problem and the difficulty of taking action.

Our research indicates that during these conflicts, online platforms were initially open to changes but later resisted them, while the Australian government oscillated between enforcing mandatory measures and preferring voluntary actions.

Ultimately, the government realized that relying on platforms’ “trust us” promises wouldn’t lead to the desired outcomes.

Our study’s takeaway is that once digital products become integral to millions of businesses and to everyday life, they give platforms, AI companies, and big tech the means to anticipate and counter government action.

With this in mind, it is right to be skeptical of early calls by tech leaders like Elon Musk and Sam Altman for regulation of generative AI. Such calls have faded now that this technology has taken hold of our lives and of online content.

One challenge is the sheer speed of change: it is so swift that safeguards to mitigate the potential risks to society have not yet been established. Accordingly, the World Economic Forum’s 2024 Global Risks Report identified mis- and disinformation as the greatest threats over the next two years.

The problem is compounded by generative AI’s ability to create multimedia content. Based on current trends, we can expect an increase in deepfake incidents, although social media platforms like Facebook are responding: they plan to automatically identify and label AI-generated photos, video, and audio.

Based on current trends, we can expect an increase in deepfake incidents. (Image: Teacher Photo via Shutterstock)

What can we do?

Australia’s eSafety Commissioner is working on ways to regulate and mitigate the potential harm caused by generative AI while balancing its potential opportunities.

A key idea is “safety by design,” which requires tech firms to place safety considerations at the core of their products.

Other countries, like the U.S., are further ahead in regulating this technology. For example, U.S. President Joe Biden’s recent executive order on the safe deployment of AI requires companies to share safety test results with the government, sets standards for red-team testing (simulated hacking attacks), and provides guidance on watermarking AI-generated content.

We call for three steps to help protect against the risks of generative AI in combination with disinformation.

  1. Regulation needs to pose clear rules without allowing for nebulous “best effort” aims or “trust us” approaches.
  2. To protect against large-scale disinformation operations, we need to teach media literacy in the same way we teach math.
  3. Safety tech or “safety by design” needs to become a non-negotiable part of every product development strategy.

People are aware that AI-generated content is on the rise, and in theory they should adjust their information habits accordingly. However, research shows that users generally underestimate their own risk of believing fake news compared with the risk they perceive for others.

Finding trustworthy content shouldn’t involve sifting through AI-generated content to make sense of what is factual.

Stan Karanasios, Associate Professor, The University of Queensland and Marten Risius, Senior Lecturer in Business Information Systems, The University of Queensland

This article is republished from The Conversation under a Creative Commons license. Read the original article.


  • Troy Oakes

    Troy was born and raised in Australia and has always wanted to know why and how things work, which led him to his love of science. He is a professional photographer who enjoys taking pictures of Australia's beautiful landscapes, and he is also a professional storm chaser. He currently lives in Hervey Bay, Australia.
