AI and the spread of fake news sites: How can we counteract them?



This online disinformation campaign blamed on Russia involves not just the spreading of anti-Ukrainian fake news but also challenges Western media outlets to verify it – Copyright AFP Federico Parra

Around the world, concerns are growing about misinformation spread through the use of artificial intelligence. In particular, malicious sites are becoming more sophisticated, and distinguishing genuine news sites from fake ones can be challenging.

According to the BBC: “There are hundreds of fake news websites out there, from those which deliberately imitate real life newspapers, to government propaganda sites, and even those which tread the line between satire and plain misinformation.”

The advance of AI programs, especially Large Language Models (LLMs), which are trained on vast data sets to produce fluent-reading text, has made the task of differentiating between sites more difficult. For example, the instant video generator Sora, which produces highly detailed, Hollywood-quality clips, further raises concerns about the easy spread of fake footage.

Virginia Tech researchers have outlined two different facets of the AI-fuelled spread of fake news sites. The researchers have provided updates to Digital Journal.

The first area comes from Cayce Myers on what legal measures can and cannot achieve. Myers says, about the current concerns: “Regulating disinformation in political campaigns presents a multitude of practical and legal issues. Despite these challenges, there is a global recognition that something needs to be done. This is vitally important given that the U.S., U.K., India, and the E.U. all have important elections in 2024, which will likely see a host of disinformation posted throughout social media.”

Myers highlights deepfakes, which are easy to create and disseminate, as posing logistical problems. He states: “Technological developments such as Sora show why so many people are concerned about the connection between AI and disinformation.”

The second point comes from Julia Feerrar, concerning how to guard against disinformation. Feerrar notes: “AI-generated and other false or misleading online content can look very much like quality content. As AI continues to evolve and improve, we need strategies to detect fake articles, videos, and images that don’t just rely on how they look.”

Feerrar recommends assessing whether news comes from a reputable, professional news organization or from a website or account that looks suspicious.

Feerrar recommends the following approaches when evaluating the veracity of digital news articles:

  • Fake news content is often designed to appeal to our emotions — it’s important to take a pause when something online sparks a big emotional reaction.
  • Verify headlines and image content by adding the phrase “fact-check” to your Google search.
  • Very generic website titles can be a red flag for AI-generated news.
  • Some generated articles have contained error text that says things along the lines of being ‘unable to fulfill this request’ because creating the article violated the AI tool’s usage policy. Some sites with little human oversight may miss deleting these messages.
  • Current red flags for AI-generated images include a hyper-real, strange appearance overall, and unreal-looking hands and feet.
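One of the red flags above — leftover AI error text — lends itself to a simple automated check. As a minimal sketch (the phrase list and function name are illustrative assumptions, not tools used by the researchers), a script could scan article text for common AI-refusal boilerplate that a site with little human oversight might have missed deleting:

```python
# Illustrative sketch: flag article text containing leftover AI-refusal
# boilerplate, one of the red flags described above. The phrase list is
# a hypothetical sample, not an exhaustive or official set.
REFUSAL_PHRASES = [
    "unable to fulfill this request",
    "cannot fulfill this request",
    "as an ai language model",
    "violates my usage policy",
]

def find_refusal_artifacts(article_text: str) -> list[str]:
    """Return any known AI-refusal phrases found in the article text."""
    lowered = article_text.lower()
    return [phrase for phrase in REFUSAL_PHRASES if phrase in lowered]

# Example: a generated article published with the error text left in.
sample = "Breaking update: I'm sorry, but I am unable to fulfill this request."
print(find_refusal_artifacts(sample))  # → ['unable to fulfill this request']
```

A match is not proof of AI generation on its own, but it is a strong hint that an article was machine-written and published without human review.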

