The persistent falsehoods about the 2020 election have made voters and election watchers more attuned to the potential for disinformation, though experts said recent technological advances are making it more difficult for users to discern fake content.
False content has emerged online throughout this election cycle, often in the form of artificial intelligence (AI) deepfakes. The images have sparked a flurry of warnings from lawmakers and strategists about attempts to influence the race’s outcome or sow chaos and distrust in the electoral process.
Just last week, a video falsely depicting individuals claiming to be from Haiti and voting illegally in multiple Georgia counties circulated across social media, prompting Georgia Secretary of State Brad Raffensperger (R) to ask X and other social platforms to remove the content.
Intelligence agencies later determined Russian influence actors were behind the video.
Thom Shanker, director of the Project for Media and National Security at George Washington University, noted the fake content used in earlier cycles was “sort of clumsy and obvious,” unlike newer, AI-generated content.
“Unless you really are applying attention and concentration and media literacy, a casual viewer would say, ‘Well, that certainly looks real to me,’” he said, adding, “And of course, they are spreading at internet speeds.”
News outlets are also trying to debunk fake content before it reaches large audiences.
A video recently circulated showing a fake CBS News banner claiming the FBI warned citizens “to vote with caution due to high terrorist threat level.” CBS said the screenshot “was manipulated with a fabricated banner that never aired on any CBS News platform.”
Another screenshot showing a CNN “race alert” with Vice President Harris ahead of Trump in Texas reportedly garnered millions of views over the weekend before the network confirmed the image was “completely fabricated and manipulated.”
False content like this can go unchecked for longer periods of time, as it is often posted into an “echo chamber” and shown only to users with similar interests and algorithmic feeds, said Sandra Matz, a professor at Columbia Business School.
“It’s not necessarily that there’s more misinformation, it’s also that it’s hidden,” Matz said, warning it is not possible for experts to “easily access the full range of content that is shown to different people.”
Social media companies have faced even more scrutiny after four news outlets last week released separate investigations into X, YouTube and Meta, the parent company of Facebook and Instagram. All of the probes found those major companies failed to stop some content containing election misinformation before it went live.
Since Elon Musk purchased X, he and the company have faced repeated criticism for scaling back content moderation features and reinstating several conspiracy theorists’ accounts.
Concerns over disinformation on the platform increased earlier this year when the billionaire became a vocal surrogate for Trump and ramped up his sharing of false or misleading claims.