AI’s influence on the 2024 US election: a threat to democracy?
As the 2024 US election approaches, AI disinformation and deepfakes are poised to disrupt the democratic process. Experts warn that the growing sophistication of AI technology poses significant risks to election integrity, voter privacy, and public trust.
October 18, 2024
As the 2024 US presidential election draws ever closer, the discussion about artificial intelligence (AI) influencing political discourse has become increasingly pertinent.
Election officials, cybersecurity experts, and tech leaders are sounding the alarm over increasingly sophisticated AI threats to election integrity, voter privacy, and public trust in democratic institutions.
Evolution of AI: from tool to threat
“We did have AI during the last election, but it wasn’t as sophisticated as it is today. Advances in AI, such as generative AI or deepfakes, have evolved from mere misinformation into sophisticated tools of deception,” says Dr Srinivas Mukkamala, chief product officer at Ivanti.
“AI has made it increasingly challenging to distinguish between genuine and fabricated information,” Mukkamala adds.
A recent study by Ivanti, an IT security and systems management company, revealed that 54% of office workers were unaware that AI can impersonate anyone’s voice.
“This statistic is concerning,” says Mukkamala, “considering these individuals will be participating in the upcoming election. We cannot risk critical decisions being influenced by disinformation.”
Dr Srinivas Mukkamala, Chief Product Officer at Ivanti
Paul Teather, CEO of the AI-powered market intelligence platform Amplyfi, elaborates on this, explaining how GenAI has “democratised” the ability to create fake content, as it has become easier and requires less skill to pull off.
“There have already been numerous examples of this, including some by candidates and their campaigns, e.g. faked images of Taylor Swift endorsing Trump.
“People no longer need a photography studio to create near-flawless images based just on their ideas,” Teather continues, adding that this has led to a proliferation of AI-generated disinformation.
“This increases both the tolerance of GenAI (it is seen as normal, rather than evil) and better recognition of it (people checking GenAI images’ hands for errors).”
He adds that while voters are getting better at recognising the telltale signs of AI manipulation, the sheer volume of disinformation has become overwhelming.
Paul Teather, CEO at Amplyfi
AI-driven misinformation and the weaponisation of disinformation
According to Simon Horswell, senior fraud specialist at Onfido, this proliferation of fake content is evidenced by a 3000% increase in deepfake attempts in 2023.
“We’re seeing a real, concerning uptick of fraudsters using deepfakes to trick businesses and mislead consumers,” he says.
This explosion of AI-driven fake content is creating new avenues for voter manipulation; Horswell adds: “Deepfakes have become a vector to produce fraud at scale.”
With AI-generated videos and articles becoming increasingly convincing, fraudsters and political actors alike can use these technologies to spread disinformation at an unprecedented scale.
Simon Horswell, Senior Fraud Specialist at Onfido
Dr Ilia Kolochenko, CEO of ImmuniWeb, emphasises that AI can create millions of “malicious brainwashing messages”, which can be disseminated across social media platforms, further amplifying these efforts.
This trend might be particularly concerning in the context of the US election, where small margins can tip swing states.
These AI-generated narratives have the potential to significantly alter voter perceptions, especially in an environment where social media algorithms prioritise engagement over accuracy, according to Amplyfi CEO Paul Teather.
Beyond disinformation, True the Vote, a group known for election denialism, has threatened to deploy AI-driven cameras to monitor ballot drop boxes across various states.
This initiative, which has been dubbed a ballot box “surveillance reality show,” aims to live-stream footage of voters in the name of transparency.
However, local officials warn that this type of surveillance could lead to voter intimidation, particularly among marginalised communities, and raise serious privacy concerns.
Paul Bischoff, consumer privacy advocate at Comparitech, argues that “the problem with the conspiracy group monitoring ballot boxes is not the use of AI, but the camera surveillance itself.”
He notes that some voters might choose not to vote if they know they’re being watched.
The ethical implications of AI surveillance go beyond privacy concerns; they also call into question the fairness and integrity of the electoral process.
Chris Hauk, consumer privacy advocate at Pixel Privacy, expresses concern over conspiracy groups’ potential use of AI.
“While AI will certainly be used as a weapon during the election season, I believe it will likely be limited to using deepfake videos, photos, and audio to push voters in the direction the fakers are promoting. We’ve already seen AI-generated videos on the internet that paint both US presidential candidates in an unfavourable light.
“It’s unclear what ‘AI-driven’ video monitoring will involve, but statements like these from groups like True the Vote could simply be an attempt to scare some voters from voting,” Hauk explains. “We may see similar attempts by both sides as they work to tailor the vote total to their needs.”
Is AI also the solution?
While using AI for nefarious ends is problematic, it may also be part of the solution. Simon Horswell describes an “AI vs AI” battle, where advanced AI systems are trained to detect and combat deepfakes and AI-generated misinformation.
AI’s ability to learn and adapt continuously is one of its key strengths. That’s why training on large datasets of both real and fake media is needed to provide advanced protection without impacting the user experience, Horswell argues.
“Companies can train AI algorithms to recognise the subtle differences between authentic and synthetic images or videos, which are often imperceptible to the human eye,” he explains.
Biometric AI-powered tools are already being used to verify the authenticity of images, voices, and fingerprints, offering a defence against the surge in AI-generated fraud.
“AI can automate the verification process and run comprehensive checks based on an individual’s unique physical characteristics, such as facial features, voice or fingerprints.
“AI powers liveness checks, whereby the algorithm checks for facial movements, skin textures and micro-movements, and seeks to identify abnormalities such as unnatural blinking patterns or lip movements found in deepfakes,” Horswell outlines.
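The liveness checks Horswell describes combine many learned signals. As a much-simplified, hypothetical illustration of one such signal, the sketch below flags video clips whose blink rate falls outside a typical human range, a pattern early deepfakes were known to exhibit; the function names, thresholds, and per-frame "eye openness" input are assumptions for illustration, not any vendor's actual method.

```python
# Hypothetical sketch of one liveness signal: blink-rate analysis.
# Real systems combine many signals (skin texture, micro-movements,
# lip sync) with trained models; this only checks blink frequency.

def count_blinks(eye_openness, threshold=0.2):
    """Count open-to-closed transitions in a per-frame eye-openness
    series (0.0 = fully closed, 1.0 = fully open)."""
    blinks = 0
    was_open = True
    for value in eye_openness:
        if was_open and value < threshold:
            blinks += 1
            was_open = False
        elif value >= threshold:
            was_open = True
    return blinks

def looks_natural(eye_openness, fps=30, min_per_min=8, max_per_min=30):
    """Flag clips whose blink rate falls outside a typical human
    range of roughly 8-30 blinks per minute (assumed bounds)."""
    minutes = len(eye_openness) / fps / 60
    rate = count_blinks(eye_openness) / minutes
    return min_per_min <= rate <= max_per_min
```

A clip of a talking head that never blinks over a full minute would fail this check; production systems would weigh such a signal alongside dozens of others rather than rely on it alone.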
Similarly, Lewis Duke from Trend Micro advocates using AI-powered detection tools to combat deepfakes and misinformation.
“Detecting and combating AI-generated disinformation presents several challenges, but automated detection tools can identify discrepancies in content that humans often miss,” he says.
Duke notes that combined with public education efforts to promote critical thinking and fact-checking, these tools could provide a robust defence against AI-driven disinformation in the election cycle.
AI innovation vs voter privacy
Rob Shavell, co-founder and CEO of DeleteMe, points out that the availability of personal data online makes voters, particularly those in marginalised communities, more vulnerable to targeted disinformation campaigns.
Shavell explains that the primary methods by which personal information is utilised to disrupt voting processes can be categorised into two groups.
“Targeted misinformation, such as ‘spoofed official statements changing the location or date of voters’ specific polling places’, which we have seen examples of in both current and previous election cycles.”
These deceptive tactics could be distributed through robocalls, text messages, and viral social media campaigns.
“And using elections as an opportunity for fraud: these tend to be similar spoofed campaigns claiming to be either aiding in voter registration efforts or soliciting donations on behalf of some local candidate.”
These fraudulent activities are directed at specific groups, such as seniors, immigrants, and minority communities with limited knowledge of formal voter registration procedures.
“What has changed in the era of widely available personal data is the ability to micro-target specific audiences — particularly minorities and other at-risk groups — in a way that was previously more labour-intensive,” he adds.
“But there are few federal laws protecting people’s personal information in the United States. Passing a national privacy law, like the ADPPA proposed in previous years, would go a long way to ensuring that the US can mitigate personal information risks associated with AI,” Shavell concludes.
Securing elections in the age of AI
According to Amplyfi’s Teather, misinformation has always been difficult to distinguish from fact, but GenAI could tip the scales in ways we haven’t yet fully grasped.
To combat this, experts like Chris Black, AI evangelist at Vizrt, suggest adopting technologies such as the C2PA standard, which tracks the provenance of media content through cryptographically signed metadata, supporting content credibility and transparency in election reporting.
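The provenance approach Black points to can be sketched in miniature: media ships with a manifest binding a hash of its content to its origin, and any later alteration breaks verification. Real C2PA uses X.509 certificates and manifests embedded in the file itself; the HMAC key below stands in for a publisher’s signing credential and is an assumption for illustration only.

```python
# Much-simplified sketch of content provenance in the spirit of C2PA:
# a signed manifest binds a content hash to its declared origin.
import hashlib
import hmac

SIGNING_KEY = b"newsroom-signing-key"  # hypothetical signing credential

def sign_media(content: bytes, origin: str) -> dict:
    """Produce a manifest binding the content hash to its origin."""
    digest = hashlib.sha256(content).hexdigest()
    signature = hmac.new(SIGNING_KEY, f"{origin}:{digest}".encode(),
                         hashlib.sha256).hexdigest()
    return {"origin": origin, "sha256": digest, "signature": signature}

def verify_media(content: bytes, manifest: dict) -> bool:
    """Check that the content matches the manifest and the signature
    is genuine; fails if the media was altered after signing."""
    digest = hashlib.sha256(content).hexdigest()
    if digest != manifest["sha256"]:
        return False  # content changed since it was signed
    expected = hmac.new(SIGNING_KEY, f"{manifest['origin']}:{digest}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])
```

A newsroom photo signed this way would verify as published, while a doctored copy, even one altered by a single pixel, would fail the check.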
“Governments and tech developers are taking greater steps to combat emerging threats and safeguard democratic integrity,” he says.
“However, at the same time, some governments have deployed facial recognition technology to profile individuals based on ethnicity, enabling tracking and detention. On a smaller scale, individuals equipped with consumer AI tools are exposing privacy vulnerabilities,” he adds.
“Harvard students have shown how smart glasses (like Meta Ray-Bans) can access personal information with just a glance, exemplifying how these tools can be misused.”
Chris Black, AI Evangelist at Vizrt
However, there are steps that can be taken to better protect democratic processes going forward.
Dave Merkel, CEO of Expel, stresses that no company or government entity should consider itself immune to attack and that cybersecurity and vigilance are essential to protecting democratic institutions.
“Adversaries are already looking for ways to orchestrate attacks that shift opinions,” Merkel says, emphasising the need for robust cyber defences to safeguard election infrastructure.
Moving forward, Teather envisions a future where voters can rely on AI advisors to help them navigate the overwhelming flood of information they receive daily.
He explains that these AI systems would act as personal assistants, helping individuals filter out disinformation and make more informed decisions.
But this vision requires a combination of technology, regulation, and education. AI tools must be integrated with blockchain and decentralised platforms to ensure content authenticity.
At the same time, he adds, governments and tech companies must invest in media literacy programs to help voters become more critical of the information they consume.
So, what is the fate of the US election?
Dr Ilia Kolochenko summarises: “Social networks and other online platforms will probably play a huge role in the upcoming US elections.
“If cybercriminals manage to outsmart moderation and content-filtering mechanisms, we can expect real and serious interference with the US elections and an unprecedented threat to democracy.”
Rob Shavell predicts: “Everyone is concerned about protecting voters on election day, and many localities are setting up barbed wire and bulletproof glass to enhance security. Unfortunately, most of the real damage will occur before voters even get to the polls.”
Dr Srinivas Mukkamala concludes: “When all is said and done, though, scepticism is the best defence against deepfakes. It is essential to avoid taking information at face value and critically evaluate its authenticity.”
Amy Stettler
SVP Global Marketing, TechInformed
With over 30 years of global marketing experience working with industry leaders like IBM, Intel, Apple, and Microsoft, Amy has a deep knowledge of the enterprise tech and business decision maker mindset. She takes a strategic approach to helping companies define their most compelling marketing stories to address critical obstacles in the buyer – seller journey.
James Pearce
Editor, TechInformed
As founding editor of TechInformed in 2021, James has defined the in-depth reporting style that explores technology innovation and disruption in action. A global tech journalist for over a decade with publications including Euromoney and IBC, James understands the content that engages tech decision makers and supports them in navigating the fast-moving and complex world of enterprise tech.