
Safeguarding elections in the age of artificial intelligence

Gareth Cox • 4 min read
How can we ensure that AI enhances rather than endangers electoral integrity? Photo: Unsplash

The 2024 global election cycle marked a turning point for generative AI's role in politics, as concerns over AI-generated deepfakes manipulating elections grew to a fever pitch. In the United States, New Hampshire voters reportedly received a robocall impersonating President Biden, urging them to forgo voting in the state's primary and save their votes for the November general elections - an alarming example of how artificial intelligence (AI) can be used maliciously to suppress votes.

While large-scale AI-driven electoral interference did not fully materialise in 2024, the threat remains. As millions of Singapore citizens prepare to head to the polls this May, the risk of AI-generated disinformation and cyberattacks is at an all-time high, making voter vigilance more important than ever. Singaporean political figures have already been targeted by deepfake technology, such as a manipulated video depicting Senior Minister Lee Hsien Loong endorsing a fraudulent investment opportunity that was widely circulated last year.

Incidents like these underscore a pressing challenge: How do we protect the integrity of elections when we can no longer trust what we see?

Why election seasons are prime targets for AI-generated cyberattacks

Elections are the perfect storm for AI-driven cyberthreats due to their high stakes, emotionally charged atmosphere, and the sheer volume of information voters must navigate. Disinformation campaigns thrive in such environments, where deepfake content can be weaponised to mislead voters, discredit political figures, or even fabricate endorsements.

Beyond disinformation, AI can also be used to execute cyberattacks against political campaigns and electoral infrastructure. For instance, Iranian hackers allegedly attempted to infiltrate Donald Trump's 2024 campaign using AI-generated phishing emails designed to steal credentials from campaign staff. The ability to launch such sophisticated attacks at scale makes AI a formidable tool for bad actors.


Singapore's fight against AI-generated disinformation

Singapore's highly digital and connected population is no stranger to AI-driven disinformation on social media platforms. Fortunately, proactive measures are already in place.

Laws such as the Protection from Online Falsehoods and Manipulation Act (POFMA) and the new Elections (Integrity of Online Advertising) (Amendment) Bill serve as key safeguards against disinformation. Under the updated Elections Bill, disseminating deepfakes and digitally altered content targeting political candidates is prohibited during election periods - a provision that will take effect for the first time at this general election. While the bill primarily holds candidates responsible for refuting misleading content, it also requires social media platforms to remove or restrict access to such material when necessary.


While this legislation provides a strong foundation, enforcement remains a challenge: AI-generated disinformation tends to spread faster than regulations can react, making proactive monitoring and public vigilance equally important.

More can be done beyond legislation

While robust legislation is critical, legal frameworks alone cannot stem the tide of AI-generated disinformation and cyberattacks. Governments must complement regulatory measures with advanced technological defences to safeguard election integrity.

One strategy is leveraging AI-powered cybersecurity tools aligned with the MITRE ATT&CK framework, which catalogues adversary tactics and techniques and provides a structured approach to threat detection and response. However, real-world use cases show that relying on the framework alone is insufficient: attackers constantly evolve their methods, bypass known techniques, and exploit blind spots the framework does not always cover. To stay ahead, security teams must combine MITRE ATT&CK with real-time threat intelligence, behavioural analytics, and proactive threat hunting.
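To make this concrete, the triage logic described above can be sketched in a few lines: alerts whose behaviour maps to a known ATT&CK technique are routed to an established playbook, while unmapped behaviours - the blind spots - fall back to a behavioural anomaly score. The alert fields, mapping, and threshold below are illustrative assumptions, not taken from any real product.

```python
# Hypothetical sketch: enrich security alerts with MITRE ATT&CK technique IDs,
# falling back to a behavioural anomaly score when no known technique matches.
# Alert structure and the 0.8 threshold are invented for illustration.

ATTACK_TECHNIQUES = {
    "spearphishing_link": "T1566.002",  # Phishing: Spearphishing Link
    "credential_dumping": "T1003",      # OS Credential Dumping
    "valid_accounts": "T1078",          # Valid Accounts
}

def triage(alert: dict) -> dict:
    """Label an alert with its ATT&CK technique if known; otherwise
    decide between monitoring and proactive threat hunting."""
    technique = ATTACK_TECHNIQUES.get(alert["behaviour"])
    if technique:
        return {**alert, "technique": technique, "action": "known_ttp_playbook"}
    # Blind spot: no mapped technique, so rely on behavioural analytics.
    action = "threat_hunt" if alert["anomaly_score"] >= 0.8 else "monitor"
    return {**alert, "technique": None, "action": action}

alerts = [
    {"behaviour": "spearphishing_link", "anomaly_score": 0.3},
    {"behaviour": "novel_login_pattern", "anomaly_score": 0.9},
]
results = [triage(a) for a in alerts]
```

The point of the sketch is the combination: the framework handles known tactics, while analytics catches what the framework has no entry for.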

Governments can also adopt AI-driven cybersecurity solutions, such as automated fact-checking systems for content verification and real-time network monitoring platforms to detect threats like phishing attempts or unauthorised access.
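As a rough illustration of the kind of checks such monitoring platforms run, the sketch below flags common phishing indicators in an email's metadata - an untrusted sender domain, pressure language, and unencrypted links. The domains, rules, and sample email are all hypothetical; real systems layer machine-learned classifiers on top of rules like these.

```python
# Illustrative rule-based phishing checks; every domain and rule here is an
# invented example, not drawn from any real monitoring product.
import re

TRUSTED_DOMAINS = {"example-campaign.sg"}  # hypothetical allow-list

def phishing_indicators(sender: str, subject: str, links: list) -> list:
    """Return a list of red flags found in an email's metadata."""
    flags = []
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain not in TRUSTED_DOMAINS:
        flags.append("untrusted sender domain: " + domain)
    if re.search(r"urgent|verify your account|suspended", subject, re.I):
        flags.append("pressure language in subject")
    for link in links:
        if link.startswith("http://"):  # no TLS
            flags.append("insecure link: " + link)
    return flags

# A lookalike domain ("examp1e" with a digit 1) trips all three rules.
flags = phishing_indicators(
    sender="it-support@examp1e-campaign.sg",
    subject="URGENT: verify your account now",
    links=["http://examp1e-campaign.sg/login"],
)
```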

How voters can protect themselves

Ensuring election integrity is not solely the responsibility of governments; voters also play a crucial role. They should be discerning with the content they consume and learn how to identify when content has been digitally manipulated. Often, red flags include unnatural facial expressions, mismatched audio, and inconsistencies in speech patterns.


When in doubt, voters should use fact-checking tools or deepfake detection software to verify content authenticity. Additionally, they should report suspicious material and share information responsibly to prevent disinformation from spreading further.

A simple rule of thumb: if the content appears overly inflammatory, perfectly aligned with one's biases, or too shocking to be true, it likely isn't. AI-driven disinformation thrives in the gap between what's real and what's emotional; the more we shrink that gap with awareness and critical thinking, the safer we'll be.

AI will continue to evolve, making disinformation and cyberattacks increasingly difficult to detect. Soon, even the most discerning individuals may struggle to differentiate reality from manipulation. By embracing cutting-edge cybersecurity measures, strengthening public awareness, and implementing decisive policies, we can ensure that AI enhances rather than endangers electoral integrity.

Gareth Cox is the vice president for APJ at Exabeam
