ARTIFICIAL INTELLIGENCE AND ELECTORAL INTEGRITY
SONNY IROCHE urges that the instrument be used wisely, transparently, and in the service of democracy
As an advocate for the ethical and responsible use of AI in all areas of life, and as a researcher aware of the technology’s dual potential for both good and harmful outcomes, I firmly oppose the use of deepfakes and misinformation in elections or any other processes. The rise of artificial intelligence as a transformative force across various industries has brought significant risks to our democratic systems. Elections, which were once influenced by traditional methods like physical campaigns, radio shows, and televised debates, are now increasingly governed by algorithms, data, and digital platforms.
As Nigeria approaches its 2027 general elections, concerns are mounting that AI could be deployed not merely to influence voters but, in extreme cases, to distort or undermine the integrity of the electoral process itself.
This is not mere speculative fear but a very real possibility: around the world, early evidence suggests that AI is already reshaping political communication, voter perception, and even electoral outcomes. Nigeria, with its large population, vibrant but polarized political environment, and heavy dependence on social media platforms such as WhatsApp, Facebook, and X (formerly Twitter), presents particularly fertile ground for AI-driven electoral misinformation and deepfakes.
AI-driven electoral manipulation comes in different forms.
AI does not “rig” elections in the traditional sense of ballot stuffing, ballot-box snatching, or physical tampering. Rather, it operates through subtler, but potentially more powerful, mechanisms: the exploitation of ethno-religious sentiments, perception manipulation, information distortion, and trust erosion.
One of the most significant threats is synthetic media, particularly deepfakes. These include AI-generated videos, audio recordings, and images that can convincingly portray political candidates saying or doing things they never said or did. As an AI researcher and scholar, I caution that such tools could be used in Nigeria to fabricate campaign speeches, concession messages, or even violent incidents at polling stations.
The danger lies not only in the creation of false content but in its speed and scale. AI systems can mass-produce propaganda, generate fake news articles, and flood digital platforms with coordinated narratives. This industrial-scale misinformation can overwhelm fact-checkers and create confusion among voters, especially in rural environments where media literacy is poor or non-existent.
Another powerful tool is voice cloning. AI can replicate the voice of a political figure with remarkable accuracy, enabling the creation of fake audio messages, particularly dangerous in Nigeria, where voice notes are widely circulated on messaging platforms. Such messages could falsely announce election results, spread panic, or discourage voter turnout.
Closely related is the use of AI-powered bots and coordinated networks. These automated accounts can amplify specific political narratives, create the illusion of widespread support or opposition, and manipulate trending topics. As noted in recent analyses, algorithmic systems increasingly determine what voters see, thereby shaping political perceptions at scale.
Beyond misinformation, the risk of institutional disruptions is also high. While misinformation is the most visible threat, AI’s potential impact extends deeper into the electoral process.
One emerging concern is the possibility of AI-generated election artefacts, such as fake result sheets or forged documents. Analysts have warned that AI could produce highly realistic documents, even mimicking handwriting, making it difficult to distinguish genuine results from fabricated ones.
Additionally, AI could be used to launch cyberattacks on electoral infrastructure, including voter databases, result transmission systems, and electoral commission networks. In a country like Nigeria, with evolving digital infrastructure, these vulnerabilities could be exploited to disrupt the voting process or delay result announcements, thereby undermining public confidence.
Perhaps the most insidious effect of AI is what scholars describe as “epistemic erosion”, the gradual breakdown of trust in information itself. When voters can no longer distinguish between real and fake content, they may begin to distrust all sources, including legitimate electoral outcomes.
Nigeria’s socio-political environment amplifies these risks in several ways.
First, the country has a highly polarized political landscape, where misinformation can easily inflame ethnic, religious, and regional tensions. Second, low levels of digital literacy mean that many citizens may struggle to identify AI-generated content. Third, the widespread use of encrypted messaging platforms like WhatsApp makes it difficult to track or counter the spread of false information.
Moreover, Nigeria’s regulatory framework has not yet fully adapted to the realities of AI. While existing laws address traditional misinformation, they do not adequately cover the complexities of synthetic media and algorithmic manipulation. This is why INEC, media houses, and political parties need the services of AI experts to identify and debunk deepfakes in Nigeria’s electoral process.
Encouragingly, INEC has begun to respond by establishing an AI division aimed at improving voter engagement and combating disinformation. However, experts caution that institutional capacity, funding, and public awareness must be significantly strengthened to keep pace with rapidly evolving AI technologies.
AI interference in elections has been reported in other climes and would not be peculiar to Nigeria.
Nigeria is not alone in facing these challenges. Several recent elections around the world provide instructive examples of how AI can influence democratic processes.
One of the most well-known cases predates generative AI but illustrates the power of data-driven manipulation: the activities of Cambridge Analytica. The firm used data harvested from social media to create detailed psychological profiles of voters and deliver highly targeted political advertisements. It reportedly worked on over 200 elections globally, including Nigeria’s 2015 presidential election. This case demonstrated how data and algorithms could be used to influence voter behavior on a massive scale.
More recently, generative AI has introduced new dimensions to electoral interference:
• United States (2024): AI-generated robocalls impersonated President Joe Biden, misleading voters about primary election participation.
• India (2024): Political campaigns reportedly spent millions on AI-generated content, including deepfakes of deceased figures and fabricated endorsements.
• Pakistan (2024): AI-generated speeches enabled an imprisoned political leader to “address” supporters virtually, demonstrating both the creative and potentially manipulative uses of the technology.
• Europe (e.g., Slovakia and Moldova): Deepfake audio and video clips falsely depicted political leaders engaging in controversial activities, influencing public perception.
Across these cases, a consistent pattern emerges: AI is not necessarily used to directly alter vote counts but to shape the informational environment in which voters make decisions.
The Strategic Implications for Nigeria
If deployed effectively, AI could influence Nigeria’s 2027 elections in several strategic ways:
• Pre-election phase: Manipulating public opinion through targeted misinformation, deepfakes, and algorithmic amplification.
• Election day: Spreading false information about polling locations, violence, or results to suppress turnout or create chaos.
• Post-election phase: Undermining confidence in results through fabricated evidence of fraud or manipulation.
In this sense, AI becomes a tool not just of influence but of strategic destabilization, capable of eroding the legitimacy of democratic institutions.
Mitigation and Safeguards
Addressing these risks requires a multi-layered approach:
• Regulation: Clear rules on the use of AI in political campaigns, including mandatory disclosure of AI-generated content.
• Technology: Investment in detection tools capable of identifying deepfakes and synthetic media, particularly in local languages.
• Public Awareness: Nationwide digital literacy campaigns to help citizens recognize manipulated content.
• Institutional Capacity: Strengthening the Ministry of Communications, Innovation and Digital Economy and its agencies, INEC and cybersecurity agencies to respond to AI-driven threats.
• Platform Accountability: Collaboration with social media companies to detect and remove harmful content.
It is important to caution that no single solution will suffice. As AI systems evolve, so too must the strategies used to counter them.
In summary, the 2027 Nigerian general elections may well be the first in the country’s history to be significantly shaped by artificial intelligence. While AI offers opportunities for improved voter engagement and electoral efficiency, its misuse poses a serious threat to democratic integrity.
The central challenge is not merely technological but philosophical: how to preserve truth, trust, and legitimacy in an age where reality itself can be manufactured.
Nigeria stands at a critical juncture. The choices made today, by policymakers, technologists, and citizens, will determine whether AI becomes a tool for democratic strengthening or a weapon of electoral manipulation.
The lesson from global experience is clear: AI does not rig elections by itself. People do, using AI as an instrument. The task, therefore, is to ensure that this instrument is governed wisely, transparently, and in the service of democracy rather than its subversion.
Iroche is an Oxford-trained AI researcher and scholar. He is the Founder and CEO of GenAI Learning Concepts Ltd.