
What Have We Learned About AI in Elections?

AI in Elections Around the World
After hosting five workshops across five countries with Electoral Management Bodies and Civil Society Organizations on the use of AI in elections, what has International IDEA learned from these exchanges?

Controversial uses of Artificial Intelligence (AI) in elections have made headlines globally. Whether it’s fully AI-generated mayoral contenders, incarcerated politicians using AI to deliver speeches from prison, or deepfakes used to falsely incriminate candidates, it’s clear that the technology is here to stay. Yet these viral stories show only one side of the picture. Beyond the headlines, AI is also starting to be used in the quieter parts of elections, the day-to-day work of electoral management - from information provision and data analysis to planning, administration and oversight. How Electoral Management Bodies (EMBs) choose to design, deploy and regulate these tools will shape key aspects of electoral processes, with far-reaching implications for trust in public institutions and democratic systems. The International Institute for Democracy and Electoral Assistance (International IDEA) has seized this critical juncture to open dialogues among EMBs on how the potential of AI to strengthen democracy can be realized while avoiding significant pitfalls.

Over the past year, International IDEA has convened EMBs and civil society organizations (CSOs) at regional workshops across the globe to advance AI literacy and institutional capacity, and to jointly envision how best to approach AI while acknowledging its disruptive risks.

These workshops revealed that, in many contexts, AI is already entering electoral processes faster than institutions can fully understand or govern it. Nearly half of all workshop participants rated their understanding of AI as low, yet a third of the participating organizations indicated that they are already using AI in their election-related processes. Both AI skeptics and enthusiasts shared a cautious outlook during the workshops.

Furthermore, EMBs have been flagging an immense dual burden: developing internal capacity to embrace technological innovation while also mitigating disruptions to electoral information integrity by bad-faith actors. Increasingly, private AI service providers are approaching EMBs with promised solutions to transform and automate core electoral functions, from voter registration and logistics planning to voter information services and online monitoring. Yet these offers are often driven by commercial incentives and speedy deployment timelines, and not all products are designed with the specific legal, technical and human-rights sensitivities of elections in mind. With processes as consequential as elections, it has become ever more important that the products on offer give due consideration to election-related requirements for cybersecurity, data protection, accuracy and other human rights concerns. For this to work in practice, electoral authorities need to know how to diligently assess vendors and tools for compliance with regulatory provisions.

AI is also contributing to broader changes in the electoral environment that extend far beyond the process of electoral administration. Political actors are increasingly experimenting with AI-enabled tools in electoral campaigns, from microtargeted online advertising and chatbots that answer voter questions to synthetic images, audio and video deepfakes. While not all of these tools are used with harmful intent, in many contexts they have been used to confuse voters, defame competing candidates or manipulate public debate, resulting in public disillusionment and fatigue around what can be trusted in the electoral news cycle.

Furthermore, platforms and intermediaries use algorithmic recommender systems and automated content moderation to curate political information, heavily shaping what citizens see, share and trust during electoral periods. Civil society organizations and fact-checkers are turning to AI to detect coordinated inauthentic behaviour, track information integrity, and monitor campaign spending and political advertising online. These developments certainly create new opportunities for participation and oversight, but they also raise serious concerns about transparency, abuse of personal data, unequal access to advanced campaigning tools, online harassment and potential chilling effects on freedom of expression and association.

These tensions framed the discussion during International IDEA’s AI for Electoral Actors workshop series, which gathered EMBs over the past year for regional workshops in Malaysia, Albania, South Africa, Panama, and Senegal.

The workshop discussions were guided by five essential pillars for democratic AI: AI literacy, AI ethics and human rights, AI content curation and moderation, AI regulation and legislation, and AI to improve electoral management. By comparing and contrasting how different regions tackle these pillars, we can now paint a global picture of how EMBs are approaching AI and how that may impact the future of elections. A key point of consensus, however, and the necessary first step in addressing any AI-related opportunities and risks in elections, was that AI should be adopted in electoral administration only when it serves a concrete purpose and is proportionate to contextual ethical, regulatory, and cybersecurity-related risks.

Pillar 1: AI Literacy

While enthusiasm for AI’s potential to make elections more secure and efficient was present throughout the workshops, many participants expressed serious concerns about gaps in institutional capacity and expertise. Several electoral authorities, including those of Mexico, Kenya and Malaysia, shared how they have already employed AI for low-risk applications, such as social media sentiment analysis or closed-domain Q&A-style voter information chatbots. In applications where precise accuracy and human oversight are paramount, such as electoral forecasting or polling site resource allocation, EMBs are still erring on the side of caution. During the workshops in Senegal and Albania, participants raised concerns about low levels of practical experience and a lack of technical expertise, both of which are essential to manage the intricacies of AI software and avert potential security breaches. In addition, participants in the South Africa workshop highlighted the particular vulnerabilities of smaller countries, whose electoral bodies lack sufficient material and human resources to provide adequate system supervision.

In the absence of internal capacity, some EMBs turn to outsourcing digital infrastructure or data management to independent service providers. Surveys conducted during the workshops show that vendors approach EMBs with a wide range of products. The most common services on offer are privately hosted LLMs for internal information analysis and voter education, often in the form of chatbots. For organizations that are already spread thin, this option has immediate appeal, but it involves ceding stewardship to external, often non-public actors that in many cases cannot be held democratically and constitutionally accountable. Raising AI literacy is a crucial mechanism to empower electoral authorities not only to strengthen their independence, but also to recognize where outsourcing to AI system vendors may create systemic vulnerabilities. As noted in the South Africa workshop, it also helps EMBs understand what AI, in practice, is and is not capable of doing. This supports electoral actors in discerning useful applications of AI for electoral administration from hype, and in staying cognizant of cases where systems already in place are more suitable.

Pillar 2: AI Ethics and Human Rights

In all workshops, but particularly in Senegal and South Africa, discussions on AI in elections centered on inclusion, proportionality and accountability. AI tools for tasks such as voter identification and data processing must balance efficiency with equality and non-discrimination, as uneven infrastructure or data quality can exclude rural voters, women, persons with disabilities and other marginalized groups. Exclusion risks were noted to be higher in certain applications; biometric tools, for example, have been heavily scrutinized for discriminatory risks. Participants therefore agreed that AI adoption should be a measured choice, not a technological reflex, guided by demonstrable need, clear added value and sufficient institutional capacity to manage risks. In essence, AI tools should be used to fix problems rather than simply to replace functioning current systems. Where adoption is deemed appropriate, EMBs should integrate Human Rights Impact Assessments into software and digital infrastructure procurement, pilot phases and evaluations, and require that systems be explainable and auditable. This helps to ensure that AI use is proportionate, privacy-respecting, and supported by transparent, diverse human oversight that upholds fairness and public trust. There is still a long road ahead: persistent skepticism was evident across all five regions, with surveys averaging a low level of trust in key actors’ ability to follow ethical or legal principles.

Moreover, fewer than one in five respondents reported that their organizations have internal human rights or ethical review protocols for adopting new technologies. It was then no surprise that data protection and minimization emerged as priority concerns during the workshops, with participants emphasizing that electoral datasets must be simultaneously representative, secure and restricted to what is necessary in order to ensure inclusivity without compromising individual rights. This was a particular concern in the Western Balkans and Eastern Europe, where civil society representatives warned that algorithmic systems could unintentionally reinforce ethnic polarization. Balancing transparency with privacy is thus essential for public trust, and participants suggested that data minimization become standard practice to limit collection and retention while safeguarding against breaches and misuse.

Pillar 3: AI Content Curation and Moderation

In electoral contexts, AI-generated content increasingly shapes the information environment through algorithmic amplification. Rather than changing people’s opinions per se, a growing concern is that mis- and disinformation are directed at electoral administration itself, confusing voters about where and when elections take place or what the requirements are, which can gravely undermine the perceived integrity and legitimacy of elections. Furthermore, misinformation and deepfakes disproportionately target female politicians and journalists, resulting in chilling effects that erode women’s political representation. However, EMBs cannot face this challenge alone, as information spreads on social media platforms controlled by private companies. As demonstrated by Brazil’s Superior Electoral Court (TSE) and Mexico’s Instituto Nacional Electoral (INE), establishing direct communication channels with digital platforms is essential to flag manipulated content in real time and counteract the spread of harmful narratives.

One way forward discussed during the workshops is partnerships between platform holders and EMBs that give electoral actors access to key insights on the electoral information environment and algorithmic transparency. However, while such initiatives have proven effective during high-stakes elections, participants warned of their ad hoc fragility, since cooperation remains voluntary and dependent on the goodwill of global technology companies. Without structural agreements or enforceable frameworks, these partnerships risk remaining reactive rather than preventive. The imbalance in size and capacity between digital platforms and EMBs further complicates the issue, as electoral authorities are forced into precarious dependencies.

In South Africa, participants expressed frustration with the limited transparency of very large platform holders (VLPHs) and AI developers whose systems shape the information environment during elections. Media monitoring and threat detection are hindered by the lack of access to data or explanations about how algorithms rank or suppress content. This opacity creates an asymmetry of power: while platforms can influence public debate at scale, EMBs remain largely unaware of the underlying mechanisms. Structural and sustainable engagement with VLPHs must be ensured in ways that do not place a disproportionate burden on EMBs, as expecting electoral authorities to regulate platforms alone is unrealistic given their limited capacities and resources.

Pillar 4: Regulation and Legislation

Legislative landscapes vary drastically at both the national and regional levels. Some jurisdictions rely on sector-specific rules, while others adopt horizontal, cross-cutting frameworks - the two main approaches to regulating AI. Regardless of the model, however, a coherent, human rights-respecting regulatory foundation is a universal necessity to ensure that electoral AI safeguards democracy. Yet across the workshop regions, regulatory instruments for electoral AI remain fragmented and incomplete, leaving structural vulnerabilities in the absence of binding provisions. Meanwhile, the landscape of AI-related principles and standards is growing. Actors such as the EU, the OECD and the AU have put forward frameworks to help govern the AI space, and governments and EMBs alike are now confronted with how to navigate these principles and translate them into practice. Collaboration and open dialogue between EMBs are crucial steps toward formulating frameworks that reflect region-specific electoral challenges and integrate well with existing legislation.

In Latin America, the discussion focused on identifying which standards for AI regulation, among a plethora of international contenders ranging from UNESCO’s Recommendation on the Ethics of AI to the OAS Framework on Data Governance and AI, are most apt to respond to the regional context. At the national level, legislative scope differs greatly: countries like Brazil were early in developing a model of institutional response, whereas other nations in the region are still at an early conceptual stage. The key challenge is to ensure that these differences reflect informed, democratic choices rather than gaps in capacity or resources that could leave some countries with weaker protection and less influence over how AI develops.

Across the Atlantic, workshop participants shared issues of compliance and regulatory priorities. Participants in South Africa noted that EMBs in the region are overlooking regional standards, such as the AU’s Continental AI Strategy, in favor of more globally prominent instruments like the GDPR. This prioritization can weaken efforts to design context-contingent regulations. Participants in Senegal, for example, stressed the need for more context-driven regulation to address situational challenges in data localization and algorithmic audits - areas of AI governance that systematically underserve regions outside the West.

Pillar 5: AI to Improve Electoral Management

Across all regions, it is clear that AI is already being used to strengthen the foundations of electoral management. While it supports operations such as voter registration and post-election audits, its effectiveness depends on maintaining democratic oversight and human judgment to ensure that AI remains a tool that assists, but never replaces, human decision-making.

In Mexico, the National Electoral Institute (INE) uses AI-powered chatbots to provide real-time voter information, helping citizens locate polling stations, verify registration, and get answers to common questions. Similarly, electoral bodies in Senegal and Benin have piloted AI systems to detect duplicate entries in voter lists, addressing long-standing administrative challenges. These examples show that, when adapted to local contexts, technology can help build cleaner and more credible voter registries. However, participants cautioned that innovation must go hand-in-hand with transparency, as citizens need to know how data is processed and errors corrected to maintain trust.

Survey responses reflected similar trends. Many EMBs expressed interest in using AI for voter-roll management and data analysis to improve accuracy and detect irregularities, as well as for post-election audits. Yet, concerns remain about limited digital capacity, unclear procurement standards, and difficulty assessing vendor solutions.

Participants agreed that AI should assist, not replace, human decision-making. EMBs must remain fully accountable for outcomes, with AI serving only to support administrative processes. This principle must guide every stage, from design and deployment to evaluation, through continuous human oversight. Used wisely, AI can modernize elections, improve efficiency, and expand citizen access to reliable information. But technology alone cannot make elections fairer or more transparent; that depends on whether EMBs have the institutional frameworks, ethical standards, resources, and independence to ensure AI serves democracy rather than convenience.

Lessons Learned

Across these five pillars, several lessons emerge from this first cycle of AI literacy workshops. First, building institutional capacity and guidelines is not a luxury but the necessary starting point for any responsible use of AI in elections. Many electoral authorities are already considering using AI to address concrete, resource-intensive tasks, often while working under tight time and resource constraints and facing quickly changing expectations. Sustained funding for capacity-building - from basic AI literacy to specialized legal, technical and human rights skills - is therefore essential to ensure that EMBs can evaluate and govern AI systems on their own terms, rather than becoming reliant on external vendors and technologies they have had little opportunity to scrutinize. A second lesson is the importance of continuous regional and cross-regional peer exchange.

The workshops showed how valuable it is for EMBs to compare practices, stress-test ideas and collectively distinguish hype from genuinely useful applications. These exchanges should be complemented by structured, ongoing dialogue between electoral authorities and civil society, who are often the first to detect new forms of AI-driven manipulation and vulnerabilities in the electoral information environment. A third lesson is that AI in elections must never be treated as a purely technical matter: human rights and ethics have to be integrated at every stage of the AI “lifecycle”, from problem definition and procurement to deployment, oversight and evaluation. Finally, legislation and regulation cannot be an afterthought. Electoral authorities need clarity on how emerging AI norms interact with existing electoral, data protection and media frameworks, what minimum safeguards they should expect from providers, and where their own responsibilities begin and end. 

Taken together, these insights point to the need for a shared blueprint for democratic uses of AI in elections - one that combines institutional capacity, predictable funding, regional and multi-stakeholder cooperation, robust human rights safeguards and clear regulatory expectations, so that AI strengthens rather than undermines electoral integrity and public trust.

About the authors

Juliane Müller
Associate Programme Officer
Cecilia Hammar
Programme Assistant, Digitalization and Democracy
Enzo Martino
Programme Assistant