Image created by author using DALL-E 3

OpenAI + The Global Election Cycle

Alexis C. Crews
6 min read · Feb 16, 2024


AI amplifies the worst aspects of human nature. What does that mean for elections?

In January, I took some time to ‘red team’ OpenAI’s new election integrity system, announced during Sam Altman’s appearance at Davos. I chose to do this for several reasons, most importantly because of my experience working on the 2020 election at Meta (formerly Facebook) and my involvement in politics since 2012, both in campaigns and in the Senate. As the Elections Fellow at the Integrity Institute, focused on global elections, I was extremely curious about how OpenAI’s integrity system would function across more than 70 elections in democratic, hybrid regime, and authoritarian countries. I am also a CFR term member and have participated in countless meetings about the state of democracy and the global impact of elections, including areas important to me, such as violent extremism.

It’s crucial to understand that AI amplifies the worst aspects of human nature. While social platforms with public and private spaces create echo chambers, AI tools provide low-cost opportunities for bad actors, both domestic and international, to amplify their messaging, create GPTs through APIs, and figure out how to subvert current systems. My approach to this problem would be twofold: preventative and reactive. Although most scenarios we’re addressing will likely occur regardless of the guardrails in place, this doesn’t mean we should accept band-aid solutions for gaping wounds, especially during a technological renaissance. It’s also important to note that there is no perfect solution; it’s about iteration and creating broadly workable solutions, including space for necessary carve-outs.

OpenAI is, first and foremost, not a distribution platform like Instagram, Facebook, TikTok, and Snapchat. Instead, it's a platform where you can either create content or develop tools (GPTs) that serve multiple purposes. We're familiar with ChatGPT 3.5, 4, and DALL-E 3, along with the GPT Store, which offers tools enabling the AI, in some cases, to create essays indistinguishable from human writing (Humanizer Pro) and 'bypass the most advanced AI detectors'. The potential impact that AGI could have on future generations is astounding, but first, we need to establish guardrails quickly.

NASS, the National Association of Secretaries of State, is the primary organizing authority for all SOS offices across the U.S. However, according to insiders, NASS is not sufficiently robust, necessitating that individual states devise their own policies and methods to address social media platforms and AI more broadly. One SOS official expressed concern about the volume of misinformation and disinformation and how their office and local election offices will become overwhelmed, making it impossible for the average citizen to discern truth from falsehood. Those working in this field are already overwhelmed, preparing for an election cycle rife with false narratives even without AI, and have limited capacity to prepare themselves. They rely on platforms, and in this case, OpenAI, to mitigate potential harm as much as possible.

The outcomes of these elections will also determine the extent to which users lose faith in OpenAI, which could lead to user attrition and ultimately less data for the LLM to train on. Depending on how many mistakes the company makes, OpenAI could become the next Facebook circa 2016, a company now attempting to redeem itself in the election space and demonstrate active risk mitigation.

Immediately after reading OpenAI’s election blog, I started to jot down a series of questions I wished I had clarity on. They fall within these four categories:

  • Access to Authoritative Voting Information & Prevention of Voter Interference
  • What happens to debunked content?
  • Election Prioritization — Which countries should be prioritized and why?
  • Knowledge Sharing and Liability

Below, I share a bit of my thinking from a policy, operations, strategy, and human rights lens. These questions led me to red team OpenAI’s products to understand how close they were to living up to their manifesto.

Access to Authoritative Voting Information & Prevention of Voter Interference

How is OpenAI approaching elections in countries outside India, the EU, and the United States, particularly in hybrid regimes and authoritarian nation-states where citizens still have the right to authoritative and accurate voting information? Does it maintain relationships with election authorities in each country? Does it collaborate with watchdog and international election monitoring organizations? How is it addressing slurs and politically charged language used to deter voting, and is it working with local vendors to focus on language nuances in addition to creating lists of political figures, political parties, and contentious issues? Domestic and international actors are scrutinizing all elections leading up to the EU Parliamentary elections (June 6–9) to identify gaps in the system. With 27 countries, more than 30 languages (not including English), and voting rules that vary by country, covering voter registration and casting votes in person, electronically, or absentee, the EU closely resembles the U.S. system. How will these actors penetrate and expose weaknesses within the platform’s defenses?

What happens to debunked content?

Elections are rapidly evolving, and something I learned firsthand in 2020 was that a minor change, such as an early vote date alteration by an authoritative source, could quickly spiral into a disinformation campaign. What happens to content created by one of OpenAI’s tools that was accurate on Monday but debunked by an election authority on Tuesday? What about the content that is already live and the users who used OpenAI to access this information — are they receiving notifications that the response to their query is no longer valid and is now false information? I’m concerned about users who unknowingly share false information, and about the audiences on other distribution platforms, including traditional news outlets, Facebook, Instagram, Discord, TikTok, and X, who view and process it. Does that information stay up? Is it removed? How is OpenAI collaborating with these companies to reduce the virality of content and ultimately remove it?

Election Prioritization — Which countries should be prioritized and why?

How is OpenAI prioritizing harm and risk, and where do elections in markets outside the ideological West stand? AGI is supposed to open up the world and push humanity forward, but what does that mean if fair and free elections in only a handful of countries are top of mind for the company? This isn’t just a question for OpenAI but for all platforms involved in this global election cycle. What do investments in countries across the African Union or Latin America look like compared to investments in technology, human specialists, and government affairs teams for India, the EU, or the U.S.? Are decisions driven by market size, user base, and media discourse? Or are these companies, including OpenAI, concerned about more regulations that could constrain business and impede growth? Are they apprehensive about their relationships with governments that could result in legal takedown requests or demands for access to user data from journalists, activists, and dissenters?

Knowledge Sharing and Liability

What is the best way to place the responsibility on the user to find authoritative information themselves? Is it preferable to direct the user to an authoritative website, or does it make sense to share some information and then suggest the user conduct further research? Or does Google’s Bard have the right approach by deferring almost every election-related query to a Google search? The answer is unclear, and there is no definitive solution regarding how much is enough to prevent liability on the company’s part versus providing services that users depend on.

I have many more questions related to watermarks, international and domestic regulation, and how prioritization models are crafted. But as I dig deeper into this year’s election cycle, reviewing election interference reports from Taiwan and gearing up for Pakistan’s election, I have to wonder: will OpenAI have enough time to address these questions and more before India’s national election, and hopefully before the EU Parliamentary elections?

To learn more about my red teaming efforts, the questions used, and the responses provided by OpenAI, please reach out and I will be happy to share my findings.


Alexis C. Crews

Integrity Institute Resident Fellow. CFR Term Member. Ex-Meta.