Enhanced deliberation
We now have the chance to scale such deliberative processes dramatically and improve them so that citizens’ voices, in all their richness and diversity, can make a difference. Taiwan Province of China exemplifies this transition.
Following the 2014 Sunflower Movement there, which brought tech-savvy politicians to power, an online open-source platform called pol.is was introduced. This platform allows people to express elaborate opinions about any topic, from Uber regulation to COVID policies, and vote on the opinions submitted by others. It also uses these votes to map the opinion landscape, helping contributors understand which proposals would garner consensus while clearly identifying minority and dissenting opinions and even groups of lobbyists with an obvious party line. This helps people understand each other better and reduces polarization. Politicians then use the resulting information to shape public policy responses that take into account all viewpoints.
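The opinion-mapping step can be sketched in code. The example below is a hypothetical illustration, not pol.is's actual algorithm (which uses dimensionality reduction and more sophisticated clustering): each participant's votes on the submitted statements form a vector, and clustering those vectors surfaces opinion groups. The data and the simple k-means routine are invented for illustration.

```python
# Each participant's votes form a vector: +1 agree, -1 disagree, 0 pass.
# Clustering those vectors reveals groups with similar opinion profiles.

def distance(a, b):
    # squared Euclidean distance between two vote vectors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(vectors, k=2, iters=20):
    # deterministic initialization: first k vectors serve as centroids
    centroids = [list(v) for v in vectors[:k]]
    labels = [0] * len(vectors)
    for _ in range(iters):
        # assign each participant to the nearest opinion centroid
        labels = [min(range(k), key=lambda c: distance(v, centroids[c]))
                  for v in vectors]
        # move each centroid to the mean of its members
        for c in range(k):
            members = [v for v, lab in zip(vectors, labels) if lab == c]
            if members:
                centroids[c] = [sum(col) / len(members)
                                for col in zip(*members)]
    return labels

# Invented votes on 4 statements by 6 participants.
votes = [
    [+1, +1, -1,  0],
    [+1, +1, -1, -1],
    [+1,  0, -1, -1],
    [-1, -1, +1, +1],
    [-1, -1, +1,  0],
    [ 0, -1, +1, +1],
]
# The first three and last three participants land in opposite clusters.
print(kmeans(votes))
```

On the real platform, the same idea lets contributors see which cluster they belong to and which statements draw agreement across clusters, which is what makes consensus proposals visible.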
Over the past few months pol.is has evolved to integrate machine learning with some of its functions to render the experience of the platform more deliberative. Contributors can now engage with a large language model, or LLM (a type of AI), that speaks on behalf of different opinion clusters and helps individuals figure out where their allies, opponents, and everyone in between stand. This makes the experience on the platform more genuinely deliberative and further reduces polarization. Today, this tool is frequently used to consult with residents, engaging 12 million people, or nearly half the population.
Corporations, which face their own governance challenges, also see the potential of large-scale AI-augmented consultations. After launching its more classically technocratic Oversight Board, staffed with lawyers and experts to make decisions on content, Meta (formerly Facebook) began experimenting in 2022 with Meta Community Forums, in which randomly selected groups of users from several countries could deliberate on climate content regulation. An even more ambitious effort, in December 2022, brought together 6,000 users from 32 countries, deliberating in 19 languages, to discuss cyberbullying in the metaverse over several days. Deliberations in the Meta experiment were facilitated on a proprietary Stanford University platform by (still basic) AI, which assigned speaking times, helped the group decide on topics, and advised on when to set them aside.
For now there is no evidence that AI facilitators do a better job than humans, but that may soon change. And when it does, the AI facilitators will have the distinct advantage of being much cheaper, which matters if we are ever to scale deep deliberative processes among humans (rather than between humans and LLM impersonators, as in the Taiwanese experience) from 6,000 to millions of people.
Translation, summarization, analysis
The applications of AI in deliberative democracy are still in the exploratory phase. Instantaneous translation among multilingual groups is the next frontier, as is summarization of collective deliberations. According to recent research, AI is 50 percent more accurate than human beings when it comes to summarization (as evaluated by trained undergraduates comparing AI summaries and human coders’ summaries of deliberation transcripts). Some amount of human judgment will, however, likely be necessary for many of these tasks. In such cases AI can still serve as a useful aid to human analysts, facilitators, and translators.
More ways that AI can enhance democracy are on the horizon. OpenAI, the company that launched ChatGPT, recently introduced a grant program called Democratic Inputs to AI. The grants subsidized the 10 most promising teams in the world working on algorithms that serve human deliberation (full disclosure: I am on the board of academic advisors that helped formulate the grant call and select the winners). These tools can hopefully soon be deployed to serve, among other goals, global deliberation on AI governance.
Addressing risks
Deploying AI in democracy carries the risks familiar from almost every field in which it is used: data bias, privacy concerns, the potential for surveillance, and legal challenges. It also raises the problem of the digital divide and the potential exclusion of illiterate and techno-skeptical groups. Many of these problems will need to be addressed politically, economically, legally, and socially first and foremost, rather than through technology alone. But technology can help here too.
For example, privacy and surveillance concerns may be mitigated by technologies such as zero-knowledge protocols (also called zero-knowledge proofs, or ZKP), which aim to verify or “prove” identity without collecting data on participants (for example, through text messaging authentication or through blockchain). ZKP can be used both for online voting and in deliberative contexts—for example, to share sensitive information or play the role of whistleblower. Meanwhile, generative AI can make previously scarce knowledge and tutoring resources available to everyone who needs them. As a custom-tailored interlocutor for citizens, it can explain technical policy issues in people’s particular cognitive style (including through images) and convert their oral input into written input as needed.
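To make the idea of a zero-knowledge proof concrete, here is a toy Schnorr-style identification protocol in Python. It is an illustrative sketch, not production code or any particular voting system's implementation: the prover demonstrates knowledge of a secret key without ever transmitting it, and the tiny parameters below would be trivially breakable in practice (real deployments use large primes or elliptic curves).

```python
import secrets

# Toy public parameters (insecure, for illustration only).
p = 467  # small "safe" prime: p = 2q + 1
q = 233  # prime order of the subgroup we work in
g = 4    # generator of the order-q subgroup (a quadratic residue mod p)

# The prover's secret key x never leaves her machine;
# only the public key y is shared.
x = secrets.randbelow(q)
y = pow(g, x, p)

# 1. Commit: the prover picks a random nonce r and sends t = g^r mod p.
r = secrets.randbelow(q)
t = pow(g, r, p)

# 2. Challenge: the verifier sends a random challenge c.
c = secrets.randbelow(q)

# 3. Respond: the prover sends s = r + c*x mod q. Because r is random,
#    s reveals nothing about x on its own.
s = (r + c * x) % q

# 4. Verify: g^s == t * y^c (mod p) holds exactly when the prover
#    knows x, yet the transcript (t, c, s) never contains x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("identity verified without revealing the secret")
```

The same commit-challenge-respond pattern underlies the verified-but-anonymous participation mentioned above: a citizen can prove "I am a registered, unique voter" without the platform learning anything else about her.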
Despite its limitations and risks, AI has the potential to bring about a better, more inclusive version of democracy, one that would in turn equip governments with the legitimacy and knowledge to oversee AI development. AI regulation is likely to be better enforced and more effective in AI-empowered democracies.
Still, there is a risk that democracy itself could be a casualty of the AI revolution. Urgent investment is needed in AI tools that safely augment the participatory and deliberative potential of our governments.