Why political technologies need practitioner and citizen oversight

Written by deliberAIde (Guest Contributor)


From campaign tech to citizen engagement

“Political technologies” typically refer to campaign management software, digital voting platforms, and fundraising tools – the digital plumbing of electoral democracies, which does little to enable ordinary people to meaningfully contribute to political discourse or outcomes. However, they also include technologies that support how citizens discuss and deliberate amongst themselves on current affairs and complex socio-political matters.

While social media platforms fit this description, other, more specialised technologies also do. This includes tools designed to enhance deliberative political participation processes – ones enabling citizens to be involved in political decision-making by having face-to-face conversations about divisive issues.

Making deliberation scalable and actionable

The democratic innovations that such technologies support – citizens’ assemblies, for example – bring together people with varying opinions and perspectives. Through respectful, professionally facilitated discussions, participants develop a mutual understanding of the full opinion landscape on an issue and collectively work through difficult trade-offs and conflicts. The resulting insights are then passed on to political decision-makers to inform them of the collective public will on those matters.

Such processes are increasingly reshaped by new technologies built to support professional practitioners in process design, facilitation and learning, and/or to transcribe, analyse, and summarise what people say during deliberations. These technologies promise to make such processes cheaper, more scalable, and easier to use to inform political decision-making.

But there’s an important question that is rarely asked: if these tools are intended to strengthen democracy and influence political decision-making, who gets to shape them? 

Right now, the answer is:

  1. Developers of the foundational technologies (e.g. generative AI models) on which many of these tools’ functionalities depend – mostly Big Tech.

  2. Specialised political tech developers, such as deliberAIde.

  3. Funders – who usually operate behind closed doors, with profit maximisation rather than the public interest as their main motive.

But if we want these technologies to genuinely improve democratic processes and political decision-making, this urgently needs to change.   

Tools for democracy, built without democracy 

Political technologies are increasingly used in deliberative political participation contexts in ways that determine how citizen inputs are turned into ‘insights’, effectively making decisions about things like:

  • What counts as a ‘constructive’ contribution; 

  • What topics or themes are highlighted or ignored; 

  • How trade-offs and disagreements are weighed and framed.   

Each of these is a value-laden choice, not a neutral software engineering problem. Yet despite this, most of these technologies are still built conventionally: a small product team lacking much epistemic and socio-cultural diversity defines a roadmap, gathers occasional feedback from target users, and ships features. Target users are invited to test these features, but are typically not invited to share power with developers over what gets built, for whom, and with what guardrails.

The result is familiar: tools that are technically impressive but misaligned with the needs and preferences of users and other stakeholders – too complex, insensitive to language and accessibility barriers, or unable to reflect the nuanced, messy realities of citizen deliberation.   

From “user-centred” to genuinely co-created

Most teams building political technologies would say their tools are “user-centred”. After all, many gather feedback from target users prior to release – through focus groups or stakeholder surveys – and use it to refine their tools.

However, given their potentially significant implications, democratic standards for political technologies need to be higher than “we asked and listened a bit, then decided ourselves”. The goal should be the development of democratic tech: technologies that are built and governed democratically through participatory co-design, where tools are shaped from the very start in sustained collaboration with practitioners and others who stand to be affected by their use. This means giving target users and affected groups tangible opportunities to suggest – and even directly decide – which features get built and which are ruled out.

Some organisations within the political tech ecosystem have already started experimenting with participatory co-development. For example, the team behind the deliberation-supporting platform deliberAIde has treated each of their pilot tests – from local conferences in rural districts to EU-level citizens’ panels – as a chance to co-design their platform with deliberation practitioners, drawing on practitioners’ deep domain expertise to make the tools as practically useful as possible.

What participatory co-development actually entails 

Some concrete examples from deliberAIde’s participatory development journey thus far include:   

  • Anonymisation by default: Practitioners and participants voiced concerns about AI systems analysing named transcripts of sensitive discussions, so the team added automatic anonymisation of all personally identifiable information as a core feature.

  • Human-verifiable AI outputs: Practitioners refused to use AI-generated summaries of discussions out of scepticism regarding their faithfulness. This feedback pushed the team to link every AI-generated insight back to verbatim quotes, enabling users to check, contest, and correct insights before they are shared with decision-makers.

  • Multilinguality: Initial versions supported only English and German, failing to cater to speakers of other languages. Collaboration with practitioners in multilingual settings therefore led the team to prioritise multilingual transcription; the platform now supports 100+ languages.

Each of these design choices resulted from regular conversations with target users about what would make the tools as useful as possible in practice. To illustrate, a simplified sketch of how the first two patterns – anonymisation and quote-linking – might look in code follows below.
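The following is a minimal, hypothetical Python sketch of how anonymisation by default and quote-linked, human-verifiable insights could fit together. Every name, field, and function in it is an illustrative assumption made for this article, not deliberAIde’s actual implementation, which is not publicly detailed here.

```python
# Illustrative sketch only – not deliberAIde's actual code. It shows how
# transcript segments can be anonymised before analysis, and how each
# AI-generated insight can keep pointers to the verbatim quotes behind it.

import re
from dataclasses import dataclass


@dataclass
class Segment:
    segment_id: str
    text: str  # verbatim utterance, anonymised before any analysis


def anonymise(text: str, known_names: list[str]) -> str:
    """Toy redaction via a name list; real systems would use NER models."""
    for name in known_names:
        text = re.sub(rf"\b{re.escape(name)}\b", "[REDACTED]", text)
    return text


@dataclass
class Insight:
    summary: str                 # machine-generated claim
    evidence_ids: list[str]      # IDs of the segments backing the claim
    status: str = "unverified"   # reviewers flip to "verified" or "contested"


def evidence_for(insight: Insight, transcript: dict[str, Segment]) -> list[str]:
    """Return the verbatim quotes an insight rests on, for human review."""
    return [transcript[sid].text for sid in insight.evidence_ids if sid in transcript]


# Example: anonymise first, then review one insight against its sources.
names = ["Anna Meier"]
transcript = {
    "s1": Segment("s1", anonymise("Anna Meier says the new bus routes skip the old town.", names)),
    "s2": Segment("s2", anonymise("Fewer stops mean my mother can't reach the clinic.", names)),
}
insight = Insight(
    summary="Participants fear reduced bus coverage will cut off access to services.",
    evidence_ids=["s1", "s2"],
)
for quote in evidence_for(insight, transcript):
    print("-", quote)  # checked by a human before anything reaches decision-makers
```

The point of the pattern is the review loop: because every summary stays traceable to its sources, practitioners can verify or contest it before it informs any decision.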

Who governs the stack? 

There’s another layer to this story. The key features of many political technologies now rely on large AI models built by a handful of corporations. Those technology providers currently determine which languages and dialects get transcribed, what speech is considered “toxic” by content moderation algorithms, and which ideas and arguments their models prioritise when summarising long-form inputs like discussion transcripts.     

If tech developers uncritically plug these AI models into downstream applications, they risk importing the hidden biases and value judgements of the models’ developers – biases that no amount of good intentions further downstream can remove or compensate for.

Thus, the participatory co-development of political technologies requires attention at two levels:   

  • Application-level governance: the co-design of specific tools or platforms through user councils and/or public-interest advisory boards that have real influence over product decisions.

  • Infrastructure-level governance: laws, regulations, or policies that enable transparency and auditability in foundational AI systems, and that ideally mandate their democratic development and governance.

Without both, political tech developers may successfully democratise the front-end of their tools while relying on a back-end that remains largely unaccountable.   


What funders, regulators, and tech developers can do moving forward

Each of these actors can take steps now to usher in a new wave of participatively built political technologies.

Funders:

  • Treat participatory development as a necessary cost of building political tech, not an optional extra. Co-design takes time and resources; grants and procurement must reflect that.

  • Make structured user and stakeholder involvement an expectation for tools used in public engagement, deliberation, and democratic decision-making.

  • Support experiments on democratic governance models for political tech, such as citizen panels on feature roadmaps, randomly selected oversight bodies, or alternative ownership structures like steward ownership.


Regulators:   

  • Establish requirements for transparency, contestability, and human override in all AI-powered tools used in political decision-making contexts.

  • Encourage or mandate mechanisms that let citizens and practitioners challenge how AI has processed their input, so that distortions can be caught and corrected.


Tech Developers:   

  • Move from “user research” to shared decision-making over consequential choices, such as whose voices are reflected in outputs, which insights are presented to users and how, and what sorts of features are considered out of bounds.

  • Openly document consequential value judgements (e.g. “we chose A over B because…”) and actively invite feedback and critique.


👉 While political tech developers like deliberAIde are only at the beginning of their journeys in operationalising these goals, the fact that they are pursuing them at all is a beacon of hope – one that may inspire other technology developers to do the same.
