
Engagement Integrity: Ensuring Legitimacy at a Time of AI-Augmented Participation

6 min read · May 24, 2025


By Stefaan G. Verhulst

As participatory practices become increasingly tech-enabled, ensuring engagement integrity grows more urgent. While considerable scholarly and policy attention has been paid to information integrity (OECD, 2024; Gillwald et al., 2024; Wardle & Derakhshan, 2017; Ghosh & Scott, 2018), including concerns about disinformation, misinformation, and computational propaganda, the integrity of engagement itself — how to ensure that collective decision-making is not manipulated through the very technologies that mediate it — remains comparatively under-theorized and under-protected.

I define engagement integrity as the procedural fairness and resistance to manipulation of tech-enabled deliberative and participatory processes. My definition differs from prior discussions of engagement integrity, which mainly emphasized ethical standards when scientists engage with the public (e.g., in advisory roles, communication, or co-research). The concept is particularly salient in light of recent innovations that aim to lower the transaction costs of engagement using artificial intelligence (AI) (Verhulst, 2018). From AI-facilitated citizen assemblies (Simon et al., 2023) to natural language processing (NLP)-enhanced policy proposal platforms and automated analysis of unstructured direct democracy proposals (Grobbink & Peach, 2020) to large-scale deliberative polls augmented with agentic AI (Mulgan, 2022), these developments promise to enhance inclusion, scalability, and sense-making. However, they also create new attack surfaces and vectors of influence that could undermine legitimacy.

This concern is not speculative. Electoral integrity, particularly in the digital context, has already been compromised through tactics like coordinated inauthentic behavior (Wirtschafter, 2024; Bradshaw & Howard, 2019), deepfakes (Appel & Prietzel, 2022; Chesney & Citron, 2019), and micro-targeted manipulation (Stockwell et al., 2024; Zuboff, 2019). Yet while electoral manipulation has garnered widespread attention — highlighted in cases like the recent Romanian presidential election — the integrity of more subtle and emergent forms of democratic participation, such as citizen assemblies, participatory budgeting, or digital consultations, remains a blind spot.

Historically, engagement integrity has been threatened by material inducement or organized capture — e.g., participants being paid by political interests (Fung, 2006; Mansbridge, 1999). However, the risks are now amplified by algorithmic curation, agentic AI, and the increasing opacity of decision-support tools (Eubanks, 2018). Biases embedded in AI agents — whether through training data, model architecture, or interaction design — can subtly shape the framing of issues, the prioritization of topics, or the weighting of arguments. Moreover, without transparent guardrails, AI-driven deliberative tools could be co-opted by actors with vested interests, skewing outcomes under the veneer of neutral facilitation.

The emerging literature on collective intelligence (Pentland, 2014; Mulgan, 2017; Landemore, 2020) has largely focused on optimizing deliberative outcomes, often overlooking how such systems can be gamed. The increasing reliance on automated tools for recruitment, moderation, synthesis, and recommendation introduces vulnerabilities to adversarial manipulation (Brundage et al., 2020). These vulnerabilities may range from input poisoning — where individuals or bots flood systems with misleading contributions — to agenda-setting bias, whereby AI subtly privileges certain values, voices, or solutions.
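To give the input-poisoning vector some concrete texture, here is a minimal sketch in Python of one naive defense: flagging pairs of near-identical submissions that may signal a coordinated flood. All names, inputs, and thresholds are my own illustrative choices, not any real platform's API; a production system would layer embedding-based similarity, rate limits, and account-level signals on top of something like this.

```python
from itertools import combinations

def shingles(text: str, n: int = 3) -> set[str]:
    """Character n-grams of a whitespace-normalized, lowercased string."""
    s = " ".join(text.lower().split())
    return {s[i:i + n] for i in range(max(len(s) - n + 1, 1))}

def jaccard(a: set[str], b: set[str]) -> float:
    """Set-overlap similarity in [0, 1]."""
    return len(a & b) / len(a | b) if a or b else 0.0

def flag_near_duplicates(submissions: list[tuple[str, str]],
                         threshold: float = 0.8) -> list[tuple[str, str]]:
    """Return pairs of submission IDs whose texts are suspiciously similar,
    a crude signal of copy-paste flooding by bots or coordinated accounts."""
    sigs = {sid: shingles(text) for sid, text in submissions}
    return [(a, b)
            for (a, _), (b, _) in combinations(submissions, 2)
            if jaccard(sigs[a], sigs[b]) >= threshold]

if __name__ == "__main__":
    batch = [
        ("user-1", "The park budget should prioritize playgrounds."),
        ("bot-a", "Cut all library funding, it is wasteful spending."),
        ("bot-b", "Cut all library funding, it is wasteful spending!!"),
    ]
    print(flag_near_duplicates(batch))  # -> [('bot-a', 'bot-b')]
```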

The challenge, then, is to reframe our approach to AI-augmented participatory democracy to include integrity-by-design. This may require not only robust technical defenses but also ethical and procedural norms that guide both human and machine actors. The co-creation of codes of conduct for participants and facilitators — human or AI — could help mitigate manipulation and increase transparency. Similarly, systematic red-teaming of engagement platforms and collective intelligence exercises, akin to practices in cybersecurity and AI alignment, can surface vulnerabilities before they are exploited (Brundage et al., 2018).
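To illustrate what red-teaming an engagement process might look like in practice, here is a deliberately toy exercise in Python. The keyword-tally "synthesizer" below is a made-up stand-in for a real synthesis pipeline, and all inputs are invented; the point is the question it operationalizes: how small a coordinated flood suffices to flip the theme the platform reports as the emerging consensus?

```python
from collections import Counter

# Made-up theme lexicon standing in for a real NLP synthesis model.
THEMES = {"housing": ["rent", "housing"], "transit": ["bus", "transit"]}

def top_theme(contributions: list[str]) -> str:
    """Stand-in synthesizer: tally theme keywords, report the leader."""
    tally = Counter()
    for text in contributions:
        for theme, keywords in THEMES.items():
            if any(k in text.lower() for k in keywords):
                tally[theme] += 1
    return tally.most_common(1)[0][0] if tally else "none"

def flood_flips_outcome(organic: list[str], attack_text: str, n_bots: int) -> bool:
    """Red-team probe: do n_bots copies of attack_text change the reported theme?"""
    return top_theme(organic + [attack_text] * n_bots) != top_theme(organic)

if __name__ == "__main__":
    organic = [
        "We need lower rent",
        "Housing costs are crushing us",
        "More bus routes please",
    ]
    for n in range(1, 6):  # probe increasingly large floods
        if flood_flips_outcome(organic, "Fix the bus and transit system", n):
            print(f"Reported consensus flipped with only {n} injected posts")
            break
```

Even this toy makes the cost asymmetry visible: two fabricated posts outvote three genuine ones, which is exactly the kind of finding a pre-launch red team should surface.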

To advance the field and practice of engagement integrity, several pressing research questions emerge:

  • What constitutes manipulation or tampering in the context of participatory engagement augmented by AI?
  • How can we audit AI agents involved in deliberation for bias, representativeness, and procedural fairness? (A minimal representativeness check is sketched after this list.)
  • How do we red-team AI-driven engagement processes?
  • What governance mechanisms (e.g., citizen juries on AI design, data trusts, or engagement ombudspersons) can be deployed to oversee hybrid participatory processes?
  • How can we develop agentic approaches to monitor and detect manipulation or gaming of participatory engagements?
  • What is the role of design (e.g., interface nudges, participation incentives, algorithmic transparency) in safeguarding engagement integrity?
  • How do we ensure social license for engagement platforms, especially those powered by AI, and what participatory models best foster trust and legitimacy?
  • Can a universal framework or protocol for engagement integrity be developed that parallels the standards emerging for information integrity (e.g., the EU’s Code of Conduct on Disinformation)?
  • How do we measure the long-term impacts of AI-augmented engagement on civic trust, participation, and democratic resilience?
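As a small down payment on the audit question above, here is a minimal sketch in Python of one narrow slice of such an audit: comparing who actually participated against a population benchmark. The group labels and figures are invented for illustration; a real audit would also need to cover speaking time, framing effects, and the behavior of the AI facilitator itself, not just headcounts.

```python
# Illustrative, assumed figures: census shares and observed participant counts.
POPULATION = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
PARTICIPANTS = {"18-34": 12, "35-54": 41, "55+": 27}

def representation_gaps(observed: dict[str, int],
                        benchmark: dict[str, float]) -> dict[str, float]:
    """Observed share minus benchmark share per group (positive = overrepresented)."""
    total = sum(observed.values())
    return {g: observed.get(g, 0) / total - benchmark[g] for g in benchmark}

if __name__ == "__main__":
    for group, gap in representation_gaps(PARTICIPANTS, POPULATION).items():
        print(f"{group}: {gap:+.1%}")
```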

In conclusion, as deliberative democracy enters the algorithmic era, we must recalibrate our attention not only to what is said or shared (information integrity) but also to how we gather, who gets to speak, and how outcomes are shaped (engagement integrity). Without such attention, the promise of AI-augmented participation risks becoming a technocratic illusion — easily hijacked, quietly biased, and dangerously opaque.

Sources

Appel, M., & Prietzel, F. (2022). The detection of political deepfakes. Journal of Computer-Mediated Communication, 27(4).

Bradshaw, S., & Howard, P. N. (2019). The Global Disinformation Order. Oxford Internet Institute.

Brundage, M., Avin, S., Clark, J., et al. (2018). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. arXiv:1802.07228.

Brundage, M. et al. (2020). Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims. arXiv:2004.07213.

Chesney, R., & Citron, D. (2019). Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. California Law Review, 107(6), 1753–1820.

Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press.

Fung, A. (2006). Varieties of Participation in Complex Governance. Public Administration Review, 66, 66–75.

Gillwald, A., Berger, G., & Orembo, A. (2024). Possible Approaches to Promoting Information Integrity and Trust in the Digital Environment. UNESCO.

Ghosh, D., & Scott, B. (2018). Digital Deceit II: A Policy Agenda to Fight Disinformation on the Internet. New America.

Grobbink, E., & Peach, K. (2020). Experiments in collective intelligence design. Nesta.

Landemore, H. (2020). Open Democracy: Reinventing Popular Rule for the Twenty-First Century. Princeton University Press.

Mansbridge, J. (1999). Should Blacks Represent Blacks and Women Represent Women? A Contingent “Yes”. The Journal of Politics, 61(3), 628–657.

Mulgan, G. (2022). Another World is Possible: How AI Can Support Collective Intelligence. Nesta.

Mulgan, G. (2017). Big Mind: How Collective Intelligence Can Change Our World. Princeton University Press.

Nimmo, B., & Francois, C. (2019). Information Warfare and Election Interference. In P. W. Singer & E. T. Brooking, LikeWar: The Weaponization of Social Media.

OECD. (2024). Facts not Fakes: Tackling Disinformation, Strengthening Information Integrity. OECD Publishing.

Pentland, A. (2014). Social Physics: How Social Networks Can Make Us Smarter. Penguin.

Simon, J., Mulgan, G., & Duffy, B. (2023). Collective Intelligence Design Playbook. Nesta.

Stockwell, S., Hughes, M., Swatton, P., & Zhang, A. (2024). AI-Enabled Influence Operations: Safeguarding Future Elections. The Alan Turing Institute.

Verhulst, S. G. (2018). Where and when AI and CI meet: Exploring the intersection of artificial and collective intelligence towards the goal of innovating how we govern. AI & Society, 33, 293–297.

Wardle, C., & Derakhshan, H. (2017). Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making. Council of Europe.

Wirtschafter, V. (2024). The impact of generative AI in a global election year. Brookings Institution.

Zuboff, S. (2019). The Age of Surveillance Capitalism. PublicAffairs.
