From Top-Down to Community Consensus: A Case for AI Referendums

To shift AI governance from a top-down, hierarchical model toward distributed consensus with meaningful public participation, we must move deliberately and incrementally. AI is not simply technical infrastructure; it is cultural architecture, embedded in our everyday lives and shaping our own human evolution. If we want this cultural architecture shaped by collective insight rather than narrow corporate or political incentives, we need a clear pathway for community agency.

Below is a practical framework for building that distributed future, and for preparing us should there ever come a time when we need to hold a referendum or special vote on the future of this technology.

1. Prioritise Broad Public Education and Accessibility

Before regulation, voting, or oversight, there must be understanding. Distributed consensus is impossible if the public cannot see or comprehend what is being deployed.

Demystify the technology.
AI systems, particularly with the expansion towards ambient or general AI, must be explained in plain, accessible language. What decisions are being made? What data are they trained on? Where are they embedded? Experts and leaders have a responsibility to translate this technical architecture into civic language.

A visible example of these operational ethics emerged when Anthropic publicly refused to remove safeguards from its model Claude that prevented use in autonomous weapons and mass domestic surveillance. This moment mattered because it articulated the company's boundaries in accessible terms. When lines are visible, people can respond and act with personal agency.

Utilise civic spaces.
AI literacy, and the potential uses of the technology, should not be confined to corporate boardrooms or policy briefings. Libraries, universities, town halls, and informal community venues could host open forums, so that non-partisan, cross-sector dialogue can build a shared vocabulary and understanding before polarisation sets in.

This concept was inspired by the global initiative of “Lectures in Bars”, where university lecturers hosted free presentations in bars, and the general public could have a beer and engage with experts in a safe and relaxed environment, making knowledge accessible and fun.

Provide open access resources.
Free explainers, public briefings, and transparent documentation empower individuals to form their own view of the future. Without open access, participation becomes symbolic rather than substantive; to build an engaged society, we need to assure people that their voices, opinions, and perspectives matter.

2. Implement Distributed Governance Frameworks

Understanding alone is insufficient; there must be mechanisms for participation beyond learning and talking. Action completes the circuit.

Decentralised Autonomous Organisations (DAOs) offer one potential complementary structure. They are not replacements for democratic institutions, but tools that can increase transparency, traceability, and distributed agency.

Transparent decision-making.
DAOs enable auditable voting and recorded deliberation. In high-impact AI deployments, such as in healthcare, education, or civic infrastructure, communities could participate in structured oversight rather than relying solely on executive decisions.

Complement existing institutions.
These frameworks should operate alongside traditional governance systems, not in opposition to them. Their strength lies in visibility and participation, particularly where trust in centralised power is fragile.

Shift ethics into operations.
Too often, ethical commitments remain high-level principles; distributed frameworks embed them into process. Decisions about safety constraints, acceptable use cases, and oversight mechanisms become participatory rather than declarative.

The goal is not slower innovation, but legitimate and trusted innovation.

3. Establish Clear Ethical Red Lines and Distinctions

Consensus requires clarity about what is being governed.

Define AI roles.
There is a material difference between AI as a tool that augments human capacity and AI as an autonomous agent capable of acting independently. Governance requirements should differ accordingly.

Determine non-negotiables.
Through public participation, communities must define where human oversight remains mandatory. Areas such as autonomous weapons, pervasive surveillance, and core civic infrastructure demand explicit red lines.

Identify critical contexts.
Military applications, predictive policing, welfare allocation, and healthcare triage are not neutral deployments. These contexts shape rights, liberties, and social trust, so public deliberation should prioritise high-impact contexts before systems scale irreversibly.

Red lines are most powerful when drawn early, as once embedded, infrastructure is difficult to unwind.

4. Exercise Collective Autonomy and Agency

Structural change often begins with behavioural signals.

Vote with participation.
Individuals and institutions can choose platforms and providers that demonstrate enforceable safeguards. When companies articulate and uphold boundaries, as seen in the stance taken by Anthropic, market behaviour reinforces ethical architecture.

Move beyond passive progress.
Technologies frequently normalise through incremental adoption, as we have seen with widespread data collection and our everyday platforms since the early 2000s. Each step in a technological transformation can feel manageable until the cumulative effect becomes structural. Distributed consensus interrupts this drift by requiring conscious reaffirmation at key stages of deployment.

Commit to small course corrections.
Demanding transparency clauses, cross-sector review panels, or sunset provisions on experimental systems may seem modest, but small and early adjustments can often determine the long-term outcome.

Collective autonomy is rarely dramatic; it works when it is sustained, cumulative, and strategic.

5. Bring Independent Voices to the Table

To avoid governance being captured by profit or power, deliberation must include anthropologists, sociologists, philosophers, educators, and community representatives, not solely technologists or executives. They always say we need a “human in the loop”, so let’s consult with people from the humanities.

Putting “all cards on the table” means exposing incentives, trade-offs, and constraints openly. It also means ensuring that those shaping policy are not exclusively motivated by scale, market share, or geopolitical competition, because distributed consensus requires plural insight.

A Democratically Steered Future

AI integration is accelerating, and the momentum of innovation can create the illusion of inevitability. Yet technological futures are rarely predetermined; they are shaped by early architectural decisions and the governance models that accompany them.

If we want AI to reflect collective values rather than narrow incentives built by the few, the shift must begin now, and we can do that by:

  • Educating broadly

  • Building participatory mechanisms

  • Defining red lines and boundaries early

  • Acting collectively and incrementally

A democratically steered AI future will not emerge by default, but it will emerge because communities insist on being part of the navigation. 
