Working Paper

Counted but Not Heard: Platform Democracy and the Noumenal Erosion of Participatory Agency

Section 1

Vaulting the Space of Reasons: The Noumenal Dimension of Platform Power

The impact of social media and internet communication can, perhaps, be summed up in the transition from the early ethos of “move fast and break things” (Zuckerberg, 2012) to the current mission “to give people the power to build community and bring the world closer together” (Meta, n.d.). These statements reflect two distinct moral perspectives. The earlier motto, retired in 2014, is a relatively amoral statement of engineering methodology, reflecting the desire to bring a product to market. The later mission statement is an ethical declaration, ostensibly handing users the means to shape their own communities and foster a more harmonious society. Although mission statements differ across social media and community platforms, they broadly converge on the same declaration of empowerment, sharing and connection.

This declaration raises an issue with the potential to undermine the quality of democracy and, in particular, to redefine the concept of participation in democratic processes. A noumenal analysis, in Forst’s sense, provides a framework within which this generic mission appears largely illusory. Forst defines the exercise of noumenal power as the capacity to “influence, use, determine, occupy, or even seal off the space of reasons for others” (Forst, 2017, p. 42). This occupation challenges the fundamental moral status of individuals as “justifying beings” who possess a basic right not to be subjected to structures that cannot be justified to them (Forst, 2024, p. 2). In this respect, the normative value of participation is not just the ability to ‘post’, ‘share’ or ‘like’ but the right to be a co-author of the normative order—to inhabit the space of reasons where the logic of a system’s design is transparent and contestable.

When a platform claims to build community, the claim invokes a space of mutual recognition, gesturing towards equality and freedom. However, if the rules governing that community are technologically shielded from the demand for justification, the participation offered is reduced to a functional performance rather than an exercise of agency. The phenomenal appearance of a platform, experienced through the user interface and the language of connectivity and sharing, promises a decentralisation of power. This appearance presents the platform as a neutral stage for human agency, a digital arena where interaction appears spontaneous and free.

However, the technical architecture providing this stage remains a system of algorithmic governance, shielded behind claims of intellectual property and corporate necessity. The noumenal reality of code operates as a logic of extraction: it “organizes perception and shapes the expression of technological affordances at their roots”, functioning as a largely invisible context that determines “what is measured, and what is passed over” (Sadowski, 2019, p. 7). It thereby seals off the space of reasons behind a technically opaque infrastructure that prioritises the data imperative over genuine social interaction.

Under the banner of increasing participation, a structural paradox emerges: the freedom granted to the user is procedurally constrained. Agency, in this digital realm, is pre-defined by data-extraction algorithms. Power is, in reality, a circumscribed autonomy: permission is granted to operate within a closed system of automated monitoring and directed engagement. In his analysis of algorithmic bureaucracy, Pagano (2025) updates Weber’s 20th-century concept of the ‘iron cage’ (1905/2001, p. 123) to reflect the creation of a 21st-century ‘digital cage’, a system that is more “impermeable to negotiation”. While the Weberian model relied on a ‘human gear’, the street-level bureaucrat who possessed an irreducible space of discretionary power to interpret rules and assess exceptions, the digital cage removes this point of contact. It replaces human mediation with a structure of self-executing code that reduces social interaction to a crude binary operation: data either conforms to predefined software fields or it does not.

As a result, the digital cage represents the total opacity of the power mechanism. Where Weber’s iron cage was a system of explicit norms applied by humans, the digital cage is a system of implicit rules executed by inscrutable code. Returning to Forst (2017), the normative problem is not just that agency is constrained, but that the space of reasons is occupied by a logic of prediction rather than a logic of justification. The presence of a human mediator, however minimal, ensures that the subject remains a “justifying being” capable of dialogue even if they do not engage in it. In contrast, the invisibility of the algorithmic mechanism is not incidental to its power but constitutive of it. A system that cannot be seen cannot be addressed, and a system that cannot be addressed cannot be held to any justificatory standard. It is this structural inaccessibility, not the constraint on action as such, that constitutes the distinctively noumenal dimension of the digital cage’s power. Within it, the conditions under which participatory claims can be made are architecturally pre-determined by systems accountable to no justificatory standard that users could reasonably contest.1

The architecture that forecloses justificatory agency does not do so accidentally. The opacity of algorithmic decision-making is often presented as an epistemic problem: the outputs of neural networks are, in a technical sense, irreducible to any single point of human-legible reasoning, arriving at conclusions whose precise derivation is lost in the complexity of the network’s own operations. This gives the system the operational appearance of inscrutability. Such inscrutability is problematic at both the phenomenal and noumenal levels: it preserves the appearance of a neutral technical process while closing off the conditions under which that process could be made accountable. To treat inscrutability as genuinely accidental is to attribute to the system a form of agency that exempts it from the justificatory demands placed on any human actor, a convenient, structurally engineered and dangerous exemption. The precise logic by which a platform determines what data to surface, and to whom, remains hidden. This inaccessibility is itself an architectural choice, not an unavoidable consequence of technical complexity. Zuboff’s (2019) reconstruction of the implicit normative logic of surveillance capitalism identifies two constitutive claims that make this structure legible. The first asserts that human behavioural data constitutes a free resource, available for extraction without the knowledge or meaningful consent of those who generate it. The second asserts that the logic governing this extraction is proprietary, vaulted from external scrutiny by the same intellectual property frameworks that protect any other commercial asset (p. 178). These declarations do not only describe a business model; they attempt to establish a normative position.
Asserting that the architecture governing human experience in the digital sphere is exempt from the demand for justification advances a self-serving metaphysical claim: that a system designed by human actors, to serve commercial imperatives, is nonetheless beyond the reach of human justificatory demands. The engineered inscrutability and the proprietary exemption are not two separate problems but two faces of the same structure: the former makes the architecture illegible, the latter makes that illegibility permanent. Returning to Forst’s framework, this exemption is not a peripheral feature of platform governance but its defining characteristic. The space of reasons is not just occupied; it is claimed as private property.

The consequences of this structure extend beyond the immediate foreclosure of justificatory agency. The proprietary exemption does not merely prevent challenge; it prevents the recognition that challenge is warranted. Users do not experience the digital cage as constraint; they experience it as participation. The phenomenal surface of the platform, its interface of sharing, connection and community, renders the architecture’s operation invisible. What is experienced as a free choice to engage is, at the noumenal level, a structured set of permissions issued by a system whose governing logic cannot be inspected, contested or refused without exiting the platform entirely: the terms-and-conditions lock-in present in all platforms. This is the consent fiction on which Zuboff’s first declaration depends: the behavioural data claimed as a free resource is extracted under conditions that render meaningful withholding of consent structurally impossible. Terms of service that run to tens of thousands of words, and that must be accepted in their entirety as a condition of access, do not constitute consent in any sense that Forst’s framework could recognise as justificatory. They constitute, rather, the phenomenal appearance of consent, the formal gesture of agreement that substitutes for the substantive right to contest the conditions of one’s own participation.

This recursive quality is what distinguishes noumenal erosion from other forms of power that constrain democratic agency. A law that restricts speech is at least visible as a restriction. It can be identified and subjected to the demand for justification through vertical and horizontal accountability structures, to draw on Diamond and Morlino (2004). The digital cage operates differently: its architecture conceals its own operation from those it most completely constrains. The opacity is not just a barrier to accountability; it is a barrier to the recognition that accountability is owed. It is in this sense that the structure identified by Zuboff’s two declarations represents something more than a failure of platform governance. It represents a condition in which the preconditions of justificatory challenge have been architecturally dismantled. A demand for reasons cannot be formulated because the system against which it would be directed has rendered itself, in Forst’s terms, noumenally invisible. The space of reasons is not simply claimed as private property; the claim itself is hidden from view.

At this stage it is necessary to pause and be precise about what noumenal erosion means as a concept, what it damages and why existing accounts of algorithmic harm do not fully capture it. ‘Erosion’ is chosen deliberately. What erodes are not the opportunities to speak—these remain phenomenally intact, generated and amplified by the very algorithms causing the noumenal harm. What erodes are the conditions under which participation can function as a justificatory claim: where the speaker is recognised as a normative authority whose contribution must be taken seriously, whose demand for reasons cannot be dismissed or algorithmically hidden. This distinction locates the democratic harm at the level of precondition rather than mechanism. Individuals may retain the formal capacity to post, share, vote and organise. These acts constitute the phenomenal apparatus of participation, which remains intact: they can be observed and measured to a greater or lesser degree, whereas their noumenal counterpart cannot, and this asymmetry is precisely the source of the political danger. In parallel, the noumenal conditions that give those acts their democratic significance are systemically dismantled. Counted, in other words, but not heard.2

Revisiting vertical accountability gives further insight into why this precision is crucial to understanding noumenal erosion. Diamond and Morlino (2004) identify vertical accountability as the mechanism through which citizens hold power to account between and during electoral cycles. Within a democracy, these mechanisms centre on the formal channels through which grievances are formulated, directed at an identifiable individual or group and, if the democracy is working, met with a substantive response. Stepping outside the parliamentary sphere illustrates the point powerfully. The Post Office Horizon scandal demonstrates what the failure of this justificatory mechanism looks like in practice. Subpostmasters used every formal channel available to them: they raised discrepancies with Post Office management, sought legal advice, pursued appeals and, in increasing numbers, took their cases to court. The phenomenal structure of vertical accountability was fully intact: channels existed, procedures operated, institutions functioned. Yet at every point the substantive connection was severed. The grievance could not reach the addressee that mattered because the addressee was the algorithm: a system whose governing logic was protected by proprietary claims, institutional authority and legal resources that rendered it, in practice, unchallengeable. Platform architecture operates with the same logic. The individual can still make complaints, sign group petitions and express discontent to their member of parliament or local council representative, yet noumenally the structure is evacuated. The grievance again cannot reach the addressee that matters, because the addressee is an algorithm: a system with no justificatory capacity, no obligation to respond and no mechanism through which the demand for reasons can be registered as such. The formal channel exists; the substantive connection is severed.
Returning to Forst, the individual retains the phenomenal status of a participant while being noumenally reduced to a data point. They are present in the count but, crucially, absent from the conversation.

This is not simply a failure of platform governance but something more structurally fundamental. Alnemr (2024) argues in a related context that algorithmic decision-making represents an algocratic shortcut that bypasses citizen co-authorship of the conditions governing public life, diminishing the role of voice in politics not by silencing it but by rendering it democratically inert (p. 205). My argument extends Alnemr’s insight to the prior architectural level. It is not simply that algorithmic decisions bypass deliberation; the architecture forecloses the justificatory conditions under which deliberation could function as a democratic act in the first place. The Post Office case magnifies this distinction. Alnemr’s algocratic shortcut describes what happens when algorithmic governance displaces deliberation at the level of institutional decision-making. What the Horizon scandal shows, and what noumenal erosion names, is the prior condition. The architectural authority of the Horizon system was used not merely to bypass justification but to actively invalidate the justificatory standing of those it harmed. Subpostmasters who raised discrepancies were told the system was correct and they were wrong. Requests for reasons were structurally suppressed rather than merely ignored. Where Alnemr identifies the shortcut at the level of governance outcomes, noumenal erosion operates at the level of the preconditions that make governance accountability possible at all. The algocratic shortcut and the noumenal cage are thus sequential diagnoses. An algocratic shortcut analysis describes what algorithmic governance does to democratic decision-making; noumenal erosion describes what the architecture does to the conditions under which democratic decision-making should operate. Authenticity, in Alnemr’s deliberative sense—the capacity to communicate reasons in a non-coercive and reflexive manner (p. 208)—presupposes an environment capable of receiving those reasons as open to discussion and revision. The digital cage forecloses that prior condition, making authenticity structurally impossible regardless of individual communicative intent.

The normative weight of platform architecture’s founding promise becomes fully visible when set against what the digital public sphere was originally understood to offer. Its betrayal is not merely a corporate deception; it runs deeper than that. Habermas identifies the original commitment precisely: the equal authorisation of users to make their own spontaneous contributions was “originally supposed to be the hallmark of the new media” (Habermas, 2022, p. 159). This was not a marketing claim but a genuinely held architectural aspiration—the promise that digital communication would finally realise the egalitarian potential of the public sphere by making every participant an author on equal terms. That this promise has been structurally inverted is Habermas’s own diagnosis: the emancipatory potential of the early digital public sphere solidified into what he terms the “libertarian grimace of world-dominating digital corporations” (Habermas, 2022, p. 160). The significance for the present argument is precise: the illusion of participatory equality is not a peripheral deception but the inversion of a founding normative commitment—the appropriation of the vocabulary of democratic empowerment to obscure its systematic negation. Habermas’s framework, however, reaches a structural limit at this point. His concern is with the communicative conditions of the public sphere—what happens to the quality of deliberative exchange when its structural prerequisites are dissolved by algorithmic fragmentation and engagement-maximising design. This is a diagnosis at the level of communicative output: what people say, how they say it and whether the conditions for rational-critical exchange remain intact. What his framework does not fully theorise is why those conditions cannot be reconstructed from within the platform logic—why the deliberative remedies his own theory prescribes are structurally insufficient.
The answer, as the preceding analysis has established, lies at the prior architectural level. The digital cage does not merely degrade the quality of communicative exchange; it removes the justificatory conditions under which the demand for better exchange could be addressed to anyone capable of responding. Habermas identifies what is being lost, and that identification matters. His communicative diagnosis provides the phenomenal register through which the effects of noumenal erosion can be observed and evidenced. This analytical limit does not diminish Habermas’s normative contribution. His insistence that maintaining the inclusive and deliberative character of the public sphere is not a matter of political preference but a constitutional imperative (Habermas, 2022, p. 168) provides the normative foundation on which the argument of this discussion rests. Where Habermas establishes the constitutional stakes of public sphere degradation, noumenal erosion identifies why meeting those stakes requires something his own framework cannot prescribe. What is required, and what his framework cannot provide, is a justificatory standard for platform architecture itself.

Notes

1 The structural inaccessibility of platform architecture to justificatory challenge is illustrated by KGM v. Meta Platforms, Inc. et al. (Los Angeles Superior Court, 25 March 2026, https://www.bbc.co.uk/news/articles/c747x7gz249o). The trial held social media platforms liable for addictive design. Importantly, the legal strategy shifted focus from platform content to platform architecture, making the noumenal level of algorithmic design judicially visible. The verdict confirms rather than contradicts the argument here: that accountability could be achieved only by reframing the case around architecture demonstrates that the digital cage is not just politically inaccessible but structurally resistant to the very mechanisms society has developed to hold power to account.

2 The Post Office Horizon scandal provides a warning and an illustration of what noumenal erosion looks like when it reaches its logical conclusion. Whilst not based on a social media platform, it nonetheless illustrates how algorithms can be deployed to shut down agency and justificatory claims. Over more than two decades, hundreds of subpostmasters were prosecuted for false accounting and theft on the basis of Horizon software discrepancies. The phenomenal evidence against the system’s reliability was overwhelming: statistically, it was improbable that so many individuals across so many locations were simultaneously committing fraud. However, the noumenal authority of the algorithmic system, protected by institutional and legal structures, rendered that evidence inadmissible as a justificatory claim. The subpostmasters were counted: prosecuted, convicted and financially ruined. Crucially, they were not heard. Their demands for reasons were not just unanswered; they were structurally invalidated by a system whose governing logic could not be challenged from within the terms it had itself established.

References

  1. Alnemr, N. (2024) ‘Democratic self-government and the algocratic shortcut: the democratic harms in algorithmic governance of society’, Contemporary Political Theory, 23(2), pp. 205–227.
  2. Diamond, L. and Morlino, L. (2004) ‘The Quality of Democracy: An Overview’, Journal of Democracy, 15(4), pp. 20–31.
  3. Forst, R. (2017) ‘Noumenal Power’, in Normativity and Power: Analyzing Social Orders of Justification. Oxford: Oxford University Press, pp. 37–52.
  4. Forst, R. (2024) The Noumenal Republic: Critical Constructivism After Kant. Cambridge: Polity Press.
  5. Habermas, J. (2022) ‘Reflections and hypotheses on a further structural transformation of the political public sphere’, Theory, Culture & Society, 39(4), pp. 145–171.
  6. Meta (n.d.) Meta Mission. Available at: https://about.meta.com/ (Accessed: 25 March 2026).
  7. Pagano, U. (2025) ‘The Digital Cage: A Sociological Analysis of Algorithmic Bureaucracy as a Total System’, Science & Philosophy, 13(2), pp. 71–84.
  8. Post Office Horizon IT Inquiry (2025) Final Report, Volume 1: Human Impact and Redress. Chair: Sir Wyn Williams. London: Post Office Horizon IT Inquiry. Available at: https://www.postofficehorizoninquiry.org.uk/volume-1-post-office-horizon-it-inquirys-final-report (Accessed: 30 March 2026).
  9. Sadowski, J. (2019) ‘When data is capital: Datafication, accumulation, and extraction’, Big Data & Society, 6(1), pp. 1–12.
  10. Weber, M. (1905/2001) The Protestant Ethic and the Spirit of Capitalism. Translated by T. Parsons. London: Routledge Classics.
  11. Zuboff, S. (2019) The Age of Surveillance Capitalism: The Fight for a Human Future at the Frontier of Power. London: Profile Books.
  12. Zuckerberg, M. (2012) Letter to Shareholders. Facebook, Inc.