The impact of social media and internet communication can, perhaps, be summed up in the transition from the early ethos of “move fast and break things” (Zuckerberg, 2012) to the current mission “to give people the power to build community and bring the world closer together” (Meta, n.d.). These statements reflect two distinct moral perspectives. The earlier motto, retired in 2014, is a relatively amoral statement of engineering methodology, reflecting the desire to bring a product to market. The later mission statement is an ethical declaration, ostensibly handing users the means to shape their own communities and foster a more harmonious society. Although mission statements differ across social media and community platforms, they broadly converge on the same declaration of empowerment, sharing and connection.
This declaration raises an issue with the potential to undermine the quality of democracy and, in particular, to redefine the concept of participation in democratic processes. A noumenal analysis, in Forst’s sense, provides a framework within which this generic mission appears largely illusory. Forst defines the exercise of noumenal power as the capacity to “influence, use, determine, occupy, or even seal off the space of reasons for others” (Forst, 2017, p. 42). This occupation challenges the fundamental moral status of individuals as “justifying beings” who possess a basic right not to be subjected to structures that cannot be justified to them (Forst, 2024, p. 2). In this respect, the normative value of participation is not just the ability to ‘post’, ‘share’ or ‘like’ but the right to be a co-author of the normative order—to inhabit the space of reasons where the logic of a system’s design is transparent and contestable.
When a platform claims to build community, the claim invokes a space of mutual recognition, providing a pointer towards equality and freedom. However, if the rules governing that community are technologically shielded from the demand for justification, the participation offered is reduced to a functional performance rather than an exercise of agency. The phenomenal appearance of a platform, experienced through the user interface and the language of connectivity and sharing, promises a decentralisation of power. This appearance presents the platform as a neutral stage for human agency, a digital arena where interaction appears spontaneous and free.
However, the technical architecture providing this stage remains a system of algorithmic governance, shielded under the guise of intellectual property and corporate necessity. The noumenal reality of code operates as a logic of extraction: it “organizes perception and shapes the expression of technological affordances at their roots”. It functions as a largely invisible context that determines “what is measured, and what is passed over” (Sadowski, 2019, p. 7), effectively sealing off the space of reasons behind a technically opaque infrastructure that prioritises the data imperative over genuine social interaction.
Under the banner of increasing participation, a structural paradox emerges: the freedom granted to the user is procedurally constrained. Agency, in this digital realm, is pre-defined by data-extraction algorithms. Power is, in reality, a circumscribed autonomy: permission is granted to operate within a closed system of automated monitoring and directed engagement. In his analysis of algorithmic bureaucracy, Pagano (2025) updates Weber’s 20th-century concept of the ‘iron cage’ (1905/2001, p. 123) to reflect the creation of a 21st-century ‘digital cage’, a system that is more “impermeable to negotiation”. While the Weberian model relied on a ‘human gear’, the street-level bureaucrat who possessed an irreducible space of discretionary power to interpret rules and assess exceptions, the digital cage removes this point of contact. It replaces human mediation with a structure of self-executing code that reduces social interaction to a crude binary operation: data either conforms to predefined software fields or it does not.
As a result, the digital cage represents the total opacity of the power mechanism. Where Weber’s iron cage was a system of explicit norms applied by humans, the digital cage is a system of implicit rules executed by inscrutable code. Returning to Forst (2017), the normative problem is not just that agency is constrained, but that the space of reasons is occupied by a logic of prediction rather than a logic of justification. The presence of a human mediator, however minimal, ensures that the subject remains a “justifying being” capable of dialogue even if they do not engage in it. In contrast, the invisibility of the algorithmic mechanism is not incidental to its power but constitutive of it. A system that cannot be seen cannot be addressed, and a system that cannot be addressed cannot be held to any justificatory standard. It is this structural inaccessibility, not the constraint on action as such, that constitutes the distinctively noumenal dimension of the digital cage’s power. Within it, the conditions under which participatory claims can be made are architecturally pre-determined by systems accountable to no justificatory standard that users could reasonably contest.1
The architecture that forecloses justificatory agency does not do so accidentally. The opacity of algorithmic decision-making is often presented as an epistemic problem: the outputs of neural networks are, in a technical sense, irreducible to any single point of human-legible reasoning, arriving at conclusions whose precise derivation is lost in the complexity of the network’s own operations. This gives the system the operational appearance of inscrutability. Such inscrutability is problematic at both the phenomenal and noumenal levels: it preserves the appearance of a neutral technical process while closing off the conditions under which that process could be made accountable. To treat inscrutability as genuinely accidental is to attribute to the system a form of agency that exempts it from the justificatory demands placed on any human actor, a convenient, structurally engineered and dangerous exemption. The precise logic by which a platform determines what data to surface, and to whom, remains hidden. This inaccessibility is itself an architectural choice, not an unavoidable consequence of technical complexity. Zuboff’s (2019) reconstruction of the implicit normative logic of surveillance capitalism identifies two constitutive claims that make this structure legible. The first asserts that human behavioural data constitutes a free resource, available for extraction without the knowledge or meaningful consent of those who generate it. The second asserts that the logic governing this extraction is proprietary, vaulted away from external scrutiny by the same intellectual property frameworks that protect any other commercial asset (p. 178). These declarations do not only describe a business model; they attempt to establish a normative position.
Asserting that the architecture governing human experience in the digital sphere is exempt from the demand for justification advances a self-serving metaphysical claim: that a system designed by human actors, to serve commercial imperatives, is nonetheless beyond the reach of human justificatory demands. The engineered inscrutability and the proprietary exemption are not two separate problems; they are two faces of the same structure: the former makes the architecture illegible, the latter makes that illegibility permanent. Returning to Forst’s framework, this exemption is not a peripheral feature of platform governance but its defining characteristic. The space of reasons is not just occupied; it is claimed as private property.
The consequences of this structure extend beyond the immediate foreclosure of justificatory agency. The proprietary exemption does not merely prevent challenge; it prevents the recognition that challenge is warranted. Users do not experience the digital cage as constraint; they experience it as participation. The phenomenal surface of the platform, its interface of sharing, connection and community, renders the architecture’s operation invisible. What is experienced as a free choice to engage is, at the noumenal level, a structured set of permissions issued by a system whose governing logic cannot be inspected, contested or refused without exiting the platform entirely: the terms-and-conditions lock-in present in all platforms. This is the consent fiction on which Zuboff’s Declaration 1 depends: the behavioural data claimed as a free resource is extracted under conditions that render meaningful withholding of consent structurally impossible. Terms of service that run to tens of thousands of words, that must be accepted in their entirety as a condition of access, do not constitute consent in any sense that Forst’s framework could recognise as justificatory. They constitute, rather, the phenomenal appearance of consent, the formal gesture of agreement that substitutes for the substantive right to contest the conditions of one’s own participation.
This recursive quality is what distinguishes noumenal erosion from other forms of power that constrain democratic agency. A law that restricts speech is at least visible as a restriction. It can be identified and subjected to the demand for justification through vertical and horizontal accountability structures, to draw on Diamond and Morlino (2004). The digital cage operates differently: its architecture conceals its own operation from those it most completely constrains. The opacity is not just a barrier to accountability; it is a barrier to the recognition that accountability is owed. It is in this sense that the structure identified by Zuboff’s two declarations represents something more than a failure of platform governance. What it represents is a condition in which the preconditions of justificatory challenge have been architecturally dismantled. A demand for reasons cannot be formulated because the system against which it would be directed has rendered itself, in Forst’s terms, noumenally invisible. The space of reasons is not simply claimed as private property; the claim itself is hidden from view.
At this stage it is necessary to pause and be precise about what noumenal erosion means as a concept, what it damages and why existing accounts of algorithmic harm do not fully capture it. ‘Erosion’ is chosen deliberately. What erodes are not the opportunities to speak—these remain phenomenally intact, generated and amplified by the very algorithms causing the noumenal harm. What erodes are the conditions under which participation can function as a justificatory claim: where the speaker is recognised as a normative authority whose contribution must be taken seriously, whose demand for reasons cannot be dismissed or algorithmically hidden. This distinction locates the democratic harm at the level of precondition rather than mechanism. Individuals may retain the formal capacity to post, share, vote and organise. These are the phenomenal apparatus of participation, and they remain intact. They can be observed and measured to a greater or lesser degree, whereas their noumenal counterpart cannot, and this is precisely the source of their political danger. In parallel, the noumenal conditions that give those acts their democratic significance are systemically dismantled. Counted, in other words, but not heard.2
Revisiting vertical accountability gives further insight into why being precise is crucial to understanding noumenal erosion. Diamond and Morlino (2004) identify vertical accountability as the mechanism through which citizens hold power to account between and during electoral cycles. Within a democracy, these mechanisms focus on the formal channels through which grievances are formulated, directed at an identifiable individual or group and, if the democracy is working, met with a substantive response. This is powerfully illustrated by stepping outside the parliamentary sphere. The Post Office Horizon scandal demonstrates what the failure of this justificatory mechanism looks like in practice. Subpostmasters used every formal channel available to them: they raised discrepancies with Post Office management, sought legal advice, pursued appeals, and in increasing numbers, took their cases to court. The phenomenal structure of vertical accountability was fully intact: channels existed, procedures operated, institutions functioned. At every point the substantive connection was severed. The grievance could not reach the addressee that mattered because the addressee was the algorithm. In this case study, the algorithm was a system whose governing logic was protected by proprietary claims, institutional authority and legal resources that rendered it, in practice, unchallengeable. Platform architecture operates with the same logic: the individual can still make complaints, sign group petitions, express discontent with their member of parliament or local council representative. Noumenally, however, the structure is evacuated. The grievance cannot reach the addressee that matters because the addressee is the algorithm: a system with no justificatory capacity, no obligation to respond and no mechanism through which the demand for reasons can be registered as such. The formal channel exists. However, the substantive connection is severed.
Returning to Forst, the individual retains the phenomenal status of a participant while being noumenally reduced to a data point. They are present in the count but, crucially, absent from the conversation.
This is not simply a failure of platform governance but something more structurally fundamental. Alnemr (2024) argues in a related context that algorithmic decision-making represents an algocratic shortcut that bypasses citizen co-authorship of the conditions governing public life, diminishing the role of voice in politics not by silencing it but by rendering it democratically inert (p. 205). My argument extends Alnemr’s insight to the prior architectural level. It is not simply that algorithmic decisions bypass deliberation; the architecture forecloses the justificatory conditions under which deliberation could function as a democratic act in the first place. The Post Office case sharpens this distinction. Alnemr’s algocratic shortcut describes what happens when algorithmic governance displaces deliberation at the level of institutional decision-making. What the Horizon scandal shows, and what noumenal erosion names, is the prior condition. The architectural authority of the Horizon system was used not merely to bypass justification but to actively invalidate the justificatory standing of those it harmed. Subpostmasters who raised discrepancies were told the system was correct and they were wrong. Requests for reasons were structurally suppressed rather than just ignored. Where Alnemr identifies the shortcut at the level of governance outcomes, noumenal erosion operates at the level of the preconditions that make governance accountability possible at all. The algocratic shortcut and the noumenal cage are sequential diagnoses. An algocratic shortcut analysis describes what algorithmic governance does to democratic decision-making. Noumenal erosion describes what the architecture does to the conditions under which democratic decision-making should operate. Authenticity, in Alnemr’s deliberative sense—the capacity to communicate reasons in a non-coercive and reflexive manner (p. 208)—presupposes an environment capable of receiving those reasons as open to discussion and revision. The digital cage forecloses that prior condition, making authenticity structurally impossible regardless of individual communicative intent.
The normative weight of platform architecture’s founding promise becomes fully visible when set against what the digital public sphere was originally understood to offer. Its betrayal is not merely a corporate deception; it runs deeper than that. Habermas identifies the original commitment precisely: the equal authorisation of users to make their own spontaneous contributions was “originally supposed to be the hallmark of the new media” (Habermas, 2022, p. 159). This was not a marketing claim but a genuinely held architectural aspiration—the promise that digital communication would finally realise the egalitarian potential of the public sphere by making every participant an author on equal terms. That this promise has been structurally inverted is Habermas’s own diagnosis: the emancipatory potential of the early digital public sphere solidified into what he terms the “libertarian grimace of world-dominating digital corporations” (Habermas, 2022, p. 160). The significance for the present argument is precise. The illusion of participatory equality is not a peripheral deception but the inversion of a founding normative commitment—the appropriation of the vocabulary of democratic empowerment to obscure its systematic negation. Habermas’s framework, however, reaches a structural limit at this point. His concern is with the communicative conditions of the public sphere—what happens to the quality of deliberative exchange when its structural prerequisites are dissolved by algorithmic fragmentation and engagement-maximising design. This is a diagnosis at the level of communicative output: what people say, how they say it and whether the conditions for rational-critical exchange remain intact. What his framework does not fully theorise is why those conditions cannot be reconstructed from within the platform logic—why the deliberative remedies his own theory prescribes are structurally insufficient.
The answer, as the preceding analysis has established, lies at the prior architectural level. The digital cage does not merely degrade the quality of communicative exchange; it removes the justificatory conditions under which the demand for better exchange could be addressed to anyone capable of responding. Habermas identifies what is being lost and that identification matters. His communicative diagnosis provides the phenomenal register through which the effects of noumenal erosion can be observed and evidenced. This analytical limit does not diminish Habermas’s normative contribution. His insistence that maintaining the inclusive and deliberative character of the public sphere is not a matter of political preference but a constitutional imperative (Habermas, 2022, p. 168) provides the normative foundation on which the argument of this discussion rests. Where Habermas establishes the constitutional stakes of public sphere degradation, noumenal erosion identifies why meeting those stakes requires something his own framework cannot prescribe. What is required, and what his framework cannot provide, is a justificatory standard for platform architecture itself.
Political news and information are increasingly accessed through online media platforms rather than through direct engagement with news publishers. The Reuters Institute’s 2024 Digital News Report, drawing on a survey of nearly 95,000 respondents across 47 countries, found that 22% of online news consumers now identify a news website or app as their primary source of news, a figure that has fallen ten percentage points since 2018, with the majority, 78%, finding their news through social media, search engines or aggregators (Newman et al., 2024, p. 10). Matched against the 5.24 billion people connected through social media infrastructure globally, 63.9% of the world’s entire population (Kemp, 2025), this indicates that approximately four billion people are receiving political news and information primarily through an architecture they did not design and whose curation logic remains opaque to them. This is not to say that four billion people are actively engaging with political content: posting, sharing, activating. Survey data indicates that 34% of social media users regard the platforms as important to them personally for expressing political opinions, and 42% consider them important for getting involved with political or social issues (Pew Research Center, 2025). These figures are themselves an approximation. As the Reuters Institute cautions, findings of this kind are best understood as representative of the online population rather than the population as a whole, with near-universal coverage in much of the developed West but a considerably wider gap between online and total population in less affluent nations (Newman et al., 2024, p. 6). What can be said with confidence is that in much of Europe and the United States the online figure is approaching saturation, with a consequent scale of potential impact that may give cause for concern when considering noumenal erosion specifically and the quality of democracy in general.
Figures for less developed nations point to rapid growth and a closing gap: social media users grew by 4.1% globally in the year to January 2025, adding over 200 million new users, with the majority of that growth concentrated in Sub-Saharan Africa and South Asia where platform adoption continues to accelerate (Kemp, 2025). The numbers are striking. Approaching two thirds of the world’s population are now connected through social media infrastructure, growing by over 200 million new users annually, with saturation approaching in the developed West and rapid acceleration continuing elsewhere. But scale is not the argument.
Considered through a noumenal lens, the reach of platform infrastructure establishes these systems as the primary public sphere through which the conditions for political agency are now formed for the majority of the world’s citizens. This is not because four billion people are actively deliberating within them — they are not. It is because the platforms constitute the architectural environment within which political identity is shaped, the sense of what participation means is formed and the understanding of what is contestable is determined, before any deliberation begins. Where Weber’s iron cage operated through an explicit hierarchy, a many-to-one structure mediated by the street-level bureaucrat who retained at least a residual space for human judgement, the digital cage presents the phenomenal appearance of its opposite: a many-to-many public sphere of horizontal connection, peer exchange and shared community. It is this appearance that makes the constraint invisible. As Pagano (2025) argues, the digital cage organises perception itself, determining what is measured and what is passed over, what is surfaced and what is blocked. The four billion who receive their political news and information through platforms do not experience this as a constraint. They experience it as connection, community and access. That is precisely what noumenal power, in Forst’s sense, does: it occupies the space of reasons not by blocking participation but by pre-determining the conditions under which participation becomes thinkable at all. The paradox with which this section is concerned — participation at its phenomenal peak, democracy at its lowest point in decades — begins here.
Democratic backsliding has become one of the central preoccupations of contemporary political science — a literature concerned with diagnosing the conditions under which democracies deteriorate, identifying the mechanisms of decline and exploring the prospects for democratic recovery. The question raised by the scale of platform reach documented above is whether that body of work possesses the analytical tools to address what is happening. The argument of what follows is that it does not, not because it is wrong, but because it is looking in the wrong place. The wrong place is the institutional surface. The right place is the architectural conditions that determine what kind of democracies we arrive at.
“Democracies erode slowly, in barely visible steps” — so Levitsky and Ziblatt describe the process by which democracies deteriorate in the twenty-first century (2018, p. 3). When considering the quality of democracy and how it may be assessed, the phrase captures something important. Democratic decline is not a binary event. Erosion is a gradual process, which means that measuring it requires not a simple present/absent classification but a framework capable of tracking quality and degree of erosion. The V-Dem Institute provides such a framework. Its five core indices — electoral, liberal, participatory, deliberative and egalitarian — treat democracy not as a single condition but as a set of linked principles, each measurable on a scale, each capturing a dimension of democratic life (Nord et al., 2026, p. 7).
These five indices can be consolidated into two registers, each capturing a distinct layer of democratic life.
Register 1 encompasses electoral democracy — the participatory surface of democratic governance. Electoral democracy exists when elections are free, fair and recurring, elected officials wield political power de facto, suffrage is universal and political parties and candidates can form and compete freely on a reasonably level playing field, with freedoms of speech, media and civil society (Nord et al., 2025, p. 9). This register measures the conditions under which individuals can participate in the democratic process: participation in its observable, procedural form. It is, in the terms this article develops, phenomenal participation: democracy visible at its surface.
Register 2 encompasses liberal democracy — capturing what Diamond and Morlino (2004) identify as the substantive dimensions of democratic quality, extending beyond electoral procedure. In this register, we consider the dimensions of rule of law and horizontal accountability. Liberal democracy builds on the electoral foundation of Register 1 by adding legislative and judicial constraints on the executive, together with a rule of law that ensures respect for civil liberties (Nord et al., 2025, p. 9).
Where Register 1 measures the procedural conditions of participation, Register 2 measures the institutional architecture through which democratic participation is made accountable and rights are formally protected. Rule of law and horizontal accountability are normative principles, the noumenal conditions that precede and exceed their reduction to measurable outcomes. Accountability and the rule of law are not created by democratic institutions but are foundational conditions that democracy requires in order to operate. When treated as measurable outputs rather than normative grounds, they become vulnerable to precisely the erosion this section addresses.
The two-register framework provides a lens through which democratic erosion can be tracked with some precision — whether assessed across the narrower electoral and liberal platform or the fuller five-index rubric. It does not, however, explain the mechanisms of that erosion. The democratic backsliding literature has developed as a scholarly response to the observable deterioration of democratic conditions globally, taking the mechanisms of decline from within as its central concern.
Bermeo’s typology charts the principal ways in which contemporary democratic breakdown occurs (2016). She identifies what makes contemporary backsliding so difficult to detect and measure (2016, pp. 5–6). Where earlier democratic failures tended towards binary changes — military coups, executive seizures of power — Bermeo observes that twenty-first century backsliding characteristically operates through legal means: executive aggrandisement, strategic harassment of opponents and the gradual dismantling of institutional constraints. Elected leaders retain democratic legitimacy while systematically undermining democratic substance. The process is legal in form. It is open to challenge through existing institutional channels but corrosive in effect. It leaves the procedural surface of democracy largely intact while weakening the accountability architecture beneath. It is, in the terms Levitsky and Ziblatt capture, erosion in barely visible steps (2018, p. 3).
Levitsky and Ziblatt build on Bermeo’s analysis. They argue that current democratic breakdown is driven not by external assault, but by the erosion of two informal norms: mutual toleration and institutional forbearance. Their guardrails argument rests on a distinction between de jure and de facto democratic protection (2018, p. 102). Written constitutions establish the formal architecture of democracy, placing de jure constraints on executive power. What actually protects democratic substance in practice is the de facto observance of informal norms by political actors who could, legally, choose to violate them. This argument applies equally to democracies such as the United Kingdom, which relies almost entirely on the observance of convention and precedent for its democratic protections, rather than written constitutional guarantees. When those norms erode, the formal institutions of democracy remain but their protective function is lost. Elected leaders can exploit constitutional ambiguities and subordinate independent institutions, all within the formal boundaries of democratic procedure.
The United Kingdom provides a vivid illustration of how the absence of a codified constitution creates vulnerability to precisely this form of norm erosion. Prime Minister Boris Johnson’s prorogation of parliament for five weeks in 2019, ruled unlawful and void by the Supreme Court as an unjustified interference with parliamentary sovereignty (Miller v Prime Minister, 2019), deployed a formal prerogative power for the improper purpose of curtailing parliamentary scrutiny of government policy. The same government threatened judicial consequences following the Supreme Court ruling, with ministers characterising lawyers and judges who ruled against the government as “lefty” obstacles to democratic will. The International Bar Association condemned this conduct as showing a disregard for the principles of international law and as forming part of the government’s wider attack on the rule of law (IBA/IBAHRI, 2020). The UK’s experience is not unique. As McMillan (2022) documents in a survey of the global assault on the rule of law, the delegitimisation of judges and lawyers by elected governments has become a defining feature of contemporary democratic backsliding across multiple jurisdictions. Both actions in the UK are formally legal in origin. No statute was broken before the courts intervened. They represent the kind of norm erosion Levitsky and Ziblatt identify as the mechanism through which democratic substance is hollowed out while democratic form is preserved. Democracy dies, in their terms, not at the hands of generals but of elected leaders, those who subvert the very process that brought them to power (2018, p. 3).
Bennett and Livingston represent the point at which the backsliding literature comes nearest to the noumenal argument this article develops (2025). They integrate institutionalist accounts — the focus on elites, norms and power structures found in Bermeo and Levitsky and Ziblatt — with analysis of how digital platforms organise and mobilise the extremist networks that accelerate democratic erosion. In doing so, they update how we should conceptualise the relationship between platform architecture and democratic erosion in the twenty-first century. Their connective action framework shows how scattered extremist factions are drawn together online into quasi-organisations that challenge party gatekeeping and reshape the boundaries of mainstream politics (2025, pp. 2, 15) — providing, in their own terms, “incomplete yet complementary clues” that neither purely institutional nor purely technological accounts can supply alone (2025, p. 1). This synthesis operates at the level of observable outputs: the measurable spread of disinformation and the traceable organisation of extremist networks. They argue that a technology driver exerts persistent influence, capable of usurping the traditional framework of political party organisation and in turn destabilising democratic institutions. What is missing from their account is the prior question of agency: how a platform supplants genuine democratic participation whilst amplifying marginalised extremist voices into instruments of democratic dismantling. That supplanting operates at the noumenal level. It operates beneath the observable outputs, reshaping the conditions under which individuals form the capacity for democratic agency itself. It is to that layer that the analysis now turns.
The connective action dynamic Bennett and Livingston identify in the United States has a mirror in the United Kingdom, though operating through a different executive and party political architecture. The trajectory from UKIP through the Brexit Party to Reform UK represents precisely the kind of progressive organisational coherence of formerly scattered right-wing factions that Bennett and Livingston describe. Fringe elements are brought into greater alignment through shared memes, digital networks and the connective action of social media platforms, until they become sufficiently organised to reshape the boundaries of mainstream party politics. Hayton (2025, p. 388) analyses the relationship between Reform and the Conservative Party, arguing both now represent what he terms “populist conservatism” with their dynamic competition reinforcing its dominance across the right of British politics. Beyond Reform, the emergence of Restore Britain, drawing support primarily from former Reform and Conservative voters, illustrates the further fragmentation and radicalisation dynamic Bennett and Livingston identify. When extremist factions are not fully absorbed by a party, they generate new organisations further along the spectrum, pulling the entire political field rightward. Critically for the argument this article develops, these movements are driving democratic erosion from outside institutional power. They restructure the discursive conditions of political competition without needing to hold office. The consequence is visible in the Labour Party’s progressive rightward movement on immigration and cultural policy, not because its own institutional norms have changed but because the architectural conditions of political discourse have shifted beneath it (Kippin, 2025, pp. 15–16; Hayton, 2025, pp. 393–395).
Bennett and Livingston’s analysis of the technological and institutional roots of democratic backsliding in the United States identifies the informational environment — the restructuring of media ecosystems, the rise of disinformation, the fragmentation of shared knowledge foundations — as a primary driver of democratic deterioration (2025). They place technological change within an institutional analysis of backsliding, moving the literature closer to the architectural conditions this article addresses. Their account nonetheless remains anchored in observable and quantifiable events: the content of disinformation, the nature of media consumption, the decline of institutional trust. The architectural question — the conditions under which we form the capacity to participate in democratic life at all — remains, in their framework, just beyond the analytical horizon.
The distinction matters. Whether democracy is assessed across five indices or concentrated into two registers, what the spectrum reveals is that erosion can occur at the procedural surface while the deeper qualitative conditions of democratic agency remain unmeasured — or can erode at the qualitative level while procedural indicators remain apparently intact. It is precisely this second form of erosion that the argument of this section addresses.
Democratic backsliding did not arrive unannounced. Its symptoms were observable to scholars, practitioners and citizens long before the indices confirmed them — discernible in the texture of political culture, in the changing conditions of public discourse, in the erosion of norms whose absence becomes visible precisely because they are no longer operating. The literature that grew up around this phenomenon reflects that visibility: sophisticated, empirically grounded and increasingly urgent in its diagnoses. By the time the V-Dem Institute’s 2023 report confirmed that advances in global democratic quality made over the preceding thirty-five years had been effectively reversed — that the average world citizen in 2022 enjoyed a level of democracy equivalent to that of 1986 — scholars had already been documenting the mechanisms of decline for the better part of two decades (Papada et al., 2023). What the literature had not resolved, and what the indices could not capture, was why the deterioration proved so resistant to correction — why the electoral road to breakdown remained, as Levitsky and Ziblatt put it, “dangerously deceptive,” with people still voting, leaders maintaining the veneer of democracy while eviscerating its substance, and the process remaining at every incremental step just below the threshold of democratic emergency (2018).
The standard accounts locate the causes of this resistance in the institutional and behavioural layer of democratic life. For Bermeo, backsliding is best understood as a series of discrete changes to the rules and informal procedures that govern elections, rights and accountability — incremental, legal, and in each individual instance defensible, but cumulatively corrosive of the democratic order (2016). Levitsky and Ziblatt identify the mechanism more precisely: the erosion of the unwritten norms of mutual toleration and institutional forbearance that sustain democratic culture beneath the level of formal constitutional rules, driven by elite actors who exploit institutional weakness and the unwillingness of political parties to enforce the guardrails that once constrained them (2018). In Tyranny of the Minority they extend this analysis to the systematic deployment of counter-majoritarian constitutional devices — minority veto powers, supermajority requirements, anti-democratic electoral structures — that insulate minority rule from democratic correction and make the recovery of majority governance structurally difficult even where the popular will for it exists (2023). Runciman approaches the same deterioration from a different angle: not the abuse of institutions by elites but the complacency of citizens in mature democracies, the progressive hollowing of democratic forms whose procedural shell persists while their animating substance drains away, producing a kind of democratic zombie — the outward appearance of self-government without its substantive reality (2018).
These are consequential and well-evidenced diagnoses. The institutional deterioration they document is not in dispute here, and the mechanisms they identify — norm erosion, elite complicity, constitutional manipulation, civic complacency — are demonstrably operating in the cases they examine. The argument of this section is not that these accounts are wrong but that they share a structural limitation that prevents them from seeing the full picture. Each operates at the phenomenal surface of democratic life: observing and theorising what is visible in the behaviour of elites, the erosion of norms, the manipulation of institutions and the decline of civic engagement. The public sphere through which citizens form political identities, encounter democratic norms, and develop the capacity for political agency appears in all three accounts as background — a given environment within which the institutional drama they are describing takes place. It is not itself theorised. The question of what happens to the conditions of democratic agency when that public sphere is restructured by privately governed architectural systems, operating at the scale documented in the preceding section and beyond the reach of any justificatory standard that citizens could meaningfully contest, is simply not a question these frameworks are designed to ask. It falls outside the field of vision that phenomenal analysis provides.
The most ambitious recent attempt to bridge this gap is Bennett and Livingston’s account of the technocentric and institutionalist paradigms of democratic backsliding (2025). Their argument is that institutionalist approaches — Levitsky and Ziblatt, Bermeo and their interlocutors — have had, in their own assessment, “little to say about media and communication processes in general and social media in particular,” while technocentric approaches, focused on algorithmic amplification and individual cognitive effects, have largely ignored the broader institutional, historical and economic conditions that shape how digital networks operate politically (Bennett and Livingston, 2025, p. 4). Their synthesis, built around the concept of digitally constituted organisations — fluid, nonhierarchical, leaderless formations that cohere online around shared political content and resist the gatekeeping mechanisms that once moderated democratic politics — represents a genuine advance on either paradigm taken alone. It moves the level of analysis from the individual to the organisation, and from the technological to the socio-institutional.
Yet even this synthesis operates within the phenomenal register. The digitally constituted organisations Bennett and Livingston describe are observable entities — their activities can be tracked, their content analysed, their effects on institutional politics measured and, in principle, regulated. What their framework does not reach is the layer beneath: the architectural conditions under which the public sphere itself is constituted, within which political identity is formed before it ever coheres into an organisation, and within which the very capacity for the kind of democratic agency that institutional accounts presuppose is shaped or eroded. To theorise that layer requires a different analytical vocabulary — one capable of addressing not just what people do within platform environments but what those environments do to the conditions under which democratic subjectivity is formed. That is the vocabulary of noumenal analysis, and it is to its application to the backsliding paradox that this section now turns.
Freedom House, the Economist Intelligence Unit and the V-Dem Institute each produce annual indices, applying expert assessment and survey data to track the condition of democracy globally (Freedom House, 2025; Economist Intelligence Unit, 2024; Nord et al., 2026). Their findings converge. Freedom House’s Freedom in the World 2025 records the nineteenth consecutive year of democratic decline, with the balance between free and not-free countries shifting steadily against the former (Freedom House, 2025, p. 1). The EIU’s Democracy Index 2024 indicates that fewer than half the world’s population now live in a democracy of any kind, with only 15% of the countries surveyed classified as full democracies (Economist Intelligence Unit, 2024, p. 17). V-Dem’s Democracy Report 2026 reports that the level of democracy enjoyed by the average global citizen has returned to the levels of 1978, erasing almost fifty years of democratic progress (Nord et al., 2026, p. 8).
The three indices confirm the decline of democracy. There may be disagreement between their respective rubrics, but in aggregate they measure what has happened and provide a degree of convergence in their findings. They cannot reach why it is happening in the way the noumenal erosion argument requires. The limitation becomes visible the moment one examines what the indices actually ask. The EIU’s methodology, for example, poses the question: “Do ethnic, religious and other minorities have a reasonable degree of autonomy and voice in the political process?” (Economist Intelligence Unit, 2013, p. 34). The question is directed at observable institutional conditions. This makes it quantifiable and, by extension, usable as a comparative indicator. What the EIU indicators cannot do is examine the architectural conditions that determine whether minorities, or any other group, have the capacity to exercise that voice before they arrive at the process at all. Answering ‘High’, ‘Moderate’ or ‘Low’ to the EIU indicator “Extent to which adult population shows an interest in and follows politics in the news” (Economist Intelligence Unit, 2013, p. 35) might provide an avenue to follow. A low score raises a question about participation and the conditions that inhibit it. The indicator cannot tell us whether the decision not to follow politics is itself a product of the architectural conditions through which political information is received. It records an outcome. It cannot reach its source. As the preceding analysis has established, platform architecture occupies the space of reasons before the individual arrives at any measurable act of participation (Forst, 2017; 2024). Bennett and Livingston illuminate the consequence from a different angle.
Their connective action framework demonstrates how the same platform architecture that registers as disengagement in mainstream democratic participation simultaneously draws scattered extremist factions into alignment, amplifying marginalised voices into organised political forces (2025, pp. 2, 15). These are two faces of the same architectural condition. The EIU indicator can record the first effect. It cannot see the second, and it cannot connect them.
The paradox identified above — participation at its phenomenal peak, democracy at its lowest point in decades — demands an explanation the existing measurement frameworks cannot supply.
The most significant regulatory responses to platform power are the European Union’s General Data Protection Regulation (GDPR, 2018) and, in the United Kingdom, the Online Safety Act (OSA, 2023).3
GDPR recognises platforms as collectors and processors of personal data. Its central concern is the lawfulness of data acquisition and use: whether consent is meaningful, whether processing is proportionate and whether individuals retain rights of access, rectification and erasure. In this sense, it places the individual at the centre of the legal process as a data subject provided with transactional rights. However, this leaves untouched the conditions under which political agency is formed — the space of reasons. By the time an individual encounters a consent button, accepts terms of service or seeks deletion of their data, the platform’s curation logic has already shaped what is seen, chosen what counts as relevant and determined how the individual can participate, removing the individual from the process before they arrive. GDPR regulates the exchange of data after the fact; it does not reach the prior architecture through which participation itself is constituted. In Forst’s terms, the noumenal dimension of participation — the space of reasons and justificatory power within which agency becomes possible — remains beyond the reach of the framework. The result is that GDPR can constrain informational practices. It cannot dismantle the structural conditions that produce the democratic subject it presupposes.
The United Kingdom’s Online Safety Act (2023) addresses platforms as hosts and distributors of content. Its central concern is the management of content risk: illegal content must be removed, harmful content must be assessed and mitigated. Ofcom is empowered to enforce these obligations against platforms operating in the UK. The OSA places duties of care on platforms. Those duties are framed entirely around what platforms host and distribute, not around what their architecture does to the conditions under which users form political identity and capacity. By the time Ofcom identifies harmful content, requires its removal or levies a fine, the curation logic that determines what reaches the user has already operated. The OSA, like the GDPR, arrives after the architectural fact. It addresses the platform as an institutional actor with content obligations. It cannot reach the platform as an architectural condition shaping the prior conditions of democratic agency. The addressee of the regulatory framework and the source of noumenal erosion are not the same. The space of reasons remains beyond the reach of both.
Weber’s rational-legal order produces exactly this form of institutional response: procedurally rigorous, enforceable in its own terms, and constitutively unable to reach the conditions it does not recognise as its object. In Weber’s model the street-level bureaucrat, however constrained, retained a residual space of human judgement. The bureaucrat constituted an irreducible point of contact between the system and the individual, through which justificatory exchange remained at least theoretically possible. Ofcom occupies that structural position in the regulatory architecture of the OSA. It cannot perform that function. As a proxy for the individual’s justificatory claim against the platform, Ofcom acts on behalf of a generalised public interest rather than the specific justificatory standing of any individual citizen. Its remit is defined in content terms that do not correspond to the noumenal harm being done. In substituting itself for the citizen’s direct justificatory relationship with the platform, the regulatory framework does not merely fail to reach the noumenal level. It forecloses the possibility of that relationship by design. The citizen is not empowered to demand justification from the platform. Ofcom demands compliance on the citizen’s behalf. The space of reasons is not just beyond the reach of the framework. It has been institutionally replaced.
Zuboff’s reconstruction of surveillance capitalism’s normative logic identifies why this structural limitation is not incidental. Her two constitutive declarations — that human behavioural data is a free resource available for extraction, and that the governing logic of that extraction is proprietary and beyond external scrutiny (Zuboff, 2019, p. 178) — establish the platform as an actor whose defining operations are vaulted beyond the reach of justificatory challenge. Regulatory frameworks that address the platform as a legal person with content obligations leave that vault intact. They can require transparency reports, mandate flagging mechanisms and impose fines for illegal content. They can empower Ofcom to act on the individual’s behalf. What they cannot do is require the platform to submit its curation logic to the justificatory standard that Forst’s framework demands. If we are to address the erosion of participation, we must require the architecture shaping political agency to be accountable to those it shapes. The proprietary exemption is not a gap in the regulatory framework. It is the condition the regulatory framework was not designed to address. The noumenal dimension of platform power remains, in Zuboff’s terms, claimed as private property, and that claim, not its consequences, is where the democratic harm originates.
Analysing the impact of digital communications in 2005, Jodi Dean was prescient in identifying what she termed ‘communicative capitalism’ — the condition in which the very tools conceived as democratic are structurally depoliticising (Dean, 2005, p. 55). Long before platforms achieved the algorithmic power they hold today, Dean identified a paradox that is core to this section: our present-day ‘communicative abundance’ coincides with the collapse of ‘democratic deliberation’ (Dean, 2005, p. 54). Her analysis points to networked communications providing the arena for a structural transformation in the basic unit of political communication: a shift from the message to the contribution. It is worth unpicking what this means. A message is directed: it has a sender, a receiver and an obligation of response. A contribution is different: it circulates, adding to the flow of content. For Dean, content, sender and recipient are all irrelevant and agency is stripped from participation; what matters is circulation itself (Dean, 2005, p. 58). The exchange value of the message overtakes its use value. The consequences of this structural shift are visible in the digital mobilisation around Gaza in 2023–24. A UK parliamentary petition calling for a ceasefire gathered 268,882 signatures — well over the 100,000 threshold required to trigger a parliamentary debate, which took place on 11 December 2023 (UK Parliament Petitions Committee, 2023). In parallel, a YouGov poll commissioned by Medical Aid for Palestinians found that 71% of the British public supported an immediate ceasefire (Medical Aid for Palestinians and Council for Arab-British Understanding, 2024). The message was not only circulating in multiple media — it represented a clear majority position. The government acknowledged the message’s existence, but did not respond to it. The message was delivered but, in Dean’s sense, was not received.
The parliamentary petition mechanism functioned exactly as legally required, providing the impression that vertical accountability operated. A formal response was issued. However, the message was not received as an obligation. The substantive connection was severed. The question, here, is not whether the government was right or wrong. What is being asked is whether the message was received as a claim requiring justification or as content requiring acknowledgement. In Dean’s framework, as in Forst’s, only the former constitutes a genuine democratic response.
From this structural shift Dean develops her account of the fantasy of participation. Contributing to circulating content produces what she calls a registration effect: the subjective sense that one’s action matters, that one has been heard. Through this effect we come to mistake the performance of participation for its substance. Drawing on Žižek’s concept of interpassivity, Dean argues that platform activity functions as a technological fetish: something else acts politically in our stead, relieving us of the obligation to engage in the harder, riskier work of actual political organisation. The platform is political for us, enabling us to understand ourselves as active while remaining, in the precise Žižekian sense, passive (Dean, 2005, p. 63). Clicks, shares, likes and petition signatures constitute the visible record of democratic engagement. They do not constitute democratic engagement.
Dean’s account is a useful analysis of the phenomenology of participation. Individuals believe they are politically active while democratic institutions deteriorate. This is supported by the earlier data on platform numbers. However, communicative capitalism produces a sense of capability that is structurally divorced from political effect. Dean’s framework operates at the level of political economy and communicative practice. Its object is what individuals do on platforms and what that doing means politically. In her analysis, individuals are not participating in the way they might believe they are. Dean’s framework points towards the noumenal conditions of participation without fully theorising them. The fantasy of participation Dean identifies is itself a product of the noumenal conditions the platform has established. An individual arrives at the platform already formed — already inclined to understand clicking as acting, sharing as speaking, liking as endorsing — because the platform’s curation logic has pre-determined the space of reasons within which those gestures acquire their apparent political significance. Dean was writing when social media was still in its infancy. The conditions she identified have since been architecturally strengthened by algorithmic systems she could not have anticipated. Dean describes the performance. Noumenal erosion explains the architectural conditions under which that performance comes to feel, to the individual, like the real thing.
In his exploration of digital platforms, and search engines in particular, Eli Pariser identifies three dynamics that together constitute what he terms the filter bubble — the algorithmically constructed personalised information environment within which each user increasingly operates (Pariser, 2011, pp. 9–10). His analysis provides a valuable insight into how individual agency is shaped by platform architecture. Pariser approaches the architectural level at which the noumenal erosion argument operates. The three dynamics are:
First, each user inhabits a personalised information universe. Content delivered to you differs from the content delivered to me, shaped by algorithmic inference of individual preference.
Second, the bubble is invisible. We do not know what is being withheld, what bias is built into the selection or how extensively the algorithm has narrowed the informational world we inhabit.
Third, and most significantly, we do not choose to enter the bubble.
The third dynamic is a direct consequence of the first two. If your informational universe has been individually constructed and its construction is invisible to you, you cannot choose not to inhabit it. This is the point at which Pariser’s argument comes closest to the noumenal level. The bubble cannot be perceived as a bubble from within it — you cannot opt out of a frame you cannot see. Platform architecture is shaping what you see before you have any opportunity to consent to or contest the terms of that shaping.
Whilst useful, the third dynamic approaches the noumenal argument without quite reaching it. Pariser’s is a content argument. He focuses on what the algorithm delivers, what it withholds and how personalisation narrows the environment delivered. The filter bubble is a diagnosis of the phenomenal surface, an examination of the content layer of democratic participation. What reaches you is distorted, and what is withheld from you is hidden. These are real and serious harms to democratic thinking. The noumenal erosion argument operates at the prior architectural level. It is not concerned with the content of what is delivered; the concern here is with what the architecture does to the conditions under which we form the capacity to receive, interpret and act as democratic agents. The filter bubble shapes what you see. Noumenal erosion shapes the individual who does the seeing — before any content arrives.4
What Pariser describes from within the individual experience, Habermas theorises from without, as a structural transformation of the public sphere itself. Together they map the same condition at different levels of analysis. Both provide a means of assessing what is happening through social media and of framing objections to it. They also reach the same limit. Habermas’s account of the new structural transformation of the public sphere identifies the platform as a systemic force that has radically altered the previously predominant pattern of communication in the public sphere, not just its content (Habermas, 2022, p. 159). Where his earlier work theorised the communicative conditions of rational-critical debate, his later diagnosis recognises that the platform has not simply populated the public sphere with bad content but has restructured the sphere itself. The algorithmic logic of engagement maximisation drives communication networks centrifugally outward while simultaneously condensing them into circuits that seal themselves off from each other (Habermas, 2022, p. 160). The emancipatory promise of the early digital public sphere has morphed into what Habermas terms the “libertarian grimace of world-dominating digital corporations” (Habermas, 2022, p. 160). These are structural transformations, not content problems. In this respect Habermas’s framework reaches further than Pariser’s. It provides a means of considering the conditions of communication rather than just the phenomenal layer of what is communicated. His insistence that maintaining the inclusive and deliberative character of the public sphere is not a matter of political preference but a constitutional imperative (Habermas, 2022, p. 168) provides the normative ground on which the argument of this section rests.
Habermas reaches a structural limit. His concern is with the communicative conditions of the public sphere: what happens to the quality of deliberative exchange when its structural prerequisites are dissolved. This is a diagnosis at the level of communicative output. What his framework does not fully theorise is the prior condition: why the deliberative remedies his own theory prescribes are structurally insufficient, why the space of reasons cannot be reconstructed from within the platform logic. The measurement frameworks cannot reach the architectural level. As we have seen with the GDPR and OSA examples, the regulatory frameworks cannot address it. The communicative capitalism diagnosis approaches it without reaching it. The platform does not only degrade the quality of what individuals say to each other. It shapes the conditions under which they become the kinds of political agents capable of saying anything that reaches the threshold of democratic significance at all. As we have seen, the agency that remains is real in appearance but illusory in substance. We are witnessing the phenomenal performance of democratic participation whose conditions have already been architecturally set.
The paradox at the core of my argument in this section — participation at its phenomenal peak, democracy at its lowest point in decades — has its explanation. It is not that individuals are receiving the wrong content, or that their messages are circulating without response, or that the regulatory framework has failed to impose the right obligations on the right corporate or administrative actors. It is that the architectural conditions under which democratic agency is formed have been systematically occupied, in Forst’s sense, by systems accountable to no justificatory standard that citizens could meaningfully contest. Citizens are counted, in the precise sense Dean identified twenty years ago, but not heard. As Dean predicted, the consequence of that foreclosure is not only political passivity. More corrosive to democracy, it is the progressive figuring of the political other not as an opponent to be engaged but as a threat to be destroyed (Dean, 2005, p. 54). That is the condition Section 3 addresses.
Platform algorithms operate in two distinct but conjoined modes, and it is this conjunction that allows the participation paradox to persist. What we encounter, as contributors and consumers, is the content-pushing mode: the recommendation, ranking and amplification of content designed to maximise engagement. This is the mode addressed by Pariser’s filter bubble analysis and Sunstein’s account of echo chambers. In this mode, the problem appears at the phenomenal level of algorithmic curation. It produces observable outputs and is therefore open, at least in principle, to regulatory and legal intervention. The growing legal and regulatory record increasingly confirms this reach.
The European Commission’s formal proceedings against TikTok (European Commission, 2024) mark an important moment in which the state’s claim to democratic authority takes phenomenal form. Acting to protect its citizens, the Commission is making the claim that TikTok’s impact is such that it strikes at the democratic principles to which all citizens are entitled. As a result, TikTok is accountable not only for physical and mental harms but also for democratic harms: in this instance, the erosion of the ability to participate. Through its investigation of algorithmic recommendation systems and addictive design features, the Commission is attempting to force digital architectures to comply with visible democratic requirements. In doing so, it acts on the assumption that platform architecture has weakened the citizen’s effective standing as a democratic participant, requiring institutional intervention to restore a relation of justification that the platform itself no longer sustains.
In Weberian terms, the Commission occupies the structural position of the institutional proxy — the rational-legal agent filling the justificatory gap that individual citizens can no longer bridge, demanding compliance on behalf of a public interest (Weber, 1905/2001; Pagano, 2025). Forst names precisely what these enforcement actions seek to reconstruct: justificatory addressability, the creation, in proxy form, of a means through which a claim about platform architecture can reach an addressee with both authority and obligation to respond (Forst, 2017). When TikTok and Meta obstructed researcher access to algorithmic data in the 2025 proceedings (European Commission, 2025), they were not merely committing a compliance failure in the ordinary regulatory sense. They were reasserting Zuboff’s second declaration — the proprietary exemption — against even this minimal reconstruction of justificatory addressability (Zuboff, 2019, p. 178). That reassertion represents noumenal closure made visible at the institutional level: the vault holding firm against the most powerful regulatory actor yet to test it.
The Commission’s findings confirm that the content-pushing mode of these algorithms operates through narrowing circuits that elude individual agency (European Commission, 2024). The verdict in KGM v. Meta Platforms (Los Angeles Superior Court, 25 March 2026) demonstrates the judiciary’s attempt to establish the same standard of accountability. These interventions are important. They signal recognition that platform harms extend beyond individuals to democratic substance itself. However, they are reactive to the phenomenal layer — the created digital reality that Mode 1 sustains. The deeper architecture does more than generate those harms: it systematically reorganises the conditions under which participation can still count as participation, rendering even the most forceful regulatory claims structurally indirect.
Mode 2 of the platform operation cycle provides a gateway into the noumenal problem of closing off. Where the content-pushing mode acts on what users see, the closing-off mode acts on the conditions under which what they see can function as a justificatory claim at all. It is the algorithmic operation through which the space of reasons is pre-determined: the arena in which what is thinkable and contestable as a demand for justification is shaped prior to any act of participation. While architecturally linked to content pushing, it forecloses the conditions under which content could be received as a claim requiring response. Zuboff describes how the closing-off mode operates: the extraction of behavioural data without meaningful consent and the proprietary protection of the governing logic that makes it unchallengeable (Zuboff, 2019, p. 178). In this respect, Zuboff captures how the erosion occurs: the mechanics of closure. Krouglov helps to show what those mechanics produce. The closing-off mode estranges the individual from meaningful participation, transforming agency into a subjectivity built around platform metrics rather than democratic participation (Krouglov, 2025, p. 196). Algorithmic alienation is the process through which platform architecture turns participation into measurable data points. Political engagement becomes interaction dictated by metrics. The consequence is a restructuring of democratic subjectivity before the individual arrives at any act of participation. This is what Krouglov terms the “metamorphosis of political subjectivity” (Krouglov, 2025, p. 199).
Focusing on the content-pushing mode alone does not offer a holistic account of algorithmic harm because it does not reach this prior level. That limitation is identified by Alnemr, who argues that deliberative democracy’s concerns about algorithmic harms remain incomplete without addressing the structural conditions under which citizens’ capacity to contest those harms is itself foreclosed (Alnemr, 2025, p. 2). The closing-off mode produces Dean’s registration effect, the subjective sense of participation without its democratic substance, and explains why indices can measure decline while the phenomenal apparatus of participation remains intact. It is also the mode that the EC proceedings and the KGM verdict cannot reach. The content-pushing mode is addressable, however imperfectly and reactively. The EC and KGM bear witness to that. The closing-off mode is not addressable in the same way, not because it is technically inaccessible, but because it is structurally protected by the proprietary exemption at every level of its operation. This is the noumenal problem and it is one that regulatory interventions can neither reach nor resolve. The question it leaves is not whether platform architectures can be perfected, but whether they can be held to a minimum justificatory standard at all.
Given the foregoing, can the issue of noumenal erosion be resolved? Not in any thick sense. A fully transparent, fully co-authored, fully deliberative platform architecture would be one in which every user stands as an equal normative author of its governing logic. Such perfection is not just unachievable but conceptually unknowable: it offers no standard by which its own attainment could be confirmed. The appropriate standard must therefore be thinner, deliberately so. As Williams first established, thick ethical concepts carry rich world-guided content while thin concepts operate at higher levels of generality (Williams, 2006, pp. 140–142). Thick standards demand justification of every design choice and outcome as a comprehensive ideal. Such deliberative completeness is neither technically deliverable by platforms nor collectively authorable by citizens. Thin standards, by contrast, name only the structural preconditions without which justificatory agency cannot function at all. Whether everyone is then treated fairly is the subject of a different discussion.
In his exploration of epistemic actorhood, Risse identifies the structural condition required, pointing to the agency rights that digital lifeworlds demand: democracies committed to human rights have failed to evolve their governance systems at the pace of surveillance capitalism, turning technological adaptation into a condition of the survival of those commitments themselves (Risse, 2021, p. 355). The failure is not just institutional but normative. Human rights frameworks, as they stand, cannot protect epistemic actorhood in digital lifeworlds without the upgrade Risse demands.5 Alnemr arrives at a parallel diagnosis from a deliberative democracy perspective: existing accounts of algorithmic harm remain incomplete because they do not address the structural conditions under which citizens’ capacity to contest those harms is itself foreclosed (Alnemr, 2025, p. 2). This paper accepts both diagnoses while pressing further: institutional upgrade alone proves insufficient. What is required is a justificatory standard for platform architecture itself. The standard can be articulated as three minimum conditions that precede any regulatory intervention.
The first is architectural transparency. It functions at the epistemic level, concerning what users can and cannot know about how a platform works. This demands more than the episodic explainability of current regulatory frameworks. Users must be able to identify the governing logic of platform systems well enough to contest their justificatory structure. Without this, the space of reasons remains occupied by proprietary code rather than opened to democratic claim-making.
The second is justificatory addressability. It functions at the relational level, establishing a relation of accountability between the individual and the platform. Platforms must provide a real addressee capable of receiving and responding to justificatory claims — not through compliance interfaces or regulatory proxies, but through a structure in which citizens can address those who exercise architectural power over their participation. Absent this, the citizen remains a data point rather than a justifying agent.6
The third is minimal co-authorship. It functions at the structural level, requiring the platform to enable user-directed review and change. Users must possess some democratic role in constituting the basic rules that organise participation itself, moving beyond reactive consultation to structural involvement in platform governance. This is not collective design of every algorithm, but the right to participate in determining the conditions under which participation retains democratic substance.
Taken together, these conditions restore the three elements Weber regarded as essential to legitimate rational-legal authority: intelligibility, an accountable addressee and publicly constituted rules. Without them, we are left with the digital cage: rule without justification (Weber, 1905/2001, p. 123).
Noumenal erosion constitutes an existential threat to self-rule because it operates below the threshold at which existing democratic tools can intervene. Regulatory attempts to establish accountability, such as the KGM v. Meta verdict, are vital but ultimately limited; they address phenomenal harms like addiction and disinformation without reaching the proprietary closure that determines what counts as participation in the first place. When platform architectures claim the space of reasons as a proprietary enclosure, democracy hollows from within. The structural paradox reveals that what began as engineering methodology has become a fundamental assault on the foundations of collective agency.
This paper has traced the structural paradox of contemporary platform democracy: participation reaches phenomenal saturation while democratic legitimacy erodes beneath it. Section 1 established the noumenal/phenomenal frame, revealing platform architectures as digital cages that seal off the space of reasons from justificatory claim-making. Section 2 demonstrated why backsliding indices and regulatory interventions remain blind to this prior architectural level — measuring outputs while Mode 2 silently reorganises the conditions of democratic subjectivity itself. Section 3 diagnosed the two-mode operation and proposed three minimum conditions for platform legitimacy: architectural transparency, justificatory addressability and minimal co-authorship.
These conditions expose the category error in conventional platform governance. Content moderation, algorithmic transparency duties and liability regimes address phenomenal harms such as filter bubbles, disinformation and addiction, but cannot reach the proprietary closure that determines what counts as participation at all. The European Commission and KGM v. Meta bear witness to regulatory reach, but also to its structural limit: even the most forceful institutional proxies cannot reconstruct what Mode 2 has foreclosed.
The democratic stakes are existential. Participation is not one democratic good among others, but the prior condition of legitimacy itself. Citizens exercise self-government only insofar as they appear to one another as justifying agents capable of contesting the rules that organise their common life. When platform architectures erode this capacity below justificatory visibility, transforming agency into metrics and claims into data points, democracy hollows from within.
This is not a manageable governance defect, but an emergency for self-rule. The tools of democratic accountability presuppose a space of reasons that platforms have already claimed as proprietary enclosure. What began as engineering methodology (“move fast and break things”) has become a structural assault on the foundations of collective agency. Counted, still, but not heard.
1 The structural inaccessibility of platform architecture to justificatory challenge is illustrated by KGM v. Meta Platforms, Inc. et al. (Los Angeles Superior Court, 25 March 2026, https://www.bbc.co.uk/news/articles/c747x7gz249o). The trial held social media platforms liable for addictive design. Importantly, the legal strategy shifted focus from platform content to platform architecture, making the noumenal level of algorithmic design judicially visible. The verdict confirms rather than contradicts the argument here: the digital cage is not just politically inaccessible but structurally resistant to the very mechanisms society has developed to hold power to account.
2 The Post Office Horizon scandal provides a warning and illustration of what noumenal erosion looks like when it reaches its logical conclusion. Although not based on a social media platform, it nonetheless illustrates how algorithms can be deployed to shut down agency and claims to justification. Over more than two decades, hundreds of subpostmasters were prosecuted for false accounting and theft on the basis of Horizon software discrepancies. The phenomenal evidence against the system’s reliability was overwhelming. Statistically, it was improbable that so many individuals across so many locations were simultaneously committing fraud. However, the noumenal authority of the algorithmic system, protected by institutional and legal structures, rendered that evidence inadmissible as a justificatory claim. The subpostmasters were counted: prosecuted, convicted and financially ruined, but, crucially, they were not heard. Their demands for reasons were not just unanswered. They were structurally invalidated by a system whose governing logic could not be challenged from within the terms it had itself established.
3 The addressee problem is not peculiar to the UK’s regulatory settlement. The European Union’s Digital Services Act (DSA, 2022) reproduces the same structural limitation at supranational scale. Its enforcement confirms where regulation is pointed: formal proceedings against X investigating content moderation practices and algorithmic transparency; against Meta examining mechanisms for flagging illegal content; against TikTok over risk assessment obligations (European Parliament Think Tank, 2024). The regulatory architecture is elaborate, enforceable and directed almost entirely at the phenomenal surface — what users see, what is removed, what is flagged. That this pattern holds across both the EU’s supranational framework and the UK’s domestic statute confirms that the addressee problem is structural to the regulatory imagination as such. It is worth noting that platforms have actively resisted even this limited phenomenal regulation. Elon Musk characterised the DSA as a censorship tool; the Trump administration threatened tariffs in response to DSA enforcement actions against X; and the US Secretary of State mounted a lobbying campaign against the legislation (Tworek, 2025). If the content-oriented regulatory surface is already under coordinated political assault before it has fully operated, the distance between that surface and the noumenal architectural level it cannot reach becomes more stark, not less.
4 The echo chamber literature, developed in Sunstein (2017), operates at the same phenomenal level as Pariser’s filter bubble argument. Both diagnose content-level consequences of algorithmic enclosure. Pariser focuses on personalisation, Sunstein on the self-reinforcing circulation of content within ideological networks and its consequences for democratic deliberation and policy outcomes (Sunstein, 2017, pp. 9–11). Neither reaches the level the noumenal erosion argument addresses: the architectural conditions under which the capacity for justificatory claim-making is formed or eroded.
5 Risse’s demands are more specific than the governance failure claim. His framework of epistemic actorhood identifies four roles in which individuals require strengthened rights protection in digital lifeworlds (as individual knowers, collective knowers, individual knowns and collective knowns) and develops the protections each requires (Risse, 2021, pp. 370–372). Significantly, Risse argues that the exercise of infopower can only be legitimate if rights are in place that generate possibilities of participation in the design of the data episteme itself (Risse, 2021, p. 371), an approach structurally close to the minimal co-authorship condition developed here. Further, his demand for broadly shared control over collected data (Risse, 2021, p. 372) addresses the same structural absence as justificatory addressability, from a data rights rather than a justificatory standpoint.
6 The active dismantling of accountability structures makes the absence of justificatory addressability not merely a structural condition but an accelerating one. X’s dissolution of its Trust and Safety Council in December 2022 (an advisory body of approximately one hundred civil, human rights and other organisations formed to address hate speech, child exploitation and platform safety) was the first major exercise of this logic, the company informing members by email less than an hour before a scheduled meeting that the council was no longer the appropriate structure for external input (NPR, 2022). Meta’s January 2025 announcement ending its eight-year third-party fact-checking programme (framed by Zuckerberg as a return to free expression following a “cultural tipping point” signalled by the Trump election) removed the closest institutional approximation to an external accountability mechanism its platforms had operated (Meta, 2025). The EC’s preliminary finding in October 2025 that TikTok and Meta had obstructed researcher access to algorithmic data demonstrates that the same proprietary exemption is being asserted not only against users but against the most powerful regulatory actor yet to test it (European Commission, 2025). All three decisions were exercises of the proprietary exemption in its most direct form: the unilateral redesign or concealment of the architecture of participation without obligation to account for it to those it governs. That these withdrawals were presented as expansions of free speech — or, in the EC case, as legitimate intellectual property protection — is itself a symptom of the noumenal problem: the governing logic of participation is reasserted as a liberty or property claim precisely at the moment when the structural conditions for contesting it are removed.