17 min read — Analysis | Media | Legislation | Free Speech

Invisible Filters: Shadow Banning at the Crossroads of Free Speech and Competition Law in the EU

Shadow banning epitomises the complex challenges at the intersection of transparency, competition law, and fundamental rights in the digital era.

By Candela Fernández Pascual — Social Media Manager & Tech Law Correspondent

December 30, 2024 | 15:30


Introduction

In the digital era, content moderation has become indispensable for ensuring online safety, combating disinformation, and safeguarding public discourse. However, the emergence of shadow banning—a practice whereby digital platforms, particularly Big Tech companies, covertly suppress the visibility of content without informing users—raises intricate legal and ethical questions that challenge traditional regulatory frameworks. Shadow banning occupies a critical intersection between transparency, competition law, and fundamental rights, making it a pressing issue for regulators and policymakers.

This article examines shadow banning in the context of EU law, focusing on its implications for transparency under the Digital Services Act (‘DSA’), the Digital Markets Act (‘DMA’) and the Artificial Intelligence Act (‘AI Act’), for competition law, and for the protection of freedom of expression.1 By its very nature, shadow banning undermines the transparency objectives central to modern content moderation frameworks. Moreover, when deployed by platforms with significant market dominance, shadow banning can distort competition, potentially constituting an abuse of dominance under Article 102 TFEU or forming part of collective boycotts prohibited under Article 101 TFEU. These practices further raise significant concerns about private censorship and the role of digital platforms as gatekeepers of public discourse, posing threats to the exercise of fundamental rights and the integrity of the digital marketplace.

Building on Polański’s insights into the incentives and mechanisms driving coordinated suppression of content,2 this article identifies shadow banning as part of a broader trend initially used to combat terrorism-related content but now expanding into more opaque content moderation practices. Such developments risk curtailing both freedom of expression and fair competition, highlighting the urgency of the legal and regulatory challenges examined herein.

The analysis is divided into four sections. The first explores how shadow banning challenges the transparency obligations established by the DSA, particularly in light of algorithmic opacity. The second investigates the antitrust implications of shadow banning, considering its potential to constitute exclusionary conduct under Article 102 TFEU and its connection to anti-competitive agreements under Article 101 TFEU. The third section addresses the impact of shadow banning on freedom of expression, including a case study on Facebook’s practices during the 2019 UK general election. Finally, the fourth section offers practical suggestions for reform, including proposals to categorise AI-driven content moderation systems as high-risk under the AI Act, enhance transparency requirements, and strengthen oversight and accountability mechanisms.

This comprehensive framework aims to advance the legal discourse on shadow banning by proposing a path forward to align content moderation practices with the EU’s principles of transparency, competition, and fundamental rights protection.

Shadow Banning and Transparency 

Transparency in content moderation is a cornerstone of the DSA, which imposes significant obligations on online platforms to disclose their moderation measures, including any restrictions on content visibility. Article 17 of the DSA mandates that platforms provide a “Statement of Reasons” for all moderation actions, thereby safeguarding users’ rights to due process and ensuring that content moderation practices remain transparent. Nevertheless, shadow banning—a practice characterised by its deliberate concealment and secrecy—poses a direct challenge to the DSA’s transparency objectives.
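To make the contrast concrete, the following minimal Python sketch juxtaposes an Article 17-style restriction, which records a statement of reasons the user can act on, with a covert demotion that leaves no user-facing trace. All names and fields here are hypothetical illustrations, not a description of any platform's actual systems.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Post:
    author: str
    text: str
    visibility_weight: float = 1.0          # 1.0 = normal reach in feeds and search
    notices: list[dict] = field(default_factory=list)

def restrict_with_statement_of_reasons(post: Post, grounds: str) -> None:
    """DSA Article 17-style action: the restriction is recorded and the user
    learns what was done, on what grounds, and how to seek redress."""
    post.visibility_weight = 0.0
    post.notices.append({
        "action": "visibility restriction",
        "grounds": grounds,
        "redress": "internal complaint or out-of-court dispute settlement",
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

def shadow_ban(post: Post) -> None:
    """Covert demotion: the post stays nominally online, but its reach
    collapses and the author receives no notice to contest."""
    post.visibility_weight = 0.05           # drastically reduced, never disclosed
    # nothing is appended to post.notices, so there is nothing to appeal
```

The asymmetry is the legal crux: the first action produces a record to which due-process mechanisms can attach, while the second is invisible to the very safeguards the DSA builds around notification.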

The covert nature of shadow banning undermines the requirement for openness by enabling platforms to manipulate content visibility without notifying the affected users. Platforms often cite technical complexity and security concerns to justify the opacity of their algorithms.3 Critics argue that such opacity risks undermining users’ rights to due process and transparency, especially when shadow banning is used to target disinformation—a type of harmful content that is notoriously difficult for automated systems to detect with precision.4
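The precision problem has a simple statistical core. The back-of-the-envelope Python calculation below, using invented but plausible figures, shows why automated flagging of a rare category such as disinformation yields mostly false positives even when the classifier looks accurate on paper.

```python
# Illustrative base-rate arithmetic (all figures invented): when the targeted
# category is rare, even a seemingly accurate classifier flags mostly
# legitimate content.
prevalence = 0.01           # share of posts that actually are disinformation
sensitivity = 0.90          # share of true disinformation the system catches
false_positive_rate = 0.05  # share of legitimate posts wrongly flagged

flagged_true = prevalence * sensitivity                   # 0.009
flagged_false = (1 - prevalence) * false_positive_rate    # 0.0495
share_wrong = flagged_false / (flagged_true + flagged_false)
print(f"{share_wrong:.0%} of flagged posts are false positives")  # ~85%
```

If most flags are wrong and enforcement is silent, much of the speech suppressed is legitimate, which is precisely the due-process concern critics raise.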

Under the DSA, shadow banning emerges as a legal and ethical dilemma. While the regulation in effect prohibits undisclosed visibility restrictions, limited exceptions exist, notably for high-volume deceptive commercial content.5 These exceptions highlight a tension between promoting transparency and addressing operational realities. From a competition law perspective, this tension warrants further scrutiny. For example, platforms might leverage the opacity of their algorithms to influence market dynamics subtly, favouring their own services or partners over competitors. Such practices could fall under the purview of antitrust law, raising questions about whether transparency obligations under the DSA are sufficient to mitigate potential abuses of market dominance.

Furthermore, although the DSA imposes transparency requirements, it stops short of mandating access to the source code of the algorithms that drive content moderation decisions.6 This limitation restricts external oversight and accountability, allowing shadow banning to persist as a practice that is theoretically transparent yet practically opaque.

This paradox—where transparency obligations coexist with concealed practices—raises broader questions about the adequacy of current regulatory frameworks. Some scholars argue that the evolving nature of disinformation, troll farms, and foreign interference in democratic processes necessitates a rethinking of free speech protections.7 While the DSA, the DMA, and the AI Act collectively address aspects of these challenges, none of them explicitly addresses shadow banning or its broader implications for competition law. This regulatory gap underscores the need for more comprehensive measures that address both the procedural and substantive dimensions of content moderation practices.

Antitrust Implications of Shadow Banning

From the perspective of competition law, shadow banning raises significant concerns regarding potential abuse of dominance and market distortion. At the core of this issue lies the interplay between shadow banning and collective decision-making on handling online information8 (mainly content moderation decisions), which may result in practices akin to “collective boycotts.”9 Such boycotts can target not only competitors but also consumers, thereby exacerbating their harmful effects.

Dominant platforms may exploit shadow banning to suppress the visibility of competitors’ content while favouring their own services. Such conduct aligns with the concept of exclusionary practices under Article 102 of the Treaty on the Functioning of the European Union (TFEU), which prohibits abuse of a dominant position. For instance, platforms might prioritise content that aligns with their commercial interests, while covertly relegating rival content without offering any transparent justification. This covert manipulation resonates with the idea of “content cartels,”10 where platforms engage in collective actions to suppress specific types of information to their strategic advantage.
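A stylised sketch shows how such self-preferencing could hide inside an ordinary ranking function; the weights and field names below are invented for illustration and do not describe any real platform's algorithm.

```python
def rank_score(item: dict, own_or_partner_publishers: set[str]) -> float:
    """Hypothetical feed-ranking score. The engagement term is the kind of
    signal a transparency report would describe; the multipliers are the
    covert adjustments that Article 102 TFEU scrutiny would target."""
    score = item["predicted_engagement"]        # the 'official' ranking signal
    if item["publisher"] in own_or_partner_publishers:
        score *= 1.5                            # silent boost for own or partner content
    elif item.get("is_competitor", False):
        score *= 0.1                            # silent demotion of rival content
    return score
```

Because only the final ordering is observable, neither users nor competitors can distinguish the demotion from an ordinary relevance judgment, which is what makes such conduct difficult to detect and prove.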

The coordination of undertakings in the digital market raises additional concerns under Article 101 TFEU, which prohibits anti-competitive agreements. Shadow banning, when coordinated among dominant platforms, is particularly pernicious due to its inherent secrecy and opacity. Such practices amplify the anti-competitive effects of traditional suppression, as they undermine market competition while evading public scrutiny. These coordinated actions risk creating a digital market where innovation is stifled, diversity is constrained, and smaller competitors are effectively excluded.

Moreover, the implications of shadow banning extend beyond competition law to intersect with fundamental rights, particularly freedom of expression.11 Coordinated shadow banning practices not only limit the accessibility of certain viewpoints but also foster an uneven playing field where dominant platforms exert disproportionate control over the digital public sphere, further exacerbated by the secrecy and opacity that characterise this practice.12 This dual impact on competition and free speech underscores the urgent need for a comprehensive regulatory response that addresses both the procedural fairness and substantive legitimacy of such practices.

Free Speech in the Context of Shadow Banning

Shadow banning inherently infringes on freedom of expression, as it suppresses content without users’ knowledge or recourse. Such practices, often characterised as a form of “private censorship,”13 raise critical concerns about the role of dominant platforms as gatekeepers of public discourse. The DMA underscores this issue, particularly in Article 6(5), which prohibits gatekeepers from ranking their own services and products more favourably than those of third parties, a form of self-preferencing that indirectly shapes the diversity of public discourse. However, private censorship claims are typically associated with unilateral actions by dominant platforms, falling under the purview of Article 102 TFEU. In contrast, this article focuses on collective actions—referred to earlier as “content cartels”—where coordinated efforts among platforms suppress certain content types,14 exacerbating the impact on competition and free speech.

The Facebook case study

The Facebook case in the United Kingdom during the 2019 general election campaign exemplifies the chilling effect of shadow banning on free speech. Several pro-European Union (EU) civil society organisations experienced significant reductions in their online visibility due to automated restrictions imposed by Facebook. These groups, including locally focused organisations such as “Banbury for Europe,” were volunteer-driven and engaged in both online and offline campaigning.

In November and December of 2019, these groups reported “shadow bans,” where their content, while still technically available online, became significantly harder to locate. This resulted in a dramatic decrease in their daily page reach, with some groups experiencing drops exceeding 90%.15 Notably, Facebook did not accuse the affected groups of illegal activity or cite violations of its terms and conditions. Instead, the platform recommended reducing posting frequency but did not confirm whether this would prevent future restrictions. These actions were attributed to automated flagging by Facebook’s AI systems, which lacked transparency and failed to provide clear reasoning for their decisions. This raises important questions about the state’s role in protecting freedom of expression, the transparency and fairness of decisions made by online platforms,16 and the need for AI systems to provide clear reasoning for their actions.17

Furthermore, this case underscores the intersection of freedom of expression and competition law. The shadow banning practices described, wherein automated systems covertly suppressed content without transparent justification, raise significant legal questions. Under Article 102 TFEU, if Facebook is deemed to hold a dominant market position, such practices could constitute abuse of dominance through exclusionary conduct. By disproportionately disadvantaging certain groups or competitors, shadow banning diminishes visibility without a clear or lawful basis.

Evidence of coordination between platforms to implement similar suppression measures would invoke concerns under Article 101 TFEU. Such coordination, if proven, could amount to an anti-competitive agreement that collectively suppresses lawful content, harming both market competition and free speech.

The Role of AI and the AI Act

The increasing reliance on AI systems for content moderation adds another layer of complexity. The AI Act,18 which entered into force in August 2024, establishes a comprehensive regulatory framework for AI applications within the European Union. However, ambiguities remain concerning the classification of AI-driven content moderation systems as high-risk activities under the regulation.

The AI Act primarily addresses systems that employ automated data processing to profile individuals based on factors such as work performance, economic status, health, preferences, behaviour, and location.19 Exceptions exist for AI systems performing narrowly defined tasks that enhance human decision-making or detect patterns without replacing or significantly influencing prior human judgment, provided these systems operate under appropriate oversight mechanisms.20

Given the profound implications of AI-driven moderation on fundamental rights, including freedom of expression and access to information, these systems should be explicitly designated as high-risk under the AI Act. Such classification would trigger heightened regulatory safeguards, including mandatory transparency obligations and human oversight, ensuring that content moderation practices comply with competition law and uphold free speech principles.

Proposals for Reform

The interplay between antitrust regulation, fundamental rights, and AI-driven practices suggests the need for a more integrated and policy-oriented approach to address these challenges. Increasingly, competition law is viewed as a potential tool to preserve non-economic interests, such as free speech,21 within the digital environment. The following reforms are proposed to strengthen the regulatory framework:

  • Enhancing Transparency: The DSA’s existing transparency requirements should be expanded to encompass detailed disclosures regarding algorithmic decision-making processes. Platforms should provide comprehensive qualitative descriptions of their AI tools, including indicators of accuracy, error rates, and the safeguards implemented to mitigate potential harm (a sketch of such a disclosure follows this list).
  • Strengthening Oversight: Enhanced collaboration between competition authorities and regulators is essential to address the intersection of antitrust and free speech concerns. The shadow banning practices of dominant platforms should be subjected to rigorous scrutiny, particularly in cases involving coordinated actions or abuse of market dominance.
  • Categorising AI Moderation as High-Risk: The AI Act should explicitly classify AI-driven content moderation systems as high-risk activities. This would impose stricter requirements on platforms, including robust transparency obligations, human oversight measures, and detailed impact assessments to evaluate their influence on fundamental rights.
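To give the first proposal concrete shape, a disclosure under an expanded transparency regime might resemble the machine-readable report sketched below. The schema and every figure in it are hypothetical; neither the DSA nor the AI Act prescribes such a format.

```python
import json

# Hypothetical transparency disclosure for an AI moderation tool, covering
# the indicators proposed above: accuracy, error rates, and safeguards.
moderation_tool_report = {
    "tool": "text-classifier-v3",      # invented identifier
    "purpose": "detection of deceptive high-volume commercial content",
    "accuracy": 0.94,                  # illustrative figures only
    "false_positive_rate": 0.03,
    "false_negative_rate": 0.08,
    "safeguards": [
        "human review before any visibility restriction takes effect",
        "statement of reasons issued for every action (DSA Article 17)",
        "periodic independent audit of demotion decisions",
    ],
}

print(json.dumps(moderation_tool_report, indent=2))
```

Publishing error rates in this structured form would let researchers and regulators compare tools across platforms, something the purely qualitative reports currently filed under the DSA do not readily allow.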

Conclusion

Shadow banning epitomises the complex challenges at the intersection of transparency, competition law, and fundamental rights in the digital era. This article has examined its implications under EU law, highlighting its potential to undermine the transparency obligations enshrined in the Digital Services Act, distort competition through anti-competitive practices under Articles 101 and 102 TFEU, and infringe upon freedom of expression by facilitating covert censorship.

The analysis underscores the urgency of addressing shadow banning as a regulatory priority, particularly given its growing prevalence and opacity. The practice not only threatens individual rights but also disrupts fair competition in the digital marketplace. Recognising the role of artificial intelligence in enabling these practices, this article advocates for the classification of AI-driven content moderation systems as high-risk under the AI Act.

Finally, this article concludes by offering practical proposals for reform, including enhanced transparency obligations, stronger oversight mechanisms, and accountability measures to contest shadow banning decisions. By aligning content moderation practices with the EU’s principles of transparency, competition, and fundamental rights protection, these reforms aim to mitigate the risks posed by shadow banning while ensuring a more equitable and transparent digital ecosystem.

1 Paddy Leerssen, ‘An End to Shadow Banning? Transparency Rights in the Digital Services Act between Content Moderation and Curation’ (2022) http://dx.doi.org/10.31219/osf.io/7jg45, 3.

2 J Polański, ‘Antitrust Shrugged? Boycotts, Content Moderation, and Free Speech Cartels’ (2023) 19 European Competition Journal 334 http://dx.doi.org/10.1080/17441056.2023.2200612, 338-340.

3 Robert Gorwa, Reuben Binns, and Christian Katzenbach, ‘Algorithmic Content Moderation: Technical and Political Challenges in the Automation of Platform Governance’ (2020) 7 Big Data & Society 205395171989794 http://dx.doi.org/10.1177/2053951719897945.

4 Paddy Leerssen, ‘An End to Shadow Banning? Transparency Rights in the Digital Services Act between Content Moderation and Curation’ (2022) http://dx.doi.org/10.31219/osf.io/7jg45, 7.

5 Ibid.

6 Mario Santiesteban Galarza, ‘Garantías Frente a La Moderación de Contenidos En La Propuesta de Reglamento Único de Servicios Digitales’ (2022) 41 Revista CESCO de Derecho de Consumo 159 http://dx.doi.org/10.18239/rcdc_2022.41.3103, 165-167.

7 See Tim Wu, ‘Is the First Amendment Obsolete?’ (2018) 117 Michigan Law Review 547.

8 J Polański, ‘Antitrust Shrugged? Boycotts, Content Moderation, and Free Speech Cartels’ (2023) 19 European Competition Journal 334 http://dx.doi.org/10.1080/17441056.2023.2200612, 356.

9 Commission Staff Working Document, ‘Guidance on restrictions of competition “by object” for the purpose of defining which agreements may benefit from the De Minimis Notice’, SWD(2014) 198 final.

10 Evelyn Douek, ‘The Rise of Content Cartels’ (2020) Knight First Amendment Institute https://knightcolumbia.org/content/the-rise-of-content-cartels accessed 28 December 2024.

11 J Polański, ‘Antitrust Shrugged? Boycotts, Content Moderation, and Free Speech Cartels’ (2023) 19 European Competition Journal 334 http://dx.doi.org/10.1080/17441056.2023.2200612, 338.

12 Robert Gorwa, Reuben Binns and Christian Katzenbach, ‘Algorithmic Content Moderation: Technical and Political Challenges in the Automation of Platform Governance’ (2020) 7 Big Data & Society 205395171989794 http://dx.doi.org/10.1177/2053951719897945.

13 J Polański, ‘Antitrust Shrugged? Boycotts, Content Moderation, and Free Speech Cartels’ (2023) 19 European Competition Journal 334 http://dx.doi.org/10.1080/17441056.2023.2200612, 335.

14 Evelyn Douek, ‘The Rise of Content Cartels’ (2020) Knight First Amendment Institute https://knightcolumbia.org/content/the-rise-of-content-cartels accessed 28 December 2024.

15 Ivana Kottasová, ‘Facebook Bans British Far-Right Groups and Their Leaders’ [2019] CNN https://www.cnn.com/2019/04/18/tech/facebook-uk-far-right-ban/index.html accessed 30 September 2024.

16 Monica Horten, ‘Algorithms Patrolling Content: Where’s the Harm?’ (2022) https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3792097 accessed 30 September 2024.

17 Steering Committee for Media and Information Society (CDMSI), Council of Europe, ‘Content Moderation. Best Practices towards Effective Legal and Procedural Frameworks for Self-Regulatory and Co-Regulatory Mechanisms of Content Moderation’ (2021) https://rm.coe.int/content-moderation-en/1680a2cc18, 30.

18 European Parliament and Council Regulation (EU) 2024/1689, of 13 June 2024, laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) OJ L, 2024/1689.

19 Article 6 and Annex III, AI Act.

20 Michael Veale and Frederik Zuiderveen Borgesius, ‘Demystifying the Draft EU Artificial Intelligence Act’ (2021) https://papers.ssrn.com/abstract=3896852 accessed 28 December 2024.

21 Richard Posner, ‘Free Speech in an Economic Perspective’ (1986) 20 Suffolk University Law Review 1.

Disclaimer: While Euro Prospects encourages open and free discourse, the opinions expressed in this article are those of the author(s) and do not necessarily reflect the official policy or views of Euro Prospects or its editorial board.
