
[STUDENT ESSAY] Filter Bubbles and Their Impact on Social Media

By Giovanna Hajdu Hungria da Custódia

“Personalised outreach gives better bang for the political buck.”

Eli Pariser, The Filter Bubble (2011)


The Digital, Governance and Sovereignty Chair will now publish, on a regular basis, the finest essays and papers written by Sciences Po students in the course of their studies.

This blogpost features the essay written by a second-year student at Sciences Po (Reims campus), as part of the course taught by Rachel Griffin and entitled ‘The Law & Politics of Social Media’.


Though the magnitude of Cambridge Analytica’s role in Donald Trump’s election remains an open question, there is little doubt that the 2018 scandal surrounding the infamous consulting firm brought to the fore of public consciousness the dangerous power of personalisation and the tremendous real-world impact it can engender. It is precisely these political and societal consequences that Eli Pariser contends with in his 2011 book The Filter Bubble. Importantly, the book provided the first comprehensive analysis of the titular phenomenon, a term Pariser himself coined. Under Pariser’s definition, filter bubbles are “unique universes of information” engineered by internet algorithms and recommenders to curate a world specifically tailored to each individual’s likes, dislikes and preferences, based on previous behavioural patterns. This process is what Pariser understands as personalisation, the product of which is the filter bubble.

Pariser conceptualises filter bubbles as unique in that they combine a tripartite dynamic never before encountered in our interactions with media. Firstly, each filter bubble is one of a kind; no two are alike, and each individual is alone in their own bubble. Secondly, filter bubbles are invisible: the agendas employed by different platforms are opaque, and since we are not privy to the criteria used to create a bubble, we are not aware of being part of one. Thirdly, one’s membership of a filter bubble is involuntary: an inevitable byproduct of enrolling in a particular social media ecosystem.

For the purposes of this post, my understanding of filter bubbles, though grounded in Pariser’s definition, will be enlarged to accommodate a wider phenomenon: the narrowing effect that social media platforms have, by design, on public discourse. As such, I will contend not only with the workings and consequences of algorithmic bubbles as conceptualised by Pariser, but also with the more general impact of the overarching design of social media platforms, orchestrated around “the ability of users to follow like-minded individuals”.

In a section tackling the workings and consequences of targeted political ads on campaigns, Pariser claims that “the most serious political problem posed by filter bubbles is that they make it increasingly difficult to have a public argument”. It is important to understand that Pariser’s criticism of filter bubbles is a criticism of personalisation. I will argue that both pose an undeniable risk to discourse plurality and, as such, impoverish and threaten our ability to conduct vital public arguments.

However, I believe that the best analysis of the normative value of Pariser’s argument does not involve a binary understanding. A better starting point is to ask why filter bubbles engender this impoverishing and inhibiting effect, and to contend with the valid counterarguments to Pariser’s claim. Most notable among them is the increasingly popular understanding of filter bubbles (in my definition, not Pariser’s) as protective means of community-building that allow individuals, and especially members of marginalised social groups, to create safe spaces populated solely by people and content that share their values. Because decentralised social media platforms are founded upon this very notion of user-led community-building, it is in turn important to grapple with the challenges they pose to the maintenance of spaces of intellectual and normative plurality and discussion, given that they are designed to facilitate just the opposite.

Filter Bubbles and the Public Dangers of Personalisation 

“Moving from swing states to swing people”

Personalisation is the process of collecting and applying user data to tailor the individual “unique universes of information” described above. In other words, it is the process that generates filter bubbles. Pariser argues that the political threat of filter bubbles stems from their personalised nature, especially in the context of voter access to information during political campaigns. Microtargeting and personalised campaign ads focus exclusively on individuals (not groups, but individuals) whom campaigners consider to be ‘persuadable’: those voters commonly referred to as ‘swing voters’ in traditional political vernacular. The danger of this manner of targeted campaign advertising is that it generates large discrepancies between individuals’ access to knowledge about candidates and campaigns.
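To make this mechanism concrete, below is a minimal, purely illustrative sketch in Python of how such targeting might work. The voter file, the persuadability scores and the threshold are all hypothetical; real campaign models are of course far more sophisticated.

```python
# Hypothetical scored voter file: only voters a model labels 'persuadable'
# ever see the campaign ad, so access to campaign information diverges
# sharply from one individual to the next.
voters = [
    {"name": "Ana", "persuadability": 0.82},
    {"name": "Bert", "persuadability": 0.11},  # written off as 'non-persuadable'
    {"name": "Chloe", "persuadability": 0.67},
]

PERSUADABLE_THRESHOLD = 0.5  # hypothetical cut-off chosen by the campaign


def ad_audience(voters: list[dict]) -> list[str]:
    """Return only the voters the campaign bothers to reach at all."""
    return [v["name"] for v in voters if v["persuadability"] >= PERSUADABLE_THRESHOLD]


print(ad_audience(voters))  # ['Ana', 'Chloe'] -- Bert never learns an ad exists
```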

To illustrate the magnitude of this issue, Pariser goes as far as claiming that in a fully personalised world, so-called ‘non-persuadables’ might be left so devoid of campaign advertising and access to campaign information that they might not even be aware a campaign is happening at all. Though unabashedly exaggerated, Pariser’s claim raises valuable points that ought to be contended with. The absence or reduction of widespread campaign knowledge is dangerous to the sustainability of public argument, not least because it foments the creation of various individual realities dependent upon the information and facts that platform algorithms choose to make available to each of us.

Personalisation also facilitates the spread of misinformation by making it harder for journalists to exercise their role as enablers of public accountability, excluding them from knowledge of the candidates, politicians and the dynamics of the campaign itself. Effectively, if journalists do not fit the profile of ‘persuadables’, they may be deprived of campaign ads and information by the algorithm. Furthermore, personalisation could allow candidates to manipulate the algorithms so as to purposefully exclude the journalistic class from the campaign. Evidently, this has major implications for accountability, given that editorial media (i.e., fact-checking media) has traditionally had the responsibility of disseminating verified information that enables the public to make informed political decisions and hold their leaders and institutions to account. Thus, without journalistic input, personalisation is likely to engender an increase in the spread of misinformation.

Another issue posed by personalisation revolves not around the quality or quantity of the information made accessible to users, but rather around its subject matter. Personalisation by definition means that there is not one general campaign, but thousands of individualised campaign messages, each aimed at convincing a particular individual to endorse the product at hand: the candidate. As is the nature of personalisation, messages will target potential voters through means that interest them. This plays on the psychological inclination towards “directionally motivated reasoning”: our tendency to believe in and be susceptible to information that validates our pre-existing beliefs and biases. For instance, suppose a user and potential voter likes a lot of ‘feminist’ posts, or has recently bought a book on Amazon related to ‘feminism’; their personalised campaign message is then likely to focus on the candidate’s support for abortion rights or increased protection against domestic violence (traditionally ‘feminist’ political topics) and nothing else.
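A hedged sketch of this dynamic, assuming a hypothetical mapping from observed interests to pre-written talking points, might look as follows; the interest labels and messages are invented for illustration.

```python
# Hypothetical mapping from an observed interest to the single campaign
# talking point that best matches it.
TALKING_POINTS = {
    "feminism": "Candidate A will defend abortion rights and tackle domestic violence.",
    "economy": "Candidate A will cut taxes for small businesses.",
    "climate": "Candidate A backs an ambitious green transition.",
}


def pick_message(observed_interests: list[str]) -> str:
    """Serve the one talking point that matches prior behaviour."""
    for interest in observed_interests:
        if interest in TALKING_POINTS:
            return TALKING_POINTS[interest]
    return "Vote for Candidate A."  # generic fallback


# A user who liked 'feminist' posts and bought a feminism-related book:
print(pick_message(["feminism", "books"]))
# They see the abortion-rights message -- and nothing else.
```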

This impoverishes the quality of political discourse and public life by inhibiting individuals from obtaining a holistic overview of their leaders and prospective leaders, their values and their policy proposals. It may also inflame polarisation, in that personalisation and directionally motivated reasoning exist as a two-way street. That is to say, much as users will be shown ads based on targeted behaviour, the exposure to these ads and their selected kernel of information will reinforce users’ proclivities towards said behaviour, pushing them – at least in theory – further and further towards their extreme.

Bubbles as Protectors?

Whereas Pariser argues that the user self-involvement generated by filter bubbles inhibits communication and discourse, Chinese academic Longxuan Zhao challenges this negative framing, suggesting in the process that, at least for some, filter bubbles are not barriers but vectors for communication and information acquisition. In a study published earlier this year, Zhao conducted an interview-based investigation of the perceptions that queer Chinese men harbour of the Chinese social media platform Zhihu, which seems to confirm his hypothesis.

The Chinese equivalent of Quora, Zhihu is a forum-based social media platform structured around a question-and-answer format, whereby chats are created to debate questions posed by users. Content and forums are recommended to users following the personalisation model. The foundation of Zhao’s study rests on the social power he attributes to what Eslami et al. define as “algorithmic folk theories”, a term that refers to the perceptions individuals harbour of platform algorithms and the narratives (positive, negative, etc.) they attach to them. The social power of these folk theories derives from the way in which users’ perceptions of algorithms change the manner in which they subsequently interact with them. Since predictive algorithms respond to user behaviour, this can allow users to influence algorithms in much the same way that they are influenced by them.

Through his study, Zhao discovered that a sizable proportion of the queer male Zhihu users he interviewed endorsed a positive framing of Zhihu’s algorithms. These users saw algorithms as vectors for the creation of “exclusive networks” and information barriers – most commonly understood as filter bubbles – that allow traditionally marginalised individuals access to various queer-only communities and shield them from potentially unfriendly or dangerous heterosexual intruders.

This framing challenges the traditional viewpoint, held by Pariser and others, that regards information barriers and filter bubbles as hindrances to the interaction and “flow of views” between people. It also disputes the perspective that views users as passive in their interactions with algorithms. In fact, Zhao posits that the men who informed his study have engaged in what he terms a process of domestication vis-à-vis the Zhihu algorithms, achieved through active and intentional interactions with LGBTQ+ content on a long-term, daily basis so as to leave “trackable clues for the algorithms” in a discernible attempt to manipulate the outcome of their personalised algorithmic curations. Ultimately, the viewpoint that regards algorithms as protectors and sources of community strength directly challenges Pariser’s view of plurality as the cornerstone of public life – a heteronormative luxury that, many queer scholars would argue, is unfortunately not afforded to the LGBTQ+ community, due to the dangers plurality often poses to the marginalised.
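A minimal sketch of the feedback loop behind this ‘domestication’ can illustrate the point. The recommender logic below is a hypothetical stand-in, not Zhihu’s actual system: by deliberately and repeatedly engaging with chosen content, a user steers a behaviour-driven recommender towards the bubble they want.

```python
from collections import Counter

history: Counter = Counter()  # accumulated behavioural signals


def interact(tag: str) -> None:
    """Leave a 'trackable clue' for the algorithm."""
    history[tag] += 1


def recommend(catalogue: dict[str, set[str]]) -> str:
    """Recommend whichever forum best matches accumulated behaviour."""
    return max(catalogue, key=lambda forum: sum(history[t] for t in catalogue[forum]))


catalogue = {
    "queer community forum": {"lgbtq", "community"},
    "general politics forum": {"politics", "news"},
}

# Active, intentional, long-term daily interaction with LGBTQ+ content...
for _ in range(30):
    interact("lgbtq")

# ...domesticates the recommender into curating the desired bubble.
print(recommend(catalogue))  # 'queer community forum'
```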

Decentralised Social Media and the Filter Bubble Phenomenon

Following increasingly pronounced public awareness of and concern over data privacy and algorithmic manipulation, many users of centralised social media platforms have progressively migrated to increasingly popular decentralised platforms. Decentralised social media platforms are networks of interconnected servers that operate on the principle of interoperability, whereby users are able to communicate with one another despite using independent servers.

The biggest selling point of these platforms is that they harbour no single, centralised authority for moderation and curation. Instead, they are orchestrated around the intent of giving power back to users by allowing them to join servers that correspond to their values – each server operates by its own distinct moderation criteria. This could mean users are exposed almost exclusively to other users and content that align with their beliefs, behaviours and inclinations. Some notable examples are Mastodon (the decentralised equivalent of Twitter), PeerTube (YouTube) and diaspora* (Facebook).
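As a rough illustration of the federated model, here is a sketch loosely inspired by how Mastodon-style instances each apply their own rules. The server names, topics and moderation logic are hypothetical, and no real federation protocol is modelled.

```python
from dataclasses import dataclass, field


@dataclass
class Server:
    """One independent instance with its own moderation criteria."""
    name: str
    blocked_topics: set[str] = field(default_factory=set)
    timeline: list[str] = field(default_factory=list)

    def accept(self, post: str, topics: set[str]) -> None:
        """Apply this server's own rules to an incoming post."""
        if not topics & self.blocked_topics:
            self.timeline.append(post)


def federate(servers: list[Server], post: str, topics: set[str]) -> None:
    """Interoperability: the same post is offered to every server."""
    for server in servers:
        server.accept(post, topics)


alpha = Server("alpha.social", blocked_topics={"politics"})
beta = Server("beta.social")

federate([alpha, beta], "Election hot take", {"politics"})
federate([alpha, beta], "Cat photo", {"pets"})

print(alpha.timeline)  # ['Cat photo'] -- alpha's users never see the politics post
print(beta.timeline)   # ['Election hot take', 'Cat photo']
```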

In my view, decentralised social media can be understood as an institutionalisation of Zhao’s theory of the domestication of filter bubbles: instead of having to manipulate the algorithm to control the outcome of the personalisation process, users are given a direct framework by the platform through which to curate their own filter bubbles. It is hard to say whether Pariser would even consider the experiences of decentralised social media users to be filter bubbles per se, given that an intrinsic component of his definition is that filter bubbles are involuntary and invisible. On the other hand, decentralised user-led personalisation could engender exactly the same impacts on discourse as centralised algorithmic personalisation. Indeed, considering the human propensity towards directionally motivated reasoning, it is unlikely that users will intentionally set out to create pluralistic personal curations that expose them to views, users or behaviours that stray too far from their own. As such, I fear that decentralisation, although more democratic in its room for choice, might end up impoverishing public discourse in much the same way as algorithmic personalisation, by coaxing us into existing in intellectually sterile, artificial bubbles of our own making.

Conclusion

I agree with Pariser’s assessment of filter bubbles as a threat to political discourse and public life, and his identification of plurality as the cornerstone of any healthy democratic society. Furthermore, despite sympathising with the experiences and concerns of queer communities and acknowledging the need for the maintenance of some exclusive spaces for marginalised groups, I propose that our efforts as human beings and policy-makers should be geared towards fostering a society in which all individuals are embraced and where marginalised people feel safe to partake in public life. 

As such, I humbly suggest that a potential avenue for mediating between all of the issues and concerns discussed herein is the increased adoption of diversity-based recommenders into the fold of centralised social media moderation. Much as algorithms can be coded to show us content in line with our previous behaviour, they can also be instructed to expose us to content that diverges – to varying degrees – from our viewpoints. Through a partial implementation of these recommenders, platforms could sustain their popular model of personalisation – which both benefits them and is apparently preferred by users themselves – whilst contributing to the safeguarding of plurality, which must be understood as a public value of the utmost importance.
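To sketch what such a diversity-based recommender could look like in principle, here is a toy re-ranker: mostly personalised results, plus a tunable quota of content deliberately drawn from outside the user’s usual interests. The item lists and the diversity_share parameter are hypothetical.

```python
def rerank(personalised: list[str], diverse: list[str],
           k: int = 5, diversity_share: float = 0.2) -> list[str]:
    """Blend top personalised items with a quota of diverging content."""
    n_diverse = max(1, int(k * diversity_share))
    return personalised[: k - n_diverse] + diverse[:n_diverse]


personalised = ["feminist essay", "abortion-rights explainer",
                "book review", "protest coverage"]
diverse = ["farm-subsidy debate", "defence-policy forum"]

print(rerank(personalised, diverse))
# Four familiar items plus one deliberately divergent one.
```

The diversity_share knob is the point of the design: it lets a platform keep its personalisation model largely intact while guaranteeing some minimum exposure to diverging views.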


Giovanna Hajdu Hungria da Custódia is a current second-year student at Sciences Po’s Collège universitaire, where she majors in Politics and Government with a minor in Law. Giovanna is interested in social media content moderation policies and in particular the influence they exert on narrative-building processes and information consumption within the current information economy. She will be spending the next year studying public policy at King’s College London.