
16.05.2023

Social media platforms and challenges for democracy, rule of law and fundamental rights

In April 2023, Professor Beatriz Botero Arcila and PhD candidate Rachel Griffin of Sciences Po Law School published a report on social media governance, commissioned by the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs (LIBE). It provides an in-depth examination of issues at the intersection of social media and the rule of law, democracy and fundamental rights, with a particular focus on hate speech, disinformation and media pluralism, drawing on an up-to-date analysis of the EU regulatory landscape, including recent developments such as the Digital Services Act.

Aim

Social media have created vast opportunities to access and share information, but they have also brought new challenges for democracy, the rule of law and fundamental rights. Policymakers face the challenge of strengthening accountability and oversight of social media to address such threats, without curtailing access to their many benefits. This study examines risks posed by today’s most widely-used social media platforms, focusing specifically on content governance (rather than issues such as how platform businesses are organised or how they handle user data). The study assesses existing EU law and industry practices which address these risks, and evaluates the opportunities and risks they present for fundamental rights and other democratic values. On this basis, it makes policy recommendations relating both to the implementation of existing law, and to possibilities for further legislative reform and policy initiatives.

The EU legal framework

Chapter 2 provides a high-level overview of the existing law governing social media content. It covers three broad areas: the overarching regulatory framework for content moderation set out in the 2022 Digital Services Act (DSA); the various other regulations that address content moderation in specific areas, such as copyright infringement, disinformation and terrorist content; and the nascent regulatory framework governing content recommendations and other aspects of platform design. It also highlights general issues and fundamental rights risks in each area.

Hate speech

Chapter 3 provides an in-depth analysis of hate speech on social media. Hate speech and other forms of targeted harassment and abuse not only violate the fundamental rights of those targeted, but undermine equal participation in the public sphere and in democratic debate. Human rights law suggests that censoring such content via content moderation can sometimes be justified to protect the rights of others, as well as these broader social interests in safety and equality. However, content moderation is not a sufficient solution and raises its own fundamental rights concerns (e.g. regarding freedom of expression and state censorship).

The chapter highlights three areas of particular concern. First, content moderation is highly unreliable: serious hate speech is often overlooked, while valuable and/or harmless content is often removed. There are significant geographic and linguistic disparities, with far less reliability in less wealthy and non-English-speaking markets. Second, content moderation is highly discriminatory, disproportionately suppressing content from marginalised users. Third, marginalised groups nonetheless need greater protection against online hate. Rather than simply expanding moderation, platforms should therefore focus on more proactive and systemic interventions, for example design changes that discourage abusive behaviour.

On this basis, the chapter highlights two main issues in the current legal framework. First, the 2016 Code of Conduct is too narrow to address these impacts, as its definition of hate speech overlooks many forms of marginalisation and abuse. Second, the EU’s encouragement of automated content moderation as a primary response raises fundamental rights concerns, and does not give adequate weight to more structural, design-based interventions. The chapter highlights aspects of the legislative framework which regulators could utilise to promote more systemic interventions. In particular, developing a new Code of Conduct on Hate Speech could effectively incentivise and provide accountability for such systemic improvements.

Disinformation

Chapter 4 analyses the regulation of disinformation on social media, starting with a necessarily brief review of relevant empirical literature. Disinformation research is a vast, complex field and fundamental questions about the causal effects of disinformation and the role of social media remain unresolved. Experts generally agree that online disinformation should not be considered in isolation, but as one dynamic element of a broader social and political environment characterised by increasing polarisation and mistrust in institutions and the media. This chapter’s analysis and recommendations should thus be read in conjunction with Chapter 5 on how to strengthen the news media more generally.

Platforms and regulators must grapple with the tension between protecting citizens against harmful disinformation and maintaining trust in the information environment, without threatening fundamental rights and political freedoms by centralising control over the ‘truth’. The chapter outlines current responses by social media platforms, including content moderation and fact-checking, and the existing EU hard- and soft-law framework. It argues that organised disinformation campaigns, and disinformation directly encouraging violence or harmful behaviour, present the greatest threats to fundamental rights and democracy. Counter-disinformation measures should be narrowly targeted towards these areas. Conversely, to strengthen fundamental rights protection, the Digital Services Act should be amended with stronger safeguards against removal of speech based only on assessments of accuracy. The 2022 updated Code of Practice on Disinformation includes positive elements, such as promoting ‘safe design practices’, as well as some that are concerning. The chapter suggests how policymakers can build on its positive elements, in collaboration with civil society, industry and independent researchers, to promote effective and fundamental rights-respecting interventions.

Finally, the chapter briefly analyses the relevance of micro-targeted political advertising to disinformation, and to political polarisation, trust and inclusion more generally. The chapter recommends incorporating stronger restrictions on targeting into the proposed Political Advertising Regulation.

Pluralism in the news media

Chapter 5 analyses how social media have impacted media pluralism in Europe, focusing on the news media due to its particular importance for democratic processes, and examining developments in the news industry in the context of wider economic trends. As a major source of audiences and traffic for publishers, platforms exercise increasing influence over journalism. The rise of digital advertising has also threatened existing news business models. These trends have encouraged market consolidation and particularly undermined local journalism, with concerning implications for political participation and accountability. New business models such as paywalls and subscriptions – attempts to compensate for lost advertising revenue – often favour the biggest and best-known news brands, reducing pluralism.

The chapter analyses recent regulatory developments, notably the European Media Freedom Act and the new press publishers’ right introduced by the Copyright Directive, and suggests that they do not adequately address the structural trends favouring consolidation and threatening smaller-scale and local journalism. Consequently, the chapter advocates expanded subsidy programmes for independent media, especially local and regional media, and discusses how EU institutions could promote new pilot schemes and best practices in this area.

Summary of recommendations

Chapter 6 summarises the detailed recommendations from each in-depth chapter. These can broadly be grouped in three areas.

DSA enforcement

The Digital Services Act leaves many open questions, for example regarding very large platforms’ obligations to assess and mitigate systemic risks, which will be essential in addressing issues such as hate speech and disinformation. The study presents detailed recommendations as to how regulators can effectively implement the relevant provisions while respecting users’ rights.

Legislative reform

The study identifies gaps where further legislative reform could strengthen the protection of fundamental rights and democratic processes. These relate in particular to three areas: the regulation of content moderators’ working conditions; strengthened safeguards against state-mandated censorship; and more stringent restrictions on personalised targeting of political advertising.

Funding and policy programmes

Finally, EU funding and support can help strengthen the broader civil society and media ecosystem to support healthy democratic debate. The report highlights three priority areas: subsidising independent media, especially local media; promoting the development of professional associations for platform ‘trust and safety’ workers; and supporting and expanding media literacy programmes.

Read the report
