By Rachel Griffin
Facebook has been having a bad few weeks. Starting on 13 September, the Wall Street Journal published a series of damning stories based on leaked documents from the company’s own internal research team. Among other things, the stories claimed that the company’s 2018 decision to optimise user feeds for maximum engagement had actively promoted hateful and divisive content; that it had taken no action on widespread use of the platform by drug cartels and human traffickers; that Instagram harmed adolescent girls’ mental health; and that Facebook operated a secret ‘cross check’ programme under which almost any employee could effectively exempt a public figure from all moderation policies. After a few weeks, whistleblower Frances Haugen revealed her identity, swiftly followed by an interview on the leading US news show 60 Minutes and a highly publicised appearance before the US Congress. A second, anonymous whistleblower has now come forward with further leaks relating to Facebook’s inaction on far-right content.
To add to Facebook’s troubles, it suffered a five-hour outage of all services on 4 October. It’s perhaps a mark of the intense media interest in Haugen’s leaks that commentators on Twitter suggested Facebook had orchestrated the outage deliberately to distract attention. While such claims may be overly conspiratorial, it would not be unreasonable to assume that similar considerations influenced the timing of Facebook’s announcement this week of a major company rebrand.
The really damaging aspect of the stories is not so much what they reveal about the effects of Facebook’s products, which is hardly news: social media have for years been regularly accused of damaging teens’ mental health and sowing division. Moreover, as psychologist Laurence Steinberg has pointed out, the research underlying these revelations is actually fairly flimsy. Firm conclusions about Instagram’s effects on mental health can hardly be drawn from the self-reported experiences of a sample of under 150 teenagers.
Rather, Haugen’s leaks paint a picture of a company where executives have well-founded suspicions about the risks their products pose – even if those are not yet conclusively demonstrated – and have chosen to do nothing, double down on the policies that create these risks, and hide as much as possible from the public. One quote from the ‘cross check’ story is illustrative: an internal report freely admitted that ‘we are not actually doing what we say we do publicly.’
The long-term impact of Haugen’s leaks remains to be seen. She has certainly made a political splash: following her high-profile testimony before the US Congress on 5 October, she is due to testify before the UK Parliament on 25 October, and has been invited to the European Parliament on 8 November. It’s not clear, however, that these appearances will significantly change the political weather. So far, policymakers mostly seem to have taken the leaks as further confirmation that whatever tech regulation measure they were already planning is exactly the right approach – whether it’s the antitrust reforms pushed by US Democrats, the UK’s controversial Online Safety Bill or the EU’s Digital Services Act. Haugen has also filed several complaints with the US Securities and Exchange Commission, arguing that Facebook should be liable for misleading investors as well as policymakers and the public. The Washington Post has suggested that this might actually pose the most immediate legal threat to the company.
While there is a lot to discuss in the leaked documents, and more could be on the way, one important conclusion from the leaks is a renewed case for digital sovereignty. One Wall Street Journal revelation that deserves more attention: of all the hours Facebook moderators spent finding and removing online disinformation, a stunning 87% were dedicated to content from the US – home to just 7% of Facebook’s user base. The leaks make clear that the company’s capacity to assess and detect hate speech, violence and other dangerous content in non-English languages – even very widely spoken ones, such as Arabic – is extremely poor.
Underinvestment in moderation in ‘rest of world’ markets has long been challenged by activists, and has regularly been linked to violence and political unrest – most notoriously in Myanmar, where Facebook was widely used to incite ethnic violence in the late 2010s, at a point when Facebook had a grand total of two Burmese-speaking moderators working on the region. Scathing press coverage successfully pushed Facebook to invest more in Burmese-language moderation, but has had little impact on the structural incentives that lead it to neglect non-Western markets: similar dynamics now seem to be playing out in Ethiopia.
A major takeaway from the Haugen leaks (and the chaos caused by the five-hour outage) should be the severe risks for countries around the world of relying so heavily on US companies for basic digital infrastructure. Such companies will always be structurally incentivised to invest disproportionately in safety measures in their home market – not only the most profitable, but also the one where they most need to stay on regulators’ good side – and correspondingly less elsewhere.
Sociologist Michael Kwet describes the contemporary globalised tech industry in terms of ‘digital colonialism’. From the distribution of resources for safety programmes, to the geography of undersea internet cables and the framing of intellectual property rights, the sector is structured around the basic imperative for US-based companies to extract resources from other countries and take minimal responsibility for any damage they leave behind. Without changing the underlying economic and political relationships that centralise power with US corporations, it is unrealistic to expect that a smattering of bad press coverage will lead to more than superficial change.
This is particularly the case given that press and policy attention is itself overwhelmingly focused on US interests. As media scholar Siva Vaidhyanathan has pointed out, Haugen’s leaks have caught the attention of policymakers and the public far more than those of fellow Facebook whistleblower Sophie Zhang, who revealed in 2020 that the company was aware of, and refused to take action on, coordinated disinformation and political manipulation in smaller and developing markets around the world. Vaidhyanathan suggests that Haugen and the Wall Street Journal’s slick press strategy may have had something to do with the different reactions. More importantly, though, Haugen’s leaks addressed issues that touch a nerve for the average Western Facebook user, like adolescent mental health. Zhang’s revelations about political violence and election integrity in countries like Azerbaijan, Honduras and Bolivia were never likely to attract the same concern.
Given widespread global reliance on US-based platforms, the skewed priorities of Western publics and policymakers have serious consequences. The Wall Street Journal’s reporting highlights direct links between Facebook’s refusal to invest more resources in global trust and safety programmes, and civil unrest and organised crime in countries including Ethiopia, India and Kenya. Not only the distribution of moderation resources, but also the objectives of moderation policies are shaped by Western interests: the ‘dangerous individuals and organisations’ list that Facebook uses to identify and ban terrorist content reads like a direct translation of US foreign policy priorities.
Facebook’s US user base is already declining; its growth strategy increasingly focuses on emerging markets. In some of them, many consumers rely wholly on Facebook for internet access through its zero-rated Free Basics mobile data offer, while the company is playing an increasing role in providing basic internet infrastructure, such as the new 2Africa undersea cable. In this context, it is particularly important that political debates about social media regulation are not disproportionately skewed towards US priorities.
That is just as much a problem in Europe – one of the world’s largest economies and the other regulatory jurisdiction big enough to significantly influence how social media companies run their businesses. Germany’s 2017 Network Enforcement Act, which requires large platforms’ content moderation to meet certain procedural standards, was and continues to be hotly debated. Yet domestic discussions of its effects and human rights impacts barely acknowledge that several authoritarian regimes have used it as a template to strengthen and legitimise online censorship. Current EU platform regulation debates focus heavily on issues which directly concern EU policymakers and businesses – such as (Islamist) terrorism, domestic election interference and copyright infringement – rather than the glaring neglect of basic safety measures in countries that lack the EU’s political clout.
Facebook’s announcement this week that it will create 10,000 new high-tech jobs in Europe can be understood cynically as a strategy to assuage European regulators’ concerns and keep their attention firmly focused on European interests. But digital sovereignty should not be reduced to a rhetorical justification for attempts to shift power and resources from one wealthy Western market to another. It should be a call to more fundamentally redistribute and democratise control of global platforms at the international level.
Rachel Griffin is a PhD candidate at the Sciences Po School of Law and a research assistant at the Digital Governance & Sovereignty Chair. Her research focuses on social media regulation as it relates to social inequalities.
Photo credit: dolphfyn / Shutterstock