The changes in governance that we may see in the coming months are not, in fact, closely related to the coronavirus pandemic. The only notable policy change was Twitter's decision to remove tweets from Trump and Bolsonaro when they were scientifically inaccurate or misleading. Facebook, by contrast, decided to maintain its rule against removing posts from politicians.
It should also be said that the crisis has spurred a period of highly publicised scientific activity, which has fed fake news but has also drawn attention to the normal controversies that are an important part of scientific debate. Fake news as such has not become more prevalent, and extreme videos like those posted by terrorists (for example, ISIS videos or the Christchurch shooting footage) have even become less common: it seems that one calamity crowds out others, at least in our limited attention. Our attention has been captured by a rather traumatising vision of the future, expressed through permanently ticking death counts, through videos of sick patients piling up in the corridors of Italian hospitals, through the shock of the draconian quarantine measures imposed in China, and so on. Social media, like other mass media, have continuously generated strong emotional responses which make it difficult to direct our attention elsewhere, except, for instance, towards the conspiracy theories targeting Bill Gates. We are in a spinning washing machine of media coverage, cycling back endlessly over the same numbers, the same dramas, the same arguments.
Even the origins of the virus are open to legitimate questions: it has certainly been shown that the virus was not created by humans, but certain authorities have lent credence to the possibility of an accident at the Wuhan Institute of Virology. As China does not share all its information on this rather sensitive subject, and the WHO investigation is not yet at the stage of being able to clarify these questions, all hypotheses remain on the table. It is therefore as difficult for social media companies as it is for governments to apply a rule for tracking and flagging unreliable sources, as would be required by the Avia law voted through France's parliament at the height of the Covid-19 crisis, since the sources contradict one another. In France, the haste to dismiss doctor Didier Raoult's claims about unproven Covid-19 treatments as fake news actually increased his popularity: an example of the well-known 'Streisand effect'. Likewise, the question of mask-wearing gave rise to so many expectations, announcements and promises that later turned out to be false that it became difficult to assess the reliability of any source.
The lesson we should take from this is that policies which classify sources as trustworthy or untrustworthy are totally inadequate in a situation where the facts are uncertain and disputed, including in scientific contexts. This should guide the development of new policies to improve the quality of media debate, which I advocate in a forthcoming book (to be published in October by Le Passeur Éditeur).
Governments and social media platforms advocate a policy of flagging untrustworthy sources, as do the mass media with their systems of fact-checking and debunking. Yet this principle has proved ineffective in the controversies around Covid-19, as it already had in the past. The a priori classification of sources belongs to an ancien régime model in which a few gatekeepers hold all the authority. In fact, when media sources disagree among themselves (as was the case with scientific recommendations during the Covid-19 crisis), and when there is a general climate of mistrust of authority, it is entirely counterproductive to artificially maintain their trusted status in the name of another authority, the social media platforms, which is even less trusted, even now that they have an Oversight Board.
What neither the platforms nor the governments want to see is that the problem of fake news is not just a problem of individual sources with malign intentions that violate the law, but also a problem of propagation, and specifically of an architecture of propagation which by itself amplifies the problem enough to make it insoluble. Indeed, Twitter, Facebook, YouTube and the other social networks are designed to generate engagement, which is to say reactivity, in order to attract as much advertising revenue as possible. And what attracts attention and generates spontaneous reactions is, above all, content and videos that are shocking, unbelievable, ridiculous, scandalous or violent. Vosoughi et al. (Science, 2018) have shown that such content spreads on social media faster, more widely and more durably than other news. If we do not manage to regulate how content is disseminated, a mode of dissemination which essentially creates a state of mental overdrive triggered by the content but encouraged by the infrastructure of the platforms, then all regulation of content will be doomed to fail.
We must break the chain of contagion for fake news as we did for the virus, even knowing that there will always be more viruses and more fake news. I argue that our high-frequency reactions to online content must systematically be slowed down, by limiting the number of retweets, shares, likes and so on per day and per account, just as we limit driving speeds for reasons of collective security; here it is a question of mental security. But we can see that even a government with a strong will to regulate will find itself at odds with the platforms' business model, which is based on advertising and attention. We will therefore have to be as firm with the platforms as we were with the automobile lobbies on the question of speed limits (which even now vary by country, as Germany shows). However, a recent move by Twitter seems quite significant, and reinforces my point about the role played by the speed of replication. Twitter decided in June to test a message asking members to read the very articles they are retweeting: 'When you retweet an article that you haven't opened on Twitter, we may ask if you'd like to open it first.' Research by Gabielkov et al. (2016) documented that 59% of the articles shared on Twitter are never opened: only their headlines are read. To prevent virality from intoxicating the whole public sphere, this seems a rather clever move… and still a soft one!
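To make the slowing-down proposal concrete, here is a minimal sketch, in Python, of the kind of per-account daily quota it implies. Everything in it is hypothetical: the DailyQuota class, the action names and the quota values are invented for illustration, and the sketch shows the principle of rate-limiting reactions rather than any platform's actual implementation.

```python
from datetime import date

# Hypothetical per-account daily quotas for high-frequency reactions.
# The action names and numbers are illustrative, not proposed values.
QUOTAS = {"retweet": 20, "share": 20, "like": 200}

class DailyQuota:
    """Tracks one account's reactions and enforces a daily ceiling."""

    def __init__(self, quotas=None):
        self.quotas = quotas or QUOTAS
        self.counts = {}        # action -> count for the current day
        self.day = date.today()

    def allow(self, action):
        """Return True if the action is still within today's quota."""
        today = date.today()
        if today != self.day:   # a new day resets every counter
            self.day = today
            self.counts = {}
        used = self.counts.get(action, 0)
        if used >= self.quotas.get(action, 0):
            return False        # quota exhausted: the account is slowed down
        self.counts[action] = used + 1
        return True

account = DailyQuota()
allowed = sum(account.allow("retweet") for _ in range(25))
print(allowed)  # 20: the remaining 5 retweet attempts are refused
```

The point of such a ceiling is not the particular numbers but the friction itself: like a speed limit, it caps the frequency of replication without passing any judgement on the content being replicated.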
In March 2018, I argued that Facebook was engaged in constructing a quasi-state, in that it had a research budget which it distributed to researchers whose work it deemed useful, not only for producing knowledge but also for steering public policy. Likewise, when Facebook successfully compelled all its users to declare their real identity (that is, the identity recognised by the civil registries of their country), it gained the ability to replace all the other authorities that provide access authorisations, by guaranteeing identity through a Facebook account. Facebook has thereby become a substitute for the civil registry, competing on this front with Google and its Android/GDrive/Gmail empire.
To reinforce this identification further, but at a much higher level of computability, Facebook uses 614 basic 'features' to profile its users' accounts on the basis of sometimes minute traces of behaviour. What Zuboff (2019) calls 'surveillance capitalism' rests on the 'prediction products' which Facebook sells to other companies, and which it obtains by identifying correlations among all these tiny traces of behaviour. Facebook the quasi-state is also the entity which knows and learns the most about our social behaviour, and which does not share this knowledge, because exclusivity helps it capture more users, brands and revenue. This empirical vision also draws on Supiot's (2015) theory that platforms are reinventing the concept of 'suzerainty', which allows them to escape the traditional inconveniences and obligations of sovereignty while retaining the advantages, which permit them to make national governments their vassals (notably for rather hazy reasons to do with employment).
The creation of an Oversight Board is one more stone in the construction of this quasi-state architecture. It is analogous to the supreme court or constitutional court that exists in every state. The supreme guarantor of rights cannot be the company itself, even though the company will still decide everything and profit from everything; it must instead have the appearance of an independent body, which is supposed to lend it an air of legitimacy. The careful selection of the Board's members (a process which is still ongoing and which accepts suggestions from anyone with a Facebook account, a very inclusive touch) cannot be criticised at this point, but it is the underlying principle that should be discussed.
Why should national law cede decisions to a court which is entirely ad hoc and without established doctrine, which does not pronounce the law, and which nonetheless situates itself immediately at the global level (as distinct from the international level of, for example, standard-setting bodies like the ITU, or multilateral institutions like those associated with the UN)? This invention, which will complete the construction of a global quasi-state, does not even refer to the principles that govern the internet (IETF, ICANN, W3C etc.), which are based on 'rough consensus and running code', with all the excesses this entails. Facebook has not even tried to create a multiparty decision-making forum including all the social media platforms: it alone will be the regulator of its own network, and thus of all the other social networks it has crushed or absorbed one by one over the last 15 years. The problem is this monopoly, and the impossibility of holding it accountable before legal, and thus democratic, authorities.
The company has skilfully taken advantage of the pretext of fake news, in a context where it was widely mistrusted following the Cambridge Analytica scandal, the only event which has really affected politicians and demanded a response. The Oversight Board is therefore at once a defensive strategy and an offensive move, an opportunity to finally complete Facebook's progressive construction of a quasi-state. Its prospects of reining in the proliferation of fake news are close to nil, since Facebook's replication machinery of shares and likes remains, notably, entirely outside the Board's jurisdiction. The whole operation is best understood at the meta-level of the institutionalisation of the platform, which has very little to do with the moderation of content.
You can read more of Dominique's work on his researcher page and on his website.