by Florence G’sell
Twitter has traditionally refused to censor tweets from politicians, arguing that these publications are of ‘public interest’. As a result, public figures have generally not been subject to the terms and conditions imposed on other users. Even in cases of abusive language, the platform has declined to apply its usual sanctions for rule violations, such as removing a tweet or locking or suspending a user’s account. Last year, however, the social network was driven to change its position in response to the many challenges that the online activity of certain leaders, starting with Trump, was bound to provoke.
In June 2019, Twitter announced that it had adopted new rules for politicians. To be eligible for this specific scheme, individuals must hold or be a candidate for an elected or appointed public office, have more than 100,000 followers and hold a verified account. Even if the tweets of such officials violate the rules of the platform (incitement to violence, harassment etc.), they are not withdrawn if they have a ‘clear public interest value’. They may, however, be hidden behind a warning message that provides context and prevents other users from liking or retweeting the content.
This announcement, however, was hardly acted on. At the beginning of July, Trump made racist comments about four Democratic female elected officials without any reaction from the social network, which merely stated that the tweets did not violate Twitter’s policy, even though the network’s rules prohibit in principle ‘targeting individuals with repeated slurs, tropes or other content that intends to dehumanize, degrade or reinforce negative or harmful stereotypes about a protected category’.
In the autumn of 2019, after facing criticism for failing to take any action on controversial Trump tweets, Twitter strengthened its system by announcing an update of its policy on leaders. The social network reiterated the principle adopted in June: no censorship of tweets that violate Twitter’s rules if they are of public interest, but the possibility of hiding them behind a warning. At the same time, the platform warned that it would intervene – whatever the interest of the tweet – in the presence of serious violations such as the promotion of terrorism, clear and direct threats of violence against individuals or the dissemination of private information. In such cases, tweets of public interest from politicians are treated in the same way as other tweets and may, if necessary, be hidden or deleted, with sanctions that may go as far as the suspension of the account. At the end of November, Twitter banned sponsored tweets with an electoral or political purpose.
Democratic Senator Kamala Harris nonetheless failed to persuade Twitter to intervene over a series of tweets from President Trump targeting the whistleblower behind the Ukraine case. Stressing that the tweets amounted to the intimidation of, and threats against, witnesses involved in the case, Harris demanded the suspension of Donald Trump’s account, to no avail.
The health crisis has led Twitter to review its doctrine yet again. As early as 18th March, the platform introduced new principles and announced that it would remove content constituting a clear call to adopt behaviour that could directly create a risk to people’s health or well-being. The new categories of false information targeted by Twitter include, among others, challenging quarantine measures taken by local authorities, promoting ineffective treatments against Covid-19, challenging proven scientific facts about the transmission of the virus, and inciting behaviour that could cause panic or disorder. As regards politicians, the platform specified, tweets that contravene these rules will, in principle, not be deleted but flagged when they are of public interest, in accordance with the principles previously adopted.
Regularly updated, this new policy, which relies on information provided by the official health authorities, has made it possible to combat false information, particularly conspiracy theories. At the end of March, Twitter did not hesitate to withdraw two tweets posted by Brazilian President Jair Bolsonaro, which included videos promoting the notorious hydroxychloroquine and calling for an end to physical distancing strategies. Similarly, a tweet from Venezuelan President Nicolas Maduro referring to a decoction supposedly capable of eliminating coronavirus pathogens was withdrawn.
However, Trump’s tweets extolling the benefits of hydroxychloroquine provoked no reaction from Twitter. Admittedly, at the end of March the platform did not hesitate to censor Trump’s lawyer Rudy Giuliani, who had quoted a tweet suggesting that hydroxychloroquine had a 100% effectiveness rate against the coronavirus. But a tweet from the US President’s campaign manager sharing an article claiming that chloroquine had a 90% chance of helping coronavirus patients prompted no action, as Twitter did not see it as a clear call for action that would harm the public. In mid-April, Twitter once again refused to intervene when Trump appeared to urge residents of Michigan, Minnesota and Virginia to resist the lockdown in those states by calling for their ‘liberation’ – tweets the social network did not consider likely to endanger the public.
On May 11, however, the platform decided to step up its fight against content contradicting the opinions of public health experts, targeting tweets that do not create an obvious and immediate risk but can mislead the public. It warned that new alerts would be added to this type of content and, once again, specified that world leaders are covered by these new rules. Under this framework, tweets containing inaccuracies can be removed if the misinformation is blatant and likely to cause real-world harm. Tweets with questionable statements that pose a serious risk of harm can be hidden behind a warning message. Finally, tweets with controversial content where the potential risk is moderate are not hidden but may be linked to further information on a page maintained by Twitter.
Despite this new approach, Twitter steadfastly refused to remove Trump’s series of tweets insinuating that Joe Scarborough, a former Republican congressman from Florida who is now a presenter on MSNBC, may have played a role in the death of a former member of his staff, Lori Klausutis, who died as a result of an undiagnosed heart condition. Outraged by Trump’s accusations, the deceased’s husband sent a letter to Twitter’s CEO denouncing these ‘horrible lies’ and calling for the removal of the tweets concerned, a request echoed by many columnists and public figures.
A few days after this incident, however, Trump published new controversial tweets on the issue of absentee voting. Although he himself voted by mail in Florida last March, the US President claimed that remote voting could only lead to massive fraud, clearly fearing that such a system would benefit the Democrats. His tweets referred to a ‘rigged election’. This finally prompted Twitter to intervene on May 26. Twitter representatives stressed that these tweets violated its ‘civic integrity policy’, which prohibits users from manipulating or interfering with electoral or other civic processes, for example by posting misleading information that could deter people from participating in an election. They clarified that while Trump’s tweets did not directly deter voters from voting, they did contain misleading information about postal voting that could mislead voters. In short, by using the expression ‘rigged election’, Trump had, in the platform’s view, violated the principles it defends. These tweets, although not directly related to the health crisis, were therefore linked to a page created by the platform bringing together contributions from various media outlets and fact-checkers showing how Trump’s claims were unsubstantiated or even erroneous.
A few days later, the turmoil caused by the death of George Floyd led the platform to go even further. On 29th May, Twitter intervened again, this time to hide a tweet from the President suggesting that live ammunition could be used against protesters in Minneapolis, using a phrase (‘when the looting starts, the shooting starts’) associated with segregationist opponents of the civil rights protests of the 1960s. Stressing that the tweet was of public interest, the social network did not delete it but hid it behind a warning message, stating that it violated Twitter’s rules through its ‘glorification of violence’. While the tweet could still be shared with a comment, it was no longer possible to retweet it, reply to it or click on the ‘like’ button. One month later, on June 23rd, Twitter flagged and restricted another tweet from President Trump for violating its policy against abusive behaviour.
In a context where the suppression of Donald Trump’s most outrageous tweets is being loudly demanded by a growing number of politicians and observers, Twitter’s decision to flag these tweets contrasts with its hitherto constant refusal to intervene. While this decision is a turning point, it is nonetheless a belated development. Twitter’s caution on this point can be explained in light of the extensive protection that US law affords to freedom of expression under the First Amendment to the US Constitution. On the basis of this text, US courts tend to view accounts maintained by public figures on social networks as public spaces (‘public fora’), even though this qualification is hotly debated in the case of platforms run by private actors (see, on this point, the opinion of Justice Brett Kavanaugh in Manhattan Community Access Corp. v. Halleck, which held that private actors hosting public discussion spaces are free to moderate them at their discretion).
With regard to the particular case of social networks, the Supreme Court ruled in 2017 that access to social networks, in that case Facebook, is a constitutional right for American citizens (Packingham v. North Carolina), who must be able to access places where they can learn and debate, even if those places are virtual. The decision stressed that social networks now provide the most powerful mechanisms available to a private citizen to make his or her voice heard. Justice Anthony Kennedy wrote that ‘a fundamental principle of the First Amendment is that all persons have access to places where they can speak and listen, and then, after reflection, speak and listen once more’.
Moreover, the protection afforded by the First Amendment does not allow public officials to prevent citizens from accessing content they post on social networks by ‘blocking’ them. Trump himself has fallen foul of this case law in relation to users he ‘blocked’ on Twitter. The federal courts ruled in 2019 that Donald Trump, insofar as he expresses himself on Twitter as President of the United States, cannot exclude American users without violating their right of access to a public space protected by the First Amendment (Knight First Amendment Inst. at Columbia Univ. v. Trump). The eloquent words of the Second Circuit Court of Appeals in that case are noteworthy in this regard: ‘If the First Amendment means anything, it means that the best response to disfavored speech on matters of public concern is more speech, not less’. As a result of this decision, Trump ‘unblocked’ the plaintiffs – but did not ‘unblock’ others. The case is reminiscent of the criminal complaint filed by a French journalist who had been ‘blocked’ by the President of the French National Assembly, Richard Ferrand, a complaint subsequently dismissed by the public prosecutor’s office for lack of an applicable offence.
In any event, it is understandable, in such a context, that Trump vigorously invokes the freedom of expression that led to the ban on ‘blocking’ users in order to reject any flagging or censorship of his tweets. The notion of public space also explains Twitter’s strategy of constantly invoking the ‘public interest’ of tweets from politicians, especially prominent ones. The incentive for social networks to intervene is further weakened by the fact that federal legislation exempts platforms from liability for what their users say online under Section 230 of the Communications Decency Act (Title 47 of the United States Code).
It is, paradoxically or not, this Section 230 of the Communications Decency Act – or at least its interpretation – that Trump is now seeking to amend, by way of retaliation against Twitter. Before the Communications Decency Act was passed in 1996, the status of hosting platforms was not entirely clear. It was believed that they enjoyed immunity for content posted by third parties on their platforms, but became fully liable for that content if they intervened in any way. The Communications Decency Act of 1996 then guaranteed them very broad immunity, including when they practised moderation. Under the terms of the text, platforms can in no way be considered publishers of the content posted on them by third parties: their liability for such content is excluded. Furthermore, platforms that engage in moderation enjoy ‘Good Samaritan’ immunity: they cannot be held liable for any action that amounts to removing or restricting access to objectionable content (‘obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable’ material), as long as their decision is taken in good faith.
The Executive Order signed by Trump on 28th May 2020 promotes a restrictive interpretation of the immunity of platforms, which, according to the text, should not extend to those engaged in ‘editorial activities’. In particular, the text emphasises that immunity should not benefit platforms that do not act in good faith but intervene in order to silence views with which they disagree (‘online platforms that – far from acting “in good faith” to remove objectionable content – instead engage in deceptive or pretextual actions (often contrary to their stated terms of service) to stifle viewpoints with which they disagree’). Platforms choosing to moderate content without acting ‘in good faith’ would therefore lose their immunity and be liable as any publisher would be. To this end, the Executive Order directs the Federal Communications Commission to ‘clarify’ the text of Section 230 by preparing regulations that restrict the scope of immunity and specify the conditions under which a platform is deemed to be acting ‘in good faith’. The order also directs the Federal Trade Commission to take action against platforms that adopt ‘deceptive’ or unfair content moderation practices or act inconsistently with their terms of use. On this last point, Trump satisfies conservatives who complain of the platforms’ bias against them and have long called for a principle of neutrality in content moderation to be enshrined in Section 230. The Executive Order also specifies that the Federal Trade Commission and the Department of Justice will be informed of all complaints received through the Tech Bias Reporting Tool, created in May 2019 by the White House to collect complaints from Internet users who believe they have been unfairly censored online.
Whatever one thinks of the substance, and even if an amendment to Section 230 – on which the Department of Justice has been working for several months – seems necessary to many experts and politicians, the method used by the US President is hardly convincing. Donald Trump intends, in effect, to modify, by a simple Executive Order, the content of a federal law passed by Congress in 1996 and applied since then by federal courts, which have developed a solid body of case law. The text therefore poses several constitutional difficulties: not only does amending this law fall in principle to Congress, but the order’s compatibility with the First Amendment is open to question, especially since the First Amendment (also) protects the right of platforms, as private companies, to set and enforce their own conditions of use.
For this reason, it is doubtful whether the Executive Order, which will inevitably be challenged, will actually produce its intended effects. Even if it does, it could lead newly liable platforms to practise particularly strict moderation, simply removing any content for which they might be held liable without taking the trouble to check it. Such strict moderation would be the opposite of what President Trump is seeking, as he is clearly in favour of an extreme conception of freedom of expression on social networks. In the end, even though Trump wishes to deny platforms any right of moderation on the basis of the notion of public space, he simultaneously chooses to call them publishers, which would further justify their intervention on content.
While this battle, a purely American one, has only just begun, it seems increasingly difficult for social networks to adopt a stance of pure abstention by refusing to intervene in the face of inaccurate, defamatory or simply dangerous comments. However, their room for manoeuvre is narrow. Even in France, where legislation already places more restrictions on freedom of expression and platforms are encouraged to practise moderation, a possible intervention by Twitter on the comments of a politician or public figure (such as the now famous Professor Raoult) could generate very virulent reactions. The new strategy initiated by Twitter also requires absolute transparency and an unassailable method for determining which content should be targeted, which is not yet the case (see the section ‘How will we identify these Tweets?’).
The difficulty is so great that it no doubt explains why Mark Zuckerberg – who has long argued that it is not up to platforms to be the ‘arbiters of truth’ in public debate – does not wish to engage in fact-checking what officials say. Accordingly, Facebook long took a much more non-interventionist stance than Twitter on political speech, particularly when it comes to Trump. In the context of the anti-racism protests sweeping the US, however, this approach met with so much criticism that Facebook was forced to reverse its stance: on June 29th, it announced that it too would start hiding abusive content of public interest behind warning labels. Facebook’s changing policies are analysed in detail in another of our blog posts. This divergence illustrates the ambiguity of what social networks have become today: public places where democratic debate takes place, owned and managed by private companies.
Florence G’sell is a Professor of Law at the Université de Lorraine and a lecturer at the Sciences Po School of Public Affairs, where she co-holds the Digital, Governance and Sovereignty Chair.