23.02.2021
Florence G’sell: towards “a healthier internet”?
To mark Safer Internet Day 2021, we spoke to Florence G’sell, co-chairholder of the Digital, Governance and Sovereignty Chair at Sciences Po’s School of Public Affairs. She shared her thoughts on the political power of the GAFAM (Google, Apple, Facebook, Amazon and Microsoft) tech giants and how states are adapting their legislative arsenal to deal with these new players.
Can you tell us a bit more about the Digital, Governance and Sovereignty Chair, which you have been co-directing for a year now?
Florence G’sell: The Chair has been running for four years, but it was renamed a year ago to encompass all issues around digital sovereignty. These issues are of major importance today. The GAFAM companies are transnational corporations with unprecedented power, and they have accrued such hegemonic status that they seem almost to compete with states. That makes regulating them effectively extremely difficult, all the more so given how dependent we in Europe are on technology. At present, it is hard to imagine storing data without making use of the Amazon, Google or Microsoft clouds. The goal of the Chair, therefore, is to examine the challenges posed by this state of affairs and propose potential solutions.
9 February marked Safer Internet Day worldwide. What would a “safer” internet look like? What is under the greatest threat on the web? Democracy? Truth? Freedom of expression?
F.G.: I think that if we want to take a broad view of the internet, we need to keep in mind its origins. This was a network designed by researchers, endorsed by the American government and, above all, one that did not have a commercial end goal. It was intended to facilitate new forms of exchange and reduce imbalances in access to knowledge. For the early pioneers of the internet, those working for its democratisation, the World Wide Web was a way of liberating people from the guardianship of the state through decentralisation and peer-to-peer networking.
In the 1990s, researchers like Lawrence Lessig and Tim Wu argued that the internet could be regulated. They also anticipated something of what the internet of the 2010s would look like and foresaw the huge centralisation process that we are witnessing today. From the free internet of the early days, we have shifted to a sector dominated by a handful of extremely powerful platforms, with a vertical structure trickling down from their prominent CEOs (Jeff Bezos, Mark Zuckerberg, Jack Dorsey etc.). These individuals are able to make decisions with massive consequences: the suspension of Donald Trump’s Twitter account, for example, or Amazon’s refusal to host the social network Parler. This is not a satisfactory situation for either states or citizens, who have no control over the virtual universe in which they interact daily.
Are digital players in the process of shifting from their status as “pure” hosting services, with no responsibility for content, to that of editors? Or do we need to invent a new kind of status entirely?
F.G.: My answer to the first question is no. As far as the European Union is concerned, a new proposal for regulating digital services was published on 15 December last year, and it does not go back on the principles established in the e-Commerce Directive of 2000. It specifies that, as hosting services, the platforms bear no responsibility for content posted by their users online. The hosting service only becomes liable when it is aware of illegal content and does not take the necessary steps to remove it. In France, the Avia Law introduced reinforced obligations for platforms to remove all illegal content, but the law was struck down by our Constitutional Council on the grounds that it impinged on freedom of expression. The main argument was that it put the onus on the platforms alone to determine whether or not content was illegal and should be removed.
US legislation, by contrast, affords full immunity to the platforms. Whether or not they decide to moderate content posted online is at their discretion. That said, the situation may well be about to evolve. Serious discussions are currently under way over whether to reform Section 230 of the Communications Decency Act, which governs the platforms’ liability. Some voices argue that the companies should become liable if they do not take “reasonable measures” to prevent the spread of illegal or toxic content.
This change seems to be something of a spontaneous movement. But are platforms capable of regulating themselves? Should they be compelled to do so by state regulation or by the public?
F.G.: The exact role of the state and the judiciary in moderating online content is a source of debate in France and the US. On the side of the state, the question of a regulatory authority arises. In France, the Conseil supérieur de l’audiovisuel is already entrusted with some tasks in this regard, but we could explore the possibility of creating a dedicated regulatory authority at European level.
The creation of a specialised authority has not been ruled out in the US either. There have been several academic proposals to establish a federal agency in charge of regulating the platforms. Among other things, this agency could be given oversight of the platforms’ algorithms, particularly in order to ensure that they are not promoting toxic content, which we know can generate engagement.
With regard to the role of the judiciary, there are several differing points of view. Some critics in France are of the opinion that the decision to remove content should be up to judges alone. I am not one of them. I think we need a more flexible system, in which the judiciary would intervene at a later stage.
In the US, the issue of the judge’s role comes under the broader question of whether social networks should be held to the First Amendment to the US Constitution, which protects freedom of speech. If this were to be accepted, which is far from certain currently, the issue of removing content could be subject to challenges in the courts. The judiciary would then become the arbiter: it would be up to judges, for example, to rule on whether Trump’s Twitter account should be suspended.
In the face of these growing challenges, and on its own initiative, Facebook has created the Oversight Board, an independent entity in charge of settling users’ appeals. The Board has just handed down its first decisions, and will soon be ruling on the question of Donald Trump’s deplatforming by Facebook. This decision will be a good indication of what bodies like this could be capable of.
Do you have examples of legal regulations or planned regulations that are opening up the possibility of a safer internet?
F.G.: Rather than a safer internet, I prefer to talk about a healthier internet. First of all, there’s the European Commission’s legislative proposal, the Digital Services Act, which aims precisely to ensure that the platforms moderate content more effectively and with greater transparency. This regulation would also provide firmer guarantees, by requiring the platforms to justify their moderation decisions to users and giving the latter the opportunity to challenge them. In the event of genuine disagreement, users should be given the option of making an appeal to an independent investigative body, such as an online ombudsman. If adopted, the Commission’s proposal could lead to greater consistency in the way that platforms develop, edit and apply their terms and conditions.
Can we paint all the GAFAM companies with the same brush or do they have differing visions of their social responsibility? Are some models more ethical than others?
F.G.: It’s important to note the differences of approach between Twitter and Facebook. Twitter CEO Jack Dorsey appears to reject the prospect of state regulation and advocates a totally decentralised approach, following the model of Bitcoin. Dorsey favours the idea of leaving it to users to withdraw from the platform if they are not satisfied with the conditions of use. It is enough, he argues, to let competition do its work. I, personally, find that view unrealistic.
For its part, Facebook has frequently called for regulation and launched its own trial with the creation of its Oversight Board. This proves that the platform has understood the value of appealing to a neutral third party for moderation. The Oversight Board is independent and makes its decisions on the basis of international law and human rights. I’m not sure that we can talk about an ethical model, but there is a genuine consideration of users’ rights here.
What recommendations might you give our students who are active on social media?
F.G.: I would recommend that they be cautious about what they post and share on social networks. Even without realising it, we can fall into the trap of posting or sharing illegal or even toxic content (fake news, conspiracy theories etc.). When we do so, we bear responsibility for it. It is important to discuss the accountability of the platforms, but we should never forget that we ourselves are the ones primarily responsible for what we do or say on social media.