
13.01.2022

8 new research projects launched as part of the partnership between Sciences Po and the McCourt Institute

As part of the partnership between Sciences Po and the McCourt Institute, signed in June 2021 and devoted to studying, deciphering, and clarifying the major common-good issues at stake in new technologies, eight research projects have been selected by the steering committee.

Project led by Sylvain Parasie and Pedro Ramaciotti Morales, médialab.

The last two decades have seen the emergence of different hypotheses about disorders of digital public spaces, such as fragmentation (or “bubbles”), polarization, and extremism, and about the role that the algorithms mediating these spaces might play in them by boosting the visibility and virality of particular content. Conclusive results, however, have proved elusive, as a growing body of research presents a contradictory picture: disorders may be pervasive, capturing the attention of policy makers and the general public, yet no widely accepted definitions or metrics have emerged to quantify them or the role algorithms might play in them, let alone actionable means to design better algorithms that would minimize identified negative outcomes. Meanwhile, growing evidence suggests that recommender systems may be leveraging users’ political opinions and other features of public debate associated with societal cleavages. This project takes inspiration from ideological social network embedding methods and from the political surveys used to analyze party systems in policy spaces, and proposes a dual network and opinion-space model of digital public spaces. Using this combined network and spatial opinion analysis, the project then proposes to test, through algorithmic explainability, whether algorithms learn and leverage users’ political opinions and how they affect information dynamics in public debate, and to open a path toward actionable tools capable of guiding algorithm design, governance, and policy.
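The abstract does not specify which embedding technique the project will use; as an illustration, one common family of ideological social network embedding methods places users and the accounts they follow in a shared latent space via a correspondence-analysis-style decomposition of the follow graph. The sketch below is a minimal, hypothetical example of that idea (the toy data, function name, and single "ideology" axis are assumptions for illustration, not the project's actual method):

```python
import numpy as np

# Hypothetical toy data: rows = users, columns = political accounts.
# A[i, j] = 1 means user i follows account j. Two loose communities
# (users 0-2 vs. users 3-5) are built in for illustration.
A = np.array([
    [1, 1, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1, 0],
    [0, 0, 0, 0, 1, 1],
], dtype=float)

def ideological_embedding(A, dims=1):
    """Correspondence-analysis-style embedding of a bipartite follow graph.

    Subtracting the independence expectation removes the trivial
    dimension; the leading singular vectors of the standardized
    residuals give latent coordinates for users (rows) and
    accounts (columns).
    """
    P = A / A.sum()                       # joint relative frequencies
    r = P.sum(axis=1, keepdims=True)      # row (user) masses
    c = P.sum(axis=0, keepdims=True)      # column (account) masses
    S = (P - r @ c) / np.sqrt(r @ c)      # standardized residuals
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    users = (U[:, :dims] * s[:dims]) / np.sqrt(r)
    accounts = (Vt[:dims, :].T * s[:dims]) / np.sqrt(c.T)
    return users, accounts

users, accounts = ideological_embedding(A)
# Users who follow similar accounts land close together on the latent
# axis; the sign of the axis is arbitrary, so only relative positions
# (same side vs. opposite sides) are meaningful.
```

On data like this, the first latent dimension recovers the two follower communities, which is the sense in which such embeddings are read as "ideological" axes when applied to real political follow graphs.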

Watch the video presentation.

Project led by Dominique Cardon, médialab.

This research will be addressed in two successive calls for proposals. The objective of this preliminary project is to explore how speech regulation in digital discussion spaces can be based on users’ normative expectations. “You shouldn’t talk like that!”, “That’s wrong!”, “We don’t say that!”, “That’s nonsense!”, “It’s false!”. Internet users who chat in forums, Facebook groups, or social network comment threads constantly challenge others over the legitimacy of speech and the limits of what can be said in public. With the massification of digital audiences and the diversity of speech spaces, it appears increasingly obvious that users do not share the same values regarding what can or cannot be said in public. The objective of this project is to map the differences and tensions between different conceptions of online speech. This proposal sets the framework for a project that brings together five perspectives to be developed in future calls. The first four research axes confront different perspectives on how users, public opinion, platforms, and States define the possibilities and limits of speech in digital spaces. More applied, the fifth research axis will seek to design new types of digital speech spaces. This first proposal is based on the analysis of a large corpus of digital conversations and on a set of qualitative and quantitative survey techniques for studying the normative representations of Internet users. In the next phase, we would like to combine this bottom-up approach with the legal debate on the regulation of content within large digital platforms and to design speech-space architectures allowing users to regulate themselves.

Project led by Jen Schradie, Observatoire sociologique du changement.

What shapes whether people receive, believe, and share disinformation? The rise of ‘fake news’ has become an area of deep concern worldwide, raising questions about the role of information in democratic societies. Observers often point to the January 6 attack on the U.S. Capitol, based on falsehoods about election results, as a critical turning point in how disinformation has dire consequences. Despite a dramatic increase in disinformation research, a crucial remaining puzzle is that a large number of people believe fake news claims while only a small number of people (often below 1%) consume fake news in their daily news diet (Allen et al. 2020; Fletcher and Nielsen 2018; Grinberg et al. 2019; Tsfati et al. 2020). How and why do people report that they trust unverified information if they are not actually consuming this news directly? To understand this empirical disconnect in the diffusion of disinformation in the digital era (3Ds), DeCodingDisInfo, an interdisciplinary and mixed-methods project, advances beyond the state of the art, which selects on the dependent variable of digitally visible fake news and on top-down levers of distribution. Current scholarship skews toward powerful top-down players: platforms (like Twitter), politicians (like Trump), or policies (like the GDPR). The audience for disinformation, however, is usually viewed as passive individuals without institutional affiliation. This extant research ignores the broader array of everyday bottom-up active media practices and mechanisms of sharing, or not sharing. Instead, DeCodingDisInfo will uncover how information, both fake news and otherwise, circulates in the digital media environment and in offline spaces. Taking a deeply contextualized and community-based approach, our team will harness the power of a two-country comparison and examine how ideology and institutions shape information flows. This will result in publicly available tools to better decode and deconstruct fake news.

Project led by Michele Fioretti, Département d'économie.

Decentralized networks give users control over their personal information. Applied to the Internet, decentralization can reduce the market power of dominant firms by limiting their ability to learn consumers' preferences. However, information sharing comes with the cost of constantly deciding whether to share. Since the Internet is not decentralized, there is currently no data on how Internet users would share information under decentralization. We therefore propose to analyze, estimate, and simulate a model of information sharing in a large existing decentralized network: the shareholding of large corporations. We focus on shareholder activism over socially responsible issues, studying the evolution of the related shareholding information and the reactions of large nodes in the network (shareholders with dominant positions), in order to examine governance issues related to information sharing in decentralized networks. To do so, we exploit the news flow of digital media as exogenous shocks to shareholder preferences.

Project led by Kevin Arceneaux and Martial Foucault, CEVIPOF.

In this project we propose to develop and administer an innovative research design both to understand how people engage with the online public sphere and to test interventions designed to cultivate healthier and more constructive online behaviors. We plan to do so in the context of the French presidential and legislative elections slated for spring 2022. Based on the experience of the last national elections in France, we anticipate that the electoral campaign will be emotionally charged, polarizing, and filled with attempts to spread misinformation. Our research approach will cast a broad net by studying a wide range of attitudes and behaviors, both on- and offline.

Project led by Achim Edelmann, médialab.

The digital transformation has led to major changes in the information ecosystem. Modern communication platforms such as WhatsApp, Facebook, and Twitter have become important media for information dissemination, breaking down centralized structures while blurring the boundaries between private and public channels. Unfortunately, these platforms have also been misused to share inaccurate or inflammatory content. To date, science has failed to provide the knowledge needed to counter this phenomenon successfully: we lack the evidence needed to design and implement effective interventions. This project will therefore identify mechanisms that substantively and effectively govern the spread of misinformation and related effects such as polarization.

Project led by Séverine Dusollier, École de droit.

How can the values and rights needed to sustain democracies and the common good be upheld and ensured in our digital world? What is the role of the law in doing so? Sciences Po Law School's Towards a New Digital Rule of Law project will assess the legal governance and regulation of the Internet and of some of its technologies through identified case studies, in order to develop a new line of research investigating and experimenting with the necessary role of law in the Internet's current evolution. The Law School intends to develop this line of research in the years to come around different specific issues challenging the rule of law. Over the next three years, the project will focus on examining the value of constitutional law concepts for a digital rule of law, how to govern AI to make it compatible with democracies, and how to use infrastructures to govern and how to govern infrastructures in order to address data inequality and shape a less concentrated web. All projects will be developed around three main pillars: (1) hosting and supporting rigorous academic research focused on relevant governance challenges; (2) channeling the research into early action and early impact through the Law School's clinical program; and (3) implementing public outreach by hosting conferences and workshops where different stakeholders can discuss and influence policy on time-contingent issues.

Project led by Julia Cagé, Sergei Guriev, and Émeric Henry.

The spread of misinformation on social media has become a major challenge for modern societies (Tufekci 2018). In this project, we propose to develop and test new solutions and methods to slow down or block the spread of false news and alternative facts. We propose solutions at two levels: upstream, to improve the regulation of platform companies and improve fact-checking procedures, and downstream, to enhance users’ ability to recognize false news. Our project will therefore enhance digital governance, by proposing and evaluating changes to the design of social media; strengthen the impact of fact-checking, by evaluating best practices; and improve digital literacy, by designing and testing a training app that will be used in practice.

Watch the video presentation.