What is the purpose of public policy evaluation? How are evaluations produced? How can the issues be shared with a wide audience in order to strengthen democracy? These are the questions addressed in the book Evaluation: foundations, controversies, and perspectives (Éditions ‘Science and Common Good’, 2021), coordinated by Anne Revillard, director of the Laboratory for Interdisciplinary Evaluation of Public Policies (LIEPP) and researcher at the Centre for Research on Social Inequalities (CRIS) at Sciences Po, in collaboration with Agathe Devaux-Spatarakis, Thomas Delahais, and Valéry Ridde. A presentation of the book follows.
The evaluation of public policies consists of mobilising social science methods to assess the operation and effects of public or public-interest interventions, while ensuring the usefulness of the knowledge thus produced. It covers a variety of practices carried out by a wide range of actors: researchers – who have laid the methodological foundations of evaluation – as well as administrations, consultants, international organisations, and non-governmental organisations. Over the past fifty years, these practices have led to the international development of a new body of interdisciplinary methods and theories, discussed in specialised journals such as Evaluation, the American Journal of Evaluation, the Canadian Journal of Program Evaluation, the Journal of MultiDisciplinary Evaluation, Evaluation Review, New Directions for Evaluation, the African Evaluation Journal, and the Evaluation Journal of Australasia. Sometimes referred to as a ‘transdiscipline’, this body of work is struggling to establish a foothold in universities: doctoral programmes, for example, are rare, as are positions specifically targeting the requisite skills.
Mostly written in English, this body of work remains little known to the French-speaking audiences it is likely to interest: public actors, associations, and citizens. Among academics, it is mobilised very unevenly depending on the discipline, and when it is, its methodological dimensions are emphasised more than other aspects, particularly reflection on values and on the use of the knowledge produced. The growing international literature on evaluation is also marked by a conceptual inflation that makes it difficult to access, including for researchers seeking to add these new references to the tools and analytical frameworks of their respective disciplines.
In this context, our book has two objectives. The first is to democratise evaluation in the sense of making it accessible to the widest possible audience: researchers, students, public actors, non-governmental organisations, administrative evaluators, evaluation consultants, etc., as well as citizens, since evaluation is an essential democratic issue, enabling judgment of the relevance, effectiveness, and coherence of public interventions.
The desire to facilitate access for French-speaking audiences in France and around the world guided the decision to publish in French, with translations of foundational or recent texts initially published in English. This desire to make evaluation accessible is also reflected in the format of the publication, which favours short text excerpts accompanied by chapters written by the book’s coordinators that provide theoretical reference points and contextualise the published texts, making them easier to take up. Finally, the objective of making evaluation accessible is reflected, more literally, in the choice of an open access publication in partnership with the Quebecois publisher ‘Science and common good’ (Science et bien commun).
A second objective of the book is to bring together different perspectives and practices in evaluation, particularly professional and academic ones. The project team includes two researchers (Valéry Ridde, public health researcher at CEPED – UPC, and Anne Revillard, sociologist at Sciences Po – CRIS-LIEPP) and two professional evaluators (Agathe Devaux-Spatarakis and Thomas Delahais, evaluators at the SCOP Quadrant Conseil). The different chapters were discussed with a variety of academic colleagues and evaluation practitioners.
The exchanges that took place during the writing process underscored the complementarity of views, which is reflected in the themes addressed in the chapters: while researchers are more likely to approach evaluation from the angle of its links with research and of methodological and epistemological issues (‘Is evaluation a science?’, ‘Diverse paradigmatic approaches’), practitioners introduce the question of values, the practical organisation of evaluation, and the usefulness of the knowledge produced (‘How to judge the value of interventions?’, ‘Who evaluates and how?’, ‘What is the purpose of evaluation?’).
For example, the notion of use can refer to the impact of a given evaluation’s results on public action (what changes were adopted as a result of an evaluation’s findings?). But we can also consider, more broadly, how the evaluation process itself encourages reflection among policy stakeholders and builds their knowledge of evaluation (beyond the specific case of the policy under examination). Meanwhile, the issue of evaluation criteria raises the question of which values guide the evaluation questions and the judgments made about interventions. In her article ‘Does evaluation contribute to the common good?’, Sandra Mathison shows that, beyond the terms set by a sponsor, it is important to think about evaluation criteria in terms of the common good, from the perspective of the most disadvantaged groups.
The field of evaluation has thus given rise to original and particularly stimulating reflections on the role of values in the knowledge production process (notably through reflection on evaluation criteria) and on the uses of evaluation processes and results. These are reflections from which more academic research practices can also benefit!