Why don’t research evaluation systems improve research (and why wouldn’t we notice if they did)?

Jochen Gläser
Center for Technology and Society, University of Technology Berlin
Thursday, 20 September 2012 - 12:30

Research evaluation systems – national-level procedures for the systematic, periodic evaluation of university research in a country – are governance tools that were invented three decades ago and have enjoyed increasing popularity in higher education policy communities ever since. They can be used to legitimise the allocation of funding, to save money, or to provide universities with both the incentives and the information for improving their research.
The assumption that research evaluation systems can indeed make universities improve their research can be doubted, however, for a variety of reasons. First, research evaluation systems rarely use valid measures of research performance. In most cases, volume rather than performance is measured. Research evaluation systems therefore rarely provide useful information to universities and are likely to cause goal displacement. Second, the systems target organisations or organisational units rather than the units actually conducting research, which opens the response to evaluations to a host of additional organisational influences, including internal competition, power structures, and the necessities of teaching, among others. Third, evaluation systems are just one tool in a whole arsenal used for the governance of research. The overlap of the numerous tools aimed at governing research weakens the influence of each of them. Fourth, any influence of research evaluation systems on research performance needs to ‘pass through’ the researchers, for whom they are likely to pale into insignificance compared with the expectations of their scientific communities and the necessity of continuously funding their research.
For these reasons, universities often have very few opportunities to shape their research. They can redistribute time, resources and managerial support from teaching to research and from weaker to stronger researchers, they can change their recruitment practices, and they can increase the volume of the kinds of research favoured by current evaluation procedures. Each of these measures comes at a price, though.
Interestingly enough, measuring the impact of research evaluation systems turns out to be close to impossible for the very same reasons that limit university responses. Causal attribution of effects could be achieved either by statistical association or by identifying the social mechanisms that translate research evaluation systems into improved performance. In both cases, research performance needs to be validly measured, and the influence of research evaluation systems needs to be distinguished from the influence of other factors in long and overlapping causal chains. The latter can be achieved by in-depth studies at the micro-level, but the results of such studies are difficult to aggregate. This is why the political and scholarly discussions of the effects of research evaluation systems rarely draw on reliable evidence.
Discussant: Jordi Molas

Venue: 

Ciudad Politécnica de la Innovación
Edificio 8E, Acceso J, Planta 3ª (Salón de Actos. Cubo Rojo)
Universidad Politécnica de Valencia | Camino de Vera s/n

Brief CV of the Speaker: 

Jochen Gläser is currently a senior researcher at the Center for Technology and Society of the University of Technology Berlin. He obtained his PhD at the Humboldt University of Berlin. His major research interest is the interaction of epistemic and institutional factors in shaping the conduct and content of research at the micro-level of individuals and groups and at the meso-level of scientific communities. He has investigated the GDR’s attempts at ‘integrating basic and applied research’, the transformation of the East German research landscape after German unification, and the impact of indicator-based funding on Australian and German university research.
A major theoretical interest of Jochen Gläser is the social order of scientific communities and more generally of community as a type of social order. He has also published on qualitative methods, research methods in science studies including bibliometrics and interviewing, and methods of research evaluation.
Current empirical projects concern national systems of research evaluation and funding in an internationally comparative perspective, the responses of German universities to evaluations, and the impact of changing authority relations on conditions for scientific innovation. A methodological project is devoted to the development of methods for measuring research diversity.


Key Publications:

- Gläser, Jochen (2006) Wissenschaftliche Produktionsgemeinschaften: Die Soziale Ordnung der Forschung, Frankfurt am Main: Campus.

- Whitley, Richard and Jochen Gläser (eds) (2007) The Changing Governance of the Sciences: The Advent of Research Evaluation Systems. Dordrecht: Springer.

- Whitley, Richard, Jochen Gläser and Lars Engwall (eds) (2010) Reconfiguring Knowledge Production: Changing Authority Relationships in the Sciences and Their Consequences for Intellectual Innovation. Oxford: Oxford University Press.