One of the more significant debates surrounding the downstream results of Research Uptake concerns the complex task of assessing impact. On the one hand, universities broadly agree that a central part of their core mission is to serve the public good through the applied knowledge they generate for social and developmental change, and academics themselves are naturally keen to see their own research inspire change and advance developmental goals.
On the other hand, the act of measuring change can be quite woolly. It is difficult to quantify research impact, to attribute causal links between a research team's outputs and an innovation or change in practice, and to demonstrate to those who invest in strengthening Research Uptake what the return on that investment is.
Beyond the difficulty of making these claims, impact is itself a set of conditions that emerges only once research outcomes have been integrated into policy and are understood and used by public stakeholders. The long-term impact of the original research is, then, exactly that: long-term. How do we begin to measure it?
While there are no simple answers, good practice is emerging to support organisational assessment. Establishing the institutional environment and the management systems needed to institutionalise Research Uptake can at least put in place the initial information systems that make research impact data available. This is the necessary first step in assessing impact in a concerted manner.
Sophie Duncan, from the National Coordinating Centre for Public Engagement (NCCPE) at the University of the West of England in Bristol, UK, has identified some key components to good practice in assessing research impact. In a recent article for Interact, the newsletter of the ACU's University Extension and Community Engagement Network, she set out the following:
Impact is more likely to be achieved if you engage with others throughout the research cycle, including before it starts. Understanding who has a stake in your research, and involving them throughout the research process, creates a fertile ground where impact is more likely to occur.
Impact occurs over different timeframes, which can make it hard to measure. Here, one can capture data about the activity (the number of people involved, and so on) and can also assess the participants’ reaction to the activity (such as whether they enjoyed it or learned something new). However, assessing long-term impact needs a more sustained and sophisticated evaluation strategy.
When attributing impact, be realistic and avoid over-claiming. It can be tempting to assume that the impact generated from a particular piece of research derives entirely from that research. Research, however, clearly does not happen in a vacuum: many other factors might influence policy, individuals, practice and so on. Thinking about the contribution the research has made can be more helpful, as it recognises that research plays an important role without suggesting it is the only thing that makes a difference.
These guidelines are useful for every university. Yet even with full agreement on the need for long-term assessment of impact, how can that assessment be resourced? Who will dedicate financing today for research outcomes that may be understood only in the future? The question might seem daunting, but it need not be.
There are processes that can be put in place: the research process is managed from inception to delivery with uptake and evaluation in mind; external stakeholders are identified and involved at the crucial points in the process; institution-wide systems embed research information for assessment; and, crucially, the link between knowledge and development is reinforced through internal policy and procedure. With these management processes in place, the ground will have been laid to track research intended to effect social change, and to demonstrate the links between what the university produces and how the public can benefit.
Getting these systems in place is challenging, and also crucial. Research assessment using clear performance indicators becomes important as we look to understand more fully the outcomes of university research. The usual approach is to record the number of copyrights and patents secured and spin-off businesses established, and to tabulate articles published in peer-reviewed journals. There is agreement, though, that this type of research assessment does not capture the true impact of research. We can estimate, for example, how widely an article has been read and how much influence it has had by counting its citations, but citations indicate only a peer readership and do not necessarily reflect wider social, developmental or policy-level change. Nor is securing a patent in itself an indicator that university research has survived the technology transfer leap into the world of enterprise or innovation. We need to measure, but the measurement tools are not yet comprehensive. The NCCPE guidelines give an indication of what might be worth measuring, and the DRUSSA programme, especially its case work element, will go some way toward developing and testing new indicators for development research impact.
Two modules taught by CREST in the DRUSSA postgraduate programme, Research Evaluation and Impact Assessment, will contribute to individuals' skill sets in assessing impact. They were presented in February this year as part of the MPhil and PhD courses, and will also be presented as short courses in Johannesburg, South Africa, at the beginning of October. Both courses are fully subscribed.
Liam Roberts (@LiamTyping) is a Programme Officer (Policy) at the Association of Commonwealth Universities