Evidence-Based Research has Impact
Friday, 28 August 2015 13:12
Measuring the impact of Research Uptake interventions is a tricky business, particularly given the lead-time between activities and their potential impact. It is often difficult to disentangle the impact of a particular event or project from the other factors that may have played a role. Establishing evidence of impact is important, however, not only to justify specific programme outcomes but also to inform your overall theory of change.
Within the broader context of Research Uptake, there is particular interest in encouraging governments to use research evidence when developing or revising policy. Although it is acknowledged that this evidence is only one of many considerations in the policy process, there is some consensus that policy informed by evidence is stronger and more effective.
Researchers and research organisations (universities, think tanks, local and international aid organisations, and science councils) have therefore re-purposed academic research findings through research uptake activities such as knowledge translation, communication and dissemination, knowledge transfer or exchange, meetings, and the use of media outlets as intermediaries, in order to connect with policy-makers and influence policy.
Evaluating the effectiveness of change interventions is tricky for a variety of reasons: it is difficult to establish cause and effect when, for example, policy development is influenced by so many factors. Defining what you mean by impact, and how you plan to measure it, can also steer you towards indicators that carry numerical values rather than broader, less quantifiable forms of impact.
Nevertheless, organisations do need to try to evaluate their research uptake activities in order to monitor their success and improve future efforts. GDNet and CIPPEC have produced a series of helpful toolkits addressing the various steps of the monitoring and evaluation (M&E) process for policy influence.
Each toolkit profiles different aspects:
Toolkit Nº 2. Where are we and where do we want to go?
Toolkit Nº 3. Establishing the basis for the M&E strategy
Toolkit Nº 4. Defining how to measure short, medium and long term results
Toolkit Nº 5. Data collection methods
Toolkit Nº 6. Using knowledge to improve policy influence.
They also provide links to a range of additional online sources, including reports, journal articles and videos.
These toolkits provide a useful base from which to work. Understanding why you are evaluating policy influence helps to define how you will measure impact and which indicators you need to use. The toolkits are particularly useful in that they provide an overview of the M&E process as a whole, including how the evaluation feeds back into future activities (Toolkit 6).
M&E should be seen as an on-going process that informs not just future activities but also future strategic direction. The importance of this can be seen in the experience in Zimbabwe of the STEPS Centre, which won second prize in the Economic and Social Research Council (ESRC) Celebrating Impact Prize for Outstanding International Impact.
In it for the long haul
Impact assessment goes beyond evaluating specific uptake activities or the uptake of any single piece of research. It requires a long-term strategy and on-going analysis, such as that undertaken by the ESRC-funded Social, Technological and Environmental Pathways to Sustainability (STEPS) Centre, based at the Institute of Development Studies (IDS), University of Sussex.
The ESRC has been funding research on land and agrarian issues in Zimbabwe since the early 2000s. Using an array of outputs including a weekly blog, videos and booklets, as well as academic articles, STEPS has disseminated research findings on various aspects of land reform and contributed to debate and dialogue on these issues.
But, according to Ian Scoones, the Director of STEPS, “key to sustaining impact is engaging others in new research, and building the capacity to do this. Only when a wider body of research is developed that confirms, extends and sometimes challenges new findings, will debate shift”.
Building Local Research Capacity
For this reason, STEPS included support for local research capacity, through a small grants programme, in its multi-year research programme. Implemented with local partners including the University of Zimbabwe, this allowed the debate on land reform to be informed by a range of research findings, which contributed to a body of evidence showing that land reform in Zimbabwe has been neither a great success nor an unmitigated disaster. Research findings recorded both successes, especially among small-scale farmers, and failures, especially on the larger-scale farms.
M&E processes capable of eventually assessing impact can be a challenge to develop and implement, but the effort is worthwhile. Their value lies not only in providing the information needed to evaluate the outcomes of activities and projects but also in guiding and influencing the policy environment.
Alison Bullen, DRUSSA, Website Content Manager email@example.com