Ideas to make research more robust, ethical and action-oriented
Idea 1: Systematically prioritise stakeholders using stakeholder analysis
Systematically analyse stakeholders and publics using an interest-influence-impact matrix: this new generation of stakeholder analysis enables you to consider the interests, power and benefits or risks of engaging different stakeholders and publics in your research. It asks three questions:
1) Interest: who is interested in your research and what is the nature of their interest? Or who would you like to be interested (based on their influence and/or impact) but is currently uninterested, and why not? Consider their stated interests and preferences, but then go deeper to try to understand the transcendental values, beliefs and norms that underpin them;
2) Influence: who has the power, whether directly or indirectly, to facilitate or block the generation of impacts from your research? Consider explicit, hierarchical “power over” as well as implicit, personal and transpersonal “power with”;
3) Impact: who is likely to benefit most from engaging with your research, and whose interests might be compromised or harmed as a result of your work? Consider both immediate and longer-term benefits and negative impacts arising from engagement with your research (see the figure below).
If you invite two or three cross-cutting stakeholders to help with this analysis, it will be significantly more useful. You can see the full tool I use in the table below: either put it into Google Sheets for an online workshop, or use one piece of flip chart paper per column to do this in a face-to-face workshop. See 3i’s stakeholder analysis for the full workshop facilitation guide I developed with my colleague Helen Kendall. Use your findings to prioritise who to engage first and most intensively, both in developing your research and in your pathway to impact.
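If you would rather keep the matrix alongside other project data than in a spreadsheet, it is straightforward to represent in code. The sketch below is a minimal, hypothetical illustration in Python: the stakeholder names, the 1–5 scoring scale, the risk notes and the equal weighting are my own assumptions for illustration, not part of the 3i tool itself.

```python
# Minimal sketch of an interest-influence-impact (3i) matrix.
# Names, the 1-5 scale and the equal weighting are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Stakeholder:
    name: str
    interest: int    # 1 (uninterested) to 5 (highly interested)
    influence: int   # 1 (little power to help or block) to 5 (gatekeeper)
    impact: int      # 1 (little to gain or lose) to 5 (major beneficiary)
    risks: str = ""  # note who might be harmed or compromised, and how

    def priority(self) -> float:
        # Crude unweighted average; adjust the weighting to suit your context.
        return (self.interest + self.influence + self.impact) / 3

stakeholders = [
    Stakeholder("Regional policy team", interest=4, influence=5, impact=3,
                risks="May be criticised if advice proves premature"),
    Stakeholder("Farmer cooperative", interest=2, influence=3, impact=5,
                risks="Could lose out if recommendations favour large estates"),
    Stakeholder("National NGO", interest=5, influence=2, impact=2),
]

# Engage the highest-priority stakeholders first and most intensively.
for s in sorted(stakeholders, key=lambda s: s.priority(), reverse=True):
    print(f"{s.name}: priority {s.priority():.1f} | risks: {s.risks or 'none noted'}")
```

The combined score is only a starting point for discussion in the workshop; the conversations about interests, power and potential harms matter more than the numbers.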
Idea 2: Pro-actively manage risks arising from impact
Use your stakeholder analysis to identify risks that might arise:
1) as a result of achieving your impacts (for example, for non-target groups whose interests might be compromised as a direct or indirect consequence of helping a key beneficiary);
2) if you fail to achieve your impacts (for example, raising false expectations, eroding trust or creating more significant problems if other actions were dependent on you achieving your goal); or
3) as a result of the methods you use to achieve them (for example, excluding or alienating important groups because you were unaware of cultural norms).
Work out how you could mitigate those risks or make contingency plans and build them into your project or strategic plan. For example, do a stakeholder analysis to identify potential winners and losers from the outset and engage with them early to identify ways of mitigating risks and finding win-wins where possible. Through this dialogue, you can also identify and tailor appropriate ways of engaging with different groups, navigating conflicts and sensitivities where you identify these.
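One way to build those mitigation and contingency steps into a project plan is to keep a lightweight risk register next to the stakeholder matrix. The Python sketch below is purely illustrative: the field names, the three categories (which follow the three cases above) and the example entries are my own assumptions, not a prescribed format.

```python
# Minimal sketch of an impact risk register; the categories mirror the three
# cases above. Field names and example entries are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ImpactRisk:
    description: str
    category: str        # "achieving impact", "failing to achieve impact" or "method of engagement"
    affected_group: str  # taken from the stakeholder analysis
    mitigation: str      # how the risk will be reduced
    contingency: str     # what happens if it materialises anyway

register = [
    ImpactRisk(
        description="Smallholders lose market share if the key beneficiary adopts our tool first",
        category="achieving impact",
        affected_group="Farmer cooperative",
        mitigation="Co-design the roll-out with the cooperative from the outset",
        contingency="Stagger the release and offer free training to affected groups",
    ),
    ImpactRisk(
        description="Policy partners plan around results we then fail to deliver",
        category="failing to achieve impact",
        affected_group="Regional policy team",
        mitigation="Communicate uncertainty and interim findings regularly",
        contingency="Agree a fallback evidence source early in the project",
    ),
]

for risk in register:
    print(f"[{risk.category}] {risk.affected_group}: {risk.description}")
```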
Idea 3: Practise open research
Open research platforms are growing rapidly in popularity in some disciplines, but are unheard of or opposed in others. Most funders have required data to be deposited in repositories for many years now. Beyond this, a growing number of researchers are pre-registering their research so that they cannot be accused of later changing their objectives or methods to match what they found, and pre-print servers are growing in popularity as researchers put their work online for the community to scrutinise or use in parallel with the peer-review process. While many of my colleagues are evangelical about this new way of working, there are risks for impact if applied research is made available and taken up in policy or practice, only for the peer-review process to reveal that it is fundamentally flawed. On the other hand, aside from the controversies over who funds it, there can be little argument against the need to make research open access if we want people outside academia to read our work.
Idea 4: Make evidence synthesis more attractive and accessible
Most researchers try to make the conclusions of their research as generalisable and widely applicable as possible. Even if you are a social scientist or humanities scholar working on a small case study, you will seek to derive new theoretical insights that will be of interest to others outside that case study context. The temptation is to over-generalise and make claims that cannot be supported by your data, but even if you have successfully resisted this temptation, you may well be encouraged by your funders to over-generalise by basing a policy brief on that single case study. Of course, there is no harm in illustrating a policy brief with a case study, but to make robust decisions in policy or practice, you ideally need to draw on the widest possible evidence base. One of the key tasks for academics is to interpret that broader evidence base and make it accessible. There is a range of evidence synthesis methods to choose from, and I integrated a series of Rapid Evidence Syntheses into the last policy brief I wrote. The problem, however, is that these take time, and there are few researchers with the skills to do the necessary analysis. In an attempt to solve this problem, I worked with international evidence synthesis expert Dr Gavin Stewart to create a synthesis training programme for early career researchers with the University of Leeds and N8 AgriFood. Gav ran two workshops for a total of 30 researchers to train them in synthesis techniques. I elicited policy needs and evidence gaps from the policy community, which we turned (where possible) into questions that could be answered using evidence synthesis. Gav and I then supported each group to produce a peer-reviewed synthesis and policy brief. Each researcher left the process with new skills and a paper, the cost per synthesis and policy brief was tiny, and because the policy briefs targeted questions that arose directly from the policy community, there is real potential for impact.