
Making the Evidence Agenda in Development More Plausible

January 23rd, 2014

By Mareike Schomerus

“What is the evidence?”

This must be the most common question in development programming and policy these days. Donors, themselves under pressure to show measurable results and the evidence for those, are pressing practitioners to present evidence that their programming approaches are working. At the same time, while the question about evidence is commonplace, there is no agreement on what such evidence actually looks like. This is why the gaze of policymakers and practitioners is increasingly turning toward researchers to provide the answers.

Yet this simple question is often where the connection between research, policy, and practice snaps, because researchers are most likely to answer “it depends.” When the London School of Economics and Political Science’s Justice and Security Research Programme (JSRP) conducted a substantial number of systematic evidence reviews, we sought a clearer answer. [Read about The Asia Foundation’s research collaboration with JSRP from the Foundation’s Matthew Arnold.] We threw ourselves into a process, bewildering at least for social scientists, of searching for evidence on conflict resolution, natural resources, security, justice, media, and gender policies: database searches governed by strict keyword rules, inclusion criteria, and evidence grading. The process was ultimately very enlightening. We produced a range of systematic evidence reviews, but we also learned some broader lessons about what is needed for researchers to become more intelligent providers of practice-oriented research and for practitioners to become more intelligent consumers of it.

Through our research, we found that what is needed is a hands-on, practical framing of evidence. My colleague and fellow LSE researcher Anouk Rigterink and I call it “The Three P’s.”

First P: Proof

“What is the evidence?” is a curt question demanding a curt answer. It is popular among policymakers seeking policies that will guarantee success, as if research were the kind of evidence presented in a court of law. Yet in this policy court the “crime,” the policy itself, has not yet been committed, and researchers are asked to prove that it will work. Proof comes from studying a different incarnation of the policy, at a different time, often in a different country. Would a prosecutor ever present evidence from a past case elsewhere? That is not how you prove anything beyond reasonable doubt. Worse, presenting proof stops further learning.

The current quest in development for proof might be getting a bit soft around the edges, particularly as the debate heats up about working in context-specific, politically informed ways, which by definition require a flexibility and responsiveness that no “evidence” can underpin. Yet “proof” remains the preferred foundation for a policy.

For researchers, the tireless quest for proof is often frustrating. It does not square with the very definition of empirical research: gaining knowledge by means of direct or indirect observation and experience. It asks for information to be transplanted from one context to another, which contradicts everything we know about working in specific contexts.

Second P: Principle

I am suspicious of the evidence debate when evidence provides a fig leaf for policies that are actually based on principle or ideology. This is not a call to ban principle or ideology from policy entirely. Research, if well done, may suggest that a particular policy was successful in achieving its goals, but it does not tell us whether those goals were worthwhile in the first place. This is where ideology comes in. Some things we may find worth doing for their own sake; we may give humanitarian aid, for instance, because we do not want people in another country to die of starvation, drink dirty water, or sleep outside in the cold. To ask for evidence connecting this to some broader success is a fig leaf: the principle behind the policy remains untouched by the research findings, but evidence is sought to justify the policy in the eyes of those who do not share the principle. The debate should be about what common ideologies policy can build on; instead, it gets diverted onto an unproductive technical track.

The best meeting place for probing, policy, and practice is the land of the third P.

Third P: Plausibility

From our work, we concluded that the evidence debate among those in research, policy, and practice could benefit from some language classes on the principle of plausibility. Plausibility means looking at studies of comparable policies and asking whether the connections they present are convincing. A case can then be argued for why what worked in one context could plausibly work in another. No guarantees, no one-stop evidence shopping, no generalizations. Plausibility does not claim proof, and it does not echo principle. It also means more work on the horizon: we have to keep looking for information to see whether what seemed plausible at first really holds up.

If probers, policymakers, and practitioners spoke plausibility with each other, research could more credibly feed into program planning, inform plausible theories of change, and allow for context-specific programming and insightful evaluation. Any how-to guide on using evidence for policy and practice would be more convincing if it stressed the importance of plausibility.

Even after conducting numerous systematic reviews, we were left with the realization that when seeking particular kinds of information on tightly structured research questions, the search for “proof-based” evidence mostly draws a blank. What the reviews did show us, however, is that the vast amount of information out there is a good starting point for speaking about plausibility.

Debating evidence in this way requires constant questioning, redefining success and failure, and adjusting theories of change based on newly understood plausibilities, which could possibly change perspectives entirely. It requires talking to each other. This may be frustrating, it may be repetitive, it may not be streamlined, and it might not even always be that efficient. But it mixes social science with unpredictable and messy humans (and that includes researchers, policymakers, and practitioners, as well as the end-users of development programs) in the best possible way.

This article was originally published on The Asia Foundation’s blog, In Asia.

____________________

The views and opinions expressed here are those of the individual author and not those of The Asia Foundation, the Justice and Security Research Programme, or the London School of Economics and Political Science.

