Why Evidence Fails to Reach Policy (And What We Keep Getting Wrong About It)

Published on 9 April 2026 at 10:09

Ask a researcher why their evidence didn't reach policy and they'll usually tell you it was a communication problem. Wrong framing, wrong format, wrong messenger. Ask a policymaker the same question and they'll tell you the evidence arrived too late, or wasn't relevant to the decision they were actually facing.


Both are telling the truth. Neither is identifying the root cause.


Twenty years working at the research–policy interface has left me with a fairly clear picture of why evidence fails to land — and it has less to do with how it's packaged than with where institutions position themselves in the political cycle. There are four failure modes. They're not equal, and they're not independent. And the one that's hardest to fix is the one most people aren't willing to name.

The Framing Trap

There's an entire industry built around the idea that research fails to influence policy because it's communicated badly. The diagnosis is always the same: wrong framing, wrong format, wrong messenger. Fix the executive summary. Add a policy brief. Train the scientists to speak human.


This is not wrong, but it's also not the point.


Framing matters when there is still a window open. When the decision remains genuinely live. Investing in communication quality after the window has closed doesn't get you influence — it gets you a well-designed document that no one acts on.


The organisations I have worked with that struggle most with policy influence share a common pattern: they produce high-quality evidence on the right topics, communicate it clearly, and then wonder why it doesn't move anything. The problem is rarely the product. It's the timing.

The evidence cycle, in which research produces evidence, evidence shapes policy, policy sets funding priorities, and funding directs the next round of research, is a useful model. The problem is that most institutions only enter it at one point: when they have findings to communicate. By then, the funding priorities are set, the policy positions are forming, and the decision window is already narrowing.

The Decision Was Already Made (But That's Not the Root Cause)

The most visible symptom of research–policy failure is this: evidence arrives after the informal consensus has already formed. It gets used to support a predetermined direction rather than to inform one. An evidence base gets assembled, after the fact, around a decision that was effectively made before any of it was reviewed.


This happens constantly, and it's real. But it's a symptom, not a cause.


Decisions get locked in early when researchers aren't in the room before the policy window opens — when relationships aren't established, when there's no sustained presence at the table between research cycles. The informal consensus forms among the people who are there. Evidence from institutions that only show up when they have something to publish rarely shifts it.

The Incentive Structure Nobody Talks About

There is a harder point underneath this, and it's one that gets less airtime because it implicates the research institutions themselves.


A significant share of policy-oriented research is not actually designed to inform decisions. It's designed to validate them. The funding flows toward questions with known-friendly answers. The commissioning brief narrows the scope. The consultants who deliver inconvenient findings don't get called back.


This isn't a conspiracy. It's institutional incentive logic. Funders want their priorities confirmed. Governments want legitimacy for choices they've already made. Research institutions want repeat contracts. The evidence that gets produced in this environment is technically rigorous, often published in peer-reviewed journals, and almost entirely decorative from a policy-influence standpoint.


Acknowledging this matters because it changes what "fixing" research uptake actually requires. If the demand side is structurally oriented toward confirmation rather than genuine inquiry, better communication tools don't solve the problem. What's needed is a fundamentally different relationship between the research and the decision — one built before the question is even framed, not after the findings are written up.

What Actually Works

Twenty years across organisations in global health, food systems, climate finance, and international development has given me a fairly consistent view of when evidence does land.


It lands when researchers have a sustained institutional presence in the spaces where policy is made — not just at the point of publication, but throughout the political cycle. It lands when the framing of the research question is shaped by where the policy debate is actually heading, not where it was three years ago when the project was designed. It lands when the evidence is delivered by someone with institutional standing in the room — a trusted interlocutor, not a visiting expert.


And it lands when the organisation behind it is willing to say uncomfortable things to funders, not just to peer reviewers.


None of this is particularly complicated. But it requires a different model of what a research or advocacy institution is for — one oriented toward decision points rather than publication cycles, and toward relationships rather than reports.


That's the shift that most institutions I work with are trying to make. Some are further along than others. The ones that get there stop asking "how do we communicate our findings better?" and start asking "where are the decisions, and are we already in the room?"