The problem with programmes

Today, when the UK government wants to make a transformative difference to society, it sets up a programme.

For the last 30 years, the major programme approach has been the orthodoxy for delivering change at scale, whether that’s infrastructure, military hardware, public service reform, or technology. Almost anything with a large price tag and a significant ‘implementation’ component will be set up as a programme. These, in effect, are the government’s big bets.

Programmes have become the default mental model for the vast majority of publicly funded activity. Yet Whitehall’s record with major programmes is a chequered one. At the time of writing, the Infrastructure and Projects Authority (IPA) counted 244 programmes within the government’s portfolio, with an estimated whole-life cost of £805 billion, double the figure of ten years ago. The transparency of this portfolio is not perfect, but it is an example of working in the open that is much to the IPA’s credit.

Of those 244 programmes:

  • 26 (11%) are rated ‘green’, meaning that the IPA is confident of successful delivery.
  • 23 (9%) are rated ‘red’, meaning successful delivery appears to be unachievable. These have a combined cost of nearly £100 billion.

Before the Covid-19 pandemic, the IPA and its predecessor agency had never rated more than 10 programmes as ‘red’.

That leaves 195 programmes rated ‘amber’. That means 80% of the government’s biggest tasks are in a position where nobody knows for sure whether the programme will be a success or not—not the programme team, not the independent assessors, and certainly not ministers.

On one level, this is understandable. By their nature, major programmes are extremely complicated and difficult. The government might be trying to do something no other organisation has tried before. Some level of failure is inevitable. But on another level, what we see is profound uncertainty persisting for many years, backed by budgets in the hundreds of millions of pounds, with very little in the way of ‘good practice’ to draw upon. The ‘sunk cost’ fallacy also pervades: spend enough money on a programme, and it can take on a life of its own, regardless of whether it is now the right thing to do.

What happens when a typical ‘programme’ goes wrong?

The Green Deal was the flagship energy efficiency policy of the Coalition administration (2010–2015). In 2016, a National Audit Office report concluded that:

The design not only failed to deliver any meaningful benefit, it increased suppliers’ costs—and therefore energy bills…design and implementation did not persuade householders that energy efficiency measures are worth paying for.

The NAO’s analysis was clear that one of the fundamental missteps by DECC, the department responsible, was its failure to be “more realistic about consumers’ and suppliers’ motivations when designing schemes.” The NAO noted that the Green Deal “looked good on paper,” but fell short in reality.

DECC’s approach made a series of major assumptions at the beginning of the policy development process. These were not tested in reality until much later, when implementation of the scheme was launched in 2013—three years after policy work began. The policy assumptions included a focus on ‘hard-to-treat’ homes, a factor the NAO pointed to as the main reason why the Green Deal ultimately saved substantially less CO2 than previous comparable schemes.

The Green Deal was also weakened by focusing on measurable but relatively general outputs (“providing energy saving measures in millions of homes,” which the scheme achieved) rather than outcomes (“substantial reductions in CO2 emissions,” which it didn’t).

The department also neglected to test the Green Deal finance design directly with consumers, relying instead on a survey, which itself “did not provide a strong case.” This failure to test policy and economic assumptions against reality came despite “many stakeholders warning the Department it would be difficult to persuade people to pay for the measures themselves.”

The Green Deal programme’s error was not in being wrong, but in leaving itself few chances to fix its mistakes quickly and cheaply.

The failure of the Green Deal, which took five years and cost taxpayers £240 million, was a textbook example of linear programme management processes increasing the risk of failure. Its biggest mistake was predicating success in a complex, uncertain environment on a large number of untested assumptions about human behaviour.

Common themes in programme failures

Many of the common themes in the recent history of major programme failures relate to the separation of policy and delivery.

Borrowing the civil service’s own definition, policymaking is “the act of designing, developing and proposing appropriate courses of action to help meet key government priorities and ministerial objectives.” So: the art of translating political intent into reality. Too often, though, it becomes the art of translating political intent into something that works on paper.

Translating those paper plans into reality comes later, sometimes years later. Historically and culturally, “delivery” or “operational” work is seen as secondary in importance and status in Whitehall. Policy enjoys literal and psychological proximity to ministerial power. Delivery does not.

Another problem is the linear nature of the endeavour: politics first, policy second, delivery last. Often technology—in the form of defined requirements for an external supplier to build—will appear as another distinct stage of this linear process, inserted between policy and delivery, and rarely considered any earlier than that.

This step-by-step process, sometimes described as ‘waterfall’ in homage to the Gantt charts that govern it, is still typical of major government programmes. In practice, it means placing huge bets on the assumptions made at the policy stage. Yet this is almost always the moment where the least is known. A small policy or analytical error can snowball into a catastrophic mistake as implementation ramps up. And when faced with a hostile opposition and media environment, the cultural temptation to bury heads in the sand rather than face up to mistakes often serves to compound the damage further.

Where it’s possible to predict all the variables and gather all the necessary information upfront, waterfall methodology can work well. Building a motorway, or a submarine, for example. Yet waterfall-governed programmes tend to start with many assumptions that don’t get tested until much later, or until the very end.

Programmes that actually meet the ‘controlled environment’ criteria of waterfall project management are in fact few and far between. Complexity and unpredictability are far more typical. The truth is that in most cases, even in ideal conditions, predictions can only be so good. People don’t always behave as they do in economic models, sometimes not even close. Politics shift. Unexpected events happen.

Public servants and politicians can choose to put their heads in the sand and pretend that none of this is true. 

Or choose an approach that reduces risk in the face of uncertainty.