Humbling the numbers

By Dr Tanya Filer, founder and CEO of StateUp, a team of experts dedicated to bridging the gap between policy, technology, and climate for public sector and other leaders worldwide.

--

One hand isn’t enough to count how many times I’ve been asked the question: ‘Do you know someone at the UN?’

The reason isn’t world peace. It’s rankings.

Global indices loom large in the collective imagination of digital government professionals, especially in development contexts. Ranking-anxiety is real. It shouldn’t draw surprise or criticism: scoring highly on digital government rankings, including the UN E-Government Development Index (EGDI) and the World Bank’s Digital Adoption Index (DAI), helps to tell stories that matter.

A high score can help secure domestic budgets in times of scarcity—crucial for an agenda still sometimes perceived as scrappy enough to scrap at first chance. A top or improved score offers international status. Projecting an image of global consequence, it can expedite development aid and accession to global membership organisations.

These numbers are powerful. Policymakers are right to care.

The problem is that rankings are both imperfect measures of success and often acted upon as objective truths. To get the best of them, we must reform how we engage with them.

Why we care

Dedication to global rankings is not unique to digitalisation. From The Economist’s Liveable Cities Index to the World Happiness Report, we get a kick out of comparing our collective lives.

But several factors may push professionals working at the intersection of digital government and international development to care a lot about global rankings:

1. A global mindset. Many digital government teams have an international outlook. From the D9 (international) to RED GEALC (Latin America), lesson-sharing, collaboration – and competition – between national digital government teams are standard practice.

2. Digital remains relatively new on government agendas. Newness brings uncertainty. It’s a tired observation, but policymakers and funders often dislike uncertainty, and politicians like success stories with pluckable numbers to cite.

3. Foreign aid is increasingly earmarked for digital agendas. When making complex funding decisions, rankings appear to offer what political scientists call a ‘short-cut’: an authoritative view of what works, and who does it well. A rising score reassures international development donors that a country is committed. Top-scoring countries are taken as models of ‘best practice’, with donors funding replications of their methods elsewhere.

4. Recent history. Digital government emerged alongside the growth of global rankings (at least 68 frequently cited indices have emerged since 2000). Many were designed to track progress in the post-Soviet world. Viewed as tools of transparency and accountability—a key promise of digitalisation—they grew up sharing a common language of state reform.

What we do with rankings

Given the attention afforded to digitalisation rankings, their methodologies should be compelling. Yet measures of digitalisation are, as Aaron Maniam describes, ‘often patchy and poor’. They prioritise ‘presence’ over ‘quality’, awarding points for having a service rather than for making it a good one.

The DAI, for example, measures ‘the presence of a digital identity (ID) system’. The EGDI measures the availability of ‘online services’, not their quality. Between 2012 and 2018, India jumped 29 places on the EGDI based almost exclusively on gains in that category. During this period India rolled out Aadhaar, the nationwide biometric ID programme. Aadhaar has improved service provision. It has also raised concerns about exclusion and cyber security. The ranking does not obviously reflect them.
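To make the presence-versus-quality point concrete, here is a toy sketch in Python. The service names, quality weights, and the quality-weighted alternative are all invented for illustration; this is not the actual EGDI or DAI methodology, only a minimal model of how a presence-based score rewards rollout over quality:

```python
# Toy illustration (invented, not the real EGDI/DAI formulas):
# a presence-based index awards a point for each service that exists,
# so a country can climb by shipping services regardless of quality.

SERVICES = ["tax filing", "digital ID", "business registration"]

def presence_score(country_services):
    """One point per service that merely exists (presence-based scoring)."""
    return sum(1 for s in SERVICES if s in country_services)

def quality_weighted_score(country_services):
    """Hypothetical alternative: weight each service by a 0-1 quality rating."""
    return sum(country_services.get(s, 0.0) for s in SERVICES)

# A country with three poor services (quality 0.25 each)...
rollout_heavy = {"tax filing": 0.25, "digital ID": 0.25, "business registration": 0.25}
# ...versus one with a single excellent service.
quality_focused = {"tax filing": 0.9}

print(presence_score(rollout_heavy), presence_score(quality_focused))                  # 3 1
print(quality_weighted_score(rollout_heavy), quality_weighted_score(quality_focused))  # 0.75 0.9
```

On the presence basis, the rollout-heavy country outranks the quality-focused one; weight by quality and the order reverses. That gap is the space in which a country can rise 29 places without the underlying concerns registering.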

The problem is not only how we judge what digital ‘success’ looks like; it is also a question of who and what gets copied. Indexing organisations promote their work as sources of ‘best practice’, and funders and policymakers often engage with them on those terms. Digitalisation rankings, like their economic equivalents, ‘not only act as judgments’ but also shape modes of governance.

Engaging indices as a source of replication may encourage de-contextualisation. We’ve seen how this goes before. Take size alone. Many high scorers on the UN E-Government Survey 2018 have fewer than 6 million citizens, under half the population of São Paulo. Scaling digitalisation projects is complex. Where you learn from matters.

Indicators vs reform?

On a recent trip to Central Europe, one policymaker described ‘ranking highly and being good at the thing you’re being ranked for’ as ‘sometimes two different things; they serve different purposes.’ They are sometimes also mutually exclusive.

Political scientists identify something still more concerning: indicators, of all sorts, becoming ‘substitutes for the phenomena that they are measuring.’ The indicator, not what it measures, becomes the focus of social action. Like this newspaper snippet:

‘India is intent on improving its Ease of Doing Business rankings where it has been languishing between 130 and 139. India’s desire to break into the top 50 rank-holders is understandable.’

There is no mention of whether India would actually like to improve its ease of doing business.

Humbling the numbers

All this doesn’t mean that we should discard digital rankings or disclaim all their findings. But we should commit to broadening our understanding of what digital success in international development looks like, how it is documented, and the sources that we take as authoritative enough to adjudicate on it. Let’s humble the numbers.

In my view, number-humbling involves four things:

1. Talking about the social life of rankings. Rankings are human constructions, open to contestation. What social processes inform how they are generated? How do those processes affect their outcomes?

2. Encouraging methodological reform. Emerging discussion of new methodologies for global digital rankings should prioritise asking ‘What methodological updates might best serve the needs of development contexts?’

3. Reforming what we do with rankings. In time-pressed contexts, rankings can be taken as simple truths. Yet they are only one kind of evidence in a much broader evaluative puzzle. We need a richer set of contextualised, qualitative case studies, and real policy and donor engagement with them, to help that effort.

4. Measuring and evaluating more, not less, at the local level. Particularly where there is an element of self-reportage, rankings can have a limiting effect on what is measured locally. But there are plenty of gaps in government data in every country, and filling them could make a real difference.

We are too early in the process of digitalisation for international development to allow replication to get in the way of inspiration. In humbling the numbers, we might make better use of them.

