The Economic Impacts of COVID-19 Will Force Us To Do More, Better, Faster (Again!) Luckily, We’ve Become Much Better at Performance Monitoring
COVID-19 and its economic impacts will challenge international cooperation, development, and humanitarian action to demonstrate value with unprecedented precision. The World Bank predicts that the global economy will shrink by 5.2% in 2020, the deepest recession since World War II, with only a partial recovery expected in 2021.[1] Not since the 2007/2008 financial crisis have the implications been so stark. Governments will tighten budgets while demanding that we do more, better, faster, and more cheaply.
Luckily, we’ve learned a lot since 2008. We have been developing performance monitoring systems that can get into the nitty-gritty of results, even in the most complicated operating environments. These systems are built on the convergence of cheap mobile telephony, the proliferation of online data aggregators and dashboards, and the grudging acceptance that, actually, concepts of competitive advantage and core competencies, of economies of scale and cost ratios, even of return on investment and value, are perfectly and helpfully applicable to our work.
We have moved steadily toward recognizing that we work in the most complex operating contexts in the world, and that these demand exceptionally fine-tuned, real-time analytics to address the problems that inevitably arise, and to do so quickly and effectively. We’ve matured, and a more sober assessment of performance is a big part of what we’ve learned.
Figure 1: Screenshot of the UK performance monitoring platform showing field monitoring results for health and nutrition facilities across Somalia, distinguishing those with adequate and inadequate water and sanitation facilities.
The most advanced performance monitoring system in the world is in Somalia. The UK Government has created a system (www.mesh-somalia.net) with all the technological bells and whistles and, more importantly, a turbocharged way to increase performance. The UK made this investment out of necessity. During the 2011/2012 Somalia famine, an atrocious failure of food systems and of the international community, the UK Government lost approximately £2 million outright, quite literally ‘poof-gone’ along with the ‘briefcase’ NGOs that were funded. Her Majesty’s Government was never going to allow that to happen again.
Figure 2: Screenshot of the UK performance monitoring platform showing cumulative results from partner data.
So, the UK invested heavily in a leading-edge system: MESH. In the last four years, MESH has conducted over 150,000 surveys of direct beneficiaries through a dedicated call centre, and over 12,000 field site visits to assess child protection, agricultural livelihoods, resilience, and urban integration. It has delivered over 40 briefs and evaluations, providing real-time insights into the most pressing performance issues. MESH collects and verifies all UK-supported partner micro-level data and leads quarterly performance reviews. All of this is organized and displayed in a consolidated dashboard (screenshots are included in this brief) that allows for real-time performance assessment.
All of this allows people to pinpoint problems early and to adjust, adapt, improve, and reach better results overall. It is a key driver for doing more, better, and faster.
How did they do this? What are the basic systems and lessons that could be applied as more and more people scamper around to build up performance management systems?
Figure 3: Process for developing a performance monitoring system
Analyse the causal pathways associated with results frameworks and theories of change with cold, sober thinking. Too often, performance professionals take the results framework and theory of change designed at the inception of a programme as the be-all and end-all of performance. They fail to analyse the precise actions/dependencies/constraints/opportunities/risks associated with activities, let alone the assumptions and gaps (known unknowns) related to how a programme expects to convert inputs into outputs and how those may contribute to outcomes and expected impact. Effective monitoring pinpoints the “crunch points” where the whole theory can come tumbling down due to poor delivery, and then beefs up monitoring in those areas.
For instance, in health and nutrition programming, we focus on supplies and staff. That’s it. Because we know that, however broad the programme might be, however much it tries to affect the supply and demand of critical health services, if the right medical supplies are not there at the right time or if a facility is understaffed, then nothing is going to work.
Getting to this level of precision about what can impede performance requires a lot of clear-eyed thinking about results frameworks and theories of change. People often get it wrong. For instance, in response to the Syrian crisis, billions of dollars were invested to get children into school and avoid a Lost Generation. Across Lebanon, Turkey, Jordan, and Iraq, massive enrolment efforts followed. Unfortunately, the metric used to measure performance was attendance on the first day of the school year. It ignored the tremendous social and economic pressures that prevented families from keeping their kids in school. Most kids left school for good after a few weeks. Tragic. It is also a failure to understand the results framework and which factors actually drive performance.
Develop an analytical framework. Once the results framework and theory of change have been thoroughly interrogated, performance monitoring needs a framework for how each critical performance element will be monitored. This should include cohorts, data sources, data collection tools, analytics, and anything else essential to ensuring these programme components perform well. This might include statistically valid samples, a mix of qualitative and quantitative data, the frequency and timing of optimal performance-related data collection, etc. It is the “road map” for how every monitoring activity will be conducted while ensuring that there are valid links to the assumptions and issues identified during the causal pathway analysis of results frameworks and theories of change. The more time put into this, the better the monitoring results will be.
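To make one element of such a framework concrete, here is a minimal sketch, in Python with purely illustrative numbers (not MESH’s actual parameters), of how a statistically valid sample size for a beneficiary survey might be estimated using Cochran’s formula with a finite population correction:

```python
import math

def sample_size(population: int, margin_of_error: float = 0.05,
                z: float = 1.96, p: float = 0.5) -> int:
    """Estimate a survey sample size using Cochran's formula.

    z = 1.96 corresponds to 95% confidence; p = 0.5 is the most
    conservative assumption about the share reporting a problem.
    """
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)  # infinite-population estimate
    return math.ceil(n0 / (1 + (n0 - 1) / population))    # finite population correction

# Illustrative cohort: 1,200 beneficiaries reachable through a call centre.
print(sample_size(1200))  # 292 interviews for a ±5% margin at 95% confidence
```

Decisions about frequency and timing (monthly call rounds versus quarterly site visits, say) would sit alongside calculations like this in the analytical framework.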
Avoid “garbage-in-garbage-out” data. Monitoring surveys must focus on the precise issues identified in the analytical framework: the problems with a tangible and vital relationship to results. Too often, surveys get swamped with everyone’s precious indicators and balloon into monsters, where easy performance analysis becomes impossible because of the resulting penchant to report on everything. Keep the surveys focused, and the ensuing data and analysis will be as well.
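One simple way to enforce that discipline, sketched below with hypothetical question and crunch-point names, is to reject any survey question that cannot be traced back to a performance issue identified in the analytical framework:

```python
# Hypothetical analytical framework: every survey question must map to a
# "crunch point" identified during the causal pathway analysis.
CRUNCH_POINTS = {"supplies_in_stock", "staff_on_duty", "water_sanitation"}

survey_questions = {
    "q1_essential_drugs_available": "supplies_in_stock",
    "q2_nurses_present_today": "staff_on_duty",
    "q3_favourite_training_topic": None,  # a pet indicator with no link to results
}

# Any question without a link to a crunch point is an orphan and gets rejected.
orphans = [q for q, link in survey_questions.items() if link not in CRUNCH_POINTS]
if orphans:
    raise ValueError(f"Drop or justify these questions: {orphans}")
```

The point is not the code but the rule it encodes: if a question cannot name the crunch point it monitors, it does not belong on the form.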
Display the data and analysis in concise, performance-oriented graphics and reports. There is a lot of focus on online dashboards, from Tableau to Palantir, from Premise to Ona. Of course, these online dashboards are cool: we can access them from our phones! We can put pretty pictures of them in reports, as we have done above!
But this misses the point. Most people running programmes and projects, especially in complex operating environments, don’t have time to sift through massive datasets and indicators, no matter how impressively they are arrayed in online dashboards. They need to know what is and isn’t working, and how to address problems quickly. That’s why, as described above, it is so important to have a good analytical framework that leads to good forms, which in turn lead to concise visualizations showing the good, the bad, and the ugly. The graphic of health centre facilities above is a good example: any casual observer can see which facilities have problems and which don’t.
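As an illustration of how little it takes to achieve that at-a-glance clarity, here is a minimal sketch in Python with invented numbers (not actual MESH data) that renders facility monitoring results as a simple pass/fail bar chart:

```python
import matplotlib.pyplot as plt

# Hypothetical field-visit results: share of monitored facilities per region
# with adequate water and sanitation (cf. the health centre graphic above).
regions = ["Banadir", "Bay", "Gedo", "Hiraan"]
adequate = [85, 60, 35, 72]  # percentages

# Green where the region clears the threshold, red where it doesn't.
colours = ["green" if a >= 70 else "red" for a in adequate]
plt.bar(regions, adequate, color=colours)
plt.axhline(70, linestyle="--", color="grey", label="70% performance threshold")
plt.ylabel("% of facilities with adequate water and sanitation")
plt.title("Field monitoring: water and sanitation adequacy")
plt.legend()
plt.savefig("wash_adequacy.png")
```

A manager glancing at the resulting chart knows immediately where to send the next follow-up mission.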
Follow up with actors to understand how they interpret the results and what actions should be taken. This step is essential, yet it is either ignored, with some falling back on hunches while admiring their cool analysis and graphs, or it is reduced to ‘learning,’ as if learning were itself the critical performance outcome.
All of this data is instead meant to be a powerful tool for enacting change. It is an alarm bell warning of a fire that, if not addressed, could burn the whole enterprise down. The fire alarm is a good analogy: the monitoring data is the alarm bell; you then need to assess the extent of the fire, determine how to put it out, and prevent future fires. Concise monitoring data visualizations provide the focus to do this.
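In system terms, the alarm bell can be as simple as a threshold check over the latest monitoring scores. The sketch below, in Python with hypothetical facility names and a made-up threshold, flags underperformers for follow-up rather than burying them in a report:

```python
# Minimal "alarm bell" sketch: flag any facility whose latest monitoring
# score falls below a threshold so it triggers follow-up, not just reporting.
THRESHOLD = 0.70

latest_scores = {  # hypothetical facility -> share of monitoring checks passed
    "Facility A": 0.92,
    "Facility B": 0.55,
    "Facility C": 0.48,
}

# Worst performers first, so follow-up starts where the fire is biggest.
for facility, score in sorted(latest_scores.items(), key=lambda kv: kv[1]):
    if score < THRESHOLD:
        print(f"ALERT: {facility} passed {score:.0%} of checks; assign follow-up visit")
```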
Use all of this to identify issues that require more in-depth analysis, and use that analysis to improve programming and operations, not to produce academic tomes. Of course, learning is important. Yet it needs to be grounded in the issues that impact performance. Too often, we draft academic tomes, somehow trying to fit into academia, rather than positioning in-depth analyses as additional performance tools: ways to dig into thorny issues in more depth while arriving at practical actions to remedy them. Such analyses should typically be linked to workshops or other forums where people can discuss the implications, what may still need to be analysed, and, as with basic monitoring follow-up, what actions should be taken to improve performance. All of this is, of course, about learning: learning how to do better work.
[1] “COVID-19 to Plunge Global Economy into Worst Recession since World War II.” The World Bank, 8 June 2020.