Our tools for identifying cause-and-effect in the world are matched by a particular view of how causality works. Ideas from complexity theory are forcing us to update our views on causality, so our tools must be updated as well. Before getting to the updates, I want to start with some of the tools used under a more traditional, linear, and simple view of causality.
Old tools 1: Root causes and growth diagnostics
There’s a process used by many consultants called root cause analysis. The basic structure works like so: Start with whatever problem you want to solve, then break it into constituent parts and causes. Go down another level, breaking those causes into their own causes. Keep going until you hit the “root cause” — and you’ve found the thing you need to address.
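The drill-down logic can be sketched in a few lines: represent each problem as a node pointing to its immediate causes, and recurse until you reach causes with no further breakdown. The problem and causes below are entirely hypothetical, purely to illustrate the structure.

```python
# Toy root-cause tree: each problem maps to its immediate causes.
# All names here are hypothetical, invented only to show the drill-down.
causes = {
    "low revenue": ["weak sales pipeline", "high customer churn"],
    "high customer churn": ["poor onboarding", "slow support response"],
    "slow support response": ["understaffed support team"],
}

def root_causes(problem, tree):
    """Walk down the tree; nodes with no listed causes are the 'root causes'."""
    children = tree.get(problem, [])
    if not children:
        return [problem]
    found = []
    for child in children:
        found.extend(root_causes(child, tree))
    return found

print(root_causes("low revenue", causes))
# ['weak sales pipeline', 'poor onboarding', 'understaffed support team']
```

The leaves of the tree are exactly the points the consultant would recommend addressing.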
Many years ago, I worked for a consulting firm that applied this analysis to particular organizations and their management’s concerns. At this relatively simple and self-contained level, it’s a useful tool. The inputs were both qualitative (e.g. interviews with executives) and quantitative (e.g. benchmarking data). The analytical process was largely inductive. The output was displayed visually as a tree diagram. It looked something like this:
Except that this particular diagram came from a slightly different sort of analysis.
A few years after that consulting work, I found myself in grad school learning about growth diagnostics in international development. This approach shares a fundamental insight with the root cause analysis that I had used before, which is this: there may be many shortcomings in an economic system, but growth faces certain binding constraints which should get priority for reforms. The concept of binding constraints isn’t revolutionary — think of bottlenecks in a production process — but creating a framework for applying it to entire economies turns it into a very powerful tool.
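The bottleneck analogy can be made concrete: in a linear production process, throughput equals the capacity of the slowest stage, so relaxing any constraint other than the binding one accomplishes nothing. The stages and numbers below are invented for illustration.

```python
# Hypothetical stage capacities (units/hour) in a linear production process.
capacities = {"milling": 120, "assembly": 45, "packaging": 90}

def binding_constraint(stages):
    """The binding constraint is the lowest-capacity stage;
    overall throughput equals that capacity."""
    stage = min(stages, key=stages.get)
    return stage, stages[stage]

stage, throughput = binding_constraint(capacities)
print(stage, throughput)                   # assembly 45

# Relaxing a non-binding constraint doesn't raise output...
capacities["packaging"] = 200
print(binding_constraint(capacities)[1])   # still 45

# ...while relaxing the binding one does.
capacities["assembly"] = 80
print(binding_constraint(capacities)[1])   # now 80
```

Growth diagnostics applies this same prioritization logic at the scale of a national economy.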
One strength of growth diagnostics is that it considers each case individually. The approach is often held in contrast to the one-size-fits-all policy prescriptions of the Washington Consensus. Like root cause analysis, growth diagnostics brings order to the problem in question and guides our thinking on possible solutions.
However, it is still largely a one-way, top-down exercise. Dani Rodrik described a successful diagnostics exercise as “moving downwards in the decision tree, rather than upwards or sideways.” Hausmann et al. made the same point in their growth diagnostics handbook.
It starts from the big problem, then moves down to its causes. Only then are solutions considered. Another paper Rodrik wrote on the topic encapsulated this thinking in the title: “Diagnostics before Prescription.”
This top-down thinking is a strength and a weakness. It’s a strength because it orders the analysis in a convenient and ultimately powerful way. However, there’s no reason to think that a binding constraint, once identified, will yield to any policy or programmatic efforts. I saw the same thing when applying root cause analysis to organizations.
At the scale of national economies, political interests are frequently responsible for the binding constraints. Hausmann et al. acknowledged this, but offered little guidance on how the diagnostics approach should deal with it. Diagnostics generally focuses on economic factors. You could extend the same type of analysis to political factors, but then individuals and interest groups would be the ones at fault. The analysis could no longer remain technocratic as the conclusions become contentious and (surprise) political.
For example: Suppose you traced one cause of low growth to high transportation costs, which are due to poor infrastructure in most areas. And further suppose that the poor infrastructure was due to political elites favoring other areas, or corrupt officials/contractors siphoning off money. Those problems won’t yield to mere policy fixes. And I guarantee that analysis won’t be gratefully received by national leaders who were simply waiting to have their eyes opened to the problem.
These are the weaknesses of growth diagnostics: lack of policy guidance and inability to grapple with politics.
Old tools 2: Intervention points and RCTs
Fortunately, the backers of growth diagnostics don’t claim that it’s the final word in analysis. The Hausmann et al. piece called diagnostics a “natural complement” to the more bottom-up approach of cost-benefit analysis on particular projects or policies. And in another piece, Rodrik called for pluralism, admonishing development economists for often believing in the “one right way” — whether a universal fix or a universal way of learning.
For example, he described how the “macro” of growth diagnostics relates to the “micro” of randomized controlled trials.
I like how Rodrik framed the relationship between the two methods. Diagnostics starts with the big problem and works downwards, seeking root causes and areas for possible solutions. On the other hand, RCTs start with interventions and work upwards, rigorously testing their impacts on bigger problems. If growth diagnostics seeks root causes, then the parallel concept for RCTs could be called intervention points.
When the dust settles, those two should be exactly the same: we’re looking for interventions at the root causes of poverty, poor health outcomes, hunger, and more. As a framework, intervention points strikes me as more intuitively useful than root causes — ultimately we’re interested in impact, bettering lives, doing things — but let’s keep in mind that these are basically one and the same.
The strength of RCTs is that they pin down these intervention points with a high degree of certainty. The method does this by controlling for all factors other than the intervention being tested. Those other factors are stripped away in the analysis, leaving us with a fairly clear idea of the causality at that intervention point. However, causality isn’t the same as explanation. While establishing an intervention’s causality is helpful in some regards, it doesn’t tell us much about whether we should replicate that intervention in another context. To answer that question, we need to understand how and why it worked. We need an explanation. Some RCT proponents claim that repeating the study in different contexts will bolster the external validity of the results, but there’s increasing recognition that RCTs must be matched with other methods.
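The underlying logic can be sketched with entirely invented numbers: because assignment to treatment is random, all other factors should average out between the groups, so a simple difference in means estimates the intervention's effect. Notice what the estimate does not contain: any explanation of why the intervention worked.

```python
# Toy RCT data: outcomes (e.g. test scores) for randomly assigned groups.
# The numbers are invented purely for illustration.
treatment = [68, 74, 71, 77, 70]
control   = [63, 66, 61, 69, 66]

def mean(xs):
    return sum(xs) / len(xs)

# With random assignment, confounding factors average out across groups,
# so the difference in means estimates the intervention's causal effect.
effect = mean(treatment) - mean(control)
print(round(effect, 1))  # 7.0
```

The number is causal evidence for this context; it says nothing about the mechanism, and hence little about portability to another context.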
Political factors pose a major challenge to RCTs as well. Even for relatively straightforward interventions, local politics have the potential to cause unintended consequences that would confound measurement of the results. For explicitly political interventions, the situation is even worse, as contextual factors become central to the program’s execution and impacts.
So even if we pin down one causal consequence at an intervention point, we still lack certainty about other consequences and what they mean for the intervention’s applicability to another context.
The challenge: Politics, complexity, and the rootlessness of causality
Both RCTs and growth diagnostics share a blind spot when it comes to politics. Maybe that shouldn’t be surprising, as these methods are promoted by economists. There’s also a much deeper shortcoming that these two approaches share. The complementary nature of diagnostics and RCTs lies in their symmetry (top-down vs. bottom-up) as well as their simplifying tendency: both strip away the complexities of reality in an effort to isolate certain factors.
This is highlighted in the very term root cause. We use this term all the time, but no one really believes that the causality behind something can be traced back to a single root. Not for a specific event, and certainly not for complex social phenomena. Not only are causes multiple, but feedback loops make them circular: poverty is caused by lack of education is caused by government failure is caused by low government capacity is caused by lack of tax base is caused by poverty. We went from poverty back to poverty in five steps. That chain could also include health outcomes, agricultural production, violent conflict, or countless other factors. (NB: The negative framing doesn’t matter to this. You could do a positive version too: increased earning potential is caused by better nutrition is caused by new seed varieties — and so on.)
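The five-step loop above can be written down as a directed graph of cause-to-effect edges, and a standard depth-first search recovers the cycle. This is just a sketch of the idea; the node names simply restate the example from the paragraph.

```python
# Cause -> effect edges from the example: poverty causes a weak tax base,
# which causes low government capacity, and so on back around to poverty.
effects = {
    "poverty": ["weak tax base"],
    "weak tax base": ["low government capacity"],
    "low government capacity": ["government failure"],
    "government failure": ["lack of education"],
    "lack of education": ["poverty"],
}

def find_cycle(graph, start):
    """Depth-first search returning one causal loop through `start`, if any."""
    stack = [(start, [start])]
    while stack:
        node, path = stack.pop()
        for nxt in graph.get(node, []):
            if nxt == start:
                return path + [start]          # closed the loop
            if nxt not in path:
                stack.append((nxt, path + [nxt]))
    return None

print(find_cycle(effects, "poverty"))
```

In a graph like this there is no node that qualifies as a "root": every cause is also a consequence.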
If you were to display this visually, it wouldn’t be a tree with roots. It would be a web. Actually, it would be a multidimensional mish-mash of overlapping feedback loops and tenuous but very real causal links between countless ill-defined nodes.
In fact, it might look something like this:
In 2010, this chart quickly became emblematic of the US military’s reliance on PowerPoint. The New York Times ran it under the headline, “We Have Met the Enemy and He Is PowerPoint.”
More importantly, for our purposes, it underscored the complexity of US engagement in Afghanistan. The full version of the slide deck is even more bewildering. General McChrystal saw it and noted: “When we understand that slide, we’ll have won the war.” We all laughed about this muddled depiction of the Afghanistan conflict. Although in our honest moments, we silently worried that even this diagram was a simplification.
Yet none of this stops academics, advocates and journalists alike from discussing social problems as if causality were linear and identifiable. Find your own examples: just google the phrase, “root cause of…” followed by your favorite social issue or topical news story (e.g. HIV epidemic, financial crisis, Syrian conflict, whatever). Our language gets especially confused when we refer to something as being “both a root cause and a consequence.”
As I said above, what we’re really looking for are intervention points rather than root causes. But while that framing is more future-oriented, it still assumes that an intervention will have known consequences. In a complex causal web, that’s not a valid assumption. In the real world, causality doesn’t work like that.
Of course, there’s utility in the simplifying approaches of growth diagnostics and RCTs. All methodologies simplify the world to make it understandable, just as all narratives emphasize certain elements of the story while excluding others. In some situations, that simplification offers us enough to act. The critical step is to be cognizant of the limits of our knowledge — to know what we do not know. Methodological pluralism is needed for that. It must extend beyond diagnostics, RCTs, and even economics. We need other tools as well.
New tools: Grappling with a complex causal web
The world is incredibly complex. I could tell you that it’s “more complex” or “changing more rapidly” than ever before, but I don’t think that’s true. Pundits and consultants wave their hands and say “increasing complexity!” because it sounds cool and because it frightens audiences and clients into coughing up the cash. I’ve seen no evidence that this idea is anything more than chronocentrism at work.
Yet the world is still incredibly complex. It’s just as complex as it’s always been. The difference now is that we have more tools to grapple with that web of complex causality. We are complexity-enabled in ways that we never were before.
The tools fall into five broad categories:
1. Availability of data: Digital interactions have dramatically increased the amount of data available for analysis. Major corporations that do a lot of business online have the most, due to customer purchases and behavior. Other companies and organizations will catch up eventually. A lot of claims have been made for how Big Data will revolutionize analysis. However, it doesn’t seem like there are accepted methodologies for analyzing Big Data yet, so availability alone might not be enough. Big Data might lead to research risks like cherry-picking, false precision, stripped caveats, or a technocratic veneer on deeply political results. Still, Big Data has potential.
2. Processing power: I won’t belabor this point, since growth in computing power is a well-known phenomenon. The complexity-enabling aspect is that researchers can process the Big Data, and also build more detailed and nuanced models of reality. There is also a human side to increased processing power due to communication systems. Just as better computer chips allow faster digital data processing, better communication systems allow faster human data processing. Learning ultimately results from human and organizational processes. Computers crunch numbers but only people can give them meaning. We do that best through discourse with other people, and communications technologies are making that easier.
3. Interdisciplinary approaches: Methodological pluralism within economics is gaining ground, and so are interdisciplinary approaches. Whereas previously we relied on the different disciplines to strip away complexity within their own narrow topics of interest, now we see that the walls between disciplines are very porous and that this allows us to grapple more directly with complexity. Physicists are helping to explain traffic jams, design thinking is tackling international development problems, and the entire Freakonomics franchise is built around the application of economic methods to other topics. Collaboration across fields is yielding new possibilities for understanding the complexity we face (though improvements in university structures and funding could accelerate this).
4. Analytical frameworks: Complexity science itself offers a powerful lens. Concepts like emergent properties, feedback loops, and non-linearity help us understand events like the Arab Spring or shifts in ecosystems. As the complexity lens is applied to more problems and situations, we may see new analytical frameworks that incorporate complexity concepts. We’re already seeing this in disaster preparedness/recovery with the idea of resilience, which may have its own frameworks as it matures. In development more generally, maybe we’ll see a replacement for the dominant logical framework, which is so ill-suited for describing complex programs.
5. Organizational and operational models: The last piece of this will be new ways of doing things — which is what this was all about in the first place. Some early efforts are underway. For example, Owen Barder and Ben Ramalingam describe cash-on-delivery aid as a complexity-aware approach; though I disagree with them on whether COD aid qualifies, the point here is that complexity thinking can change the way we address problems. Another example: promoting resilience may also involve new ways of organizing government or civic associations, or new funding mechanisms for recovery.
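To make the non-linearity concept from point 4 concrete: the textbook logistic map shows how two nearly identical starting points diverge completely, which is why small initial differences can swamp prediction in complex systems. The parameters below are the standard illustrative ones, not tied to any real-world system.

```python
# Logistic map x -> r*x*(1-x), a standard toy model of non-linear dynamics.
def trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.2)
b = trajectory(0.2000001)   # nearly identical starting point

# The gap between the two trajectories grows from ~1e-7 to order 1:
gap = [abs(x - y) for x, y in zip(a, b)]
print(max(gap))
```

Even this three-line model defeats long-run prediction; a real economy or conflict is incomparably messier.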
Some of these tools are in place, while others are still progressing. We’re leaving behind the days when a simple view of causality was the norm. These changes will cut across sectors. A new paradigm is emerging for international development in particular, with elements from the old one and influences from other fields being integrated. We’re developing the tools that will open our eyes and let us see the world as it is. Once we do that, there’s no telling what it will mean for our understanding and our impact.
Related posts on Find What Works:
- Complexity theory, adaptive leadership, and cash-on-delivery aid: one of these things is not like the others
- A few links on Big Data for Development
- Pritchett, feedback loops, and the accountability conundrum (guest post from USIP’s Andy Blum)
- Shifting the paradigm: Kuhn, Chambers and the future of international development
- Limitations of RCTs: Politics and context