Background first: RCTs have been promoted as an important means of improving the effectiveness of development aid projects. But there are also concerns that RCTs will become a dominating orthodoxy, driving out the use of other approaches to impact assessment, and in the worst case, discouraging investment in development projects which are not evaluable through the use of RCTs.
In my PhD thesis many years ago I looked at organisational learning through the lens of evolutionary epistemology. That school of thought sees evolution (through the repeated cycle of variation, selection and retention) as a kind of learning process, and human learning as a subset of that process. As I explain below, that view of learning has some relevance to the current debate on how to improve aid effectiveness. It is also worth acknowledging the results of that process: evolution has been very effective in developing some extremely complex and sophisticated lifeforms, against which intentionally designed aid projects pale in comparison.
The point to be made: a common misconception is that evolution is about the “survival of the fittest”. In fact this phrase, coined by Herbert Spencer, is significantly misleading. Biological evolution is NOT about the survival of the fittest, but about the non-survival of the least fit. This process leaves room for some diversity amongst those that survive, and it is this diversity that enables further evolution. The lesson here is that evolution is not about picking winners according to some global standard of fitness, but about culling failures based on their lack of fitness to local circumstances.
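The culling process described above can be sketched in a toy simulation (my own illustrative example, not from any evolutionary biology source): rather than keeping only the single fittest individual, each generation removes just the bottom 10% and lets varied copies of random survivors fill the gap. Average fitness still rises, but a spread of survivors remains.

```python
import random

def evolve(population, generations=20, cull_fraction=0.1):
    """Repeatedly cull the least fit, then refill with varied survivors."""
    for _ in range(generations):
        population.sort()
        cutoff = max(1, int(len(population) * cull_fraction))
        survivors = population[cutoff:]  # only the bottom 10% are removed
        # Refill the culled slots with slightly varied copies of random survivors.
        offspring = [max(0.0, random.choice(survivors) + random.gauss(0, 0.05))
                     for _ in range(cutoff)]
        population = survivors + offspring
    return population

random.seed(1)
pop = [random.random() for _ in range(100)]  # initial fitness scores
result = evolve(list(pop))
print("mean fitness before:", round(sum(pop) / len(pop), 2))
print("mean fitness after: ", round(sum(result) / len(result), 2))
print("spread of survivors:", round(max(result) - min(result), 2))
```

The point of the sketch is that no global "winner" is ever selected; improvement comes entirely from removing local failures, and the surviving spread is what keeps further adaptation possible.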
This leads me to my own “modest proposal” for another route to improved aid effectiveness, which is an alternative to the widespread use of RCTs and the replication of the kinds of projects found to be effective via that means. This would be to build a widening consensus about the need for a defined “Minimum Level of Failure” (MLF) within the portfolio of activities funded or implemented by aid agencies. An MLF could be something like 10% of projects by value. Participating agencies would commit to publicly declaring this proportion of their projects as failed. Each of these agencies would also need to show: (a) how in their particular operating context they have defined these as failures, and (b) what steps they will take to avoid the replication of these failures in the future. There would be no need for a global consensus on evaluation methods, or a hegemony of methods arising through less democratic processes. PS: Using the current M&E terminology, the consensus would need to be on the desired outcomes, not on the activities needed to achieve them.
I can of course anticipate, if not already hear, some protests about how unrealistic this proposal is. Let us hear these protests, especially in public. Any agency that protested would probably be implying, if not explicitly arguing, that such a failure rate would be unacceptable, because public monies and poor people’s lives are at stake. However, making such a de facto claim of a 90%+ rate of success would be a seriously high-risk activity, because it would be very vulnerable to disproof, probably through journalistic inquiry alone. For anyone involved with development aid programmes, a brief moment’s reflection would suggest that the reality of aid effectiveness is very different, and that a 10% failure rate is, if anything, optimistic; in real life failures are much more common.
Perhaps the protesting agencies might be better advised to consider the upside of achieving a minimum level of failure. If taken seriously, establishing a norm of a minimum level of failure could help get the public at large, along with journalists and politicians, past the shock-horror of failure itself and into the more interesting territory of why some projects fail. It could also help raise the level of risk tolerance, and enable the exploration of more innovative approaches to the uses of aid. Both of these developments would be in addition to a progressive improvement in the average performance of development projects resulting from a periodic culling of the worst performers.
It is possible that advocates of specific methods like RCTs (as the route to improved aid effectiveness) might also have some criticisms of the MLF proposal. They could argue that these methods will generate plenty of evidence of what does not work, and perhaps that evidence should be privileged. But the problem with this method-led solution is that there is already a body of evidence from a number of fields of scientific research that negative findings are widely under-reported. People like to publish positive findings. This may not be a big risk while RCTs are funded by one or two major actors, but it will become a systemic risk as the number of actors involved increases. There needs to be an explicit and public focus on failure.
Actual data on failure rates
PS: 15th October 2010: Four days ago I posted below some information on the success and failure rates of DFID projects. I have re-stated and re-edited that information here with additional comments:
There is some interesting data on failure within the DFID system, most notably the most recent review of Project Completion Reports (PCRs), undertaken in 2005. See the “An Analysis of Projects and Programmes in Prism 2000-2005” report available on the DFID website. The percentage (68%) of projects “defined as ‘completely’ or ‘largely’ achieving their Goals (Rated 1 or 2)” was given at the beginning of the Executive Summary, but information about failures was less prominent. Under section “8. Lessons from Project Failures” on page 61 it is stated: “There are only 23 projects [out of 453] within the sample that are rated as failing to meet their objectives (i.e. 4 or 5) and which have significant lessons” (italics added). This is equivalent to about 5% of the sampled projects.
More important are the 20% or so of projects rated 3 = Likely to be partly achieved (see page 64). It could be argued that those with a rating of 3 should also be counted as failures, since their objectives are only likely to be partly achieved, versus largely achieved in the case of rating 2. In other words, a successful project should be defined as one likely to achieve more than 50% of its Output and Purpose objectives; the others are failures. This interpretation seems to be supported by a comment sent to me (whose author will remain anonymous): “My understanding is that projects with scores of less than 2 are under real pressure and may be quickly closed down unless they improve rapidly. I have certainly ‘felt the pressure’ from projects to score them 2 rather than 3. That said I have not buckled to the pressure!”
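Restating the figures above as a quick calculation (the 453-project sample and the 23 explicit failures are from the 2005 PCR analysis; the rating-3 share is the "20% or so" figure, treated here as exactly 20% for illustration):

```python
# Failure rates in the 2005 DFID PCR analysis, as quoted in the text.
sample_size = 453
rated_4_or_5 = 23  # projects explicitly rated as failing (4 or 5)

explicit_failure_rate = rated_4_or_5 / sample_size
print(f"Rated 4 or 5: {explicit_failure_rate:.1%}")  # about 5%

# If rating 3 ("likely to be partly achieved") also counts as failure:
rated_3_share = 0.20
broad_failure_rate = explicit_failure_rate + rated_3_share
print(f"Including rating 3: {broad_failure_rate:.1%}")  # about 25%
```

On the broader definition, the failure rate is roughly a quarter of the sample, well above the 10% Minimum Level of Failure proposed earlier.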
I think the fact that DFID at least has a performance scoring system (for all its faults), that it has done this analysis of the project scores, and that it has made the results public, probably puts it well ahead of many other aid agencies. I would like to hear about any other agencies who have done anything like this, along with comments on the strengths and weaknesses of what they have done. I would also like to see DFID repeat the 2005 exercise at the end of this year, this time with more discussion on the projects rated 3 = Likely to be partly achieved, and what subsequently happened to these projects.
PS 14th February 2011: Computer programs are intolerant of programming errors. So computer programmers have tried to avoid them at all costs, not always successfully. Doing so becomes a much bigger challenge as software grows in size and complexity. Now some programmers are trying a different approach, one that involves recognising that there will always be programming errors. For more, see “‘Let It Crash’ Programming” by Craig Stuntz at http://blogs.teamb.com/craigstuntz/2008/05/19/37819/
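The "let it crash" pattern comes from the Erlang world, where a supervisor process restarts workers that fail rather than defending against every possible error inside the worker. A minimal sketch of the idea in Python (my own hypothetical example, not taken from Stuntz's post):

```python
import random

def flaky_worker(task_id):
    """A worker that makes no attempt to handle its own errors."""
    if random.random() < 0.5:
        raise RuntimeError(f"task {task_id} crashed")
    return f"task {task_id} done"

def supervise(task_id, max_restarts=5):
    """Accept that the worker will sometimes crash; just restart it."""
    for attempt in range(max_restarts + 1):
        try:
            return flaky_worker(task_id)
        except RuntimeError as err:
            print(f"restart {attempt + 1}: {err}")
    raise RuntimeError(f"task {task_id} gave up after {max_restarts} restarts")

random.seed(0)
print(supervise(1))
```

The parallel with the MLF proposal is the design stance, not the code: instead of pretending failure can be engineered away, the system budgets for it and builds an explicit recovery mechanism around it.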
PS 15th February 2011: "Why negative studies are good for health journalism, and where to find them"
PS: 21 February 2011: See also the Admitting Failure website
PS: 23 April 2011. See today's Bad Science column in the Guardian by Ben Goldacre, titled "I foresee that nobody will do anything about this problem", on the difficulty of getting negative findings published