Well, this series of posts on policy implementation is taking longer than I expected. Which is more than a little bit appropriate, I suppose…
In When Things Break I presented the basic idea of looking at what happens to a system if one component breaks. In The Broken ATM I described the fieldwork experience in Piauí that got me thinking along these lines, and how a “micro veto player” can undermine a policy without totally stopping it.
So, I was on a long bus ride mulling this over, and the image that came to me was Christmas lights. I’m not sure if I grew up with the old-fashioned series lights or just heard stories about them, but in case you’re not familiar with the phenomenon, here’s how the old ones worked. If one bulb went out, the whole string went out, because the bulbs were wired in series and a single dead bulb broke the circuit. So people had to spend inordinate amounts of time taking a known-good bulb and using it to test each socket until they found the bad one. With parallel wiring each bulb has an independent connection to the circuit, so if one goes out the others go along their merry way. For a more detailed explanation, this BBC piece explains things in plain language with good illustrations. It also makes this comment, which sums up the point I’m trying to make:
Parallel circuits are useful if you want everything to work, even if one component has failed. This is why our homes are wired up with parallel circuits.
So, I got to thinking. Let’s say we have a string of five bulbs, each of which works 90% of the time, and they’re wired in series. While each bulb is fairly reliable, the joint probability of all five working at once is (.9)(.9)(.9)(.9)(.9) ≈ .59. In other words, we string five fairly reliable things together and get something that works only about 60% of the time.
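The back-of-the-envelope math is easy to sketch in a few lines of Python (the function names here are my own, purely for illustration), including the contrast with parallel wiring, where each bulb fails or lights independently:

```python
def series_reliability(p, n):
    """Probability the whole string lights when all n bulbs
    must work at once (series wiring): p multiplied by itself n times."""
    return p ** n

def parallel_any_lit(p, n):
    """Probability that at least one bulb lights when each bulb
    has its own independent connection (parallel wiring)."""
    return 1 - (1 - p) ** n

# Five bulbs, each working 90% of the time:
print(series_reliability(0.9, 5))  # ≈ 0.59: the string works barely more than half the time
print(parallel_any_lit(0.9, 5))    # ≈ 0.99999: almost certain to get some light
```

The contrast is the whole point: the same five 90%-reliable components give you either a coin-flip-ish system or a near-certain one, depending entirely on how they are wired together.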
For reasons that aren’t particularly interesting, I ended up writing the three substantive chapters of my dissertation in reverse order. So the one on program design and implementation (which I called the “Why these policies?” chapter) got put on hold for a while. And by a while I mean close to two years.
When I finally got to writing that chapter and integrating it into the literature on implementation (which hadn’t been part of my coursework), I found that Pressman and Wildavsky had made almost that exact argument in 1973. I was pretty sure that my ideas couldn’t be original, but it was still striking to find nearly the same example: several pieces strung together, each with a 90% chance of working.
Pressman and Wildavsky’s book is probably the best known book on policy implementation, and often credited with launching implementation as an area of study. (A 2005 article by Harald Saetren points out that it actually was not the first but that it is understandable that the authors did not know about previous studies, given the research tools available at the time).
Indeed, the book’s short title was simply Implementation. There are frequent jokes about every academic title having a colon, but their title had a colon and a semi-colon:
Implementation: How Great Expectations in Washington Are Dashed in Oakland; Or Why It’s Amazing that Federal Programs Work at All, This being a Saga of the Economic Development Administration as Told by Two Sympathetic Observers Who Seek to Build Morals on a Foundation of Ruined Hopes
The study chronicles the efforts of the Economic Development Administration (EDA) to enact infrastructure projects in a way that would generate employment for African Americans. The EDA’s key projects were a new airport hangar and a new port, but it also included small business loans and a health center. The authors chronicle the unexpected problems that complicated and delayed progress, even for things that all relevant actors theoretically supported.
For Pressman and Wildavsky the culprit is “the complexity of joint action,” a different way of describing what we’ve already been discussing. A complex program such as the EDA’s in Oakland inherently brings together many different “participants” who are needed at different “decision points.” The authors calculate that even if each step has a high probability of success, it takes surprisingly few steps to drive the joint probability of success for the program as a whole below one in two (Table 8: p. 107). They then chronicle the number of “clearances” required to enact the EDA’s policies, finding a total that exceeds this 50-50 threshold.
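Their table-based point can be restated as a question: given a per-step success probability, how many independent clearances does it take before the program as a whole is more likely to fail than succeed? A quick sketch (my own restatement, not code from the book):

```python
def steps_to_coin_flip(p):
    """Smallest number of independent steps, each succeeding with
    probability p, that drives joint success probability below 1/2."""
    n = 1
    while p ** n >= 0.5:
        n += 1
    return n

for p in (0.99, 0.95, 0.90, 0.80):
    print(f"per-step p = {p}: below 50-50 after {steps_to_coin_flip(p)} steps")
# per-step p = 0.99: below 50-50 after 69 steps
# per-step p = 0.95: below 50-50 after 14 steps
# per-step p = 0.9: below 50-50 after 7 steps
# per-step p = 0.8: below 50-50 after 4 steps
```

Even at 95% reliability per clearance, fourteen clearances are enough to make failure the more likely outcome, which is why a program needing dozens of clearances faces long odds no matter how cooperative each individual participant is.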
This way of looking at things has intuitive appeal, but also some serious shortcomings. A natural first response is simply an observation about the world we live in: yeah, but super complex stuff works all the time. My next post in this series will look at some of those objections and explain why, in spite of them, I think a less extreme version of their argument is helpful for understanding when and why implementation fails and succeeds.
As a side note, earlier today I did a quick read-through of a great piece by Josh Schultz comparing the implementation of health exchanges in Washington and Oregon–I will give it the close read it deserves later. Ever since I started hearing about the problems with Cover Oregon–and experiencing them myself until we realized The Overworked Pediatrician could add me to her plan–I’ve been curious about the specifics of why Cover Oregon has been such a mess. I think the implementation literature may be helpful for connecting that case to larger patterns.
Next post in this series: Criticism of the complexity argument.
Also see: How Cover Oregon fits into my discussion of complexity.