Controlling risks, or harms, is a central challenge for government regulators charged with the task of reducing societal ills and preventing bad things from happening. Understanding and unraveling the chain of components that comprise risk is the focus of Malcolm Sparrow’s current research. Sparrow is professor of the practice of public management and author of “The Character of Harms: Operational Challenges in Control.”
Q: In your research you advocate the systematic disaggregating of broad generalities of risk into specific well-defined problems so that smaller-scale specific interventions can be attempted. Please elaborate.
Sparrow: A lot of my past research has focused on regulatory practice. Many agencies of social regulation exist basically to control some class of harms or risks to society. There is a new pattern of behavior emerging, and many of these agencies and professions seem to be inventing novel organizational practices for themselves, but without any common theory and without much communication between them. So, in our Harvard Kennedy School Executive Education programs, we routinely bring together classrooms full of executives from many different regulatory agencies and focus on precisely those dilemmas and aspirations that they all share, as regulators, and which distinguish their work from the service-provision aspects of government.
What is this new emerging practice about? An analogy that I’ve found quite useful lately is the business of undoing knots. If you give a somewhat complex knot to an adult who has developed all the relevant cognitive skills and discipline, and ask them to undo it, they don’t jump into action immediately. First they hold it carefully, turn it this way and that, looking at the knot from each side, until they understand the structure of the thing itself. Then a plan begins to form: “if I loosen this strand, it will release that one, and then I’ll be able to pass this through that loop,” and so on. If they’ve understood the structure correctly, and formed the plan based on that understanding, then the knot falls apart, and is no more.
Increasingly, we see police agencies, environmental agencies, occupational safety agencies, even customs officials focusing deliberately on specific, carefully identified problems. They are learning to spot very specific patterns of hazard or risk concentrations, whether these “knots” are crime problems, specific environmental issues, occupational hazards, or patterns of drug-smuggling. What these agencies are learning to do – and what they find organizationally quite awkward – is to spot specific issues, study their structure, and devise tailor-made interventions. When they act in that way, the solutions they invent usually represent substantial departures from their agency’s business-as-usual. When they do this well, you see these almost surgical interventions – producing significant reductions, sometimes the complete disappearance of a specific pattern of harm – all as a result of this type of disciplined thinking, consciously focused on subcomponents of some general class of harm.
I described the adult approach. By contrast, if you give the same knot to a child who hasn’t developed the same skills or mental habits, watch what they do. They jump into action immediately. They don’t pay such close attention to the structure of the thing. They pull and tug and probably make things worse.
I used this analogy in a conference discussion in Canada recently and as soon as I was finished a very senior Canadian regulatory executive came up to me and said, “Well, you know, that child that you described, that’s my agency.” He said his agency used a generalized approach, didn’t pay particular attention to disaggregation of the issues, and often ended up making things worse.
What’s odd, when you look at this new pattern of behavior, is that there does not seem to be a well-established language for it. Different professions have quite different vocabularies. In the police profession it’s called “problem-oriented policing.” In the environmental protection area it’s sometimes called “environmental problem-solving.” The U.S. Occupational Safety and Health Administration (OSHA) talks about “a strategic approach to hazard mitigation.” In tax administration, the equivalent language relates to “identifying and mitigating patterns of non-compliance.” So, actually, there are common concepts and a new kind of professional competence spread broadly across the regulatory frontier, but lacking any well-established vocabulary. Sometimes, any real clarity about what this is, and how it’s different, seems quite elusive too.
A big piece of what I think my contribution should be here is to clarify the nature of this business, in a way that makes it easier for organizations to grasp, and implement, so they can seize the opportunities that this approach presents.
It seems that the art of picking apart harms, and unraveling them piece by piece, might be rather central to a broad range of issues facing the human race. If you read the United Nations’ Millennium Declaration, it starts off with a really quite intimidating list of harms not sufficiently controlled, covering everything from extreme poverty to slavery, corruption, terrorism and the threat of nuclear terrorism; trafficking in women, drugs, and nuclear materials; pollution; and infectious diseases. The declaration lists these all as bad things to be controlled, rather than good things to be constructed. Some think that distinction is immaterial, as “controlling corruption” might be thought of as equivalent to “promoting integrity,” and maybe “crime control” is the same thing as “promoting public safety.” All these things can be described one way up or the other.
I think which way up you view these things has very important operational consequences. Focusing on specific bad things (risk-concentrations, trends, patterns, etc.) offers you the opportunity to think and act like a saboteur: to find a vulnerability of the harm itself, and remove it, or produce a scarcity which the opposing forces cannot cure. And sabotage is efficient, assuming you’ve adequately understood the object to be destroyed, and its vulnerabilities. Many agencies are beginning to appreciate the resource-efficiency and effectiveness that comes from this type of artful “sabotage of harms.” If it’s true that there is in fact an art to the destruction of bad things, which is different from the construction of good things, then it is surely an art that we really all ought to understand.
Q: “Catastrophic risk” is a very important subject these days. Please explain your advice to regulatory agencies in dealing with these kinds of problems.
Sparrow: There is actually a whole range of properties that some risks possess that routinely frustrate those responsible for controlling them. The general features of this particular class of risks – catastrophic risks – combine very low frequency or probability (in fact, such an event may never have happened even once) with consequences that are extremely serious should it occur. Obvious examples include nuclear or biological terrorism, major pandemics, or extraordinary natural disasters.
In dealing with catastrophic risks, authorities lack the frequent but low-level instances or versions of the harm which normally motivate and inform training, practice and adaptation. For ordinary harms, you expect a natural feedback loop to exist between the patterns of instances that you see, with learning from them being fed back into control systems to enhance prevention, early warning, and response. With catastrophic events you don’t have enough events to support such a feedback loop, and so you have to artificially substitute for all of the learning that would come from frequent lower-level incidents.
It’s an odd thing to say, but there aren’t now enough airplane crashes to properly inform aviation safety. So you watch the aviation industry and the Federal Aviation Administration deliberately defining precursors, near-misses, and reportable events which don’t actually amount to or lead to accidents, but which can nevertheless act as a richer dataset to provide learning and feedback and improvement, even in the absence of disasters.
Dealing with catastrophic risks demands this type of systematic debriefing of near misses, precursor events, as well as disasters that might have happened elsewhere. It also demands the deliberate use of imagination, to figure out all the ways in which events, or near-events, could have been much worse. To prevent accidents in nuclear power plants, the nuclear power industry, in similar fashion, works its way systematically backwards through the chronology of a potential disaster, driving down the incidence rates for precursor events, holding more and more redundant safety systems in reserve. This is analytically and intellectually demanding work: difficult to organize, and extremely complicated to evaluate. Setting a budget, and justifying a budget, to prevent a disaster which has never happened, and which some people might assume won’t happen anyway, presents some very particular challenges.
Q: What does it mean in the long term for agencies to become good at controlling risk?
Sparrow: What it means in the long term to be good at controlling risk is that you can spot emerging problems quickly and suppress them before they do much harm. I liken this to the game of “Whack-a-Mole” that kids play at the fairground. You watch them hovering over a board with a mallet in their hand. The idea of the game is simple: as soon as there’s any new movement, whack it back down again, quickly and effectively.
That really is what we would hope for with agencies of social regulation or risk control, that they would be able to spot emerging movement very quickly. That’s the kind of analytic vigilance that we spend a lot of time trying to emphasize and instill into agency operations, not only to spot it quickly but to respond quickly. And that doesn’t mean waiting three years for the passage of new legislation, but using available tools and enlisting relevant partners and getting onto it quickly, devising a solution – and, by the way, you always need an effective implement in your hand, sufficient to suppress the risk if you can.
My experience working with government agencies suggests they put a lot of emphasis on the size of the mallet in the hand – “I need more powers. I need more resources. I need more draconian intervention techniques, more legislative authority” – but they pay somewhat less attention to the need for constant vigilance and rapid response to any new movement. Perhaps if we got better at those two parts of the equation, we wouldn’t need such weighty interventions later in the day.
Q: You advise a number of government and intergovernmental agencies and non-profits on risk management. Can you discuss how some of them have adopted new strategies to maximize the impact of their work in this area?
Sparrow: A lot of government agencies’ core task is to identify societal risks and to suppress those risks effectively and at minimum cost and intrusion. So you can think of risk management not so much as protecting one’s own agency, but protecting society’s best interest, and acting expeditiously on important and emerging issues. That is the context within which we try to teach this subject. It is not “risk management” in the normal sense of “managing risks to my organization.” It’s risk management as an operating framework for doing the agency’s core business, which is working on risks or harms to health, welfare, security, or the environment.
Who’s doing it, and who’s doing it well? There are quite a few financial regulators now, particularly in London, Amsterdam and Australia, who have really picked up the risk management model and are trying to institutionalize it as a core operational approach. Their modern phraseology is to “identify risks to markets and investors” and to try to control them on behalf of society. That’s a very different perspective on their job from the more traditional regulatory view, which held that the job was to enforce legal compliance with a set of existing statutory requirements.
We’ve watched OSHA adopt a problem-solving approach to hazard mitigation in the workplace. We’ve watched the U.S. Customs Agency adopt a problem-oriented approach to narcotics interdiction, where they spot and group together drug smuggling methods and patterns and organize projects around those risk concentrations in order to eliminate them. Many environmental agencies around the world are now using this approach quite consciously and deliberately. And the need for “problem-oriented policing” is pretty well established all around the world, even if the effective practice of that idea remains quite patchy.
Over the last few years, though, I’ve noticed a lot of interest in this subject and these methods from people who are not government regulators. A lot of non-regulators have started signing up for my Executive Education courses. People from education, from public health, from the non-profit world, and even from foundations seem to be interested. When they would apply, I used to say to them “I think you’ve applied to the wrong program. You’re not a regulator.” And they would say, “But what you’re teaching here is really operational harm reduction, and we do that, too.”
You don’t have to be a regulator to have your eyes on some pattern of behavior or environmental hazard that you would rather prevent. Or, for that matter, on any one of the insufficiently controlled classes of harm described in the Millennium Declaration. So the non-profit world is in this game, too. Many foundations are seeking to commission and fund work of this type; and they too need guidance as to what piece of a task to bite off, how many bites to take, and what type of performance account they should expect at the end. The private sector is in this game too – under the rubric of corporate risk management – but their focus is predominantly on protecting the corporation itself from a variety of harms that might endanger its operations, reputation, profitability, or capacity to perform. International non-governmental organizations are absolutely all over this game, in peacekeeping and conflict reduction, and humanitarian relief. So there is an enormous range of organizations in the business of identifying and controlling harms, and many of them don’t fit the mold of conventional regulatory agencies.
So my hypothesis here is a pretty simple one: that controlling risk, harms, or “bad things” is a substantial art. And that we’ve seen some significant learning lately, but scattered broadly across multiple fields. I believe this is an art worth codifying, and mastering.