Abstract
Risk compensation postulates that everyone has a "risk thermostat" and that safety measures that do not affect the setting of the thermostat will be circumvented by behaviour that seeks to re-establish the level of risk with which people were originally comfortable. It explains why, for example, motorists drive faster after a bend in the road is straightened. Cultural theory explains risk-taking behaviour by the operation of cultural filters. It postulates that behaviour is governed by the probable costs and benefits of alternative courses of action which are perceived through filters formed from all the previous incidents and associations in the risk-taker's life.
RISK
Frontispiece “Newton” by William Blake (source: Tate Gallery, London/Bridgeman Art
Library, London).
RISK
John Adams
University College London
London and New York
© John Adams 1995
This book is copyright under the Berne Convention.
No reproduction without permission.
All rights reserved.
First published in 1995 by UCL Press
Third impression 1996
Fourth impression 1998
Fifth impression 2000
First published 2001 by Routledge
11 New Fetter Lane
London EC4P 4EE
Routledge is an imprint of the Taylor & Francis Group
This edition published in the Taylor & Francis e-Library, 2002.
British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library.
Library of Congress Cataloguing in Publication Data
Adams, John, 1938–
Risk: the policy implications of risk compensation and plural
rationalities/John Adams.
p. cm.
Includes bibliographical references and index.
ISBN 1-85728-067-9.—ISBN 1-85728-068-7 (pbk.)
1. Risk—Sociological aspects. 2. Risk management—Social
aspects. I. Title.
HM256.A33 1995
302'.12—dc20 95–88
CIP
ISBN 0-203-49896-8 Master e-book ISBN
ISBN 0-203-80720-0 (Adobe eReader Format)
ISBNs: 1-85728-067-9 HB
1-85728-068-7 PB
CONTENTS
PREFACE ix
ACKNOWLEDGEMENTS xii
1 RISK: AN INTRODUCTION 1
2 RISK AND THE ROYAL SOCIETY 7
“Actual risk”: what is it? 7
Can risk be measured? 10
Exposure 13
The response to risk: risk compensation 14
Homo prudens and free will 16
Risk: an interactive phenomenon 19
Problems of measurement 21
Varieties of uncertainty 25
3 PATTERNS IN UNCERTAINTY 29
The world’s largest industry 31
Patterns in uncertainty 33
Myths of human nature 34
Divergent perspectives on environmental threats: an example of the cultural construction of risk 38
The four rationalities as contending paradigms 40
The cultural construction of pollution 41
Adding cultural filters to the risk thermostat 42
Groping in the dark 45
The Sydney Smith dilemma 50
4 ERROR, CHANCE AND CULTURE 51
The conventional wisdom 51
Enter Homo aleatorius 52
Balancing behaviour 56
Types of error 58
Acceptability of risk 59
The efficacy of intervention 59
The importance of context 64
Scale, and voluntary versus involuntary risk 65
Error, chance and culture 67
5 MEASURING RISK 69
Not enough accidental deaths 69
What gets recorded? 73
Regression-to-mean and accident migration 76
Cultural filtering 81
Noise and bias 83
Off the road 87
Near misses 89
6 MONETIZING RISK 93
Some problems 95
Contingent valuation 97
Death: the ultimate challenge 102
Cultural filters 106
Kakadu National Park: an example 108
Who wants to monetize risk? 111
7 ROAD SAFETY 1: SEAT BELTS 113
The UK seat belt law 121
Three postscripts 126
Cultural theory 128
Cultural filters 130
Introspection 133
8 ROAD SAFETY 2: MORE FILTERING 135
Safe roads and dangerous countries 135
Safer vehicles? 137
Safer roads? 141
Safer road users? 143
A speculation 143
Bicycle helmets 144
The reaction 147
Motorcycle helmets 147
Alcohol and ignorance 151
The spike 154
Unsupportable claims 156
9 A LARGE RISK: THE GREENHOUSE EFFECT 159
Alternative futures 162
The debate 165
Arguing in the dark 167
Vogon economics and the hyperspatial bypass 170
Tomorrow the world 173
An introspective postscript 176
10 THE RISK SOCIETY 179
Beck and cultural theory 181
Beck versus Wildavsky 182
Prescriptions 184
Professional disaster 186
The unimportance of being right 190
To avoid suffocation, keep away from children 192
Can better science resolve the debate? 193
11 CAN WE MANAGE RISK BETTER? 197
Wishful thinking 197
Abstractions and the fallacy of misplaced concreteness 199
Complicating the theory—a little bit 201
The mad officials 205
So, can we manage risk better? 208
The advice of others 209
How to manage risk 214
REFERENCES 217
INDEX 225
PREFACE
This book began as a collaborative venture with Michael Thompson. For
over 15 years my research into risk, mainly on the road, was focused on the
theory of “risk compensation”. This theory accords primacy in the
explanation of accidents to the human propensity to take risks. The theory
postulates that we all come equipped with “risk thermostats” and suggests
that safety interventions that do not affect the setting of the thermostat are
likely to be frustrated by behavioural responses that reassert the level of risk
with which people were originally content. My research had noted that there
were large variations in the settings of individual thermostats, but had little
to say about why this should be so.
About ten years ago I read Michael’s article “Aesthetics of risk” (Thompson
1980), and about five years later met the man himself. His research into risk
over the past 20 years has been central to the development of a perspective
that has come to be known as “cultural theory” (Thompson et al. 1990).
Risk, according to this perspective, is culturally constructed; where scientific
fact falls short of certainty we are guided by assumption, inference and belief.
In such circumstances the deterministic rationality of classical physics is
replaced by a set of conditional, probabilistic rationalities. Risk throws up
questions to which there can be no verifiable single right answers derivable
by means of a unique rationality. Cultural theory illuminates a world of
plural rationalities; it discerns order and pattern in risk-taking behaviour,
and the beliefs that underpin it. Wherever debates about risk are prolonged
and unresolved—as, for example, that between environmentalists and the
nuclear industry—cultural theory seeks an explanation not in further
scientific analysis but in the differences in premises from which the
participants are arguing. Michael thought that risk compensation was obvious
common sense, and I thought that cultural theory would cast helpful light
on how the thermostat was set.
This book grew out of a joint research project called “Risk and
rationality” that we undertook for the Economic and Social Research
Council. It draws upon much of our earlier work, and makes
connections that had earlier eluded us. When we first discussed the idea
of a book with Roger Jones of UCL Press we rashly promised to produce
“the complete theory of risk”. Trying has been an educational
experience, but the complete theory of risk now seems to me as likely as
the complete theory of happiness.
The writing did not go as planned—but then this is a book about risk.
Michael, who is self-employed, was distracted by consultancy offers from
all around the world that he could not refuse. I stayed at home and got on
with the writing, making use of Michael, when I could catch him, as a
consultant. Inevitably the book does not have the balance originally intended
between his perspective and mine. In Chapter 11 I refer to the “tension”
between cultural theory and risk compensation. This refers to my unresolved
difficulty in reconciling cultural theory with the reflexivity of risk. The world
and our perceptions of it are constantly being transformed by our effect on
the world, and its effect on us. My perceptions of risk have been altered by
the process of writing this book. I now see the stereotypes of cultural theory—
egalitarians, individualists, hierarchists, fatalists and hermits—everywhere
I look. But which am I?
I think I can see elements of all these stereotypes in my own make up.
Am I more sophisticated and complex than all the other risk-takers in the
world? I doubt it. In applying these stereotypes to others I am reducing their
complex uniqueness to something that I can (mis)understand. In its raw
state the reflexive fluidity of the world overwhelms our limited powers of
comprehension. We resort to simplification and abstraction in an attempt to
cope. Cultural theory postulates a high degree of pattern and consistency in
the midst of all the reflexive fluidity. The insistence in cultural theory
(Thompson 1990) on the impossibility of more than five “viable ways of
life” I find unproven and unprovable, but I still find the theory useful. For
me, limiting the number of risk-taking types to five is defensible, not just by
theoretical speculation, but by virtue of five being a small and
comprehensible number; theories of behaviour, to be useful and widely
communicable, must be simple. Risk compensation and cultural theory
provide a life-raft that saves one from drowning in the sea of reflexive
relativism; they are two sets of simplifying assumptions deployed in this
book in an attempt to make sense of behaviour in the face of uncertainty.
They are not the complete theory of risk.
In “test marketing” draft chapters of the book on a variety of people
with an interest in risk, it became apparent that many from the scientific
and managerial side of the subject are unaware of the anthropological
literature on risk, and its roots in the work of Weber, Durkheim, Marx,
Malinowski, Parsons and other old masters of sociology and
anthropology; they have reacted with scepticism and impatience to the
theorizing of Douglas, Wildavsky, Thompson and other, more recent,
workers in this tradition. On the other hand, some in this tradition have
complained that my treatment of cultural theory is “superficial and
derivative”—to quote from the comments of one referee on a part of
Chapter 3 which was submitted to an academic journal as an article. The
literature on risk, measured by pages published, is overwhelmingly
dominated by the scientific/managerial perspective. In trying to make
cultural theory accessible to the scientist-managers, I have stripped it of
most of its historical baggage, and many of its claims to “scientific”
authority. I have retained what I consider to be its easily communicated
essence; I have treated it as a set of abstractions that help to make sense of
many interminable debates about risk. I have no illusions that my efforts
to bridge the divide between the “hard” and “soft” approaches to risk will
satisfy everyone—indeed cultural theory warns that everyone will never
agree about risk. But attempting the impossible has been fun.
John Adams
LONDON
ACKNOWLEDGEMENTS
By far the largest debt incurred in writing this book is owed to Michael
Thompson. His wide knowledge of the anthropological literature, his shrewd
insights, his gift for seeing a problem from a new angle, his patience when
he was having difficulty getting information through my cultural filter (see
Ch. 3 on the subject of cultural filters), and above all his ability to disagree
agreeably, have made the writing an enormously educational, stimulating
and enjoyable experience. I look forward to arguing with him for years to
come.
The earliest, indeed formative, influence on my thoughts about risk was
Gerald Wilde, who coined the term “risk compensation”. I thank him for his
hospitality and many entertaining tutorials on the subject over the years. As
I observe in Chapter 2, everyone is a risk expert. This has made the job of
consulting the experts quantitatively daunting. Argument, I believe, is the
most educational form of discourse, and this book is the result of years of
arguing with just about anyone who would tolerate my banging on about
the subject; for the risk researcher, life is one never-ending field trip. This
makes the task of acknowledging all my debts quite impossible—I did not
always make notes at the time—but toward the end of the process, the
participants in our ESRC-sponsored workshop on risk and rationality helped
me to get my thoughts into focus: David Ball, David Collingridge, Karl Dake,
Mary Douglas, Maurice Frankel, Gunnar Grendstadt, Joanne Linnerooth-
Bayer, Mark MacCarthy, Gustav Ostberg, Alex Trisoglio, Brian Wynne. In
addition Bob Davis, John Whitelegg, Mayer Hillman, Stephen Plowden, Robin
Grove White, Edmund Hambly, Jacquie Burgess, Carolyn Harrison, and the
late Aaron Wildavsky, have all been helpful. I doubt that any of these people
would agree with all of this book but, whether they like it or not, they have
all had an influence on it.
Louise Dyett, Tim Aspden and Guy Baker in the UCL Department of
Geography drawing office have played a vital rôle in producing, and helping
to design, the illustrations.
Anna Whitworth’s constructive criticism of preliminary drafts has been
much appreciated. I am also grateful for her sharp editorial eye which has
prevented the publication of many spelling mistakes and lapses in political
correctness.
Chapter 1
RISK: AN INTRODUCTION
One of the pleasures of writing a book about risk—as distinct from one about
an esoteric subject such as brain surgery or nuclear physics—is that one has
a conversation starter for all occasions. Everyone is a true risk “expert” in
the original sense of the word; we have all been trained by practice and
experience in the management of risk. Everyone has a valid contribution to
make to a discussion of the subject.
The development of our expertise in coping with uncertainty begins in
infancy. The trial and error processes by which we first learn to crawl, and
then walk and talk, involve decision-making in the face of uncertainty. In
our development to maturity we progressively refine our risk-taking skills;
we learn how to handle sharp things and hot things, how to ride a bicycle
and cross the street, how to communicate our needs and wants, how to read
the moods of others, how to stay out of trouble. How to stay out of trouble?
This is one skill we never master completely. It appears to be a skill that we
do not want to master completely.
The behaviour of young children, driven by curiosity and a need for
excitement, yet curbed by their sense of danger, suggests that these junior
risk experts are performing a balancing act. In some cases it is a physical
balancing act; learning to walk or ride a bicycle cannot be done without
accident. In mastering such skills they are not seeking a zero-risk life; they
are balancing the expected rewards of their actions against the perceived
costs of failure. The apprehension, determination and intense concentration
that can be observed in the face of a toddler learning to toddle, the wails of
frustration or pain if it goes wrong, and the beaming delight when it
succeeds—are all evidence that one is in the presence of a serious risk-
management exercise.
Most decisions about risks involving infants and young children are taken
by adults. Between infancy and adulthood there is a progressive handing
over of responsibility. Adults are considered responsible for their actions,
but they are not always considered trustworthy or sufficiently well informed.
A third tier of responsibility for the management of risk consists of various
authorities whose rôle with respect to adults is similar to that of adults
with respect to children. The authorities are expected to be possessed of
superior wisdom about the nature of risks and how to manage them.
The news media are routinely full of stories in which judgement is passed
on how well or badly this expectation is met. Consider an ordinary news
day chosen at random—28 January 1994, the day this sentence was written.
A perusal of that day’s papers1 reveals that the business sections and the
sports pages contain virtually no stories that are not about the management
of risk. They are all about winning and losing, and winners and losers. The
heroes are people who struggled against the odds and won. Prudence and
caution, except for the occasional bit of investment advice for old age
pensioners, are mocked as boring. The arts pages were full of risk stories
within risk stories. A novel, to win critical acclaim, must be novel; cliché
and plagiarism are unpardonable sins. Mere technical competence is not
enough; suspense and tension must be deployed to catch and hold the
attention of the reader. Risk is embodied in great works of art; and, to capture
the interest of the arts pages, risks must be taken by their creators. They are
interesting only if they are attempting something difficult. Great art risks
failure. But to be boring, predictable and safe is to guarantee failure.
What of the features pages? The motoring sections of most of the papers
were dominated as usual by articles focused on the performance of cars—
although the main feature in one was devoted to the question of whether or not
airbags caused injuries, and another paper ran a small story about a new car
seat for children, with the claim that it “reduced the risk by 90%”. The life-
style section of another ran a double-page spread on high-performance
motorcycles under the headline “Born to be wild”.
The health pages were of course entirely devoted to risk stories: a new
chickenpox vaccine whose effectiveness remains to be proven; a series of
mistakes in cervical cancer screening that “put patients’ lives at risk”; the
risk of blood transfusions transmitting hepatitis-B; a vasectomy that did not
work; concern that epidural anaesthetics administered during childbirth
might harm the babies; the fear that bovine spongiform encephalopathy might
have spread to humans in the form of Creutzfeldt-Jakob disease; doubts about
the efficacy of drugs prescribed to control high blood pressure; doubts about
the accuracy of the diagnosis of high blood pressure, and claims that it is
increased by the act of measuring it; claims that “the Government’s present
[health] screening programme cannot be justified by the results”; a lottery
held to choose who would be given a scarce new and unproven drug for
treating multiple sclerosis; and a member of parliament who died while
waiting for a heart transplant, with credit for the shortage of donors being
given to the seat belt law. Even the gardening pages were dominated by
problems of decision-making in the face of uncertainty: combinations of
soil, climate, aspect, fungicides and insecticides might be propitious for
this plant and not for that.
1. The Times, the Guardian, the Sun, the Daily Express, the Daily Mail, and the London
Evening Standard.
The news pages were overwhelmingly devoted to risk. Risk it would
appear is a defining characteristic of “news”. On 28 January 1994 an aid
worker had been killed in Bosnia; the US President’s wife, Hillary Clinton,
visited the aftermath of the Los Angeles earthquake, most of whose victims
were reported to be uninsured; an Englishman staked his life savings of
£150,000 on one spin of the roulette wheel in Las Vegas, and won; the death
of a budgerigar was blamed on passive smoking, and a woman was turned
down as a prospective adoptive parent because she smoked; the roof of a
supermarket in Nice collapsed killing three people (56 column-inches), and
a fire in a mine in India killed 55 people (nine column-inches); Prince Charles
was fired at in Australia by a man with a starting pistol, and Princess Diana’s
lack of security was highly publicized, and lamented; further restraints were
threatened on cigarette advertising; death threats were made by Moslem
fundamentalists to a couturier and a fashion model following publicity about
a ball gown embroidered with a passage from the Koran; the Government
launched its “green” plan, and environmentalists complained about its
inadequacy. A few more headlines: “Rogue train ignored signals”, “Russia’s
high-risk roulette”, “Mountaineer cleared of blame for woman’s death fall”,
“£440,000 losers in a game of Russian roulette (the costs of a lost libel action)”,
“Libel law proves costly lottery”, “Fall in family fortunes”, “The cat with 11
lives”, “Gales strand trains and cause road havoc”, “Fire-bombs in Oxford
St raise fear of fresh IRA campaign”, “Israelis have 200 N-bombs” and “Diet-
conscious add years to life expectancy”.
Television news and documentary programmes on the same day provided
a further generous helping of things to worry about, and films added fictional
accounts of neurosis, angst, murder and mayhem. Daily we are confronted
with a fresh deluge of evidence that in this world nothing can be said to be
certain, except death—stories of large-scale tax evasion having removed taxes
from the short list of certainties. How do we cope?
Grown-up risk-taking, like that of children, is a balancing act. Whether it
be the driver at the wheel of a car negotiating a bend in an icy road, or a
shopper trying to decide whether to buy butter or the low-fat spread, or a
doctor trying to decide whether to prescribe a medicine with unpleasant
side-effects, or a property speculator contemplating a sale or a purchase, or
a general committing his troops to battle, or a President committing his
country to curbing the emission of carbon dioxide, the decisions that are
made in the face of uncertainty involve weighing the potential rewards of
an act against its potential adverse consequences.
Every day around the world, billions of such decisions get made. The
consequences in most cases appear to be highly localized, but perhaps they
are not. Chaos theorists have introduced us to a new form of insect life
called the Beijing butterfly—which flaps its wings in Beijing and sets in
motion a train of events that culminates two weeks later in a hurricane in
New York. Extreme sensitivity to subtle differences in initial conditions,
the chaos theorists tell us, makes the behaviour of complex natural systems
inherently unpredictable. Prediction becomes even more difficult when
people are introduced to such systems—because people respond to
predictions, thereby altering the predicted outcome. Rarely are risk decisions
made with information that can be reduced to quantifiable probabilities; yet
decisions, somehow, get made.
The universality of expertise in risk management is a problem for those
who aspire to recognition as risk EXPERTS. The certified experts—those
who write books, learned articles and official reports on risk—have an
abstracted expertise that is sometimes useful, but is more often misleading.
They can demonstrate that the general public’s ability to estimate mortality
rates for different causes of death is often very wide of the mark (Fischhoff
et al. 1981); they can demonstrate, in the words of the Royal Society quoted
in Chapter 2, that there is a “gap between what is scientific and capable of
being measured, and the way in which public opinion gauges risks and
makes decisions”. They can demonstrate that ordinary people, in managing
the risks in their lives, rarely resort to precise quantification. But what do
their scientific measurements signify? Very little, this book suggests.
Risk management is big business; the formal sector of the authorities—
the realm of the expert—involves government, commerce, and industry; it
employs actuaries, ambulance drivers, toxicologists, engineers, policemen,
mathematicians, statisticians, economists, chaos theorists, computer
programmers and driving instructors—to name but a few. The work of this
sector is highly visible. It holds inquests and commissions research. It passes
laws and formulates regulations. It runs safety training programmes and
posts warning signs. It puts up fences and locks gates. It employs inspectors
and enforcers—many in uniform. Its objective is to reduce risk.
But there is also the informal sector consisting of children and grown-up
children, and it is much bigger business; it consists of billions of freelance
risk managers—ordinary common-or-garden experts—each with his or her
own personal agenda. They go about the business of life—eating, drinking,
loving, hating, walking, driving, saving, investing, working, socializing—
striving for health, wealth and happiness in a world they know to be
uncertain. The objective of these risk managers is to balance risks and
rewards.
The formal and informal sectors co-exist uncomfortably. For the freelance
risk managers, the activities of the formal sector form a part of the context
within which they take their decisions. Sometimes the efforts of the formal
sector are appreciated: when, for example, it assumes responsibility for the
safety of the water you drink. Sometimes its efforts are thought to be
inadequate: when it fails to slow down the traffic on your busy street.
Sometimes its efforts are resented: when it sets speed limits too low, or its
safety regulations interfere with activities you consider safe enough. But in
all of these cases, behaviour in the informal sector is modified by the activities
of the formal sector. You do not boil your water if they have made it safe.
You take more care crossing the road that their negligence makes dangerous.
You watch out for the police or safety inspectors whose silly rules you are
breaking.
The formal sector responds to the activities of freelance risk-managers in
various ways. Often it is patronizing. Road engineers with their accident
statistics frequently dismiss condescendingly the fears of people living
alongside busy roads with good accident records, heedless of the likelihood
that the good accident records reflect the careful behaviour of people who
believe their roads to be dangerous. Those who live alongside such roads,
and know their dangers, are more likely than the engineer, beguiled by his
statistics, to cross them safely. Sometimes the formal sector’s response is
abusive: the people who flout their rules are stupid, irresponsible or childish.
But most commonly the formal sector is mystified and frustrated. How, they
wonder—despite all their road improvements, vehicle safety regulations,
speed limits, alcohol limits, warning notices, inspection procedures and
fail-safe devices—do so many people still manage to have accidents?
A significant part of the explanation appears to lie in the formal sector’s
division of labour. Risk-management at an individual level involves no
division of labour; the balancing calculations that precede a risky act are all
done in the head of the individual. But when institutions assume
responsibility for risk management, it becomes difficult to identify where
the balancing act is done. Consider road safety. One can list institutions
concerned with maximizing the rewards of risk taking: the car industry, the
oil industry, the road builders, that part of the Department of Transport which
sees its function as aiding and abetting the process that generates increasing
traffic, the Treasury and Department of Trade and Industry who point to this
increase as evidence of growing prosperity. One can list other institutions
concerned with minimizing the accident costs of road traffic: the police, the
casualty services, PACTS (the Parliamentary Advisory Council for Transport
Safety), RoSPA (the Royal Society for the Prevention of Accidents), Friends
of the Earth and Greenpeace, who are concerned about the global threats of
traffic pollution as well as the danger to cyclists and pedestrians, and that
part of the Department of Transport responsible for road safety. But where,
and how, is the balancing act done? How do institutional risk-managers
manage individual risk-managers? And how do individual risk-managers
react to attempts to manage them? And can we all do it better?
The search for answers begins, in Chapter 2, with a look at the prevailing
orthodoxy, as exemplified by the Royal Society’s approach to risk
management.
Chapter 2
RISK AND THE ROYAL SOCIETY
In 1983 Britain’s Royal Society published a report called Risk assessment.
Its tone, in keeping with the Royal Society’s standing as the country’s pre-
eminent scientific institution, was authoritative, confident and purposeful.
The report drew upon and exemplified the prevailing international orthodoxy
on the subject of risk, and became a major work of reference. In 1992 the
Society returned to the subject with a new report entitled Risk: analysis,
perception and management. Although it was published by the Royal Society,
the Society was sufficiently embarrassed by its contents to insist in the preface
that it was “not a report of the Society”, that “the views expressed are those
of the authors alone” and that it was merely “a contribution to the ongoing
debate”. By 1992 the Royal Society was no longer capable of taking a
collective view about risk; it had become a mere forum for debate about the
subject. What happened? What is this “ongoing debate”, and how did it
derail their inquiries into the subject?
For their 1992 report the Society invited a group of social scientists—
psychologists, anthropologists, sociologists, economists and geographers—
to participate in their study. The social scientists, with the exception of the
economists, could not agree with the physical scientists of the Royal Society.
The disagreement that is found between the covers of the 1992 report can be
found wherever there are disputes about safety and danger. It is a
disagreement about the nature and meaning of “risk”. The resolution of this
disagreement will have profound implications for the control and distribution
of risk in all our lives.
“Actual risk”: what is it?
The 1983 report distinguished between objective risk—the sort of thing “the
experts” know about—and perceived risk—the lay person’s often very
different anticipation of future events. Not surprisingly, given the report’s
provenance, it approached its subject scientifically. This is how it defined
the subject of its study in 1983:
The Study Group views “risk” as the probability that a particular
adverse event occurs during a stated period of time, or results from a
particular challenge. As a probability in the sense of statistical
theory, risk obeys all the formal laws of combining probabilities.
The Study Group also defined detriment as:
a numerical measure of the expected harm or loss associated with an
adverse event…it is generally the integrated product of risk and
harm and is often expressed in terms such as costs in £s, loss in
expected years of life or loss of productivity, and is needed for
numerical exercises such as cost-benefit analysis or risk-benefit
analysis.
The Royal Society’s definition of “detriment”, as a compound measure
combining the probability and magnitude of an adverse effect, is the definition
of “risk” most commonly encountered in the risk and safety literature (see for
example Lowrance 1980). It is also the definition of common parlance; people
do talk of the “risk” (probability) of some particular event being high or low,
but in considering two possible events with equal probabilities—say a fatal
car crash and a bent bumper—the former would invariably be described as
the greater risk. But, definitional quibbles aside, the Royal Society and most
other contributors to the risk literature are agreed about the objective nature
of the thing they are studying. There is also general agreement that progress
lies in doing more of what physical scientists are good at: refining their methods
of measurement and collecting more data on both the probabilities of adverse
events and their magnitudes. One of the main conclusions of the 1983 report
was that there was a need for “better estimates of actual risk based on direct
observation of what happens in society” (p. 18).
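By way of illustration, the difference between the two definitions can be set out as a small calculation. The probabilities and harm values below are invented for the purpose and come from neither Royal Society report; the point is only that equal probabilities can carry very unequal detriments.

    # Illustrative sketch only: the probabilities and harm values are invented.
    # "Risk" follows the 1983 definition (a probability); "detriment" is the
    # integrated product of probability and harm (an expected loss).
    events = {
        # event: (annual probability, harm in arbitrary cost units)
        "bent bumper":     (0.001, 200),
        "fatal car crash": (0.001, 1_000_000),
    }

    for name, (probability, harm) in events.items():
        detriment = probability * harm  # expected harm per year
        print(f"{name}: risk = {probability}, detriment = {detriment:g}")

    # Both events carry the same "risk" (0.001), but the crash's detriment is
    # 5,000 times greater -- which is why common parlance calls it the greater risk.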
Across the Atlantic in 1983 the American scientific establishment was
also taking an interest in risk and coming to similar conclusions. The
National Research Council, which is the principal operating agency for
the National Academy of Sciences and the National Academy of
Engineering, published a report entitled Risk assessment in the Federal
Government: managing the process. Like their counterparts in the Royal
Society, they stressed the importance of the distinction between the
“scientific basis” and the “policy basis” of decisions about risk. Their
report repeatedly stressed the importance of maintaining “a clear
conceptual distinction between assessment of risks and the consideration
of risk management alternatives.” They warned that “even the perception
that risk management considerations are influencing the conduct of risk
assessment in an important way will cause the assessment and regulatory
decisions based on them to lack credibility”.
In the study leading up to its 1992 report, the Royal Society set out to
maintain the distinction between objective and perceived risk. The Study
Group’s terms of reference invited it to:
consider and help to bridge the gap between what is stated to be
scientific and capable of being measured, and the way in which
public opinion gauges risks and makes decisions.
It failed. The gap remains unbridged. The introduction of the 1992 report
repeats the 1983 report’s definitions of risk and detriment, and the first four
chapters of its 1992 publication still cling to the distinction between objective
and perceived risk. They are illustrated by the usual tables of objective risk—
the risk of dying from being a miner, or a smoker, or not wearing a seat belt,
and so on. They contain many qualifications about the accuracy of many
risk estimates, and admonitions against using them mechanistically, but these
warnings are presented as exhortations to try harder to obtain accurate
quantified estimates of objective risk. Chapter 4 concludes that “if risk
assessment is to be more than an academic exercise, it must provide
quantitative information that aids decisions…”.
However, by Chapter 5 the distinction between objective risk and
perceived risk, fundamental to the approach of the Royal Society’s 1983
report and the first four chapters of its 1992 report, is flatly contradicted:
the view that a separation can be maintained between “objective”
risk and “subjective” or perceived risk has come under increasing
attack, to the extent that it is no longer a mainstream position.
A contention of Chapters 5 and 6 of the 1992 report, that the physical
scientists found variously maddening or frustrating, is that risk is
culturally constructed. According to this perspective, both the adverse
nature of particular events and their probability are inherently
subjective. Slipping and falling on the ice, for example, is a game for
young children, but a potentially fatal accident for an old person. And
the probability of such an event is influenced both by a person’s
perception of the probability, and by whether they see it as fun or
dangerous. For example, because old people see the risk of slipping on
an icy road to be high, they take avoiding action, thereby reducing the
probability. Young people slipping and sliding on the ice, and old people
striving to avoid doing the same, belong to separate and distinct
cultures. They construct reality out of their experience of it. They see
the world differently and behave differently; they tend to associate with
kindred spirits, who reinforce their distinctive perspectives on reality in
general and risk in particular.
Before exploring (in Ch. 3) the variety of ways in which risk is constructed,
I turn first to the way in which this variety frustrates those who seek to
subject risk to the measuring instruments of objective science.
Can risk be measured?
Lord Kelvin once said “Anything that exists, exists in some quantity and
can therefore be measured” (quoted in Beer 1967).
Physical scientists tend to be suspicious of phenomena whose existence
cannot be verified by objective replicable measurement, and Kelvin’s dictum
epitomizes the stance of those who address themselves to the subject of
risk. They might be called “objectivists”, or perhaps “Kelvinists” in keeping
with the theological character of their position—the dictum that underpins
their objective science is itself incapable of objective proof.
Chapter 5 of the 1992 Royal Society report overstates the current strength
of the opposition to the Kelvinists. The view that there is a distinction to be
made between real, actual, objective, measurable risk that obeys the formal
laws of statistical theory and subjective risk inaccurately perceived by non-
experts is still the mainstream position in most of the research and literature
on safety and risk management. Certainly the view that there is no such
thing as “objective risk” and that risk is “culturally constructed” is one that
some members of the Royal Society appear to find incomprehensible, and
others, more robustly, dismiss as relativistic, airy-fairy nonsense. Much can
depend on whether or not they are right.
Britain’s Department of Transport belongs to the Kelvinist camp. It
measures the safety or danger of a road by its casualty record—the
consequences of real accidents. It draws a clear line between actual danger
and perceived danger. The Department is prepared to spend money only to
relieve actual danger. If a road does not have a fatality rate significantly
above “normal” (about 1.2 per 100 million vehicle kilometres), it is not
eligible for funds for measures to reduce the danger.
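A hypothetical calculation in the Department's style illustrates how such a threshold operates. The road length, traffic flow and casualty record below are invented, and the Department's actual criterion involves statistical significance rather than the bare comparison sketched here.

    # Hypothetical sketch of the casualty-rate test described in the text.
    NORMAL_RATE = 1.2            # fatalities per 100 million vehicle-km ("normal")

    length_km = 5.0              # assumed road length
    vehicles_per_day = 10_000    # assumed traffic flow
    fatalities_per_year = 1      # assumed casualty record

    vehicle_km_per_year = length_km * vehicles_per_day * 365
    rate = fatalities_per_year / (vehicle_km_per_year / 100_000_000)

    print(f"observed rate: {rate:.2f} per 100 million vehicle-km")
    if rate > NORMAL_RATE:       # the real test is statistical, not a bare comparison
        print("above 'normal': eligible for funds")
    else:
        print("officially safe: not eligible, however dangerous residents perceive it to be")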
Sir Patrick Brown (1991), Permanent Secretary of Britain’s Department
of Transport, has announced that “funds for traffic calming will be judged
on casualty savings, not environmental improvements or anxiety relief”. All
up and down the country there are people living alongside roads that they
perceive to be dangerous, but which have good accident records. They are
told in effect that if you don’t have blood on the road to prove it, your road
is officially, objectively, safe, and your anxiety is subjective and emotional.
In the road safety literature, and the safety literature generally, it is still
the mainstream position that casualty statistics provide the only reliable
measure of the success or failure of safety schemes. The foremost
academic journal devoted to safety issues is Accident analysis and
prevention; the metric of success and failure is embedded in its title. Safe
roads, or airlines, or factories, or hospitals, or schools, or playgrounds, are
those with few accidents. The objective of accident analysis is accident
prevention.
Why was the Royal Society studying risk in the first place? In 1983 they
put it this way:
Governments are now seen to have a plain duty to apply themselves
explicitly to making the environment safe, to remove all risk or as
much of it as is reasonably possible.
They were seeking to offer advice about how risk might be eliminated, or reduced,
or at least better managed. By 1992 the objective of managing risk had been
included in the title of their report Risk: analysis, perception and management.
This, one might think, is a worthwhile and uncontentious objective; it is one
shared with hundreds of journals and campaigning organizations concerned with
safety all around the world. The “plain duty” to reduce accidents permeates the
study of risk. If risk exists, according to the Kelvinists, it exists as a probability
that can be measured—by accident statistics. But can it?
The area of risk-taking that generates the greatest volume of accident
statistics is danger on the road. It is a category of risk that can be clearly
distinguished from other areas of risk-taking activity. Although major problems
are encountered in defining the categories of injury severity, the fatality
statistics in most highly motorized countries are probably accurate within a
few percentage points, and the circumstances of each fatal accident are
recorded systematically and in considerable detail. Furthermore, the numbers
of fatal accidents are large enough to permit statistical analysis of intervention
effects. The British Medical Association (1980) has observed that
deaths and injuries on the road are one of the few subjects where
preventive medicine can be based on reliable statistics on the effects
of intervention.
But controversy still surrounds the interpretation of these statistics. Consider
this view of the change over time of safety on the roads.
I can remember very clearly the journeys I made to and from school
because they were so tremendously exciting…The excitement
centred around my new tricycle. I rode to school on it every day
with my eldest sister riding on hers. No grown-ups came with
us…All this, you must realize, was in the good old days when the
sight of a motor car on the street was an event, and it was quite safe
for tiny children to go tricycling and whooping their way to school
in the centre of the highway. (Roald Dahl, in Boy, recalling his
childhood in Llandaff, Glamorgan in 1922.)
The young Roald Dahl was doing something that was “tremendously
exciting” and yet “quite safe”. Was he taking a risk?
Figure 2.1 Child road accident fatalities (source: Hillman et al. 1990).
Figure 2.1 shows that between 1922 and 1986, while the motor vehicle
population of England and Wales increased 25 fold, the number of children
under the age of 15 killed in road traffic accidents fell from 736 per annum
to 358. Allowing for changes in population, the road accident death rate for
children is now about half its level of 70 years ago. The child road death
rate per motor vehicle has fallen by about 98 per cent.
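A back-of-envelope check, using only the figures just quoted and ignoring the population adjustment mentioned above, reproduces both claims.

    # Rough check of the two claims from the cited 1922 and 1986 figures alone
    # (the text's "about half" also allows for changes in the child population).
    deaths_1922, deaths_1986 = 736, 358    # child road deaths per annum
    vehicle_growth = 25                    # motor vehicle numbers grew roughly 25-fold

    death_ratio = deaths_1986 / deaths_1922
    print(f"deaths fell to {death_ratio:.2f} of the 1922 level")            # ~0.49

    per_vehicle_ratio = death_ratio / vehicle_growth
    print(f"deaths per vehicle fell by about {1 - per_vehicle_ratio:.0%}")  # ~98%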
On the basis of these statistics, the conventional wisdom would conclude
that Roald Dahl’s subjective belief that the roads used to be “quite safe” was
simply wrong; objectively—statistically—they have become very much safer.
Certainly, this is the way politicians and safety officials routinely interpret
the evidence. A British Government road safety report (Department of
Transport 1990) says, for example:
Over the last quarter of a century, Britain’s roads have become much
safer. Road accidents have fallen by almost 20% since the mid–
1960s; the number of deaths is down by one third. At the same time,
traffic has more than doubled.
The orthodox school of risk assessment treats accident statistics as objective
measures of risk. The most commonly used scale is the number of “events”
(accidents or injuries) per 100,000 persons per unit of time (see, for example,
Urquhart & Heilmann 1984 and Hambly 1994). These measures are interpreted
by experts as objective indices of risk, and sometimes compared with the
subjective judgements of lay people, usually with the aim of demonstrating the
hopeless inaccuracy of the latter. A British Medical Association study (1987),
for example, reports enormous disparities between lay estimates of deaths and
actual deaths attributed to a variety of causes. Highway engineers swap
anecdotes about roads that have good accident records, but which nevertheless
provoke complaints from local residents about their danger. The engineers insist
that the roads with few accidents are safe, and that the complainers are neurotic.
Exposure
More sophisticated members of the orthodox school of risk assessment might
raise questions about the appropriate measure of exposure of children to
traffic. Although traffic has increased enormously since the 1920s, the
exposure of children to traffic has probably decreased. It is not possible to
measure this decrease as far back as the 1920s, but surveys conducted in
England in 1971 and 1990 suggest a large reduction in children’s exposure
over that 19 year period in response to the perceived growth in the threat of
traffic. In 1971, for example, 80 per cent of seven and eight year old children
in England travelled to school on their own, unaccompanied by an adult.
By 1990 this figure had dropped to 9 per cent; the questionnaire survey
disclosed that the parents’ main reason for not allowing their children to
travel independently was fear of traffic.
But this evidence is still only broadly indicative of changes in exposure.
It is far from providing a precise measure. Indeed, in the case of children’s
road safety, it is not clear how exposure might be measured. How might one
measure the duration of their exposure to traffic, or the intensity of this
exposure, or its quality? Children are impulsive, energetic and frequently
disobedient (characteristics commonly found also in adults). They have
short attention spans and their judgement in assessing traffic dangers varies
greatly with age. They rarely walk purposefully to school or anywhere else.
They frequently have balls or tin cans to kick about, or bicycles or
skateboards on which to vie with one another in feats of daring. No survey
evidence exists to chart changes over time in the time they spend playing in
the street, as distinct from alongside the street, or the changes in speed,
volume and variability of traffic to which they are exposed.
But the most intractable measurement problem of all is posed by changes
in levels of vigilance, of both children and motorists, in response to variations
in perceptions of the likelihood of colliding with one another. As perceived
threats increase, people respond by being more careful. The variety of ways
in which this might be done has defeated attempts to measure it. The
unquantified, and mostly unquantifiable, changes in the exposure of children
to traffic danger are characteristic of the difficulties encountered by all
attempts to produce “objective” measures of risk outside of the casino where
the odds can be mechanistically controlled.
The problem for those who seek to devise objective measures of risk is
that people to varying degrees modify both their levels of vigilance and
their exposure to danger in response to their subjective perceptions of risk.
The modifications in response to an increase in traffic danger, for example,
might include fitting better brakes, driving more slowly or carefully, wearing
helmets or seat belts or conspicuous clothing, or withdrawing from the threat
completely or—in the case of children no longer allowed to play in the
street—being withdrawn from the threat by their parents.
In the physical sciences the test of a good theory is its ability to predict
something that can be measured by independent observers. It is the Royal
Society’s purpose in studying risk—to manage it—that frustrates efforts to
produce such predictions. Because both individuals and institutions respond
to their perceptions of risk by seeking to manage it, they alter that which is
predicted as it is predicted.
The league tables of “objective risk” commonly compiled to rank the
probabilities of death from different causes—from “radiation” to “being a coal
miner”—are constructed from data of immensely variable quality (see Ch. 5).
But even if the accuracy and reliability of the data could be ensured, there
would remain an insuperable problem in interpreting them as objective measures
of risk for individuals. They are aggregate measures of past risk-taking by large
and disparate populations of risk-takers, and they are a part of the evidence that
shapes the perceptions of risk which influence future risk-taking.
All risks are conditional. It would, for example, be possible to devise a
bowling machine that would, at randomized intervals, roll a child-size ball
across a street. If the volume and speed of traffic were known, and if the
speed and frequency of the balls were known, it would be possible to calculate
the objective probability of ball and vehicle colliding within a specified time.
But it would be useless as an estimate of the number of children one would
expect to be knocked down on a residential street, because both children and
drivers have expectations of each other’s behaviour, observe it, and respond
to it. The probabilities cited in the league tables would be valid predictors
only if they could be concealed from the people affected, or if the people
affected could be relied upon to behave as unresponsive dumb molecules;
that is only if the known probabilities were not acted upon. And that, of course,
would defeat the purpose of calculating them.
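The kind of calculation the bowling machine would permit can be sketched as follows. Every figure is invented, traffic is treated as a Poisson stream of point-like vehicles, and the decisive simplification is exactly the one just described: nothing in the model watches, anticipates or responds.

    # Crude sketch of the bowling-machine calculation: invented figures,
    # Poisson traffic, and no behavioural response on either side.
    import math

    street_width_m = 7.0
    ball_speed_ms = 2.0
    crossing_time_s = street_width_m / ball_speed_ms        # time a ball is exposed

    vehicles_per_hour = 300
    vehicle_rate_per_s = vehicles_per_hour / 3600.0

    # probability that at least one vehicle passes while a single ball is crossing
    p_hit_per_ball = 1 - math.exp(-vehicle_rate_per_s * crossing_time_s)

    balls_per_hour, hours = 10, 24
    expected_collisions = balls_per_hour * hours * p_hit_per_ball
    p_any_collision = 1 - math.exp(-expected_collisions)

    print(f"p(hit) for one ball: {p_hit_per_ball:.3f}")
    print(f"expected collisions in {hours} hours: {expected_collisions:.1f}")
    print(f"p(at least one collision): {p_any_collision:.3f}")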
The response to risk: risk compensation
Figure 2.2, a model of the theory of risk compensation, illustrates the
circularity of the relationships that frustrate the development of objective
measures of risk. It is a model originally devised by Gerald Wilde in 1976,
and modified by Adams (1985, 1988). The model postulates that
– everyone has a propensity to take risks
– this propensity varies from one individual to another
– this propensity is influenced by the potential rewards of risk-taking
– perceptions of risk are influenced by experience of accident losses—one’s own and others’
– individual risk-taking decisions represent a balancing act in which perceptions of risk are weighed against propensity to take risk
– accident losses are, by definition, a consequence of taking risks; the more risks an individual takes, the greater, on average, will be both the rewards and losses he or she incurs.
The arrows connecting the boxes in the diagram are drawn as wiggly
lines to emphasize the modesty of the claims made for the model. It is an
impressionistic, conceptual model, not an operational one. The contents of
the boxes are not capable of objective measurement. The arrows indicate
directions of influence and their sequence.
Figure 2.2 The risk “thermostat”.
The balancing act described by this illustration is analogous to the
behaviour of a thermostatically controlled system. The setting of the
thermostat varies from one individual to another, from one group to another,
from one culture to another. Some like it hot—a Hell’s Angel or a Grand Prix
racing driver for example; others like it cool—a Mr Milquetoast or a little
old lady named Prudence. But no one wants absolute zero.
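The feedback structure of the model, though explicitly not operational, can be suggested by a toy simulation. Everything in the sketch below (the variables, their scales, the update rule) is invented for illustration; none of the quantities is measurable in the sense discussed earlier in this chapter.

    # Toy feedback loop in the spirit of Figure 2.2; all quantities are invented.
    target_risk = 0.2        # the "thermostat" setting; varies between individuals
    behaviour = 0.4          # boldness of behaviour (more boldness, more reward)
    hazard = 1.0             # expected losses per unit of boldness (the environment)

    for step in range(1, 201):
        if step == 101:
            hazard = 0.5     # a safety measure halves the danger of any given act
        perceived_risk = behaviour * hazard                  # losses shape perception
        behaviour += 0.1 * (target_risk - perceived_risk)    # the balancing act
        if step in (100, 200):
            print(f"step {step}: hazard={hazard}, behaviour={behaviour:.2f}, "
                  f"perceived risk={perceived_risk:.2f}")

When the hazard is halved at step 101, behaviour roughly doubles and perceived risk settles back to the original setting of the thermostat, which is the compensating response the theory postulates.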
The young Roald Dahl, tricycling and whooping his way to school,
exemplifies the need for excitement inherent in all of us. Psychologists
sometimes refer to this as a need for a certain level of arousal. This need
clearly varies, but there are no documented cases of its level being zero.
And even if such a case could be found, it could not produce a zero-risk life.
The single-minded pursuit of a zero-risk life by staying in bed would be
likely, paradoxically, to lead to an early death from either starvation or
atrophy. There is no convincing evidence that anyone wants a zero-risk life—
it would be unutterably boring—and certainly no evidence that such a life
is possible. The starting point of any theory of risk must be that everyone
willingly takes risks. This is not the starting point of most of the literature
on risk.
Homo prudens and free will
To err is human. So is to gamble. Human fallibility and the propensity to
take risks are commonly asserted to be the root causes of accidents. How
should responsibility for accidents be shared between these causes? The
safety literature favours, overwhelmingly, human error as the cause of
accidents (Reason 1990). No one wants an accident, therefore, it is argued,
if one occurs it must be the result of a mistake, a miscalculation, a lapse of
concentration, or simple ignorance of the facts about a dangerous situation.
Risk management in practice is overwhelmingly concerned not with
balancing the costs and benefits of risk but with reducing it. Most of the
literature on the subject is inhabited by Homo prudens—zero-risk man. He
personifies prudence, rationality and responsibility. Large corporations such
as Shell Oil hold him up as an example for all employees to emulate in their
campaigns to eliminate all accidents.
The safety challenge we all face can be very easily defined—to eliminate
all accidents which cause death, injury and damage to the environment
or property. Of course this is easy to state, but very difficult to achieve.
Nevertheless, that does not mean that it should not be our aim, or that
it is an impossible target to aim for. (Richard Charlton (1991), Director
of Exploration and Production, Shell Oil)
The aim of avoiding all accidents is far from being a public relations
puff. It is the only responsible policy. Turning “gambling man” into
“zero-risk man” (that is one who manages and controls risks) is just
one of the challenges that has to be overcome along the way. (Koos
Visser (1991), Head of Health, Safety and Environment, Shell Oil)
Homo prudens strives constantly, if not always efficaciously, to avoid
accidents. Whenever he has an accident, it is a “mistake” or “error”. When
this happens, if he survives, he is acutely embarrassed and he tries, with the
help of his expert advisers, to learn from his mistakes. Every major accident
is followed by an inquiry into the events leading up to it in order to ensure
that it can never happen again.
But people do willingly take risks. Gamblers may not like losing, but
they do like gambling. Zero-risk man is a figment of the imagination of the
safety profession. Homo prudens is but one aspect of the human character.
Homo aleatorius—dice man, gambling man, risk-taking man—also lurks
within every one of us. Perusal of films, television and the newspapers
confirms that we live in a society that glorifies risk. The idols of the sports
pages and financial pages are risk-takers. Our language is littered with
aphorisms extolling the virtues of risk: “nothing ventured nothing gained”,
“no risk, no reward”. Excessive caution is jeered at: “Prudence is a rich,
ugly old maid courted by Incapacity” quoth William Blake.
The oil industry, which now seeks to turn its workers into zero-risk men,
historically has been run by big risk-takers. The heroes of the industry in the
public imagination are “wildcats” like Getty and the Hunts, or Red Adair—
high-stakes gamblers prepared to put their lives or fortunes on the line. The
workers involved in the processes of mineral exploration and production were
“rough-necks”—men who were physically tough, and with a propensity to
take risks. Their exploits have been enshrined in the folklore of the 49’ers of
California, the Dangerous Dans of the Klondike, and innumerable other
characters from mining areas all around the world. And the 49’ers have become
a San Francisco football team, famous for its daring exploits.
We respond to the promptings of Homo aleatorius because we have no choice;
life is uncertain. And we respond because we want to; too much certainty is
boring, unrewarding and belittling. The safety literature largely ignores Homo
aleatorius, or, where it does acknowledge his existence, seeks to reform him. It
assumes that everyone’s risk thermostat is set to zero, or should be.
Imagine for a moment that the safety campaigners were successful in
removing all risk from our lives. What would a world without risk be like?
A world without risk would be a world without uncertainty. Would we want
it if we could have it? This is a question that has also been considered by
some eminent scientists. Einstein argued about it with Max Born.
You believe in a God who plays dice, and I in complete law and
order in a world which objectively exists, and which I, in a wildly
speculative way, am trying to capture. I firmly believe [Einstein’s
emphasis], but I hope that someone will discover a more realistic
way, or rather a more tangible basis than it has been my lot to find.
Even the great initial success of the quantum theory does not make
me believe in the fundamental dice game, although I am well aware
that some of our younger colleagues interpret this as a consequence
of senility. No doubt the day will come when we will see whose
instinctive attitude was the correct one.
Albert Einstein, in a letter to Max Born, 7 September, 1944 (Born 1971)
The uncertainty that quantum physicists believe is inherent in physical nature
was anathema to Einstein. He believed that his inability to account for certain
physical phenomena in terms of strict cause and effect was a consequence
of his ignorance of the laws governing the behaviour of the phenomena in
question, or his inability to measure them with sufficient precision.
Max Born, to whom he was writing, was one of the leading figures in the
development of quantum mechanics. He was not persuaded by Einstein’s
strict determinism, but could not find evidence or argument to persuade
Einstein to accept the quantum theory. Einstein objected that “physics should
represent reality in time and space free from spooky actions at a distance”.
One of the reasons, perhaps the main reason, why Born preferred spooky
actions at a distance to strict determinism was that “strict determinism
seemed…to be irreconcilable with a belief in responsibility and ethical
freedom”. He lamented:
I have never been able to understand Einstein in this matter. He was,
after all, a highly ethical person, in spite of his theoretical
conviction about predetermination…. Einstein’s way of thinking in
physics could not do without the “dice-playing God”, to be
absolutely correct. For in classical physics the initial conditions are
not predetermined by natural laws, and in every prediction one has
to assume that the initial conditions have been determined by
measurement, or else to be content with a statement of probability. I
basically believe that the first case is illusory…
And he wrote back to Einstein:
Your philosophy somehow manages to harmonize the automata of
lifeless objects with the existence of responsibility and conscience,
something which I am unable to achieve.
Uncertainty, according to Born, is the only thing that permits us the possibility
of moral significance. Only if there is uncertainty is there scope for
responsibility and conscience. Without it we are mere predetermined automata.
In the realm of theology the debate about determinism goes back far beyond
Born and Einstein. For centuries believers in free will have contended with
believers in predestination. No resolution of the debate is in prospect.
Theologians usually retire from the discussion with explanations like the
following:
The relation between the will of God and the will of man is
mysterious. The former is eternal and irreversible, the latter real and
free, within its proper limits. The appearance of contradiction in this
arises from the finiteness of our understanding, and the necessity of
contemplating the infinite and the immutable from a finite and
mutable point of view. (Hall 1933)
In the face of uncertainty, both scientists and theologians fall back on belief.
And even devout determinists such as Einstein cannot forgo the occasional
use of the word “ought”, which presupposes choice, and which brings in its
train both uncertainty and moral significance. If the determinists are wrong,
it would appear that risk is inescapable. It also appears to be desirable: some
risk-taking behaviour serves as a confirmation of moral autonomy.
Dostoevsky suggests that such confirmation in itself might be considered
the ultimate reward for risk-taking. Only by invoking such a reward can one
account for behaviour that would otherwise be seen as perverse and self-
destructive. This is how Dostoevsky (1960) puts it:
What man wants is simply independent choice, whatever that
independence may cost and wherever it may lead…. Reason is an
excellent thing, there’s no disputing that, but reason is nothing but
reason and satisfies only the rational side of man’s nature, while will
is a manifestation of the whole life…I repeat for the hundredth time,
there is one case, one only, when man may consciously, purposely,
desire what is injurious to himself, what is stupid, very stupid—
simply in order to have the right to desire for himself even what is
very stupid and not be bound by an obligation to desire only what is
sensible. Of course, this very stupid thing, this caprice of ours, may
be in reality, gentlemen, more advantageous for us than anything
else on earth, especially in certain cases. And in particular it may be
more advantageous than any advantage even when it does us
obvious harm, and contradicts the soundest conclusions of our
reason concerning our advantage—for in any circumstances it
preserves for us what is most precious and most important—that is,
our personality, our individuality.
From a Dostoevskian perspective, the greater the success of the safety
regulators in removing uncertainty from our lives, the stronger will become
the compulsion to reinstate it. This perspective has challenging implications
for accident reduction programmes that attempt to promote safety by
producing “failsafe” or “foolproof” environments, or by the use of rewards
for safe behaviour, or penalties for dangerous behaviour. A world from which
all risk had been removed would be one with no uncertainty or freedom or
individuality. The closer one approaches such a state, the greater is likely to
be the resistance to further progress, and the more likely will be outbreaks
of Dostoevskian “irrationality”.
Risk: an interactive phenomenon
Figure 2.2 above can be used to describe the behaviour of a motorist going
around a bend in the road. His speed will be influenced by the rewards of
risk. These might range from getting to the church on time, to impressing
his friends with his skill or courage. His speed will also be influenced by his
perception of the danger. His fears might range from death, through the cost
of repairs following an accident, to embarrassment. It will also depend on
his judgement about the road conditions (Is there ice or oil on the road?
How sharp is the bend and how high is the camber?) and on the capability of
his car (How good are the brakes, suspension, steering and tyres?).
Overestimating the capability of the car, or the speed at which the bend
can be safely negotiated, can lead to an accident. Underestimating these
things will reduce the rewards at stake. The consequences, in either direction,
can range from the trivial to the catastrophic. The safety literature almost
universally ignores the potential loss of reward consequent on behaviour
that is too safe. Its ambition is to eliminate all accidents.
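The balancing process described here lends itself to a small numerical sketch. The feedback structure below follows the risk thermostat of Figure 2.2; the formula for perceived danger and the constants in it are illustrative assumptions of mine, not quantities taken from the book or from any study. The point is simply that a driver who adjusts speed until perceived danger matches the setting of his thermostat will take a more forgiving bend faster, with no change in the setting itself.

```python
# A toy rendering of the risk thermostat applied to the cornering driver.
# The feedback structure follows Figure 2.2; the formula for perceived
# danger and the constants are illustrative assumptions only.

def perceived_risk(speed_mph: float, road_grip: float = 1.0) -> float:
    """A crude stand-in for perceived danger on the bend: it grows with
    the square of speed and falls as the road becomes more forgiving."""
    return (speed_mph / 60.0) ** 2 / road_grip

def balancing_behaviour(target_risk: float, road_grip: float = 1.0,
                        speed_mph: float = 30.0, steps: int = 50) -> float:
    """Adjust speed until perceived danger settles near the thermostat
    setting: speed up if the bend feels too tame, slow down if it feels
    too dangerous."""
    for _ in range(steps):
        gap = target_risk - perceived_risk(speed_mph, road_grip)
        speed_mph = max(speed_mph + 20.0 * gap, 0.0)
    return speed_mph

if __name__ == "__main__":
    # Same thermostat setting, more forgiving bend: higher settled speed.
    print(round(balancing_behaviour(target_risk=0.5, road_grip=1.0)))  # ~42 mph
    print(round(balancing_behaviour(target_risk=0.5, road_grip=2.0)))  # ~60 mph
```

In such a sketch the "rewards" and "accidents" boxes never appear explicitly; they are collapsed into the single setting of the thermostat, which is all the simplification is meant to convey.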
Figure 2.3 introduces a second car to the road to make the point that risk
is an interactive phenomenon. One person’s balancing behaviour has
consequences for others. On the road other motorists can impinge on your
“rewards” by getting in your way and slowing you down, or help you by
giving way. One is also concerned to avoid hitting them, or being hit by
them. Driving in traffic involves monitoring the behaviour of other motorists,
speculating about their intentions, and estimating the consequences of a
misjudgement. If you see a car approaching at speed and wandering from
one side of the road to another, you are likely to take evasive action, unless
perhaps you place a very high value on your dignity and rights as a road
user and fear a loss of esteem if you are seen to be giving way. During this
interaction enormous amounts of information are processed. Moment by
moment each motorist acts upon information received, thereby creating a
new situation to which the other responds.
Figure 2.3 The risk “thermostat”: two drivers interacting.

Figure 2.4 introduces another complication. On the road, and in life
generally, risky interaction frequently takes place on terms of gross inequality.
The damage that a heavy lorry can inflict on a cyclist or pedestrian is great;
the physical damage that they might inflict on the lorry is small. The lorry
driver in this illustration can represent the controllers of large risks of all
sorts. Those who make the decisions that determine the safety of consumer
goods, working conditions or large construction projects are, like the lorry
driver, usually personally well insulated from the consequences of their
decisions. The consumers, workers or users of their constructions, like the
cyclist, are in a position to suffer great harm, but not inflict it.

Figure 2.4 The risk “thermostat”: lorry driver and cyclist interacting.
Problems of measurement
Risk comes in many forms. In addition to economic risks—such as those
encountered in the insurance business—there are physical risks and social
risks, and innumerable subdivisions of these categories: political risks, sexual
risks, medical risks, career risks, artistic risks, military risks, motoring risks,
legal risks…The list is as long as there are adjectives to apply to behaviour
in the face of uncertainty. These risks can be combined or traded. In some
occupations people are tempted by danger money. Some people, such as
sky-diving librarians, may have very safe occupations and dangerous hobbies.
Some young male motorists would appear to prefer to risk their lives rather
than their peer-group reputations for courage.
Although the propensity to take risks is widely assumed to vary with
circumstances and individuals, there is no way of testing this assumption
by direct measurement. There is not even agreement about what units of
measurement might be used. Usually the assumption is tested indirectly by
reference to accident outcomes; on the basis of their accident records young
men are judged to be risk seeking, and middle-aged women to be risk averse.
But this test inevitably gets muddled up with tests of assumptions that
accidents are caused by errors in risk perception, which also cannot be
measured directly. If Nigel Mansell crashes at 180mph in his racing car, it is
impossible to determine “objectively” whether it was because he made a
mistake or he was taking a risk.
Both the rewards of risk and the accident losses defy reduction to a
common denominator. The rewards come in a variety of forms: money, power,
glory, love, affection, (self)-respect, revenge, curiosity requited, or simply
the sensation (pleasurable for some) accompanying a rush of adrenaline.
Nor can accident losses be measured with a single metric. Road accidents,
the best documented of all the realms of risk, can result in anything from a
bent bumper to death; and there is no formula that can answer the question—
how many bent bumpers equals one life?
Yet the Royal Society insists that there has to be such a formula. Its
definition of detriment—“a numerical measure of the expected harm or loss
associated with an adverse event”—requires that there be a scale “to facilitate
meaningful additions over different events”. Economists have sought in vain
for such a measure. The search for a numerical measure to attach to the
harm or loss associated with a particular adverse event encounters the
problem that people vary enormously in the importance that they attach to
similar events. Slipping and falling on the ice, as noted above, is a game for
children, and an event with potentially fatal consequences for the elderly.
The problems encountered in attempting to assign money values to losses
will be discussed further in Chapter 6; the main problem is that the only
person who can know the true value of an accident loss is the person suffering
it, and, especially for large losses—such as loss of reputation, serious injury,
or death—most people find it difficult, if not impossible, to name a cash
sum that would compensate them for the loss.
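The arithmetic implied by the Royal Society's definition can be written out; the notation below is mine rather than the Society's, but it makes the difficulty plain:

\[
D \;=\; \sum_{i} p_i \, d_i
\]

where p_i is the probability of adverse event i and d_i the harm it causes, expressed in some common unit. The summation is meaningful only if bent bumpers, lost reputations, serious injuries and deaths can all be reduced to the same d, and it is precisely this common unit that has proved elusive.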
Figure 2.5 The risk “thermostat” stretched.
Figure 2.5 is a distorted version of Figure 2.2 with some of the boxes
displaced along an axis labelled “Subjectivity-Objectivity”. The box which
is displaced farthest in the direction of objectivity is “balancing behaviour”.
It is possible to measure behaviour directly. It was noted above, for example,
that parents have withdrawn their children from the streets in response to
their perception that the streets have become more dangerous. It is possible
in principle to measure the decline in the amount of time that children
spend in the streets exposed to traffic, but even here the interpretation of
the evidence is contentious. Do children now spend less time on the street
because they spend more time watching television, or do they spend more
time watching television because they are not allowed to play in the streets?
All of the elements of the risk compensation theory, and those of any
contenders of which I am aware, fall a long way short of the objective end of
the spectrum. Behaviour can be measured, but its causes can only be inferred.
And risks can be displaced. If motorcycling were to be banned in Britain,
it would save about 500 lives a year. Or would it? If it could be assumed that
all the banned motorcyclists would sit at home drinking tea, one could simply
subtract motorcycle accident fatalities from the total annual road accident
death toll. But at least some frustrated motorcyclists would buy old bangers
and try to drive them in a way that pumped as much adrenaline as their
motorcycling, and in a way likely to produce more kinetic energy to be
dispersed if they crashed. The alternative risk-taking activities that they
might get up to range from sky-diving to glue sniffing, and there is no set of
statistics that could prove that the country had been made safer, or more
dangerous, by the ban.
Figure 2.6, the dance of the risk thermostats, is an attempt to suggest a
few of the multitudinous complications that flow from the simple
relationships depicted in Figures 2.2 to 2.5. The world contains over 5 billion
risk thermostats. Some are large; most are tiny. Governments and large
businesses make decisions that affect millions if not billions. Individuals
for the most part adapt as best they can to the consequences of these decisions.
The damage that they individually can inflict in return, through the ballot
box or market, is insignificant, although in aggregate, as we shall see in
Chapter 3, they can become forces to be reckoned with.
Overhanging everything are the forces of nature—floods, earthquakes,
hurricanes, plagues—that even governments cannot control, although they
sometimes try to build defences against them. And fluttering about the dance
floor are the Beijing Butterflies beloved of chaos theorists; they ensure that
the best laid plans of mice, men and governments gang aft agley. Figure 2.6
shows but an infinitesimal fraction of the possible interactions between all
the world’s risk thermostats; there is not the remotest possibility of ever
devising a model that could predict reliably all the consequences of
intervention in this system. And chaos theorists now tell us that it is possible
for very small changes in complex non-linear systems to produce very large
effects. And finally, Figure 2.6 includes a line (broken to indicate scientific
uncertainty) to symbolize the impact of human behaviour on nature and all
the natural forces that overhang the dance floor. Discussion of the second
winged species, the angel, is reserved for Chapter 11.

Figure 2.6 The dance of the risk thermostats.
The physical science involved in predicting such things as earthquakes
and hurricanes is far from producing useful forecasts and even further from
resolving the controversies over the greenhouse effect and ozone holes.
Forecasting the weather more than a week ahead is still thought to be
impossible, and when human beings get involved things become even more
difficult. The clouds are indifferent to what the weather forecaster might
say about them, but people respond to forecasts. If they like them, they try
to assist them; if not, they try to frustrate them.
The history of the diffusion of AIDS, for example, is very imperfectly
known, not only because of problems associated with defining and measuring
the symptoms of the disease, but also because it is a stigmatizing disease
that people try to conceal, even after death; the order of magnitude of the
number of people in the world now suffering from AIDS is still in dispute.
The future course of the disease will depend on unpredictable responses to
the perceived threat, by governments, by scientists and by individuals—
and responses to the responses in a never-ending chain. Forecasts of the
behaviour of the disease will inform the perception of the threat, and
influence research budgets, the direction of research, and sexual practices,
which will in turn influence each other.
Scientific uncertainty about the physical world, the phenomenon of risk
compensation, and the interactive nature of risk all render individual events
inherently uncertain.
Varieties of uncertainty
“Risk” and “uncertainty” have assumed the rôle of technical terms in the
risk and safety literature since 1921, when Frank Knight pronounced in his
classic work Risk, uncertainty and profit that
if you don’t know for sure what will happen, but you know the odds,
that’s risk, and
if you don’t even know the odds, that’s uncertainty.
But in common, non-technical, usage this distinction between risk and
uncertainty is frequently blurred and the words are used interchangeably.
And even in the technical literature the distinction is often honoured in
the breach. Virtually all the formal treatments of risk and uncertainty in
game theory, operations research, economics or management science
require that the odds be known, that numbers be attachable to the
probabilities and magnitudes of possible outcomes. In practice, since such
numbers are rarely available, they are usually assumed or invented, the
alternative being to admit that the formal treatments have nothing useful
to say about the problem under discussion (see Ch. 6 on monetizing risk
for examples).
The philosopher A.J.Ayer (1965), discussing the various senses in which
the word chance is used to refer to judgements of probability, made a useful
threefold distinction. There are, he said, offering examples:
judgements of a priori probability: “the chance of throwing double-six
with a pair of true dice is one in 36”,
estimates of actual frequency: “there is a slightly better than even
chance that any given unborn infant will be a boy”, and
judgements of credibility: “there is now very little chance that Britain
will join the Common Market”.
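The difference between the first two senses can be seen by treating the dice example both ways. The a priori judgement is settled by arithmetic: 1/6 × 1/6 = 1/36. An estimate of actual frequency is settled by counting throws, real or simulated. The short sketch below is purely illustrative; nothing in it is drawn from Ayer beyond his example.

```python
import random

# Ayer's first sense: a priori probability, settled by arithmetic alone.
A_PRIORI = (1 / 6) * (1 / 6)  # chance of a double-six with two true dice = 1/36

# Ayer's second sense: an estimate of actual frequency, settled by counting.
def estimated_frequency(throws: int = 100_000, seed: int = 1) -> float:
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(throws)
        if rng.randint(1, 6) == 6 and rng.randint(1, 6) == 6
    )
    return hits / throws

if __name__ == "__main__":
    print(f"a priori probability: {A_PRIORI:.5f}")           # 0.02778
    print(f"estimated frequency:  {estimated_frequency():.5f}")
```

With enough throws the estimate settles towards the a priori figure; Ayer's third sense, a judgement of credibility, offers nothing to count and so resists this treatment altogether.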
The first two of these senses, often combined in the form of inferential
statistics, are the basis of most treatments of “objective risk”. Allied to the
law of large numbers, these treatments provide useful routine guidance in
many areas of life. Insurance companies for example consult past claim
experience in calculating the premiums they charge to cover future risks.
But even insurance companies, with their actuaries, powerful computers
and large databases, find this guidance far from infallible in the face of
true uncertainty. At the time of writing, most insurance companies in
Britain are reporting large losses, and some Lloyds Names are confronting
bankruptcy.
Reports in the press of the plight of some of the Lloyds Names suggest
that many of them did not appreciate the true nature of the business to
which they had lent their names. To become a Lloyds Name, no
investment is required. One must provide proof of personal wealth in
excess of £250,000, and give an undertaking of unlimited liability for
losses incurred by one’s syndicates; a Name is then entitled to a share of
the annual profits—the difference between the syndicates’ premium
income and payments of claims. It is now clear that many of the Names,
although wealthy, were naïve. Lloyds’ history of profitability had
persuaded them that they had simply to lend their names and collect the
profits; another case, they thought, of “to them that hath shall be given”.
But they had signed up for the uncertainty business.
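The asymmetry that caught the Names out is easily stated, even though the odds that mattered were unknowable in advance. A Name's annual result is a share of premium income less the same share of claims; the first term is bounded, while the second, under unlimited liability, is not. The figures in the sketch below are invented purely to show the shape of the payoff.

```python
# Illustrative only: the share and the cash figures below are invented, not
# drawn from any syndicate's accounts; the point is the shape of the payoff.

def name_result(share: float, premium_income: float, claims: float) -> float:
    """A Name's annual result: a share of premiums less a share of claims.
    Liability is unlimited, so the downside has no floor."""
    return share * (premium_income - claims)

quiet_year = name_result(share=0.001, premium_income=50_000_000, claims=30_000_000)
bad_year = name_result(share=0.001, premium_income=50_000_000, claims=350_000_000)

print(f"quiet year: {quiet_year:+,.0f}")  # +20,000
print(f"bad year:   {bad_year:+,.0f}")    # -300,000, exceeding the 250,000 wealth test
```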
Uncertainty as defined by Knight is inescapable. It is the realm not of
calculation but of judgement. There are problems where the odds are known,
or knowable with a bit more research, but these are trivial in comparison
with the problems posed by uncertainty. Blake’s picture of Newton
concentrating on making precise measurements with a pair of callipers while
surrounded by the mysterious “sea of time and space” (frontispiece) is an
apt metaphor for the contribution that science is capable of making to the
resolution of controversies rooted in uncertainty. Newton’s approach can
only ever deal with risk as narrowly defined by Knight and the Royal
Society—as quantifiable probability. The concern of this book is with the
more broadly defined risk of everyday English and everyday life—
unquantifiable “danger, hazard, exposure to mischance or peril” (OED). Risk
in these senses embodies the concepts of probability and magnitude found
in the quantified “scientific” definitions of risk, but does not insist that they
be precisely knowable. If one retreats from the unattainable aspiration of
precise quantification, one may find, I believe, some useful aids for navigating
the sea of uncertainty.
Chapter 3
PATTERNS IN UNCERTAINTY
The subjectivity and relativity of risk—so strongly resisted by those who
aspire to a scientific, objective treatment of the subject—have respectable
scientific antecedents. According to Einstein’s theory of relativity, the size
of an object in motion depends on the vantage point of the measurer. The
length of a moving train carriage, to take a much-used illustration, will be
longer if measured by someone travelling inside the carriage than if measured
by someone standing beside the track. For the speeds at which trains travel,
the difference will be negligible. The difference becomes highly significant
as the speed of light is approached; at such speeds “objective” measures—
independently verifiable measures—of speed, mass and time must be
accompanied by a specification of the frame of reference within which they
were made. And according to Heisenberg’s uncertainty principle, the act of
measuring the location of a particle alters the position of the particle in an
unpredictable way.
Similar problems of relativity and indeterminacy confront those who seek
to pin down risk with objective numbers. Risk is constantly in motion and it
moves in response to attempts to measure it. The problems of measuring
risk are akin to those of physical measurement in a world where everything
is moving at the speed of light, where the act of measurement alters that
which is being measured, and where there are as many frames of reference
as there are observers.
A part of the dance of the risk thermostats described in Chapter 2 takes
place in financial markets. Forecasters, market tipsters, even astrologers, predict
the future course of currency exchange rates, share and commodity prices,
interest rates, and gross domestic products. People buy and sell, guided by
their expectations, which are modified in the light of the behaviour of other
buyers and sellers. Other dancers can be found in supermarkets pondering
information about calories and cholesterol and fibre and pesticides, while the
supermarket owners monitor their concerns and buying behaviour through
questionnaires and, sometimes, closed circuit television—customer and owner
constantly modify their behaviour in response to that of the other. Dancers
can also be found in ministries of defence all around the world, spying on
each other, and spending vast sums of money to defend themselves against
the “defence” of the enemy. And still others can be found on the road.
Throughout the world hundreds of millions of motor vehicles mix with billions
of people. A pedestrian crossing a busy road tries to make eye contact with an
approaching motorist. Will he slow down? The motorist tries to divine the
intentions of the pedestrian. Will he give way? Shall I? Shan’t I? Will he?
Won’t he? As the distance between them closes, signals, implicit and explicit,
pass between them at the speed of light. Risk perceived is risk acted upon. It
changes in the twinkling of an eye as the eye lights upon it.
“Risk” is defined, by most of those who seek to measure it, as the product
of the probability and utility of some future event. The future is uncertain
and inescapably subjective; it does not exist except in the minds of people
attempting to anticipate it. Our anticipations are formed by projecting past
experience into the future. Our behaviour is guided by our anticipations. If
we anticipate harm, we take avoiding action. Accident rates therefore cannot
serve, even retrospectively, as measures of risk; if they are low, it does not
necessarily indicate that the risk was low. It could be that a high risk was
perceived and avoided.
It is the very determination of the measurers to change risk that frustrates
their ability to measure it. In carefully contrived circumstances—such as the
spinning of a roulette wheel, or the example offered in Chapter 2 of the child-
size ball rolled across a road—one can estimate objective probabilities for
specified events. But in the real interactive world of risk management, where
the purpose of measurement or estimation is to provide information to guide
behaviour, risk will be contingent on behavioural responses, which will be
contingent on perceived risk, and so on. And even where the probability of
the event itself may lie beyond the control of the measurer—as in the case of
a predicted meteor impact—the information will still have consequences. Some
might pray, others might get drunk, others might dig deep burrows in which
to shelter; and, if the event did not happen, it would be likely to take some
time for society to recover and get back to business as usual. If the event were
an astronomical certainty, it would, of course, not be a risk.
In Chapter 2 it was noted that the direction of the change intended by the
risk measurers and managers is almost invariably down. The purpose of
risk research is risk reduction. Funds are made available for such research
in the hope or expectation that it will lead to a lowering of risk. The ultimate
objective is often proclaimed to be the removal of all risk, the elimination of
all accidents. Governments around the world continue to add to the existing
libraries full of safety laws and regulations. Safety campaigners are relentless
in their efforts to make the world safer. On achieving a regulatory or statutory
goal, they do not stop. They identify a new risk, and press for new laws or
regulations, or stricter enforcement of existing ones.
The world’s largest industry
The relentless pursuit of risk reduction has made safety an enormous
industry. It is not possible to be precise about its size, because safety merges
with everything else—manufacturers of safety glass, for example, produce
both glass and a safety product. But a few figures from some of the areas in
which it operates in Britain will be sufficient to demonstrate the scope and
economic significance of the risk reduction industry:
Safety in the home. Thousands are employed devising and enforcing
the regulations specifying the methods and materials used in house
construction. Thousands more are employed in the caring professions
to protect the elderly by installing hand-rails and skid-proof bath mats,
and overseeing the welfare of children.
Fire. There are 40,000 firemen in Britain, plus administrative support
staff, fire engineers, fire consultants, fire insurers, fire protection
services, people who make smoke alarms, people who make fire doors,
people who install them, and a fire research station to help devise
further protection.
Casualty services. In addition to those employed in hospital casualty
departments there are 20,000 ambulance drivers in the country and a
further 70,000 St John’s ambulance volunteers trained in first aid.
Safety at play. Thousands more are employed to ensure safe play by
inspecting playgrounds, installing rubberized surfacing, working as
play supervisors and lifeguards in swimming pools, plus the people
who train all these people.
Safety at work. The Health and Safety Executive, responsible for
overseeing safety at work, employs 4,500 people, but this is the tip of
the iceberg. At University College London, where this book is being
written, we have six full-time staff devoted to supervising our safety,
plus a 13-person central safety committee, plus 97 members of staff
designated as safety officers, plus 130 who have qualified in first aid
by taking a one-week course sponsored by the college. I am writing
behind a fire-resistant door, in a building made out of glass and
concrete and steel, with frequent fire drills, and windows that will not
open wide enough for me to fall out—or jump out. In factories safety is
taken more seriously; goggles, helmets and steel-capped boots must be
worn, machinery guarded, and “failsafe” procedures observed.
Safety on the road. The most highly regulated activity of all is
motoring. Most road traffic law is justified on safety grounds.
Annually the number of people proceeded against in court for motor
vehicle offences is over 2.5 million. They account for 75 per cent of
all court proceedings and an enormous amount of police time. In
addition, motorists pay fines for over 5 million fixed-penalty offences
(mostly parking offences), many of which are imposed for safety
reasons. Involved in all this legal work are 27,000 magistrates
and judges, and untold numbers of police, lawyers, administrators and
statisticians. Over 0.5 million motorists are tested for alcohol every
year, almost 2 million new drivers are tested, and 19 million motor
vehicles are given safety checks. Vehicle safety regulations cover
almost every aspect of motor vehicles—the tread of tyres, the type of
glass in wind-screens, brakes, crash-worthiness—and add billions of
pounds to the total annual cost. Safety also forms a major part of the
justification for most new road building, an activity valued at over £2
billion a year, and employing many thousands.
Beyond all this there are the police forces, employing over 150,000 people;
the security industry, selling personal and property protection and reputed
to be the world's fastest growing industry; the insurance industry;
environmental health officers and pollution inspectors; the safety and
environmental pressure groups; and the National Health Service, employing
1.5 million. Affecting the lives of even more people are measures addressing
mega-risks such as nuclear power, ozone holes and the greenhouse effect.
And the armed forces employ a further 300,000 and command an annual
budget of £19 billion, to defend Britain against the risk of attack from other
countries who spend similar or larger amounts of money on “defence”. And
finally, world wide, there are the billions of part-time self-employed—all of
us—routinely monitoring our environment for signs of danger and
responding as we judge appropriate. Definitional problems preclude
precision, but, when all its constituent parts are combined, the risk reduction
industry almost certainly deserves to be called the world’s largest industry.
Is the world getting value for money from the vast resources committed
to risk reduction? A clear answer is seldom possible. Is, for example, the
relatively small number of fatalities thus far attributable to the nuclear
industry proof of safety-money effectively spent, or is it proof of money
wasted on unnecessary defence against an exaggerated threat? It is
impossible to say; no one can prove what would have happened had
cheaper, less stringent, design standards and safety procedures been
adopted. No one can foretell the frequency of future Chernobyls and, given
the long latency period of low-dose radiation before health effects become
apparent, no one can say what the ultimate cost of Chernobyl will be. And,
the proponents of nuclear power might ask, if this cost could be known,
might it be judged a price worth paying for the benefits of nuclear power?
The world's largest industry, in all its manifestations from the provision
of skid-proof bath mats to the design of nuclear containment vessels and
Star Wars defence shields, appears to be guided by nothing more than hunch
and prejudice—billions of hunches and billions of prejudices. The dance of
the risk thermostats appears, at first sight, to be an inchoate, relativistic
shambles. But further scrutiny discloses order and pattern in the behaviour
of the dancers.
Patterns in uncertainty
The frustration of scientists attempting to measure risk suggests the direction
in which we ought to turn for help. Ever since Thomas Kuhn’s (1962) The
structure of scientific revolutions, scientific frustration has been seen as
symptomatic of paradigm conflict—of discord between the empirical
evidence and the expectations generated by the scientists’ paradigm, or world
view. The history of science as described by Kuhn is a process of paradigm
formation and paradigm overthrow in a never-ending quest for truth and
meaning. The actors in this drama are characterized by their constellations
of assumptions and beliefs about the nature of the reality they are exploring.
Occasionally some of these beliefs assume the form of explicit hypotheses
that can be tested, but mostly they are implicit and subconscious. Risk,
however, presents problems and challenges not just to scientists but to lay
persons as well. We all, daily, have our world views confronted by empirical
evidence. And the world about which we must take a view when we are
making decisions about risk comprises not just physical nature, but human
nature as well.
Douglas & Wildavsky (1983) began their book Risk and culture with a
question and an answer: “Can we know the risks we face now and in the
future? No, we cannot; but yes, we must act as if we do.” The subtitle of
their book was An essay on the selection of technological and environmental
dangers, and the question they were addressing was why some cultures
face risk as if the world were one way, and others as if it were very different.
Why do some cultures select some dangers to worry about where other
cultures see no cause for concern?
The management of ecosystems such as forests, fisheries or grasslands
provides a good example of the practical consequences of behaving as if the
world were one way rather than another. Ecosystem managers must make
decisions in the face of great uncertainty. Ecologists who have studied
managed ecosystems have found, time and again, that different managing
institutions faced with apparently similar situations have adopted very
different management strategies. Holling (1979, 1986) discerned patterned
consistencies in these differences that appeared to be explicable in terms of
the managers’ beliefs about nature. He noted that, when confronted by the
need to make decisions with insufficient information, they assumed that
nature behaves in certain ways. He reduced the various sets of assumptions
he encountered to three “myths of nature”—nature benign, nature ephemeral,
and nature perverse/tolerant. Schwarz & Thompson (1990) added a fourth—
nature capricious—to produce the typology illustrated by Figure 3.1. The
essence of each of the four myths is illustrated by the behaviour of a ball in
a landscape, and each, they concluded, was associated with a distinctive
management style.
Figure 3.1 The four myths of nature.
Nature benign: nature, according to this myth, is predictable, bountiful,
robust, stable, and forgiving of any insults humankind might inflict upon
it; however violently it might be shaken, the ball comes safely to rest in the
bottom of the basin. Nature is the benign context of human activity, not
something that needs to be managed. The management style associated
with this myth is therefore relaxed, non-interventionist, laissez-faire.
Nature ephemeral: here nature is fragile, precarious and unforgiving. It is
in danger of being provoked by human carelessness into catastrophic
collapse. The objective of environmental management is the protection
of nature from humans. People, the myth insists, must tread lightly on
the Earth. The guiding management rule is the precautionary principle.
Nature perverse/tolerant: this is a combination of modified versions of the
first two myths. Within limits, nature can be relied upon to behave
predictably. It is forgiving of modest shocks to the system, but care must be
taken not to knock the ball over the rim. Regulation is required to prevent
major excesses, while leaving the system to look after itself in minor
matters. This is the ecologist’s equivalent of a mixed-economy model. The
manager’s style is interventionist.
Nature capricious: nature is unpredictable. The appropriate management
strategy is again laissez-faire, in the sense that there is no point to
management. Where adherents to the myth of nature benign trust nature to
be kind and generous, the believer in nature capricious is agnostic; the
future may turn out well or badly, but in any event it is beyond his control.
The non-manager’s motto is que sera sera.
Myths of human nature
The four myths of nature are all anthropocentric; they represent beliefs not
just about nature but about humankind’s place in nature. The four myths of
nature, by focusing attention on the managers’ beliefs, have proved
remarkably fruitful in helping to make sense of the things that managers do.
They carry out their responsibilities as if nature could be relied upon to
behave in a particular way.
The central theme of Risk and culture is that risk is “culturally constructed”.
This theme has been further refined, and linked to Holling’s myths of nature,
by Schwarz & Thompson (1990) in Divided we stand and by Thompson et al.
(1992) in Cultural theory. In these works the authors inquire into the origins
of the beliefs about nature that guide risk-taking decisions and, like Holling,
they discern patterns. The essence of these cultural patterns has also been
distilled into a fourfold typology, illustrated by Figure 3.2.
Figure 3.2 The four myths of human nature.

This typology, originally known rather cryptically as “grid/group”, has
two axes. Moving along the horizontal axis from left to right, human nature
becomes less individualistic and more collectivist. The vertical axis is
labelled “prescribed/unequal” and “prescribing/equal”; at the top, human
behaviour is “prescribed”—constrained by restrictions on choice imposed
by superior authority, and social and economic transactions are
characterized by inequality. At the bottom there are no externally prescribed
constraints on choice; people negotiate the rules as equals as they go along.
In the lower left-hand corner of this diagram we find the
individualist and in the upper right-hand corner the hierarchist. These
are familiar characters to sociologists and anthropologists accustomed
to the division of cultures into those organized as hierarchies and
those in which markets mediate social and economic relations. This
traditional bi-polar typology has been expanded by the cultural
theorists to include two new archetypes, the egalitarian and the
fatalist. A full description of this typology can be found in Divided we
stand and Cultural theory; in brief:
Individualists are enterprising “self-made” people, relatively free from
control by others, who strive to exert control over their
environment and the people in it. Their success is often measured by
their wealth and the number of followers they can command. The self-
made Victorian mill owner would make a good representative of this
category.
Hierarchists inhabit a world with strong group boundaries and binding
prescriptions. Social relationships in this world are hierarchical, with
everyone knowing his or her place. Members of caste-bound Hindu
society, soldiers of all ranks, and civil servants, are exemplars of this
category.
Egalitarians have strong group loyalties but little respect for externally
imposed rules, other than those imposed by nature. Group decisions
are arrived at democratically and leaders rule by force of personality
and persuasion. Members of religious sects, communards, and
environmental pressure groups all belong to this category.
Fatalists have minimal control over their own lives. They belong to no
groups responsible for the decisions that rule their lives. They are non-
unionized employees, outcasts, untouchables. They are resigned to
their fate and they see no point in attempting to change it.
In Divided we stand Schwarz & Thompson proposed that this typology of
human nature could be mapped onto the typology of physical nature. Figure
3.3 illustrates this mapping. World views, they argued, were inseparable from
ways of life, and viable ways of life were those with world views that helped
them to survive in the face of the uncertainty of physical nature, and also in
the face of competing world views. The capriciousness of nature, they
suggest, complements, and is complemented by, a sense of fatalism. A
capricious nature cannot be governed; one can only hope for the best, and
duck if one sees something about to hit. Individualism accords with a
benign nature that provides a supportive context for the individualist’s
entrepreneurial, trial-and-error way of life. An ephemeral nature demands
that we tread lightly on the Earth and complements the “small-is-beautiful”
ethic of the egalitarian. And the perverse/tolerant view of nature
complements the hierarchist’s managerial approach to both nature and social
relations; research is needed to identify the limits of nature’s tolerance, and
regulation is required to ensure that the limits are not exceeded.
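The mapping just described can be restated compactly as a small lookup structure. The pairings of way of life and myth of nature follow the text directly; the corner positions of the egalitarian and the fatalist are inferred from the descriptions of the grid/group axes above, and the sketch is illustrative rather than anything proposed by the cultural theorists themselves.

```python
# Figure 3.3 restated as a lookup: (group, grid) position -> (way of life,
# myth of nature). "group" runs from individualistic to collectivist;
# "grid" from prescribing/equal (no imposed constraints) to prescribed/unequal.

FOUR_RATIONALITIES = {
    ("individualistic", "prescribing/equal"):  ("individualist", "nature benign"),
    ("individualistic", "prescribed/unequal"): ("fatalist", "nature capricious"),
    ("collectivist", "prescribing/equal"):     ("egalitarian", "nature ephemeral"),
    ("collectivist", "prescribed/unequal"):    ("hierarchist", "nature perverse/tolerant"),
}

for (group, grid), (way_of_life, myth) in FOUR_RATIONALITIES.items():
    print(f"{way_of_life:13} {group:16} {grid:19} -> {myth}")
```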
These four distinctive world views are the basis of four different
rationalities. Rational discourse is usually recognized by its adherence to
the basic rules of grammar, logic and arithmetic. But in an uncertain world
the premises upon which rational arguments are constructed are themselves
beyond the reach of rationality. Disputes about risk in which the participants
hurl charges of stupidity and irrationality at each other are usually seen
upon dispassionate inspection to be arguments in which the participants
are arguing from different premises, different paradigms, different world
views—different myths of nature, both physical and human. These different
rationalities tend to entrench themselves. Both the paradigms of science
and the myths of cultural theory are powerful filters through which the
world is perceived, and they are reinforced by the company one keeps.
Figure 3.3 The four rationalities.
The combined typology of Figure 3.3 forms the central framework of
Cultural theory. Empirical support for the theory is, the authors concede,
sparse, and in its current form it presents many challenges to those who
would frame it as a quantifiable, refutable hypothesis. “What”, the authors
ask, “would count as evidence against our theory?” “Most damaging”, they
answer, “would be a demonstration that values are little constrained by
institutional relationships.” But values, as we shall see in subsequent chapters
(especially Ch. 6), are as elusive as risk itself.
Both scientists and “ordinary people” confront the world armed only
with their myths of nature. Cultural theory might best be viewed in the
uncertain world we inhabit as the anthropologists’ myth of myths. The
validity of such a super-myth is not to be judged by the statistician’s
correlation coefficients and t-tests, but by the degree to which it accords
with people’s experience. And its utility can be judged only by the extent to
which people find it helpful in their attempt to navigate the sea of uncertainty.
In the following section an attempt is made to assess the validity and utility
of this myth of myths by applying it to a dispute about an environmental
threat that is typical of a wide range of such arguments.
Divergent perspectives on environmental threats: an
example of the cultural construction of risk
Capital Killer II: still fuming over London’s traffic pollution (Bell 1993) is a
report by the London Boroughs Association on the health effects of traffic
pollution. It is an example of a common problem—the never-ending
environmental dispute that appears to be unresolvable by science. The
London Boroughs Association (LBA) has a membership of 19 mainly
Conservative-controlled London boroughs, plus the City of London. As the
subtitle of this report indicates, the association is unhappy—indeed fuming—
over the lack of action by the Conservative central government to reduce
traffic pollution in London. The report complains about the lack of resources
provided by central government to deal with “this most serious of issues”,
and reproaches the government for its failure to follow its own policy advice
as propounded in its White Paper on the Environment: “Where there are
significant risks of damage to the environment, the government will be
prepared to take precautionary action…even where scientific knowledge is
not conclusive”. The particular precautionary action that the LBA seeks is
“action…now to reduce levels of traffic and pollution in London”.
Why should political allies (or at least politicians belonging to the same
party) fall out over this issue? Why should the local government politicians
see an urgent need for action where their central government counterparts
plead that there is insufficient evidence to justify such action? Let us look at
the evidence on the health effects of traffic pollution summarized in the
LBA report. The summary begins by citing the conclusions of its 1990 report
Capital Killer: “exhaust emissions from road vehicles may cause major health
problems. Since publication of the report research has continued to suggest
links between air pollution and health.” It accepts that “it will be difficult to
get hard information on the long-term effects of air pollution on health”. It
says that “the link between air pollution and health is not proven but research
is increasingly suggesting that there is such a link”.
The report notes an alarming fivefold increase in the number of hospital
admissions nationally for childhood asthma between 1979 and 1989 and
says “it may well be” that air pollution is one of the factors contributing to
the increased incidence and severity of asthma, and that “traffic exhausts
may exacerbate” asthma and allergic disease. Unfortunately for this
hypothesis, the magnitude of the changes in traffic and emissions between
1979 and 1989 is small relative to the health effects they are invoked to
account for. Although childhood asthma is reported to have increased by
400 per cent, traffic nationally increased by only 57 per cent, and in urban
areas, where the concentrations of pollutants are greatest, by only 27 per
cent; further, during this period, reported emissions of nitrogen oxides,
sulphur dioxide and lead decreased, and emissions of carbon monoxide
increased by only 8 per cent and volatile organic compounds by only 3 per
cent (DOT 1990). For such small changes in traffic and emissions to account
for such a large change in the incidence of asthma requires a sensitivity of
response for which the report presents no evidence.
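The mismatch can be made explicit with a back-of-envelope calculation of my own, on the crude assumption that admissions respond in simple proportion to traffic:

\[
\frac{\Delta\,\text{asthma admissions}}{\Delta\,\text{urban traffic}}
\;\approx\; \frac{+400\%}{+27\%} \;\approx\; 15
\]

that is, every 1 per cent of additional urban traffic would have to generate roughly a 15 per cent rise in admissions (about 7 per cent if the national traffic figure is used instead), and this at a time when reported emissions of most of the suspect pollutants were static or falling.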
The report goes on to look at the evidence for a link between traffic
pollution and hay fever. It reports a theory from one study that car fumes
damage the lining of the nose “and could explain why hay fever in cities is
now three times more prevalent than in rural areas”; it cites another report
that found that “more people appear to be suffering from hay fever”, and
yet another that creates “suspicion that worsening pollution is responsible”
for the increased incidence of hay fever.
The report presents more evidence in similar, heavily qualified, vein,
and then quotes, in tones of disappointment and incredulity, the
government’s response to the evidence: “there is [the Government says]
perceived to be a growth in the incidence of respiratory illnesses, and many
respiratory physicians do believe that there is an increase in the prevalence
of asthma; but suggestions that the change in asthma levels is as a result of
air pollution remain unproven”. In the previous two paragraphs all the italics
have been added by me to stress the LBA's acknowledgement of the tenuous
nature of the evidence linking air pollution and ill health. In this paragraph
the italics have been added by the author of the LBA report. The LBA's
italics appear to be intended to encourage a sense of anger and incredulity
in the reader. How, the report seems to ask, could the Government respond
to such compelling evidence by suggesting that it was mere perception and
belief? How could the Government not be moved to action?
The four rationalities as contending paradigms
These contrasting responses to the same evidence, or lack of it, provide an
excellent example of the cultural construction of risk. This phenomenon
can be found at work wherever disputes about health and safety are
unresolved, or unresolvable, by science. For years the nuclear industry and
environmentalists have argued inconclusively, producing mountains of
“scientific” evidence in the process. Food safety regulations, AIDS, the
greenhouse effect, seat belts and bicycle helmets are but a few other examples,
from many that could be chosen, of health and safety controversies that
have generated a substantial “scientific” literature without generating a
consensus about what should be done. In all these cases, and a multitude of
others, the participants commonly cast aspersions on the rationality of those
who disagree with them. The approach of cultural theory suggests not that
some are rational and others irrational, but that the participants are arguing
rationally from different premises. This can be illustrated by the disagreement
between the LBA and the Government about traffic pollution in London.
Individualists tend to view nature as stable, robust and benign, capable of
shrugging off the insults of man, and rarely retaliating. They are believers in
market forces and individual responsibility, and are hostile to the regulators
of the “nanny State”. For them the use of seat belts and helmets and the risks
associated with sexual behaviour should be matters of individual discretion.
The safety of food, like its taste and appearance, they would leave to the
market. Where evidence is inconclusive, as in the case of the greenhouse
effect and the health effects of air pollution, they place the onus of proof on
those who would interfere. They tend to an optimistic interpretation of history,
and are fond of citing evidence of progress in the form of statistics of rising
gross domestic product and lengthening life expectancy.
Egalitarians cling to the view of nature as fragile and precarious. They
would have everyone tread lightly on the Earth and in cases of scientific
doubt invoke the precautionary principle. They join the individualists in
opposition to the compulsory use of bicycle helmets and seat belts, but for
different reasons; they argue that compelling people to wear helmets inhibits
the use of an environmentally benign form of transport, and that seat belts
and other measures that protect people in vehicles encourage heedless
driving that puts pedestrians and cyclists at greater risk. AIDS confirms their
view of the need for prudent and cautious behaviour in a dangerous world.
The precariousness of individual health justifies protective measures in the
form of food safety regulations. The greenhouse effect and the health effects
of traffic pollution are both issues that cry out for the application of the
precautionary principle. Egalitarians incline to an anxious interpretation of
history; they read it as a series of dire warnings and precautionary tales of
wars, plagues and famines, and species and civilizations extinguished
through human greed or carelessness.
Hierarchists believe that nature will be good to them, if properly managed.
They are members of big business, big government, big bureaucracy. They
are respecters of authority, both scientific and administrative; those at the
top demand respect and obedience, those at the bottom give it, and those in
between do some of each. They believe in research to establish “the facts”
about both human and physical nature, and in regulation for the collective
good. If cyclists or motorists do not have the good sense to wear helmets or
belt up, they should be compelled to do so. Food safety regulation,
accompanied by monitors and enforcers to ensure compliance, is required
to protect us from the careless or unscrupulous. Pending the discovery by
scientists of an AIDS vaccine, sex is an activity demanding education, moral
instruction or condoms, depending on the hierarchy’s particular religious
or secular persuasion. Hierarchists take a “balanced” view of history; it
contains warnings but also holds out the promise of rewards for “correct”
behaviour.
Fatalists, the fourth of cultural theory’s categories, believe nature to be
capricious and unpredictable. They hope for the best and fear the worst; life
is a lottery over whose outcome they have no control. They tend to be found
at the bottom of the socioeconomic heap, and hence are exposed to more
than their share of traffic pollution, but they do not get involved in arguments
about what should be done about it because they see no point; nothing they
do will make any difference. They have high death rates both from “natural
causes” and accidents. They do not study history.
These representatives of the categories of cultural theory are caricatures.
Real people are more complex. But it is nevertheless possible in an
examination of most long-running debates about health and safety to identify
approximations of such caricatures among the leading participants.
The cultural construction of pollution
The debate about the health effects of traffic pollution is unlikely to be settled
conclusively for some time, if ever. Describing the risks of traffic pollution
as culturally constructed is not to say that they are mere figments of fevered
imaginations. There is an obvious cause for concern; the exhaust emitted by
cars contains many toxins, plus carbon dioxide, which is implicated in the
greenhouse effect—a scientific controversy in its own right (see Ch. 9). The
toxins are dispersed unevenly, in highly diluted form, over long periods of
time. Some may be concentrated in the food chain, others may be transported
great distances and/or combined with other pollutants to be rendered
harmless, or more damaging. The environment into which they are dispersed
contains plants, animals and people with varying susceptibilities to different
toxins. Some toxins will be persistent and their effects cumulative. Some
might have direct effects, others might weaken immune systems with results
being manifest in the symptoms of a wide variety of opportunistic diseases.
There are often long time-lags between exposure to pollutants and consequent
damage to health, and most of the symptoms of illness caused by the
ingredients of exhaust emission could have other causes. Some emissions
might even be beneficial; in certain circumstances, for example, acid rain
and carbon dioxide increase plant yields.
With few exceptions the toxic nature of the emissions is not in dispute.
The unresolved question is whether they are emitted in quantities that cause
significant damage, and if so whether this damage outweighs the benefits of
the activity, motoring, that produces them. The LBA report reviewed here
ostensibly addresses the health effects of traffic emissions, but it does so in
the context of a wider debate about the benefits of a Government transport
policy that is fostering an increase in traffic. Both the Government and the
LBA are agreed that the evidence linking pollution to ill health is somewhat
tenuous. They disagree about the appropriate policy response. This suggests
that the real difference between them lies not in their view of the damage
done by traffic emissions, but in their view of the benefits of the traffic. If
the benefits are considered great, then the evidence required to justify a
sacrifice of some of these benefits to reduce emissions should be compelling.
The smaller the benefits, the stronger becomes the case for invoking the
precautionary principle. And if the benefits are considered negative then
even a suspicion that damage might result from emissions becomes an
additional justification for curbing traffic.
Adding cultural filters to the risk thermostat
Both the perceived danger of pollution from traffic and the perception of the
rewards of growing traffic will influence the balancing behaviour described by
Figure 2.3: anxious cycling environmentalists wear masks in traffic,
and campaign for measures to reduce traffic pollution; the Mr Toads in their
large cars remain cheerfully oblivious; the Government seeks to “balance”
“legitimate concerns about the environment” and “legitimate aspirations of
motorists”; and the fatalist continues to stand at the bus stop inhaling traffic
fumes while waiting for the bus that never comes. These diverse behavioural
responses to the same objective reality imply that reality is filtered by paradigms,
or myths of nature, both physical and human. Figure 3.4, in which the risk
thermostat is fitted with cultural filters, suggests a way of combining the
phenomena of risk compensation with the insights of cultural theory.
These filters are resistant to change, but they are not immutable. The
positions adopted in the LBA report and the Government’s response to it
reflect a variety of pressures. They exhibit the biases of the civil servants
responsible for writing the report and drafting the response. They are
influenced by the views of the politicians responsible for commissioning
the report, and the pressures to which they are subjected by their constituents,
who in turn are influenced by all the multifarious forces at work shaping
public opinion. In a dispute such as that over traffic emissions and what
should be done about them, explanations for the longevity of the argument
and the inability of scientific evidence to resolve it are more likely to be
found not in more scientific evidence, but in an examination of the sources
of bias in the participants. Why should the LBA see traffic emissions as an
urgent problem, while the Government dismisses their case as unproven?
One clue is provided by the LBA report. It advocates a few measures to
monitor air pollution, and to encourage further research on the effects of air
pollution. But most of its recommendations assume that the case against
emissions is proven; the main thrust of the recommendations is that traffic
should be reduced. It recommends that the Government and local authorities
should invest more in the railways, increase taxes on car use, end subsidies
for company cars and off-street parking, lower speed limits and increase
their enforcement, calm traffic, provide their employees with bicycles and
implement a 1,000-mile cycle route network in London, discourage out-of-
town development, develop light rail, provide priority routes for buses, make
better provision for people with disabilities, and provide secure parking for
bicycles at stations.
The fact that this impressive list of recommendations emerges from a
survey of the effects of traffic emissions on health, in which almost all the
evidence was characterized in one way or another as inconclusive, suggests
that the LBA and the Government have very different views about the
desirability of the activity generating the emissions. The evidence for such
a difference is compelling.
Figure 3.4 The risk thermostat with cultural filters.
While the LBA seeks to reduce dependence on
the car, the Government “welcomes the continuing widening of car
ownership as an important aspect of personal freedom and choice”, and in
a recent speech Transport Minister Robert Key (1993) declared
I love cars of all shapes and sizes. Cars are a good thing…. I also love
roads…. The car is going to be with us for a long time. We must start
thinking in terms that will allow it to flourish.
The car is an individualist form of transport. It transports its owner,
sometimes accompanied by invited guests, in privacy. It offers freedom, and
independence from the constraints of public transport. Its benefits and
advantages pass easily through the cultural filter of the individualist, its
disbenefits and disadvantages are efficiently filtered out, especially in the
suburbs and the countryside where most Conservative voters live and where
there is still driving room. But in urban areas the individualist increasingly
experiences cognitive dissonance. The car provides the freedom to sit in
traffic jams inhaling the emissions of the car in front. And traffic jams provide
time to reflect upon the merits of alternative transport policies.
Most of the older built environments in Britain were built to a scale that
cannot allow the car to flourish. These environments were designed for
pedestrians and cyclists and, for journeys beyond the range of these modes
of travel, users of public transport. Attempts to accommodate the still-growing
numbers of cars are causing damage that is perceptible to even the most
resolute individualist. Market forces do not appear to be providing solutions.
There is no space for more roads. The need for restraint and some alternative
way of getting about becomes increasingly difficult to resist.
The hierarchists of the Department of Transport and the motor industry
offer their services. Transport is in a mess because it is badly or insufficiently
regulated. They insist on catalytic converters and lead-free petrol to reduce
emissions. They work on cleaner, more efficient engines, and traffic control
systems to make more efficient use of the existing road system. They
commission research into electronic road pricing, computer-based route-
guidance systems, and “intelligent” cars and roads. The problem of road
safety demands even more engineering and regulation, and education to
foster responsible attitudes. The use of seat belts and helmets is made
compulsory. Road safety education inculcates attitudes of deference to the
car. Barriers are erected to force pedestrians over footbridges or through
tunnels. Ever more safety is “engineered” into cars. New roads are built to
separate people and traffic. It is all justified by cost-benefit analysis.
The egalitarians also enter the fray. For the Friends of the Earth and other
environmentalists, the damage done by the car is symptomatic of a deeper
malaise. Runaway economic growth, unbridled materialism, and the hubris
of science and technology threaten global catastrophe. The egalitarian filter
blocks many of the benefits of growth, materialism, science and technology,
while allowing through and magnifying all evidence of the destructiveness
of these forces, and even threats of destructiveness. Their solutions to our
transport problems focus on modes of travel that are environmentally benign
and socially constructive. The car does violence to their communal ethos;
walking, cycling and public transport promote community spirit.
The fatalists have no comment to offer. They do not participate in policy
debates.
Groping in the dark
The above speculations are relevant to all disputes that are unresolved or
unresolvable by science. Wherever the evidence is inconclusive, the scientific
vacuum is filled by the assertion of contradictory certitudes. For the
foreseeable future scientific certainty is likely to be a rare commodity, and
issues of health and safety—matters of life and death—will continue to be
decided on the basis of scientific knowledge that is not conclusive. The
conventional response to this unsatisfactory state of affairs is to assert the
need for more science.
More trustworthy scientific information will do no harm, but the prospect
is remote of settling most current controversies within the time available to
make decisions; where adherents to the precautionary principle perceive the
possibility of serious harm, they press for action as a matter of urgency. Just
how remote the prospect of scientific resolution, and how large the scientific
vacuum, can be illustrated graphically with the help of some numbers taken
from the 1983 report by the National Research Council in the USA entitled
Risk assessment in the Federal Government: managing the process. This report
was the product of a study whose purpose was “to strengthen the reliability
and the objectivity of scientific assessment that forms the basis for federal
regulatory policies applicable to carcinogens and other public health hazards”.
The report noted that about 5 million different chemical substances are known
to exist, and that their safety is theoretically under regulatory jurisdiction. Of
these, it pointed out, fewer than 30 have been definitely linked to cancer in
humans, 1,500 have been found to be carcinogenic in tests on animals, and
about 7,000 have been tested for carcinogenicity.
The black rectangle in Figure 3.5 represents the darkness of ignorance:
what we do not know about the carcinogenic effects of most substances.
The size of the little pinprick of light in the upper right-hand corner relative
to the size of the black rectangle represents 30 as a proportion of 5 million.
The small rectangle in the lower left-hand corner represents the 7,000
substances that have been tested.
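The arithmetic behind these proportions is easily checked. The following sketch (in Python, purely for illustration) uses only the round numbers quoted above from the report:

# Proportions behind Figure 3.5, using the NRC report's round numbers.
known_substances = 5_000_000     # chemical substances known to exist
tested = 7_000                   # tested for carcinogenicity
animal_positives = 1_500         # found carcinogenic in animal tests
human_carcinogens = 30           # definitely linked to cancer in humans

print(f"tested:             {tested / known_substances:.4%}")             # about 0.14 per cent
print(f"animal positives:   {animal_positives / known_substances:.4%}")   # about 0.03 per cent
print(f"human carcinogens:  {human_carcinogens / known_substances:.4%}")  # about 0.0006 per cent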
Figure 3.5 How much light amid the encircling gloom?
These white rectangles greatly exaggerate the extent of existing knowledge.
Given the ethical objections to direct testing on humans, most tests for
carcinogenicity are done on animals. The report observes “there are no doubt
occasions in which observations in animals may be of highly uncertain
relevance to humans”; it also notes that the transfer of the results of these
tests to humans requires the use of scaling factors which “can vary by a factor
of 35 depending on the method used”, and observes that “although some
methods for conversion are used more frequently than others, a scientific
basis for choosing one over the other is not established”. A further difficulty
with most such experiments is that they use high doses in order to produce
results that are clear and statistically significant for the animal populations
tested. But for most toxins the dose levels at issue in environmental
controversies are much lower. Extrapolating from the high dose levels at which
effects are unambiguous to the much lower exposures experienced by the
general human population in order to calculate estimates of risk for people in
the real world requires a mathematical model. The report notes that “the true
shape of the dose-response curve at doses several orders of magnitude below
the observation range cannot be determined experimentally”. It continues “a
difficulty with low-dose extrapolation is that a number of the extrapolation
methods fit the [high dose] data from animal experiments reasonably well,
and it is impossible to distinguish their validity on the basis of goodness of
fit”. Figure 3.6 illustrates the enormous variety of conclusions that might be
drawn from the same experimental data depending on the assumptions used
in extrapolating to lower doses. It shows that the estimates produced by the
five different models converge in the upper right-hand corner of the graph.
Here the five models agree that high dose levels produce high response levels.
The supra-linear model assumes that the level of response will remain high
as dose levels are reduced. The threshold model assumes that when dose
levels fall below the threshold there will be no response. Below the dose
levels used in the experiment one can but assume.
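The predicament can be sketched numerically. In the illustration below the high-dose anchor point and the three curve shapes are invented for the purpose and are not taken from the report; the point is only that curves which nearly coincide where there are data can disagree by orders of magnitude where there are none:

from math import sqrt

# Three of the curve shapes discussed in the text, each anchored to the same
# invented high-dose observation (a response of about 0.8 at a dose of 40 units).
def linear(d):
    return 0.020 * d                            # response proportional to dose, all the way down

def supra_linear(d):
    return 0.115 * sqrt(d)                      # response declines more slowly than dose

def threshold(d, t=5.0):
    return 0.0229 * (d - t) if d > t else 0.0   # no response at all below a threshold

for d in (40.0, 1.0, 0.01):                     # one dose in the observed range, two far below it
    print(d, round(linear(d), 4), round(supra_linear(d), 4), round(threshold(d), 4))
# At a dose of 40 the three estimates nearly coincide; at the low doses at issue in
# environmental controversies they disagree by orders of magnitude, including about
# whether there is any effect at all.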
There is, in general, no necessary connection between cultural theory
type and belief in a particular dose-response relationship. The egalitarian/
environmentalist opposition to nuclear power assumes a linear or supra-
linear relationship between risk and radiation as the curve is extrapolated
into the low-dose region of the graph; there is, they insist, no safe level of
radiation. The defenders of nuclear power on the other hand are predisposed,
in the absence of conclusive evidence, to adhere to the threshold model,
and the belief that the effect on the general public of the radiation they
produce is negligible. In the debate about childhood asthma discussed above,
the positions are reversed. There has been a large increase in the number of
asthma cases diagnosed that are associated with relatively small increases
in the suspect pollutants. In order to convict the pollutants, and the car, of
being the cause of the increase in asthma, one must invoke a sublinear or
threshold type model in which small increases in dose above a certain level
produce large increases in response. The defenders of the car will in this
case find that the linear or supra-linear models conform better to their
expectations.
The argument is further clouded by doubts about the data. Although the
reported incidence of childhood asthma has increased, the asthma death
rate for children has actually decreased between 1980 and 1990; it is possible
that the increase in reported asthma is merely a recording phenomenon, or
that the decrease in fatalities is the result of better treatment, or some
combination of these explanations. Lenney et al. (1994) conclude that “it is
unclear whether the incidence is rising” (see Ch. 5 for a discussion of a
similar problem in the interpretation of road accident data). There are also
doubts about the official estimates of trends in pollution, with
environmentalists complaining about the sparse sampling on which the
estimates are based.
Figure 3.6 Alternative dose-response extrapolations from the same empirical
evidence (source: National Research Council 1992:26).
Even amongst those who believe that the increase in childhood asthma is
real, there is dispute about the cause. A letter to The Times (18 February
1994) from the director of Britain’s National Asthma Campaign captures the
distinction between individualistic and egalitarian responses to the evidence.
There is a very real danger that, if a general belief develops that asthma
is all down to pollution, people will ignore important educational
measures about preventive steps that they themselves can take. The
scenario of the parents of an asthmatic child who continue to smoke,
refuse to give up the family cat, and neglect basic dust-control measures
while blaming the asthma on other people’s cars, is already worryingly
common.
Some go further and blame the increase on excessive environmentalist zeal.
Hawkes (1994) notes that energy-saving measures such as double glazing
and draught proofing have reduced the number of air changes in houses
from as much as four per hour to as little as a half per day—creating an
atmosphere in which dust mites flourish.
Beyond the problems of identifying the causes of morbidity and mortality
and specifying the dose-response relationships, there are four other sources
of uncertainty of even greater significance. First, variability in susceptibility
within exposed human populations, combined with the variability in their
levels of exposure, make predictions of the health effects of the release of
new substances at low dose levels a matter of guesswork. Secondly, the long
latency period of most carcinogens and many other toxins—cigarettes and
radiation are two well known examples—make their identification and
control prior to the exposure of the public impossible in most cases. Thirdly,
the synergistic effects of substances acting in combination can make innocent
substances dangerous; and the magnitude of the number of combinations
that can be created from 5 million substances defies all known computers.
And fourthly, the gremlins exposed by chaos theory will always confound
the seekers of certainty in complex systems sensitive to initial conditions.
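The third point is easily illustrated; the only figure taken from the text in the sketch below is the 5 million known substances:

from math import comb

substances = 5_000_000
print(f"{comb(substances, 2):.2e}")   # pairwise combinations alone: about 1.25e+13
print(f"{comb(substances, 3):.2e}")   # three-substance mixtures: about 2.08e+19
# Higher-order mixtures, doses and exposure routes multiply these numbers further,
# which is the sense in which exhaustive testing defies all known computers.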
After summarizing the difficulties confronting scientists trying to assist the
federal regulators of carcinogens, the National Research Council report says
“we know still less about chronic health effects other than cancer”.
Reports of the National Research Council in the USA, like those of the
Royal Society in Britain, carry great scientific authority. The NRC’s 1983
report on risk assessment, in keeping with almost all such reports, concludes
with a call for more science. The report says:
The primary problem with risk assessment is that the information on
which decisions must be based is usually inadequate. Because the
decisions cannot wait, the gaps in information must be bridged by
inference and belief, and these cannot be evaluated in the same way
as facts. Improving the quality and comprehensiveness of knowledge
is by far the most effective way to improve risk assessment.
But when all the unresolved uncertainties discussed in their report are taken
into account, the little beacons of scientific light shining in the dark of Figure
3.5 become invisible to the naked eye. The National Research Council’s
account of what they do not know amounts to an admission that they are
groping in the dark. The prospect of future research breakthroughs lighting
more than a few flickering candles in the vast darkness enveloping the
problems they are addressing is not encouraging. Indeed, the problem appears
to be getting worse as the rate at which chemists, physicists and genetic
engineers create new dangers continues to increase. Even more urgent than
the need for more science is the need for a better understanding of the bridge
of inference and belief.
The Sydney Smith dilemma
The mythological figures of cultural theory are caricatures, but they have
many real-life approximations in debates about risk. Long-running
controversies about large-scale risks are long running because they are
scientifically unresolved, and unresolvable within the timescale imposed
by necessary decisions. This information void is filled by people rushing in
from the four corners of cultural theory’s typology, asserting their
contradictory certitudes. The clamorous debate is characterized not by
irrationality, but by plural rationalities.
It has probably always been thus. Over 150 years ago the Reverend Sydney
Smith was being taken on a conducted tour of an Edinburgh slum. Down a
narrow alley between two high-rise tenements he came upon two women
shrieking abuse at each other across the alley. Smith stopped, looked up,
and listened. He then shook his head and walked on, lamenting, “they’ll
never agree; they’re arguing from different premises”.
The enormous gulf between what scientists know or are likely to discover,
and what needs to be known before decisions about risk can be based on
conclusive evidence, suggests that we are doomed for the foreseeable future
to continue to argue from different premises. But the argument is likely to
be more civilized to the extent that we are sensitive to these differences and
understand their causes. It is, of course, desirable to have as much solid
scientific information as possible to inform decisions about risk. There will
be occasions when the production of such information will be able to resolve
disputes. But for as far ahead as one can see, the future will remain uncertain.
The big issues will not be resolvable by science. How then ought we to
proceed? How might we manage risk better?
These are questions to which I will return in the concluding chapter. The
answers, thus far, appear to depend on whom you ask. The individualist
can be relied upon to assert that we are already over-regulated; things should
be left to the market to sort out. The egalitarians will invoke the precautionary
principle and press for urgent action. The hierarchists will suggest that things
are about right as they are, while conceding that more research and a slight
nudge to the tiller might be advisable. And the fatalists will carry on watching
television and buying lottery tickets.
Chapter 4
ERROR, CHANCE
AND CULTURE
“On the technical side, this accident, while no one wanted it, has a
statistical probability that falls within the predicted probability of
this type of accident.”
Chauncy Starr commenting on the accident at Three Mile Island1
The conventional wisdom
The implicit assumption of most safety research and safety regulation is that
accidents are unwanted.2 For example, Wright et al. (1988) argue that:
It is a plausible hypothesis that no road user deliberately sets out to have
an accident; to put it another way, if it were clear to a road user that a
particular course of action would lead inevitably to an accident, he
would adopt some other alternative (assuming that one were available).
There is a semantic difficulty here. If it were clear that a particular course of
action would lead inevitably to an “accident” then the outcome should not
be called an accident, because accidents are events that are unwanted and
unintended. Defining accidents as unwanted and unintended appears to leave
only two possible ways to account for them. Either they are “acts of God”—
unanticipatable events beyond the control of the victim—or they are the
result of human error—mistakes, misjudgements, lapses of concentration.
Events that are truly unanticipatable and unpreventable lie in a realm of
great theological difficulty. Calling them acts of God raises questions about God’s
intentions, and whether or not He plays dice. We confine ourselves here to the
realm of events that are in principle preventable or avoidable by human
behaviour, events that hindsight reveals (or could reveal) as preventable.
1. Quoted in Science for People, 42, Summer 1979.
2. The view of accidents as “Freudian slips”—as the consequence of behaviour
subconsciously intended—is a significant exception to this generalization.
Some preventable accidents are doubtless attributable to human error.
But are all such accidents the result of human error? Most safety research
assumes that they are. Summala (1988), for example, suggests that “no
consideration is normally given [by motorists] to risks…in most situations,
drivers obviously know exactly what they should not do if they want to
avoid a certain, or almost certain accident”. Again we encounter the
oxymoronic concept of the inevitable or certain accident. Such terminology
is incapable of distinguishing accidental death from murder or suicide.
Summala, like the Royal Society, distinguishes between subjective and
objective assessments of danger and argues that accidents are the result of
unperceived danger. Risk compensation—such as driving faster if a bend in
the road is straightened—Summala describes as “a behavioural response to
environmental change”. He insists that it should not be called risk
compensation because “drivers do not normally feel any risk” (my emphasis).
Enter Homo aleatorius
Whether or not God plays dice, Homo aleatorius does, out of both choice
and necessity. As we observed in Chapter 2, no one wants an accident, but
everyone appears to want to be free to take risks, and to be his own judge of
these risks; a society’s accident rate will thus reflect its members’ propensities
to take risks. Even so, the possibility must be considered that Homo aleatorius
has more accidents than he bargains for. Might we perhaps hope, not for the
elimination of accidents, but for a decrease in accidents by training
programmes and improved information to make the gamblers in the casino
of life more skilful and better informed?1 I think not.
Those who consider human error to be the sole or principal cause of
accidents advocate safety measures that reduce the likelihood of nasty
surprises by signposting dangers, by improving coping skills, or by creating
“failsafe” or “foolproof” environments. But this approach is one-sided. It
ignores the positive reasons that people have for taking risks: the rewards of
risk to be found in the upper right-hand corner of the risk thermostat diagram
in Figure 2.2.
It is not disputed that some accidents are the result of inaccurate risk
assessment. If people underestimate risk, they will have more accidents than
they bargained for. However, if it is accepted that people do take risks, then
inaccurate risk assessment can cause not only too many accidents but also
too few. If, as we have argued in Chapter 2, risk-taking is a balancing act,
then it is clearly possible to fall off the tightrope in two directions; a risk-
taker might take too much risk or too little. Figure 4.1 helps to explain.
1. Such a suggestion can be found in van der Colk (1988). It is also the principal
justification of almost all forms of safety training.
Figure 4.1 “Error” in risk-taking.
The solid line in Figure 4.1 represents a hypothetical risk-taking frequency
distribution. In the middle, X, is the intended level of risk-taking, the level
represented by the propensity to take risks in the upper left-hand corner of
Figure 2.2. For an individual who successfully balances perceived and
intended risk, X will be the probability of a particular action resulting in an
unwanted consequence commonly called an accident (lower right-hand
corner). If many individuals were to perform the particular action, and the
probability of the outcome were to be known, then, although no single action
would lead inevitably to an accident, these collective actions would lead
inevitably to a predictable number of accidents. The variance about X will
reflect both the number of errors made and the variation in result attributable
to chance.
Consider a knowledgeable gambler flipping coins. His intended level of
risk is 0.5. If a head is a “success” and a tail is an “accident” and he tosses a
fair coin many times, then his predicted number of accidents will be half
the number of tosses. His accident rate will equal his intended level of risk.
At the end of an evening tossing coins, he would expect his rewards to
approximately equal his accident losses. But if the coin were biased, the
result would be more or fewer accidents than he bargained for. In practice,
for the reasons discussed in Chapter 2, in the uncertainty that exists outside
controlled conditions such as those found in casinos, it is not possible to
know X, and rarely outside the casino would an individual be able to attach
a number to his intended level of risk. So Figure 4.1 is a set of notional
frequency distributions embodying the supposition that people live their
lives wanting/accepting some non-zero level of risk, and that actual risk-
taking behaviour reflects this level imperfectly.
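The gambler’s position can be put in a few lines of simulation. The sketch below is purely illustrative: a tail counts as an “accident”, and the bias of the coin stands in for everything the gambler cannot know:

import random

def accidents_in_an_evening(p_tail, tosses=1_000, seed=1):
    # Count the tails ("accidents") in an evening of tosses with a coin of given bias.
    rng = random.Random(seed)
    return sum(rng.random() < p_tail for _ in range(tosses))

bargained_for = 0.5 * 1_000                              # the intended level of risk, X
print(bargained_for, accidents_in_an_evening(0.5))       # fair coin: close to 500
print(bargained_for, accidents_in_an_evening(0.6))       # biased coin: more accidents than bargained for
print(bargained_for, accidents_in_an_evening(0.4))       # biased the other way: fewer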
Along the solid line in Figure 4.1, tailing off from X in both directions, is
the actual level of risk taken. In discussing distributions such as this, it will
be helpful to distinguish between departures from X that are stochastic in
nature and those that are attributable to error. The result of a single toss of a
coin is usually considered to be down to chance. But if you attempt to estimate
the probability of a head by tossing many coins, and come up with an estimate
different from 0.5, your estimate is in error—if the coin is fair. It should be
noted that this distinction between chance deviation and error is frequently
unclear, with chance often being another name for ignorance, which is closely
related to error. If, for example, you were sufficiently skilful and precise, there
would be no chance element in the result of your coin tossing.
In practice however, whether through ignorance, error or pure chance,
the actual result of behaviour commonly deviates from the intended result
in a way that can be described by the frequency distributions illustrated by
Figure 4.1. The number of accidents that occur in any particular group is a
function of the number of risk-takers and their intended level of risk, plus or
minus the “mistakes” they make in matching their behaviour to their
intentions. The “balancing behaviour” depicted in Figure 2.2 will rarely be
precise; errors in perception and/or behaviour will result in a person taking
more or less risk than intended.
To the right of X more risks are taken than desired, and to the left
fewer. The variance of the distribution will vary directly with ignorance
and ineptitude. The better informed people are about risks, and the more
skilful at judging and responding to risks, the smaller will be the
variance—the closer they will come more of the time to taking the risks
that they intend. If an individual’s variance is reduced, there will be fewer
occasions when he is exposed to more extreme risk than intended,
thereby reducing the chances of an accident; but offsetting this will be the
fewer occasions when he is exposed to extreme safety, thereby increasing
the chances of an accident. A Grand Prix racing driver, for example, will
be better informed about the dangers associated with the track and his car,
a better judge of driving risks, and a more skilful driver than the average
motorist—but not less likely to have an accident. He will use his superior
skill and judgement to convert some of the average motorist’s safety
margin into increased performance. If, as is likely, he has a higher than
average propensity to take risks—for which there are lucrative rewards—
he will be likely to have more accidents. Williams & O’Neil (1974) found
that drivers with specialist training for off-highway racing had worse on-
highway accident records than ordinary drivers; they were more skilful
drivers, but not safer. If we let the solid line represent the distribution of
errors by the average motorist (AM), the dashed line, with its higher level
of intended risk and smaller variance (GP), might represent the
distribution of errors by a Grand Prix racing driver.
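The contrast between the average motorist (AM), the Grand Prix driver (GP) and the ignorantly prudent (II) can be mimicked with notional numbers. Everything in the sketch below is invented (the intended risk levels, the spreads, and the assumption that departures from the intended level are roughly normal); it is an analogy, not a measurement:

import random

def accident_count(intended_risk, spread, occasions=100_000, seed=7):
    # On each occasion the risk actually taken is the intended level plus an error
    # (chance and ignorance), clamped to [0, 1]; an accident then occurs with that probability.
    rng = random.Random(seed)
    total = 0
    for _ in range(occasions):
        actual = min(1.0, max(0.0, rng.gauss(intended_risk, spread)))
        if rng.random() < actual:
            total += 1
    return total

print("AM  average motorist   ", accident_count(0.010, 0.008))   # modest risk, sizeable errors
print("GP  Grand Prix driver  ", accident_count(0.020, 0.003))   # higher intended risk, small errors
print("II  ignorantly prudent ", accident_count(0.004, 0.010))   # low intended risk, large errors
# The skilled driver has the smallest variance but the most accidents; the inept
# but cautious have, on average, the fewest.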
This distinction between level of risk and error is at odds with Mary
Douglas’s (1986) view of risk.
The essence of risk-taking lies in the structure of the probabilities,
their variance. A prudent individual seeks less, the risk-taker prefers
more variance. A theory of decision-making that takes the mean of
the distribution of probabilities disregards the very thing that risk-
taking is all about, the distribution itself.
This view cannot distinguish between skilful takers of high risks, such as
the Grand Prix driver—judging with great precision just how fast he can
take a corner—and ignorant, inept prudence. On average the ignorant and
ineptly prudent (II) will have fewer accidents (dotted line in Fig. 4.1). They
will on occasion stray into great danger, but these occasions will be more
than offset by those occasions in which their caution leads them to be
excessively careful.
Excessive prudence is a problem rarely contemplated in the risk and
safety literature. Cases of accidents resulting from ignorance or
incompetence are numerous, and well documented in accident reports.
There are literally hundreds of journals devoted to the examination of
accident statistics with the aim of reducing accidents. But there are also
many much less well documented examples of people taking less risk
than they desire through ignorance or incompetence. It is widely ignored
because, from the perspective of those seeking to increase safety, it is not
a problem. A few examples will serve to indicate the pervasiveness of this
neglected phenomenon:
Overestimates of risk can lead people to spend more on insurance than
they otherwise would.
Motorists drive very slowly if they believe, falsely, that there are
patches of black ice on the road.
In the construction industry excessive prudence can lead to an
enormous waste of money if buildings are designed for stresses with
which they are unlikely to have to cope: applying earthquake zone
standards in areas unlikely to experience earthquakes, for example.
On the railways in Britain, a spate of recent accidents has arguably led
to excessive safety, excessive in the sense that the new safety measures
will be paid for by fare increases that will encourage some passengers
to travel by more dangerous cars instead.
Inordinate fear of mugging or physical attack leads some women and
elderly people to confine themselves to quarters and deny themselves
freedoms that they would otherwise enjoy.
A personal example. I still have a vivid childhood memory of
excessive safety leading to social isolation. I refused, through days of
agonizing, to play with friends in a favourite haunt until it was
explained to me that the rusty sign “Trespassers will be prosecuted”
did not mean “electrocuted”.
Again it must be stressed that because of intractable measurement problems
it is not possible to attach numbers to the speculations embodied in Figure
4.1. They are analogies, applications of statistical concepts to a problem
that cannot be reduced to numbers. All attempts to formalize and quantify
the making of decisions about risk are fragile vessels afloat on the sea of
uncertainty; even in the casino one might doubt the honesty of the staff and
management. As Chauncy Starr demonstrates in the quotation at the
beginning of this chapter, it is not possible to formulate falsifiable statements
about unique future events in terms of probabilities. If God does play dice,
then the attempts of mere humans to attach probabilities to the outcome of
the celestial craps game will always be laughable. Nevertheless, the intended
level of risk and the variance about this level produced by chance and error
are concepts that it is important to separate in any attempt to understand
risk-taking behaviour.
Balancing behaviour
Having separated these concepts and related them to the idea of risk as a
balancing act called risk compensation, let us now speculate further with
the help of Figure 4.2. It illustrates the way in which cultural theory helps
to account for different settings of the risk thermostat and different styles of
balancing act.
Figure 4.2 Error cultures.
Figure 4.2a illustrates the intended level of risk and the variance implicitly
assumed in most of the safety literature. For example in Human error, a
recent comprehensive work on the topic, Reason (1990) does not consider
either chance or deliberate risk-taking as significant causes of accidents. He
advances the “basic premise…that systems accidents have their primary
origins in fallible decisions”. The level of risk intended by most of those in
charge of safety, and by the researchers whose work they fund, is zero in
Figure 4.2a. There is only one direction in which one can fall if one loses
one’s balance—in the direction of greater than desired risk. This perspective
on risk is characteristic of egalitarians who adhere to the myth of nature
ephemeral. Because the potential consequences of error are so enormous,
they strive unrelentingly to reduce the variance (to move in the direction of
the dotted line), and hence the risk of things going wrong; it is sometimes
acknowledged that zero risk is an unattainable ideal but, nevertheless, one
towards which we should all continually strive. Those who believe it is
actually attainable are clearly deluded.
The flat frequency distribution of Figure 4.2b, ranging from zero to one,
characterizes the perspective of fatalists. Nature is simply unpredictable.
One variant of fatalism holds that all is predestined, another that God throws
dice. But ignorance precludes adherents to either perspective knowing what
the future holds. As fatalists they are entitled to no intentions with respect
to risks, only hopes. They can but hope for the best—and duck if they see
something about to hit them.
Figure 4.2c represents the hierarchist style of risk-taking. The solid line
represents their conviction that those under their authority persistently have
more accidents than they should. They seek to reduce risk. They usually
concede the impossibility of reducing it to zero, but seek to manage it more
efficiently. Implicit in their attempts to manage risk better are two beliefs:
first, that through ignorance or incompetence people persistently take higher
risks than they intend, with the result that the number of accidents is greater
than that implied by the accepted risk level X; secondly, that many people
under their authority are irresponsible and accept higher levels of risk than
they should. The hierarchist adopts a paternalistic approach to risk
regulation; not only must people be dissuaded or prevented from behaving
in a way that puts other people at risk (as in campaigns against drunken
driving), they must also be protected from themselves (as in seat belt
legislation). Sometimes they resort to exclusion. One line of safety research
going back many years seeks to identify the accident prone. A recent variation
on this theme is research in pursuit of “hazardous thought patterns” (Holt et
al. 1991). The Swedish airforce tries to identify such patterns with its Defence
Mechanism Test, and rejects any aspiring pilot who fails the test (Dixon
1987). Hierarchists strive through engineering measures, persuasion,
regulation, training and exclusion to shift the frequency distribution of risk-
taking behaviour to the left and reduce its variance (dotted line).
Figure 4.2d represents individualists. They also seek to reduce their
variance (dotted line), and are assiduous collectors of information about
risk—whether it be on the race track or the stock market. But they are more
alert to the rewards of risk-taking. They are self-conscious risk-takers and
they trade slogans such as “no pain, no gain” and “no risk, no reward” and
are convinced that a benign nature will ultimately reward those who trust
her. They trust individuals to make their own decisions about risk and scorn
the regulators of the nanny state.
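The four styles can be caricatured as four notional distributions of the risk actually taken; the shapes and numbers in the sketch below are invented, chosen only to echo Figure 4.2:

import random

rng = random.Random(3)

def risk_taken(style):
    # Notional distributions echoing Figure 4.2; all numbers are invented.
    if style == "egalitarian":      # intended risk of zero, so errors can only be upwards
        return abs(rng.gauss(0.0, 0.01))
    if style == "fatalist":         # no intentions, only hopes: anything from 0 to 1
        return rng.random()
    if style == "hierarchist":      # convinced others take too much risk; pushes the whole curve left
        return min(1.0, max(0.0, rng.gauss(0.03, 0.02)))
    if style == "individualist":    # accepts a higher setting but works hard to shrink the variance
        return min(1.0, max(0.0, rng.gauss(0.08, 0.01)))
    raise ValueError(style)

for style in ("egalitarian", "fatalist", "hierarchist", "individualist"):
    draws = [risk_taken(style) for _ in range(20_000)]
    print(style, round(sum(draws) / len(draws), 3))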
Types of error
These different styles of risk-taking can be related to the type 1 and type 2
errors of the statistician. A type 1 error is committed if one accepts the
hypothesis that one ought to reject, and a type 2 error is committed when one
rejects the hypothesis that one ought to accept.1 The statistician’s “confidence
level” is a measure of the risk of error. The 95 per cent confidence level most
commonly employed in social science research means that the researcher
accepts the probability of getting it wrong one time in twenty. The four myths
of nature are contextual hypotheses constantly being subjected to partial tests.
Consider the specific hypothesis that CO2 emissions threaten a runaway
greenhouse effect. Egalitarians whose working hypothesis (myth of nature)
states that catastrophic consequences will flow from a failure to respect the
fragility of nature will insist on a very high standard of proof before rejecting
this hypothesis; in the statistician’s language they will be prepared to run a
high risk of a type 1 error and a low risk of type 2. Conversely individualists
who are convinced of the robustness of nature will require a very high standard
of proof before accepting the hypothesis. The hierarchist who believes in
stability within limits will return the hypothesis to the sender, requesting
greater specificity with respect to critical limits. Disputes amongst adherents
to these different perspectives usually turn out to be arguments not about
“facts” but about where the burden of proof should lie.
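The asymmetry can be made concrete with a toy significance test. The data below are invented; the point is only that the same evidence clears one side’s hurdle for action and not the other’s, depending on where the burden of proof is placed:

from math import comb

def prob_at_least(k, n, p=0.5):
    # P(at least k "warm" years out of n) if warm and cool years were equally likely.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p_value = prob_at_least(14, 20)   # say 14 of the last 20 years were warmer than the long-run average
print(round(p_value, 3))          # about 0.058

print(p_value < 0.05)   # individualist burden of proof: harm not demonstrated, carry on
print(p_value < 0.10)   # egalitarian burden of proof: evidence already too worrying to ignore
# Neither side is being irrational; each is choosing to run a different mix of
# type 1 and type 2 error.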
The field of risk and safety research is dominated by the concern to reduce
risk and accidents. The two risk-taking stereotypes that share this concern
are egalitarian and hierarchist. The egalitarians are usually more risk-averse
than the hierarchists. The hierarchists are usually responsible for putting
safety measures into effect. They commonly find themselves lobbied from
two sides, with the egalitarians urging more action to reduce risk, and the
individualists insisting on less. The fatalists see no point in arguing.
1. Strictly, a type 1 error is committed by wrongly rejecting the null hypothesis, and as
a consequence provisionally accepting the hypothesis that is the converse of the null
hypothesis, and a type 2 error is committed by wrongly accepting the null
hypothesis.
Acceptability of risk
There are long-running arguments in the risk literature about what risks and
levels of risk are acceptable (see, for example, Douglas 1986 and Fischoff et
al. 1981). Hierarchists, egalitarians and individualists are all participants in
these arguments, and the arguments will continue to run because the
participants are arguing from different premises. At one extreme are those
who argue that one death is one too many, and at the other those who interpret
the prevailing accident rate in areas of voluntary risk-taking (about which
more shortly) as a measure of the level of risk that society as a whole finds
acceptable. In between are those who argue, not very specifically, for less
risk. The behaviour of the hierarchists and egalitarians in debates about safety
policy can be considered a form of risk compensation; their striving to reduce
risk for the general population implies that the danger they perceive is greater
than the risk they consider acceptable. And yet ironically most of the risk-
reducing measures they propose and implement deny the existence of this
phenomenon in the people on whose behalf they would claim to legislate.
The efficacy of intervention
Because people compensate for externally imposed safety measures, the
risk regulators and safety engineers are chronically disappointed in the
impact that they make on the accident toll. In most countries in the developed
world the rate at which new safety regulations are added to the statute book
greatly exceeds the rate at which old ones are removed. Although a few
libertarians have railed against the excesses of the nanny State, the
preponderance of political pressure over many decades has been on the
side of more State interference to reduce risk. There is an even greater
imbalance in the area of research. Huge sums of money are spent on safety
research; although there is considerable research in the economic realm
associated with individualist and market-based ideologies, minute sums are
spent on countervailing research in the realm of physical risk and safety.
What has been the effect of this long-term imbalance?
Figure 4.3 shows indices of death by accident and violence for 31 countries
over the first 75 years of the twentieth century. The indices are standardized
mortality ratios; this means that the effects of differences between countries,
or over time, resulting from differences in age and sex distributions have
been removed. The indices show averages for periods of five years, so the
last value shown for each country represents the average standardized mortality
ratio for the period 1971–5.
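For readers unfamiliar with the measure, the calculation can be sketched with invented numbers: observed deaths are compared with the deaths expected if reference age-specific (and sex-specific) rates applied to the country’s own population structure:

# Sketch of a standardized mortality ratio (SMR); all numbers are invented.
reference_rates = {"0-14": 0.0002, "15-39": 0.0006, "40-64": 0.0005, "65+": 0.0009}   # deaths per person-year
population      = {"0-14": 1_000_000, "15-39": 2_000_000, "40-64": 1_500_000, "65+": 500_000}
observed_deaths = 2_860

expected_deaths = sum(reference_rates[age] * population[age] for age in population)
smr = 100 * observed_deaths / expected_deaths
print(expected_deaths, round(smr, 1))   # 2600 expected deaths -> SMR of 110: ten per cent more
                                        # deaths than the age structure alone would predict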
Interpreting data covering such a long time and so many different countries
is notoriously difficult. The data cover a period in which the international
conventions for classifying causes of death underwent several changes. The
quality of the data can be assumed to vary over time and between countries; for
example, the indices include suicide, a cause of death believed to be variably
under-recorded according to the stigma attaching to it in different countries.
Figure 4.3 Death by accident and violence. Standardized mortality ratios for accidental and
violent death for 31 countries between 1900 and 1975. Source: Adams 1985.
However, with few exceptions, the data exhibit no clear long-term trend—there
is a slight upward drift after the Second World War, during a period when
safety efforts in most countries intensified. But I am aware of no source of
recording bias common to all 31 countries that could mask a falling trend.1 The
75–year period covered by Figure 4.3 is a time over which great improvements
were made in casualty treatment. Over this period, all countries conducted
many inquests and safety inquiries, passed volumes of safety regulations, and
appointed small armies of safety regulation enforcers. But the graphs of rates of
accidental and violent death remained remarkably flat—with the exception of
marked spikes associated with wars or very large natural disasters.
It should be noted that this flatness does not appear remarkable to some
historians and demographers. In a survey of death by accident and violence
in Britain since the thirteenth century, Hair (1971) reports changes over
time in the particular causes of accidental and violent death, but no apparent
trend in the rates for all causes aggregated together. Although rates in 1970
were below mid-nineteenth-century rates, they were higher than estimates
for most preceding centuries. He concluded
British society throughout the centuries has struggled to control
violence, and has frequently succeeded in taming one form—only to
find another emerging. The axe of the drinking companion and the
neighbour’s open well were regulated, to be replaced by unruly
horses and unbridged streams; when these were brought under
control it was the turn of unfenced industrial machinery and
unsignalled locomotives: today we battle with the drinking driver.
And in a demographic study projecting English and Welsh mortality rates into
the 21st century, Benjamin & Overton (1981) construct several scenarios. Their
“optimistic” scenario incorporates the assumption that up to the year 2018
“the risk of accidental death remains the same, as some of the improvements in
the environment are balanced by the appearance of new hazards”.
Can Figure 4.3 be interpreted as support for the view that risk
compensation has been taking place on a societal scale with invariant risk-
taking propensities over a very long period of time? Does it constitute support
for the individualist position that the accident outcome is a measure of risk
acceptability? Perhaps. Judging by these statistics, risk appears to have been
suppressed in some activities only to pop up in others. Certainly there appears
to be little to show in the aggregated statistics of death by accident and
violence for all the labours of the risk reducers—the regulators, the police,
the doctors, the safety engineers and all the others involved in the safety
industry over many decades. The casualty rate associated with the dance of
1. The statistics for different countries suggest that fatalities attributable to war are not
treated consistently. For Japan the series is simply broken during the Second World
War. The statistics for England and Wales appear not to contain war fatalities, although
those for Spain, Finland, Czechoslovakia and the Netherlands appear to include them.
the risk thermostats appears to have been remarkably little perturbed by
their activities. During the 20th century dance, some of the tunes and the
dance steps have changed, old dancers have left the floor and new ones
have arrived—fewer people are trampled by horses and more are killed by
cars—but the overall level of mayhem, accidental and intentional, continues
unabated at levels that display no trend up or down.
But there are certain difficulties with interpreting this lack of trend as
evidence for risk compensation. The risk compensation hypothesis is an
explanation of individual, not collective, behaviour, and there is nothing in
the hypothesis that requires either the propensity to take risk or the
perception of danger to be constant over time. Further, the multi-
dimensionality of risk and all the problems associated with measuring it
discussed earlier, preclude the possibility of devising any conclusive
statistical tests of the hypothesis.
Even death passes through cultural filters. During the past 25 years of
“The Troubles” in Northern Ireland, for example, every death attributed to
terrorism has received great publicity, but only a few dedicated collectors
of statistics are likely to be aware that over this period twice as many
people were killed in road accidents. A terrorist murder conveys powerful
messages; some lives are expendable, some invaluable. From one side of
the sectarian divide come cheers for the perpetrators, from the other come
vows of revenge. The forces of law & order react to the challenge to their
authority. Those uncommitted to either side of the struggle deplore its
“irrationality” and resent the interference in their lives of both the
terrorists and the heavy hand of the security services. Perhaps only for the
true fatalist is the random sectarian killing comparable (in the scales in
which loss is measured) to the equally meaningless fatal road accident.
For the rest, there are no units in which the rewards and losses of traffic
and The Troubles can be measured.
People take or have imposed upon them many risks not related to Figure
4.3. They die of causes other than accident and violence, and some of these
other causes have diminished greatly in relative importance
over time. Figure 4.4 shows the remarkable progress that has been made in
reducing the effects of infectious diseases; over the same period the lack of
progress, at the aggregate level, in dealing with accident and violence has
greatly increased the relative significance of the latter as a cause of death for
people below the age of 40. For men aged 20, in 1931 infectious diseases
accounted for about 36 per cent of fatalities, and accident and violence only
20 per cent. By 1982 infectious diseases accounted for less than 2 per cent,
and accident and violence for about 70 per cent.
Medical risks are difficult to compare with the risks of accidents and
violence, because they tend to operate more slowly and their diagnoses can
be contentious. Most of the causes of death by accident and violence can be
distinguished from most of the other causes of death by the greater speed
with which cause leads to effect, and by the greater clarity and
intelligibility of the connection between cause and effect. The established
international convention used for classifying road accident fatalities
attributes a death to a road accident only if the victim dies within 30 days of
the accident; after that time it is attributed to a complication—pneumonia,
kidney failure, and so on. Where the relationship between cause and effect
is unclear and long delayed, as with smoking and lung disease or radiation
and cancer, there are few opportunities for risk thermostats to function.
Nevertheless, if there has been no collective turning down of the setting of
risk thermostats, the pattern of the graphs in Figure 4.3 is what one would
expect to find, despite the large number of safety measures that were taken
in all the countries represented in the graphs during the first 75 years of this
century.
Figure 4.4 Causes of mortality by age (source: British Medical Association 1987).
The importance of context
The four-part typology of cultural theory has been previously presented as a
typology of “ways of life”, each with its associated “myth of nature”. People,
the Cultural Theorists have argued, cling tenaciously to one of the four ways
of life unless and until confronted with overwhelming evidence of its lack
of viability, at which point they adopt one of the other three. But work by
Dake (1991) on “orienting dispositions in the perception of risk” suggests
that things are not so simple.
Dake set out to test the hypothesis that societal concerns are predictable
given people’s cultural biases. He had limited success. He found that
egalitarianism correlated positively with concerns about matters such as
nuclear war and environmental pollution. On such issues egalitarians were
notably more risk averse than hierarchists or individualists. Hierarchists
were concerned about threats to authority, while individualists were worried
by overregulation and threats to the economy, which forms the framework
within which their free-market ideology operates. And both hierarchists
and individualists were much less concerned than egalitarians about the
prospect of nuclear annihilation, but much more concerned about the threat
of Soviet expansion.
Although Dake found statistically significant correlations between the
categories of cultural theory and concerns about risk, it has proved a difficult
theory to test convincingly. Attempts to categorize and predict founder in
tautology; people are categorized by their beliefs, and these categories are
in turn used to account for their beliefs. Ultimately it is not clear whether
people are fatalists because they feel they have no control over their lives,
or they feel they have no control because they are fatalists.
And although Dake’s analysis yielded several correlations between
cultural bias and concern about risk that were statistically significant and
consistent with hypotheses generated by cultural theory, the correlations
were weak and of limited predictive value; the strongest was 0.46, indicating
that cultural bias could account for only just over 20 per cent of the variance
in concern about nuclear war found amongst his sample of egalitarians.
Most of the other correlations could account for considerably less of the
variance. Dake concludes that:
The perception of risk is set in a continuing cultural conflict in which
the organization of rights and obligations, and of community loyalties
and economic interests, matter at least as much as scientific evidence.
But it must be conceded that attempts thus far at verifying the speculations
of cultural theory with quantifiable data have been very partial successes.
Or, put another way, insofar as it has been possible to formulate testable
hypotheses from the theory, it appears to offer only a partial explanation of
the variance observable in people’s responses to risk.
It is common ground shared by psychology and anthropology that the
world is experienced through filters that are the product of earlier experience.
Disciplinary biases lead psychologists to focus on the unique character of
these filters and anthropologists to generalize about their social origins and
common features. In Chapters 2 and 3 we noted the uniqueness of every
risk thermostat, and concluded that there are as many frames of reference in
the world as their are risk-takers. But unless one can find some patterns in
all this uniqueness there is little further to be said.
Scale, and voluntary versus involuntary risk
Consider a case where cultural theory throws up a paradox. Greenpeace
campaigners will take enormous personal risks in their flimsy rubber boats
in order to reduce risks at a planetary level. Are they individualists or
egalitarians? Can they be both at the same time? The answer appears to be
bound up with issues of scale and voluntariness.
Two different risks are being addressed simultaneously in the Greenpeace
example, and in both cases the risk thermostat can be seen to be in operation.
At the personal level the perception of the rewards of risk (saving the whale
or the world, and perhaps the glory that attaches to such an achievement)
leads to a high propensity to take personal risks. In the taking of the actual
risk in the rubber boat, vigilance levels are high, and the balancing act requires
great skill. The daredevil attitudes and abilities required invite comparison
with the Grand Prix driver. They are voluntary and individualistic in
character. The myth of nature commonly associated with such activities is
nature benign—the risk is taken optimistically in the expectation that the
gamble will pay off.
But the action of the Greenpeace boatmen ostensibly addresses another
risk, much larger in scale and involuntary in the sense that it is perceived as
imposed by others. This larger risk, to the whale or the world, threatens
something of value to the group to which they belong, and collective action
is taken to reduce it. The activity in the rubber boat is the culmination of a
host of group activities involving political action, fund-raising and buying
boats. The racing driver on the other hand, although also dependent on the
support of a skilled team, is motivated by rewards that are selfish; he performs
in a milieu that applauds the pursuit of self interest, and evinces little concern
for the wider environmental impacts of its activities.
Several studies have attempted to distinguish between voluntary and
involuntary risk, and have concluded that people are prepared to accept
much higher levels of risk from activities that are voluntary. Starr, for
example, reported that the public is willing to accept risk from voluntary
activities such as skiing, which are roughly a thousand times greater than
those it will tolerate from involuntary activities that provide the same level
of benefit (reported in Fischoff et al. 1981).
Several difficulties arise from such contentions. First there are the
problems already discussed about measuring risk; how, for example, does
one compare the risks of skiing to those of nuclear power stations? Secondly,
there are equally intractable problems encountered in measuring benefits;
how does one compare the benefit of skiing with the benefit of electricity
from a nuclear power station? Thirdly, how does one measure voluntariness?
Risk compensation and cultural theory cast some helpful light on these
questions. The dance of the risk thermostats suggests that there are degrees
of volition in the taking of risk, depending on the relative sizes of those
imposing the risk and those imposed upon; the greater the relative size of
the person or agency imposing the risk, the less voluntary the risk will appear
to those imposed upon. But the perception of voluntariness will also depend
on where the threat comes from. If it comes from within one’s own culture,
it will not be seen as imposed. To a supporter of nuclear power, for example,
the risks associated with the industry will be willingly borne, while to an
opponent they will feel imposed; the former will place a higher value than
the latter on the rewards of nuclear power. The perception of risk will also
be affected; fewer dangers are likely to get through the supporter’s cultural
filter. Natural hazards, represented by lightning and the Beijing butterfly,
will either be viewed fatalistically, if nothing can be done about them, or
considered voluntary, if exposure to them can be controlled. Acceptability
of risk also has an economic dimension. Poverty will affect the perception
of rewards and dangers and can induce people to take extra risks. There is a
steep socio-economic class gradient to be found in accident rates, with the
poorest experiencing much higher rates than the wealthiest. The concept of
“danger money” is sometimes explicit in the pay structures of dangerous
industries. The absence of any clear downward trend in Figure 4.3, despite
the great increases in affluence experienced by all the countries represented
over the first 75 years of this century, provides some support for the
relative income hypothesis, which maintains that, above subsistence levels,
it is relative rather than absolute income that spurs people on.
Despite limited success in validating the cultural theory typology
statistically, it remains useful; but it is necessary to modify its application.
The four myths of nature are partial truths; each is valid some of the time in
some places. People clearly vary their risk-taking according to circumstances.
The variability found in the risk-taking behaviour of an individual cannot
long be contained within a single one of the four “ways of life” depicted in
Figure 3.3. The same person might, depending on context, behave as an
egalitarian supporting environmentalists in their campaign to save the whale,
as a hierarchist campaigning for more government regulation of the motor
car or the stock market, as an individualist resentful of the law that requires
him to wear a motor cycle helmet, and as a fatalist on receipt of the news that
he has cancer.
“Ways of life” implies a stability, consistency and comprehensiveness of
value systems that is difficult to find in a pluralistic world. Indeed, it has
proved impossible to find pure examples of the cultural theory stereotypes.
Breathes there a person outside the asylum who has never made common
cause with others to achieve some objective, or who acknowledges no social
constraints on his behaviour, or in whom all traces of individuality have
been extinguished by subservience to authority, or who is not rendered
fatalistic by the contemplation of his own mortality? A more flexible term is
needed; cultural theory is better understood not as a set of categories of
“ways of life” but as a typology of bias. Some cultures are more individualistic
than others, some more hierarchical, some more egalitarian and some more
fatalistic. But in certain circumstances risk can draw out each of these
tendencies in any culture.
Error, chance and culture
Insurance companies sell policies at a price that they hope will cover future
claims and leave them a reasonable profit. Their anticipations can be
represented by a frequency distribution such as the solid line in Figure 4.5.
The claim rate for the average policy is ACR. To the right are the bad risks,
to the left the good risks. Sometimes they disaggregate their policies, charging
higher rates to insure young male motorists, or lower rates for non-smokers,
for example. They know that within the population that purchases their
policies there are other groups that will have higher or lower than average
claim rates, but the costs of identifying them accurately, devising sales
campaigns to reach a segmented market, and designing separate charging
schedules for them, are considered not worth the effort. They rely on the
law of large numbers to balance the bad risks with good ones.
If a policy holder submits a claim it could be a consequence of a high risk
deliberately taken, or of ignorance or incompetence, or it could be from
someone who was careful but unlucky. The solid line in Figure 4.5 conceals
variation attributable to error, chance and culture (dotted lines). Any
approach to risk that does not acknowledge the rôle of error and chance and
culture in shaping attitudes, influencing behaviour and determining
outcomes will be inadequate for coping both in the insurance industry and
in the casino of life.
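The point can be made concrete with a short simulation. The Python sketch below is illustrative only: the group sizes and claim probabilities are invented, not drawn from any insurer's data. It shows how a stable aggregate claim rate (the solid line's ACR) can conceal quite different subgroup rates (the dotted lines).

    # Illustrative sketch only: the group sizes and claim probabilities below
    # are invented, not taken from any insurer's data.
    import random

    random.seed(1)

    # Three hidden subgroups of policy holders with different underlying
    # propensities to claim: the careful, the average and the risk-takers.
    groups = [
        ("careful",     6000, 0.05),   # (label, policies, annual claim probability)
        ("average",     3000, 0.10),
        ("risk-takers", 1000, 0.25),
    ]

    total_policies = sum(n for _, n, _ in groups)
    total_claims = 0
    for label, n, p in groups:
        claims = sum(1 for _ in range(n) if random.random() < p)
        total_claims += claims
        print(f"{label:12s} observed claim rate: {claims / n:.3f}")

    # The insurer prices on the aggregate rate; the law of large numbers keeps
    # it stable even though it conceals the very different subgroup rates.
    print(f"{'aggregate':12s} average claim rate: {total_claims / total_policies:.3f}")

Pricing on the aggregate works only for as long as the mix of good and bad risks within the book of policies stays roughly constant.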
Consider again the dance of the risk thermostats. However big and
powerful you are, there is almost always someone bigger. However small
and insignificant, there is almost always someone smaller. There are different,
competing bands in each corner of the floor, playing different tunes with
different rhythms. The dancers form clusters; some prefer formation dancing,
others individualistic jiving, some have flailing arms and legs and are given
a wide berth by others, some are wall flowers lurking on the margins, some
will loosen up after a drink or two. Some move about the floor, others tend
to stay put. All human life is there, but no one on the dance floor can possibly
have more than a partial view of what is going on. Risk compensation and
cultural theory provide a precarious imaginary vantage point above the dance
floor from which to discern motives and patterns in all this activity. They provide a
conceptual framework for making sense of this ever-changing order in
diversity, and a terminology with which people can discuss how best to
cope with it.
Figure 4.5 Good risks, bad risks.
Chapter 5
MEASURING RISK
La révolution est un bouleversement qualitatif des statistiques. ("Revolution
is a qualitative upheaval of statistics.")
From "The Sayings of President Sankara of Burkina Faso"1
1. I am grateful to Andrew Warren for spotting this on a billboard in Ouagadougou.
Risk, according to the definition most commonly found in the safety
literature, is the probability of an adverse future event multiplied by its
magnitude. We have already noted the elusiveness of objective measures of
risk; records of past accidents are not trustworthy guides to the future because
people respond to risks as soon as they see them, and thereby change them.
But, given the still deeply entrenched view that accident statistics are a
useful measure of safety and danger, and given that they are still virtually
the only measure used for assessing the success of safety interventions, I
turn now to two further problems with these measures. The first is the
unreliability of the historic accident record; not only is it an untrustworthy
guide to the future, it is an untrustworthy guide to the past. The second is
the absence of an agreed scale for measuring the magnitude of adverse events.
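Stated formally, as a minimal restatement of the definition just quoted rather than an addition to it, for an adverse event of probability p and magnitude M, and for a set of possible adverse events indexed by i (in LaTeX notation):

    R = p \times M, \qquad R_{\mathrm{total}} = \sum_{i} p_{i} M_{i}

The two problems attack one term each: an unreliable accident record undermines any estimate of the probabilities, and the absence of an agreed magnitude scale undermines the measurement of the losses.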
Cultural theory can shed light on both these problems. It suggests that
risks are viewed through cultural filters; the institutional arrangements for
monitoring risks through the collection and analysis of statistics relating to
mortality, morbidity, economic damage, and near misses will all reflect the
biases of the collectors and analysts.
Not enough accidental deaths
For the risk analyst, death has two great attractions. It is the least ambiguous
of all the available measures of accident loss and, because it is usually
considered the ultimate loss, it is the most accurately recorded. Deaths are,
however, sufficiently infrequent, and their causes sufficiently diverse, to
make them an unreliable guide to remedial action. Any analysis of the causes
of accidents always leads to the conclusion that they are a stochastic or
probabilistic phenomenon. They result from a conjunction of circumstances
to which the victim had, a priori, in most cases assigned a negligible
probability. In the case of fatal accidents, the probabilities are very low
indeed: in 1991 in Britain, in a population of 57 million, there were 12,816
accidental fatalities out of a total of 628,000 deaths by all causes. Figure 5.1
illustrates how, in terms of absolute numbers, fatal accidents barely register
on a graph alongside all causes of death.
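A single line of arithmetic on the figures just quoted makes the point (illustrative only, using the 1991 numbers above):

    # The 1991 figures quoted above: accidental deaths as a share of all deaths.
    accidental_deaths = 12_816
    all_deaths = 628_000
    print(f"{accidental_deaths / all_deaths:.1%}")   # roughly 2 per cent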
These 12,816 accidental deaths were spread over hundreds of thousands
of kilometres of roads, millions of motor vehicles and billions of vehicle
kilometres, plus countless boats, ladders, stairs, rivers, windows, swimming
pools, guns, knives, electrical appliances, inflammable materials, medicines
and toxic substances, to name but a few of the more obvious hazards. Every
year in Britain alone there are many billions of potentially fatal events that
imagination might construe out of all these hazards. The problem for the
safety planner is that the connection between potentially fatal events and
actual fatal events is rendered extremely tenuous by the phenomenon of
risk compensation. Precisely because events are recognized as potentially
fatal, they are rarely so; avoiding action is taken, with the result that there
are insufficient fatal accidents to produce a pattern that can serve as a reliable
guide to the effect of specific safety interventions.
Figure 5.1 Accidents as a cause of death (source: Road accidents Great Britain 1992,
Department of Transport).
As a consequence, safety planners seek out other measures of risk:
usually—in ascending order of numbers but in descending order of
severity—they are injury, morbidity, property damage and near misses.
Where numbers of accidents are small, the accident statistics commonly
display great variability, both temporally and geographically. The ratio of
all reported injuries to fatalities is usually large (for road accidents in
Britain it is about 70 to 1), and it is much easier to achieve "statistical
significance” in studies of accident causation if one uses large numbers
rather than small. Safety researchers therefore have an understandable
preference for non-fatal accident or incident data over fatality data,
especially when dealing with problems or areas having few accidents. In
exercising this preference, they usually assume that fatalities and non-
fatal incidents will be consistent proportions of total casualties, and that
the larger, non-fatal, numbers provide the best available indicators of life-
threatening circumstances.
Figure 5.2 casts doubt on this assumption. It shows that, measured by
road accident injury statistics, in 1985 London was the most dangerous
jurisdiction in Britain with 759 injuries per 100,000 population. But, in
company with most of the other English conurbations, London had one of
the lowest recorded death rates (7.3 per 100,000). The average injury: fatality
ratio for Britain cited above of 70:1 conceals a wide range—from 103:1 for
London down to 23:1 for Dumfries and Galloway.1 The correlation between
fatality rates and injury rates is very weak. Figure 5.2 raises two interesting
questions for those who use casualty statistics as a measure of risk. First, is
the weak correlation between injuries and fatalities real or simply a recording
phenomenon? Secondly, how many injuries equal one life?
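The ratios can be recovered from the per-capita rates just quoted. A rough check (illustrative arithmetic only, using the figures cited above):

    # London's injury:fatality ratio recovered from the quoted rates.
    injuries_per_100k = 759
    deaths_per_100k = 7.3
    print(f"{injuries_per_100k / deaths_per_100k:.0f}:1")   # about 104:1

which is consistent, allowing for rounding in the published rates, with the 103:1 figure cited.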
There are two contending explanations for the pattern displayed in Figure
5.2. The first is that it reflects real differences in driving conditions. For
every mile travelled on a road in a built-up area (a road with a speed limit of
40mph or less) in Britain in 1991 a motorist had a 127 per cent greater
chance of being injured than he would if travelling on a road in a non-built-
up area (with a higher speed limit). Does this mean that the roads with
lower speed limits are more dangerous for motorists? Not necessarily. The
same source of statistics (Department of Transport 1992) suggests that the
chance of being killed per mile travelled is 12 per cent higher on the roads
with the higher speed limits. The ratio of injuries to fatalities on roads in
built-up areas is 98:1 and in non-built-up areas is 39:1. Thus, at least a part
of the explanation for Figure 5.2 appears to lie in the fact that London is so
congested and traffic speeds so low that there are large numbers of minor
accidents, but that high-speed crashes resulting in more serious damage are
relatively rare. Conversely, on the rural roads of Dumfries and Galloway
there is much less traffic, but it travels at speeds that make the consequences
of accidents, when they do occur, much more serious.
1. The ratio for the Isle of Wight in 1985 was 140:1, but this is based on a fatality figure
which is very small (5) and unstable from one year to the next.
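The per-mile figures quoted above are at least internally consistent. A rough check (illustrative arithmetic only; the factors 2.27 and 1.12 simply restate the quoted percentages as ratios):

    # Built-up versus non-built-up roads: do the quoted figures hang together?
    injury_risk_ratio = 2.27    # injury risk per mile, built-up relative to non-built-up
    fatality_risk_ratio = 1.12  # fatality risk per mile, non-built-up relative to built-up
    implied = injury_risk_ratio * fatality_risk_ratio
    reported = 98 / 39          # ratio of the two quoted injury:fatality ratios
    print(f"implied {implied:.2f}, reported {reported:.2f}")   # both roughly 2.5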
So which class of road, or area, is safer? How many injuries equal one
life? Is London the most dangerous part of Britain, or one of the safest? If
safety measures could be implemented, which slowed traffic on the road
with the result that the injury rate increased but the fatality rate decreased
by a lesser amount, could the measures be described as a safety success?
Whose safety record is better, Dumfries and Galloway’s, or London’s? Safety
measures such as straightening bends in roads, lengthening sight lines,
improving cambers or raising the coefficient of friction of road surfaces could
all have the effect of reducing numbers of accidents, but also of increasing
speeds and the number of fatal accidents.
A further uncertainty arises from the fact that most road safety measures
have highly localized effects. In a small area, or at the site of a treated accident
black-spot, a decrease in the number of injury accidents might be “statistically
significant”, while a “real” increase in fatalities could remain statistically
undetectable. And even if the effects of a safety measure on both injuries
and fatalities could be detected with confidence, until these two measures
of risk can be measured on a common magnitude scale there can be no
objective way of deciding which accident record to prefer. But before
contemplating this preference further, we must first consider a second
possible explanation for the difference between the accident records of
London and Dumfries and Galloway: that it is not a real difference at all, but
an artefact of the way the statistics are collected.
Figure 5.2 Road accident death and injury rates in Great Britain per 100,000
population (source: Adams 1988).
What gets recorded?
The government are very keen on amassing statistics. They collect
these, raise them to the nth power, take the cube root and prepare
wonderful diagrams. But you must never forget that every one of
these figures comes in the first instance from the village watchman
who puts down what he damn pleases. Sir Josiah Stamp (quoted in
Nettler 1974)
The problem of the reliability of data that we are discussing is not new. Figure
5.3 and Sir Josiah together suggest a second explanation for the variation
displayed in Figure 5.2, or at least for part of it. Injuries are under-recorded—
variably, substantially and, almost certainly, systematically. Between 1930
and 1993 the number of people killed in road accidents in Britain decreased
by 48 per cent (from 7,305 to 3,814). Over the same period the number of
recorded road accident injuries has increased by 72 per cent (from 178,000 to
306,000). Since 1930 there have been improvements in rescue services and
the medical treatment of crash victims, and cars have become more
crashworthy; so perhaps the increase over time in the ratio of injuries to
fatalities is real. But cars have also become much more powerful and faster,
and lorries have become much heavier, with the result that the physical damage
they can cause in a crash is much greater. Further, by 1992 there were more
than twice as many police in Britain as in 1930. So perhaps the change is
simply the result of a larger fraction of injury accidents being recorded now
than in 1930. It is also possible that at least part of the geographical differences
in the injury:fatality ratios might be accounted for by a higher degree of under-
reporting of minor injuries in more sparsely populated areas where police are
thinner on the ground. London has almost twice as many police per 1,000
population as the rest of Britain. One does not need to go very far in London
to find a policeman in order to report a minor accident.
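The percentage changes quoted above can be checked directly from the raw figures (simple arithmetic, nothing more):

    # Road deaths and recorded injuries in Britain, 1930 and 1993, as quoted above.
    deaths_1930, deaths_1993 = 7_305, 3_814
    injuries_1930, injuries_1993 = 178_000, 306_000
    print(f"deaths   {(deaths_1993 - deaths_1930) / deaths_1930:+.0%}")        # about -48%
    print(f"injuries {(injuries_1993 - injuries_1930) / injuries_1930:+.0%}")  # about +72%
    # The implied injury:fatality ratio rises from about 24:1 in 1930 to about 80:1 in 1993.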
The “Severity Iceberg”, Figure 5.3, shows the way in which uncertainty
in the data increases as the severity of injury decreases. The fatality statistics
are almost certainly the most accurate and reliable of the road accident
statistics. Death on the road in most countries is treated very seriously. Several
studies from countries all around the world (summarized in Hutchinson
1987) which have compared police and hospital statistics have found that
virtually all fatalities recorded by hospitals are also recorded by the police.
For injuries, however, the situation is less satisfactory. In most countries the
classification of injuries is done within a short time after the accident by
medically unqualified police. The categorization and recording of injuries
is generally not informed by any evidence from a medical examination. In
Britain, according to evidence given by the British Medical Association (1983)
to the House of Commons Transport Select Committee, the resulting numbers
are not only defective, but positively misleading. They said
The existing definitions on which records are based are misleading.
Only one in four casualties classified as seriously injured are, in
fact, seriously injured and many of those classified as slightly
injured are in fact seriously injured. The existing [police] definition
of “seriously injured” covers everything from a broken finger to total
paralysis and to death occurring more than 30 days after the
accident. Within these unsatisfactory definitions there is wide-
spread under-reporting and mis-reporting of casualties and the
distribution of these errors varies widely between different
categories of road user. The information is very defective in the case
of pedestrians and cyclists, who are at high risk of serious injury as a
result of their lack of protection.
Curious about how medically untrained police might cope with the
distinction between “slight shock” and “severe general shock”—one of the
criteria by which slight accidents are distinguished from serious accidents—
I asked some policemen that I encountered in the street in London. None of
them was very sure. One of them, trying to be helpful, recalled a case where
an electricity cable had fallen across a car and electrocuted the occupant;
that, he was sure, was serious shock.
The BMA went on to present evidence that some 30 per cent of traffic
accident casualties seen in hospital are not reported to the police, and that
at least 70 per cent of cyclist casualties go unrecorded.
Figure 5.3 The Severity Iceberg: the areas of the rectangles are proportional to the
numbers of casualties recorded in Road accidents Great Britain 1993 (source:
Department of Transport 1994).
It is not known how
much of the variation in the injury:fatality ratios displayed in Figure 5.2 is
real and how much is the result of variation in recording practice. The BMA
also noted that the degree of under-reporting increases as severity decreases.
Towards the bottom of the Severity Iceberg under-reporting will approach
100 per cent; there will be a degree of severity which is sufficiently slight
that neither the injured person nor the police will consider it worth reporting.
It is widely accepted that the “Iceberg” effect can be found in many other
statistics originating with the police; for example, a much higher percentage
of murders is reported than of minor assaults or burglaries. And certainly the
numbers of drivers found guilty of speeding or drunken driving are related
to the resources devoted to detecting these offences; a Home Office study
has estimated that only about one drink-driving offence in 250 results in a
conviction. Thus, the changes from one year to the next in the official statistics
relating to drunken-driving convictions are much more likely to measure
changes in enforcement than changes in the amount of drunken driving.
Many studies have found a strong positive correlation between expenditure
on the police and recorded crime. It is accepted by most criminologists that
police crime statistics have a very tenuous connection with crime.
The instructions for categorizing road accident casualties in Britain contain
no guidance about how to distinguish a slight injury, which should be
recorded, from one that is real but so slight as to be not worth recording.
The decision about what accidents to record is ultimately subjective. It is
likely to vary from victim to victim and recorder to recorder, and to be
influenced by the priority that individual police officers and police forces
place on road safety relative to other demands on their limited resources. It
will also vary with the number of staff available to record and process the
information. Because the number of injuries increases as severity decreases,
a small move of the recording threshold up or down is capable of producing
a large change in the numbers officially injured. When large numbers of
police are diverted to the pursuit of bombers or murderers, or are embroiled
in industrial disputes, or occupied containing civil unrest, the reduced
number available for recording injuries, especially minor ones, is likely to
lead to a raising of the threshold at which an injury is deemed worthy of
recording—thereby producing a “safety improvement”.
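A small simulation makes the threshold effect concrete. The sketch below is illustrative only: the severity distribution is invented (an exponential), chosen simply because minor injuries greatly outnumber serious ones, as in the Severity Iceberg.

    # Illustrative only: invented severity distribution, arbitrary threshold units.
    import random

    random.seed(2)
    severities = [random.expovariate(1.0) for _ in range(100_000)]

    for threshold in (0.5, 0.6):
        recorded = sum(1 for s in severities if s >= threshold)
        print(f"recording threshold {threshold}: {recorded} injuries recorded")
    # Nudging the threshold up from 0.5 to 0.6 cuts the recorded total by roughly
    # ten per cent, an apparent "safety improvement" with no change in what
    # actually happened on the road.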
In all countries that report road accident statistics, it is the police, not the
health services, that collect the information. The rectangles of the Severity
Iceberg represent the numbers of casualties officially reported by the British
police for 1993. A Transport and Road Research Laboratory study comparing
the police numbers with hospital records has found a 21 per cent under-
recording of serious injuries and a 34 per cent under-recording of slight
injuries. Hutchinson, in a review of all such studies he could find around
the world, suggests that in many other countries the degree of under-reporting
of non-fatal injuries is much greater. In Ohio one study estimated that as
many as 45 per cent of injuries are not recorded by the police, another in the
Netherlands estimated 55 per cent, another in Finland found 75 per cent of
casualties needing medical attention were not recorded in the police statistics,
and in Sweden, for cyclists, the figure unrecorded was 80 per cent.
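Grossing recorded figures up for under-recording is straightforward arithmetic, though it assumes, heroically, that the estimated under-recording fraction applies uniformly. A minimal sketch (the function name and the example inputs are my own, used only for illustration):

    def estimated_true_count(recorded, fraction_unrecorded):
        """If a fraction of real casualties never reaches the police record,
        the true count is the recorded count divided by (1 - fraction)."""
        return recorded / (1 - fraction_unrecorded)

    print(round(estimated_true_count(100, 0.34)))  # 34% under-recording -> about 152
    print(round(estimated_true_count(100, 0.80)))  # 80% under-recording -> 500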
The volume of the unrecorded bottom of the Iceberg—real injuries not
considered worth reporting or recording—is likely to be very large. The
application of existing definitions of injury is at the discretion of the police,
and variable pressure on police resources renders the exercise of this
discretion inevitably variable. What gets recorded is likely to be a mixture
of what the policeman “damn pleases” and what circumstances permit. The
deeper one goes below the surface of Figure 5.3 the fishier the numbers
become.
London can serve to illustrate some of the limitations of these numbers
for understanding accidents. In terms of its population size it bears
comparison with many countries, being only a little smaller than Sweden
and considerably larger than Denmark or Norway. Figure 5.4a shows the
great variability of road accident fatality statistics from one year to the next
for the 32 London boroughs. The average population of each borough is
about 200,000; although fatality statistics may be accurately recorded, for
cities or other jurisdictions of this size or smaller, short time-series and trends
in fatality data should be viewed with great suspicion because of the
instability of the small numbers. By contrast Figure 5.4b shows that for the
same boroughs the injury data from one year to the next correlate extremely
highly. Figure 5.4c showing the strong correlation between the level of
policing and the level of recorded accidents lends support to the speculation
that the small year-to-year variability in recorded injuries is at least in part a
function of the size and stability of the bureaucracy that records them. The
extreme outlier in Figures 5.4b and 5.4c is the Borough of Westminster. It
has a high number of injuries relative to its resident population because of
the large daily influx of non-resident civil servants and other office workers
who work there. It is heavily policed because of its large day-time population
and to ensure the security of Parliament and the central government.
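The instability of small counts is what elementary probability would predict. Treating annual counts for a notional borough of 200,000 people as Poisson (so that the standard deviation is the square root of the expected count), and using the per-100,000 rates quoted earlier in this chapter, gives a rough indication of how much year-to-year variation chance alone would produce:

    # Chance variation in annual counts for a notional borough of 200,000 people,
    # using the per-100,000 rates quoted earlier (7.3 deaths, 759 injuries) and
    # treating counts as Poisson (standard deviation = square root of the mean).
    import math

    population = 200_000
    for label, rate_per_100k in (("fatalities", 7.3), ("injuries", 759)):
        expected = rate_per_100k * population / 100_000
        relative_sd = math.sqrt(expected) / expected
        print(f"{label:10s}: about {expected:,.0f} a year, chance variation roughly +/-{relative_sd:.0%}")
    # Fatalities swing by roughly a quarter from chance alone; injuries by only
    # about three per cent, which is part of why the injury counts look so stable.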
Figure 5.4 (a) Road accident fatality rates and (b) road accident injury rates
for London boroughs (per 100,000 population), each compared across two
successive years; (c) the relationship between the number of police tours of duty
performed by uniformed divisional constables on street duty in 1986 (per 100,000
population) and numbers of recorded injuries (per 100,000 population).