
The fifth age of safety: The adaptive age



It has been argued that OHS has developed and evolved through a technical age, a human factors age and a management systems age or through a technical wave, a systems wave and a culture wave. A fourth age of safety has been described as the integration age. As the limitations of OHS management systems and safety rules that attempt to control behaviour are becoming evident, it is proposed that we are moving into a fifth age of safety, the ‘adaptive age’; an age which transcends rather than replaces the other ages of safety. The adaptive age embraces adaptive cultures and resilience engineering and requires a change in perspective from human variability as a liability and in need of control, to human variability as an asset and important for safety. Embracing variability as an asset challenges the comfort of management. However, the gap between work as imagined and work as performed and the failure of OHS management systems and safety rules to adequately control risk mean that a new perspective is required. © 2009, Royal Society of Medicine Press Ltd. All rights reserved.
This paper presents a review of existing and emerging
approaches for managing occupational health and
safety (OHS) and puts forward the view that, under
certain circumstances, more adaptive approaches to
managing OHS are required.
Hale and Hovden (1998) have argued that OHS
has developed and evolved through three so-called
‘ages of safety’. The first age was a technical age,
the second a human factors age and the third a
management systems age. A different sequence of
development was put forward by Hudson (2007),
who suggested that safety has evolved through three
waves. The first was a technical wave, the second a
systems wave and the third a culture wave. Both of
these views suggest that the process of development
has been sequential. Glendon et al. (2006) posits an
alternative view, that each period of development
does not leave behind, but rather builds on, what has
gone before. He refers to this process of development
as the fourth age of safety or the ‘integration age’
where previous ways of thinking are not lost, but
remain available to be reflected upon as multiple,
more complex perspectives develop and evolve.
Notwithstanding the suggested integration age
(Glendon et al., 2006), it may be timely to introduce
the possibility that we are moving into a fifth age
of safety or an ‘adaptive age’. The adaptive age transcends all other ages without discounting them; by introducing the concept of ‘adaptation’, it goes beyond simply integrating the past. This notion is informed by current discussions around resilience engineering (Hollnagel, 2006) and ‘efficiency-thoroughness trade-offs’ (ETTO) (Hollnagel, 2009a) that take us beyond the contemporary ways of thinking about managing OHS that typically focus on OHS management systems (OHSMS), safety culture and safety rules.

David Borys1, Dennis Else1, Susan Leggett1
1 VIOSH Australia, University of Ballarat, University Drive, Mt Helen, Victoria, Australia
Correspondence: David Borys, VIOSH Australia, University of Ballarat, University Drive, Mt Helen, Victoria, Australia
Key words: resilience, mindfulness, culture, adaptation
Cite this article as: Borys, D., Else, D. & Leggett, S. (2009). The fifth age of safety: the adaptive age? Journal of Health & Safety Research & Practice, 1(1), 19-27.
Beyond OHS management systems to adaptive cultures
Increasingly, the limitations of an over-
emphasis on documented management
systems have started to emerge. Robson
et al. (2005) in their systematic review of
health and safety management systems
found that “there is insufficient evidence
in the published, peer-reviewed literature
on the effectiveness of OHSMSs to make
recommendations either in favour of or
against OHSMSs” (p. 9). The 1999 Report
of the Longford Royal Commission into
the explosion at Esso’s Longford gas plant
in Victoria found that although Esso had
a world class OHSMS, the system had
taken on a life of its own, “divorced from
operations in the field” and “diverting
attention away from what was actually
happening in the practical functioning
of the plants at Longford” (Dawson &
Brooks, 1999, p. 200).
Similarly, Hopkins (2007), in his analysis
of the 1996 Gretley mine disaster concedes
that “experience is now teaching us that
safety management systems are not enough
to ensure safety” (p. 124). Further, a 2007
report commissioned by the New South
Wales Mines Advisory Council argued
that an OHSMS should be built on the
principles of mindfulness and not be a
“complex, paper-based OHS management
system” (p. xiii).
Reason (2000) contends that managers
believe that OHSMS sit apart from culture.
He suggests that an over-reliance on
systems and insufficient understanding of,
and insufficient emphasis on, workplace
culture, can lead to failure because “it is the
latter that ultimately determines the success
or failure of such systems” (p. 5).
Safety culture has emerged as a major
focus in improving OHS performance.
Hopkins (2005) argues that this stems in
part from recognition of the limitations
of OHSMS. In his analysis of the 1999
Glenbrook train crash involving a
commuter train and the Indian Pacific,
Hopkins identifies the danger of a culture of
rules, a culture of silos, a culture of on-time
running, together with the related dangers
of a culture that is risk-blind or risk-
denying. These are matters that are outside
the scope of traditional OHSMS and it may
be that OHSMS mask the emergence of these cultures, which become all too readily apparent with hindsight.
Hopkins (2007) views safety culture as
one aspect of organisational culture, or more
particularly an organisational culture that is
focused on safety. Further, culture is viewed as a group, not an individual, phenomenon; efforts to change culture should, in the first instance, focus on changing collective practices (the practices of both managers and workers); and the dominant source of culture is what leaders pay attention to. Much of Hopkins' work draws on Reason's (1997)
notion that a safe culture is an informed
culture and Weick and Sutcliffe’s (2001;
2007) principles of collective mindfulness.
Reason (1997) argues that culture can be
socially engineered by managers and that
a safe culture is an informed culture. He
argues that in navigating the safety space
between increasing vulnerability to risk and
increasing resistance to risk, organisations
should strive for maximum resistance
to risk (as opposed to the unobtainable
goal of ‘zero risk’). He goes on to argue
that there are three cultural drivers that
allow organisations to achieve maximum
resistance to risk: (i) Commitment reflected
in the provision of resources to mitigate
risk, even in tough times; (ii) Cognisance
reflected in an awareness of the dangers
that threaten operations; (iii) Competence
gained from an information system that
provides managers with an understanding
of where they are relative to the edge of
safety without having to fall over it first.
The latter point is achieved through the engineering of an informed culture and, in Reason's view, an informed culture is a safety culture. An informed culture is made
up of the four interlocking sub-cultures of a
reporting culture, a learning culture, a just
culture and a flexible culture.
Hudson (2007) suggests that safety culture evolves and may be represented by a five-step ladder of distinct stages:
pathological, reactive, calculative,
proactive and generative. Progression up
the ladder is associated with increasing
trust, accountability and informedness
(as in Reason’s informed culture). What
remains unclear is how organisations move
from one step on the ladder to another.
An alternative view suggests that culture
is not homogeneous within organisations
and can be both differentiated and
fragmented (Richter & Koch, 2004).
Much as managers may espouse the safety
values associated with a single corporate
culture, organisations may consist of many
cultures based on professional groupings
(Gherardi et al., 1998; Schein, 1996) or
other communities of practice (Gherardi &
Nicolini, 2000).
The adaptive age requires an acceptance
by organisational leaders that groups of
workers may, through interaction with
one another and the tasks they perform
together, create their own shared meanings
about what it is to work safely. Under this
view that culture is ‘socially constructed’
(Gherardi & Nicolini, 2000), leaders do not
so much hope to engineer a single culture as attempt to understand and influence
these differentiated and fragmented
cultures such that they are at least aligned
with the corporate culture (Martin, 2002).
Further, Weick and Sutcliffe (2007) argue that whereas integrated cultures deny
ambiguity, differentiated and fragmented
cultures handle ambiguity better, a feature
more consistent with High Reliability
Organisations. The implication is that the
adaptive age requires adaptive cultures.
The notions of an adaptive age and
adaptive cultures may also require a
change in perspective in relation to the
causes of fatalities, injuries and disease
and a corresponding implicit awareness of
more than one perspective for preventing
fatalities, injuries and disease. This change
in perspective is captured by Hollnagel
(2008a) who contrasts two perspectives on
safety: theory W and theory Z as shown in
Table 1. He argues that to improve safety,
a change in perspective is required towards
theory Z; a theory that accepts that
humans, because of their capacity to adapt
to demands, are an asset to the proper
functioning of modern organisations.
Theory W. Things go right because: systems are well designed and scrupulously maintained; procedures are complete and correct; people behave as they are expected to – as they are taught; and designers can foresee and anticipate every contingency. Under Theory W, humans are a liability and variability is a threat. The purpose of design is to constrain variability, so that efficiency can be maintained.

Theory Z. Things go right because people: learn to overcome design flaws and functional glitches; adapt their performance to meet demands; interpret and apply procedures to match conditions; and can detect and correct when things go wrong. Under Theory Z, humans are an asset without which the proper functioning of modern technological systems would be impossible.

Table 1 Summarising the key perspective changes required in the adaptive age (Source: Hollnagel, 2008b)
However, the need for adaptation is
contingent upon an understanding of the
complexity of the organisation (socio-
technical system) that is being managed.
In some organisations (systems), adapting
may be a pre-requisite for safe performance
whilst in others it may be disastrous. Dekker
(2001), for example, makes the point that
failing to adapt can be disastrous under
certain circumstances and he cites the case
of an aircraft which crashed into the sea
off the coast of Nova Scotia in 1998. In this
case, following procedures for dealing with
smoke and fire and not descending too fast,
rather than dumping fuel and descending
rapidly, led to the plane becoming
uncontrollable and crashing into the sea.
The dilemma here is that, under certain
circumstances, following procedures may
result in fatalities and injuries. However,
at another time and in a different context,
not following procedures may also lead to
fatalities and injuries. Thus adaptation is a
double-edged sword (Dekker, 2006). This
poses a challenge to how we are to think
about and act on Hollnagel’s Theory Z. In
the adaptive age, Theory Z does not imply
mindless abandonment of procedures, or
a “free for all”, rather it requires a more
demanding standard of attention resulting
in a more subtle, nuanced and refined
appreciation of how OHS is managed that
embodies the capacity to be adaptive rather
than rule bound. To better understand
this dilemma, Hollnagel (2009a) offers a
two dimensional model of performance
variability and risk as shown in Figure 1.
Figure 1 Hollnagel’s dimensions of performance variability and risk (Hollnagel, 2009a). The figure plots the risk of adverse outcomes (low to high, corresponding to loose to tight coupling) against the need for performance adjustments (low to high, corresponding to tractable to intractable manageability), positioning example organisations such as a chemical plant, nuclear power plant, manufacturing plant and university within the space.
The first dimension in Hollnagel’s model
(Hollnagel, 2009a) is system ‘manageability’
or controllability. Within tractable systems
(simple, stable systems that are easy
to control) the need for adaptability is
low. By comparison, within intractable systems (complex systems subject to change) the need for adaptability is high. The second
dimension is coupling (or the degree of
inter-dependence between parts of the
system). Tightly coupled systems are
characterised by more time-dependent
processes, invariant sequences, little slack
and only one way to reach production
goals (Perrow, 1999). In tightly coupled
systems the risk of adverse outcomes is
high. Within loosely coupled systems it is
low. This results in four possible ways to
characterise an organisation (Hollnagel,
2009a); (i) a loosely coupled tractable
system where the work is routine, requires
little in the way of performance variability
and any performance variability that
is present will have negligible impact
upon performance; (ii) a loosely coupled
intractable system is less predictable and
the need for performance adjustments
will be higher, however, any performance
variability will have negligible impact
upon performance; (iii) a tightly coupled
tractable system also requires little in the
way of performance adjustments; however,
performance adaptations that are made
and that fail (Dekker, 2003) may quickly
result in unwanted consequences because
of tight coupling; and (iv) a tightly coupled
intractable system may require constant
performance adjustments to operate safely.
Therefore the ways of thinking about and
approaches to managing OHS must be at
least equal to the demands and complexity
of the socio-technical system associated
with the organisation’s activities. If it is
decided that the organisation is a tightly
coupled, intractable system, for example
nuclear power, then a more adaptive
response will be necessary. Alternatively,
if it is decided that the organisation is
a loosely coupled, tractable system, for
example, a manufacturing plant, then
fewer adaptive responses will be necessary.
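The four-way characterisation above can be sketched as a simple lookup. This is a minimal illustration, not from the paper; the function name and the summary strings are assumptions that paraphrase the four characterisations in the text.

```python
# A sketch (not from the paper) of Hollnagel's (2009a) two dimensions of
# performance variability and risk. The category labels and summaries
# below are illustrative paraphrases of the text's four characterisations.

def required_adaptive_response(coupling: str, manageability: str) -> str:
    """Suggest a degree of adaptive response for a socio-technical system,
    given its coupling ('loose'/'tight') and manageability
    ('tractable'/'intractable')."""
    systems = {
        ("loose", "tractable"):
            "low: routine work; performance variability has negligible impact",
        ("loose", "intractable"):
            "moderate: more adjustments needed, but variability still has "
            "negligible impact on performance",
        ("tight", "tractable"):
            "low but risky: few adjustments needed, yet failed adaptations "
            "quickly produce unwanted consequences",
        ("tight", "intractable"):
            "high: constant performance adjustments needed to operate safely",
    }
    return systems[(coupling, manageability)]

# Example positions, as characterised in the text:
print(required_adaptive_response("tight", "intractable"))  # e.g. nuclear power
print(required_adaptive_response("loose", "tractable"))    # e.g. manufacturing plant
```

The point of the sketch is only that the appropriate management response is a function of where the organisation sits on both dimensions, not of either dimension alone.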
Beyond safety rules to collective mindfulness
Safety rules are often written on the
basis that greater control of workers’
behaviour will not only lead to a safer
workplace, but also act as a buffer
against prosecution in the case of
an accident. However, opinions are emerging that more safety rules and less variability in worker behaviour do not necessarily equate with improved safety performance. In some
cases, writing more rules following an
incident may lead to conflict between
the rule and the actions required to
undertake a task (Reason, 1997).
Hopkins (2005) prefers to complement
safety rules with a strategy of risk-
awareness which invites workers “to
attend to the risks they face and not
simply comply with rules in a mindless
fashion” (p. 18). This is supported
by examples from industry (Hale et
al., 2003; Jeffcott et al., 2006) and by
Dekker (2003) who argues that “rather
than simply increasing pressure to
comply, organisations should invest in
their understanding of the gap between
procedures and practice, and help
develop operators’ skill at adapting”
(p. 233). He goes on to propose that
organisations need to:
“(a) Monitor the gap between procedure and practice
and try to understand why it exists (and resist trying
to close it by simply telling people to comply).
(b) Help people to develop skills to judge when and
how to adapt (and resist telling people only that they
should follow procedures)” (p. 236).
This is captured by the term “Collective
Mindfulness” that is based on the premise
that “unvarying procedures can’t handle
what they didn’t anticipate” (Weick et al.,
1999, p. 86). Or to put it another way,
variability in human performance enhances
safety whilst unvarying performance can
undermine safety, particularly in complex
socio-technical systems.
In his analyses of the Esso Longford
gas plant in Victoria (Hopkins, 2001)
and the Gretley mine disaster (Hopkins,
2007) Hopkins is critical of the absence of
mindfulness among managers and identifies
the need for mindful leadership as one
strategy for averting disaster. In his analysis
of the BP Texas City explosion Hopkins
(2008) discusses how BP had embarked
upon a quest to become a High Reliability
Organisation (HRO) (to exhibit the
characteristics of collective mindfulness)
but was largely unsuccessful because they
focused on educating front line workers
to think differently without instituting
the organisational practices necessary to
support collective mindfulness.
Effective HROs organise themselves to
learn from failure rather than celebrating
success (Weick et al., 1999) and give strong
responses to weak signals (Weick & Sutcliffe,
2001, p. 4). In short, they are “complex
adaptive systems” (Weick et al., 1999, p.
117). HROs are adaptive because: they are
‘preoccupied with failure’ and treat “any lapse
as something wrong with the system” (p. 9);
they are ‘reluctant to simplify’ and strive to
simplify less and see more; they are ‘sensitive
to operations’ and encourage situation
awareness among front line workers; they
have a ‘commitment to resilience’ and do not
allow errors to disable them; and they exhibit
‘deference to expertise’ and move decision
making to those people on the front line with
the most expertise.
More recently, Reason (2008) has argued
that both individual mindfulness and
collective mindfulness are necessary for
“maintaining a state of intelligent wariness”
(p. 241). This view represents a departure
from the view expressed by Weick and
Hopkins, a view that emphasises collective
mindfulness over individual mindfulness.
Reason (2008, p. 31) defends the need
for individual mindfulness by posing the
question: “If we cannot make systems
immune to organisational accidents, what
can we do to improve the reliability and
error wisdom of those at the sharp end?”
The ‘sharp end’ refers to any person who
is directly interacting with the hazards in a
particular context and at a particular time. In
essence, it is these people that are the last line
of defence between safe and unsafe outcomes.
Therefore, people at the sharp end should be provided with the skills to know when adapting is good for safety and when it could be life-threatening. It may mean complementing
safety rules and procedures with what
Iszatt-White (2007, p. 452) refers to as
“heedfulness”. However, workers will need to
trust in the “efficacy and applicability” of the
safety rules if the rules are to over-ride workers’
propensity to think that they can work safely
without following the safety rules (Iszatt-
White, 2007, p. 461). To enhance heedfulness,
Iszatt-White (2007, p. 463) argues that “the
HRO notions of heedfulness, mutual checking
and initiative offer a useful lens through
which to consider the shortcomings of rule-
based safety approaches”. This approach to
managing OHS is again indicative that we are
entering an adaptive age.
Providing that interventions designed
to encourage individual mindfulness
or heedfulness are complemented
with mindfulness or heedfulness at the organisational level, they represent a worthwhile step forward, particularly if one
is to adopt the perspective that variability in
performance is better for safety. Individual
mindfulness requires workers at the sharp
end to have the skills and knowledge to be
able to judge when and how to adapt to local
circumstances, and when not to adapt, and
is consistent with the third HRO principle
of being ‘sensitive to operations’. Some
organisations attempt to achieve this through
programs that encourage mindfulness or
what Hopkins refers to as “risk-awareness”
(Hopkins, 2005) in individual workers.
However, Borys (2009), in a study of one
program, found that the program was little
more than a ritual that focused on completing
paperwork rather than an incentive to think
carefully about risks. All that it managed
to achieve was a culture of completing
the paperwork, highlighting the need for
organisational practices to work in support
of individual mindfulness.
From collective mindfulness to resilience engineering
Contemporary approaches to safety have
attempted to establish safe systems and
ensure that managers and workers work
inside the boundaries of those safety
systems (Woods & Hollnagel, 2006). Thus
it is assumed that constraining human
performance is essential for safety. An
alternative paradigm that is emerging is that
safety is achieved by managers and workers
adapting to changing circumstances. In
this case, it is the variability in human
performance, relative to the situation,
that is essential for safety. Although this
paradigm emphasises adaptive practices,
these practices are designed to complement
not replace good safe design principles
whilst acknowledging that complex socio-
technical systems will always present
opportunities for surprise. Therefore,
under this alternative paradigm, safety is
understood as a “characteristic of how a
system performs” (Woods & Hollnagel,
2006, p. 347), and resilience is a quality
that emerges from the functioning of the
system. Resilience engineering subscribes
to this alternative paradigm and in doing
so, is similar to collective mindfulness
and heedfulness as all three concepts
focus on the importance of performance
variability for safety. However, what sets
resilience engineering apart from collective
mindfulness is the focus on learning
from successful performance as well as
unsuccessful performance (Hollnagel,
2008c, 2009b), i.e. why things go right as well as why things go wrong.
The rationale for this perspective is that
failures and successes result from the
same underlying processes (Hollnagel,
2009b). Hollnagel (2008b) argues that “it
is necessary to study both successes and
failures and to find ways to reinforce the
variability that lead to successes as well as
dampen the variability that leads to adverse
outcomes” ( p. xii). Thus Hollnagel (2009b,
p. 117) states:
A resilient system is able effectively to adjust its
functioning prior to, during, or following changes
and disturbances, so that it can continue to perform
as required after a disruption or a major mishap, and
in the presence of continuous stresses.
Resilience engineering research has
focussed on intractable and tightly coupled
systems such as air traffic control centres
and hospital emergency departments
and led researchers to identify a range
of markers of resilience. While there is
no agreement on these, one marker that
has been referred to repeatedly in the
resilience engineering literature is the gap
between work as imagined and work as
actually done (Dekker, 2006; Dekker &
Suparamaniam, 2005). One reason for the
widening of this ‘gap’ is a phenomenon
known as “practical drift” (Snook, 2000).
Practical drift refers to a situation where,
over time, local work practices ‘drift’ away
from the original intent at the time of system
design, to more locally efficient work
practices. However, if the local practices drift unnoticed, and the degree of coupling in the system switches from loose to tight (for example, when circumstances change and functions become more time dependent (Perrow, 1999)) without a corresponding change in local practices from task-focused back to rule-focused, the results can be catastrophic. Such was
the case in the friendly fire shoot down of
a Blackhawk helicopter over northern Iraq
in 1994 (Snook, 2000). In this case, crews
were struggling to make sense of their
situation and in the time available, failed to
do so. Each level of the system, individual,
group and organisational, failed to identify
that local practice had uncoupled from the
written procedures. When there is slack in the system, this drift is seen as being efficient, but when circumstances change and the system reverts to being tightly coupled and time dependent, such as when attempting to identify whether the helicopters below you are friend or foe, the resultant decisions can be deadly.
The adaptive age demands that people at all levels of the organisation be able to distinguish between drift that is adaptive
and improves organisational performance
and drift that becomes dangerous.
The solution to drift is not attempting to
further restrict performance variability as
this simply sets up a new cycle of practical
drift. Rather, it is more appropriate to
monitor and detect drift toward failure
and attempt to estimate the distance
“between operations as they really go on,
and operations as they are imagined in
the minds of managers and rule-makers”
(Dekker, 2006, p. 78).
Therefore “drift into failure” can be used
as a metaphor for organisations wishing to
become more resilient. For organisations
this may mean making the gap between
work as imagined and work as actually
performed visible because the more the gap
remains hidden, the more likely it is that the
organisation will drift into failure. In fact
Dekker and Suparamaniam (2005) go so far
as to say that the larger the gap “the less likely
that people in decision-making positions
are well calibrated to the actual risks and
problems facing their operation” (p. 3).
As the limitations of OHSMS and safety
rules that attempt to control behaviour are
becoming evident, it is time to consider that
we are moving into a fifth age of safety, the
‘adaptive age’; an age which transcends
rather than replaces the other ages of safety,
ages which include the dominant safety
paradigm that assumes that safety is achieved
by establishing safe systems and ensuring
that managers and workers work inside the
boundaries of those safety systems.
The adaptive age challenges the view
of an organisational safety culture and
instead recognises the existence of socially
constructed sub-cultures. The adaptive age
embraces adaptive cultures and resilience
engineering and requires a change in
perspective from human variability as a
liability and in need of control, to human
variability as an asset and important for
safety. In the adaptive age learning from
successful performance variability is as
important as learning from failure.
References

Borys, D. (2009). Exploring risk-awareness
as a cultural approach to safety: Exposing
the gap between work as imagined
and work as actually performed. Safety
Science Monitor, 13(2), 1-11.
Dawson, D. M., & Brooks, B. J. (1999).
The Esso Longford gas plant accident:
Report of the Longford Royal Commission.
Melbourne, Vic: Parliament of Victoria.
Dekker, S. (2001). Follow the procedure
or survive. Human Factors and
Aerospace Safety, 1(4), 381-385.
Dekker, S. (2003). Failure to adapt
or adaptations that fail: Contrasting
models on procedures and safety.
Applied Ergonomics, 34, 233-238.
Dekker, S. (2006). Resilience engineering:
Chronicling the emergence of confused
consensus. In E. Hollnagel, D. D. Woods &
N. Leveson (Eds.), Resilience engineering:
Concepts and precepts. Hampshire: Ashgate.
Dekker, S., & Suparamaniam, N. (2005).
Divergent images of decision making
in international disaster relief work
(No. 2005-01). Ljungbyhed, Sweden:
Lund University School of Aviation.
Gherardi, S., & Nicolini, D. (2000).
The organizational learning of safety
in communities of practice. Journal of
Management Inquiry, 9(1), 7-19.
Gherardi, S., Nicolini, D., & Odella, F.
(1998). What do you mean by safety?
Conflicting perspectives on accident and
safety management in a construction
firm. Journal of Contingencies & Crisis
Management, 6(4), 202-213.
Glendon, A. I., Clarke, S. G., & McKenna,
E. F. (2006). Human safety and risk
management (2nd ed.). Boca Raton, FL: CRC Press.
Hale, A. R., Heijer, T., & Koornneef, F. (2003).
Management of safety rules: The case of
railways. Safety Science Monitor, 7(1), 1-11.
Hale, A. R., & Hovden, J. (1998). Management
and culture: the third age of safety. A
review of approaches to organizational
aspects of safety, health and environment.
In A. M. Feyer & A. Williamson (Eds.),
Occupational injury: Risk prevention and
intervention. London: Taylor and Francis.
Hollnagel, E. (2006). Resilience: The
challenge of the unstable. In E. Hollnagel,
D. D. Woods & N. Leveson (Eds.),
Resilience engineering: Concepts and
precepts. Hampshire, England: Ashgate.
Hollnagel, E. (2008a). Human factors -
understanding why normal actions sometimes
fail. Paper presented at the Railway Safety
in Europe: Towards Sustainable Harmonised
Regulation 18th November, Lille, France.
Hollnagel, E. (2008b). Resilience
engineering in a nutshell. In E. Hollnagel, C.
P. Nemeth & S. Dekker (Eds.), Resilience
engineering perspectives, volume 1:
Remaining sensitive to the possibility of
failure. Hampshire, England: Ashgate.
Hollnagel, E. (2008c). Safety management:
Looking back or looking forward. In E.
Hollnagel, C. P. Nemeth & S. Dekker (Eds.),
Resilience engineering perspectives, volume
1: Remaining sensitive to the possibility of
failure. Hampshire, England: Ashgate.
Hollnagel, E. (2009a). The ETTO
principle. Surrey, England: Ashgate.
Hollnagel, E. (2009b). The four
cornerstones of resilience engineering.
In C. P. Nemeth, E. Hollnagel & S.
Dekker (Eds.), Resilience engineering
perspectives, volume 2: Preparation and
restoration. Surrey, England: Ashgate.
Hopkins, A. (2001). Lessons from
Longford: The ESSO gas plant explosion.
Sydney: CCH Australia Ltd.
Hopkins, A. (2005). Safety, culture
and risk. Sydney: CCH Australia.
Hopkins, A. (2007). Lessons from
Gretley: Mindful leadership and the
law. Sydney: CCH Australia.
Hopkins, A. (2008). Failure to learn: The BP
Texas City refinery disaster. Sydney: CCH.
Hudson, P. (2007). Implementing
safety culture in a major multi-national.
Safety Science, 45, 697-722.
Iszatt-White, M. (2007). An ethnography of
rule violation. Ethnography, 8(4), 445-465.
Jeffcott, S., Pidgeon, N., Weyman, A., &
Walls, J. (2006). Risk, trust and safety
culture in U.K. train operating companies.
Risk Analysis, 26(5), 1105-1121.
Martin, J. (2002). Organizational culture:
Mapping the terrain. Thousand Oaks CA: Sage.
Perrow, C. (1999). Normal accidents: Living
with high-risk technologies. Princeton,
New Jersey: Princeton University Press.
Reason, J. (1997). Managing the risks of
organizational accidents. Aldershot: Ashgate.
Reason, J. (2000). Beyond the
limitations of safety systems. Australian
Safety News, April, 54-55.
Reason, J. (2008). The human contribution:
Unsafe acts, accidents and heroic
recoveries. Surrey, England: Ashgate.
Richter, A., & Koch, C. (2004). Integration,
differentiation and ambiguity in safety
cultures. Safety Science, 42, 703-722.
Robson, L., Clarke, J., Cullen, K., Bielecky,
A., Severin, C., Bigelow, P., et al. (2005). The
effectiveness of occupational health and safety
management systems: A systematic review.
Toronto, Ontario: Institute for Work & Health.
Schein, E. H. (1996). Three
cultures of management: The key
to organizational learning. Sloan
Management Review, 38(1), 9-20.
Snook, S. A. (2000). Friendly fire: The
accidental shootdown of U.S. Black
Hawks over northern Iraq. Princeton,
NJ: Princeton University Press.
Weick, K. E., & Sutcliffe, K. M.
(2001). Managing the unexpected.
San Francisco: Jossey-Bass.
Weick, K. E., & Sutcliffe, K. M. (2007).
Managing the unexpected (2nd ed.). San
Francisco, CA: John Wiley & Sons.
Weick, K. E., Sutcliffe, K. M., & Obstfeld,
D. (1999). Organizing for high reliability:
Processes of collective mindfulness. Research
in Organizational Behaviour, 21, 81-123.
Woods, D. D., & Hollnagel, E. (2006).
Prologue: Resilience engineering concepts.
In E. Hollnagel, D. D. Woods & N. Leveson
(Eds.), Resilience engineering: Concepts and
precepts. Hampshire, England: Ashgate.