Human-Centered Computing

Toward a Theory of Complex and Cognitive Systems

Robert R. Hoffman, Institute for Human and Machine Cognition
David D. Woods, Ohio State University

Editors: Robert R. Hoffman, Patrick J. Hayes, and Kenneth M. Ford, Institute for Human and Machine Cognition, University of West Florida

IEEE Intelligent Systems, 2005. 1541-1672/05/$20.00 © 2005 IEEE. Published by the IEEE Computer Society.

Abstract: This essay encourages people in the intelligent systems and cognitive engineering communities to reflect on the scientific foundations of human-centered computing (HCC). It reviews principles proposed for HCC, such as the Envisioned World Principle, the Fort Knox Principle, the Pleasure Principle, the Janus Principle, and the Moving Target Principle, and argues that these principles are empirically grounded scientific laws, that is, steps toward a theory of Complex and Cognitive Systems. Three outstanding issues involved in creating such a theory are completeness, consistency, and testability.

Essays in this department have presented nine propositions that we’ve referred to as principles of human-centered computing:

The Aretha Franklin Principle: Do not devalue the human in order to justify the machine. Do not criticize the machine in order to rationalize the human. Advocate the human-machine system in order to amplify both.

The Sacagawea Principle: Human-centered computational tools need to support active organization of information, active search for information, active exploration of information, reflection on the meaning of information, and evaluation and choice among action sequence alternatives.

The Lewis and Clark Principle: The human user of the guidance needs to be shown the guidance in a way that is organized in terms of his or her major goals. Information needed for each particular goal should be shown in a meaningful form, and should allow the human to directly comprehend the major decisions associated with each goal.

The Envisioned World Principle: The introduction of new technology, including appropriately human-centered technology, will bring about changes in environmental constraints (that is, features of the sociotechnical system, or the context of practice). Even though the domain constraints might remain unchanged, and even if cognitive constraints are leveraged and amplified, changes to the environmental constraints might be negative.

The Fort Knox Principle: The knowledge and skills of proficient workers are gold. They must be elicited and preserved, but the gold must not simply be stored and safeguarded. It must be disseminated and utilized within the organization when needed.

The Pleasure Principle: Good tools provide a feeling of direct engagement. They simultaneously provide a feeling of flow and challenge.

The Janus Principle: Human-centered systems do not force a separation between learning and performance. They integrate them.

The Mirror-Mirror Principle: Every participant in a complex cognitive system will form a model of the other participant agents as well as a model of the controlled process and its environment.

The Moving Target Principle: The sociotechnical workplace is constantly changing, and constant change in environmental constraints might entail constant change in cognitive constraints, even if domain constraints remain constant.

The term “principle” doesn’t actually do much work in science. Colloquially, it’s used as a tacit reference to laws, as in “This device works according to the principle of gravity.” What are these so-called principles? Our answer leads to additional considerations involving the use of the principles.

Cute mnemonics?
Are the principles we’ve proposed simply aids to help people remember some tips from people who’ve grappled with issues at the intersection of humans, technology, and work? Indeed, we’ve deliberately given the principles names that have both mnemonic and semantic value, even though we could have given them more technical designations. And yes, they are “tips.”
But they’re more than that.

Cautionary tales?
Are the principles merely signposts at the fork between paths to user-hostile and user-friendly systems? We and Paul Feltovich discussed the
reductive tendency, which is a necessary
consequence of learning: At any given time,
any person’s knowledge of a domain is
bound to be incomplete and to some extent oversimplified. We pointed out that this tendency also applies to those who are creating new information technologies, especially Complex and Cognitive Systems
(people working in teams, using informa-
tion technology to conduct cognitive work
to reach certain goals). Indeed, the people
who try to create new Complex and Cogni-
tive Systems are themselves prone to gen-
erate reductive understandings, in which
complexities are simplified:
The reductive tendency would be the assump-
tion that a design principle has the same applic-
ability and effects throughout the many dif-
ferent and changing contexts of work and
practice. That is, the effects, embodied in the
design principle, will hold fairly universally
across differing practice environments.
So, the principles are indeed important
cautionary tales.
But they’re more than that.
Recipes?
Are the principles recipes that we can use
to design human-centered or otherwise
“good” information technologies? Take the
Sacagawea Principle, for example. Can we
go from that principle to a good design for a
specific application? Hardly. The princi-
ples, as we’ve stated them, aren’t entries in
a cookbook that measure goodness in quarts
or bytes or hours in the oven. They’re not a
substitute for empirical inquiry (for exam-
ple, cognitive task analysis), design creativ-
ity, or proper software development. In dis-
cussing the application of human-centered
computing notions to the design of intelli-
gent systems, Axel Roesler, Brian Moon,
and Robert Hoffman stated, “The principles
of human-centered computing which have
been discussed in essays in this Department
are not entries for a cookbook; they are not
axioms for design.”
Rather than being formulas, the princi-
ples imply design challenges. Looking
back on the essays, we find one challenge
expressed directly. An implication of the
Fort Knox Principle is what we called the
Tough Nut Problem: How can we redesign
jobs and processes, including workstations,
computational aids, and interfaces, in such
a way as to get knowledge elicitation as a
“freebie” and at the same time make the
usual tasks easier?
“Project managers or designers may
choose to adopt [the principles] if their
goal is to create good, complex cognitive systems.” The principles do serve as constraints on design.
But they’re more than that.
Empirical generalizations?
The principles we’ve mentioned in this
department’s essays by no means exhaust the
set we’ve generated. Consider, for example, the Principle of Stretched Systems:
Complex and Cognitive Systems are always
stretched to their limits of performance and
adaptability. Interventions (including innova-
tions) will always increase the tempo and
intensity of activity.
Every system is stretched to operate at its
capacity. As soon as some improvement,
some new technology, exists, practitioners
(and leaders, managers, and so on) will
exploit it by achieving a new intensity and
tempo of activity.
An example that should resonate with
most readers goes as follows. “Gee, if I
only had a robust voice recognition system,
I could cope with all my email much bet-
ter.” We’ve heard this plea many times. But
stop to consider what would really happen.
Others would use the technology too, so
the pace of correspondence would acceler-
ate, and people would wind up right back
where they were—in a state of mental
overload. Systems always get stretched.
This has happened whenever new informa-
tion technologies have been introduced
into, and changed, the workplace.
The principles aren’t just cautionary
tales or design constraints; they’re empiri-
cally grounded generalizations that have
stood the test of time. If we tap into the
literature on the philosophy of science,
we’d say that the principles are:

- Generalizations: referring to classes of things, not to individual things
- Extensional generalizations: based on empirical or descriptive evidence
But they’re more than that.
Scientific laws?
As we’ve stated them, the principles are
what philosophers call nomological gen-
eralizations. That is, they’re universal for
the realm of discourse or for some speci-
fied boundary conditions. This criterion is
important for physical law: It’s literally
impossible for matter to travel faster than
the speed of light, for example. But it’s
certainly possible to create information
technologies that don’t support compre-
hension and navigation (Sacagawea, Lewis
and Clark Principles), that don’t integrate
learning and performance (Janus Principle),
or that fail to induce a feeling of joyful
engagement (Pleasure Principle).
But it’s impossible to create “good”
human-centered systems that violate the
principles. Thus, “goodness” sets a strong
boundary condition and will prove, we
think, to be a critical concept for cognitive engineering.
As we’ve stated them, the principles are
what philosophers of science call open
generalizations. That is, the evidence that’s
been used to induce the principles doesn’t
coincide with the range of application. If
the evidence that is available were all the
evidence there is, the science would stop.
Kenneth Craik described this feature of
scientific laws in the following way:
Now all scientific prediction consists in dis-
covering in the data of the distant past and of
the immediate past (which we incorrectly call
the present), laws or formulae which apply
also to the future, so that if we act in accor-
dance with those laws, our behavior will be
appropriate to the future when it becomes the present.
For all new applications and forms of
Complex and Cognitive Systems, the prin-
ciples should apply in the way Craik
describes. So, what we’ve been calling
principles are extensional, nomological
generalizations whose fate is to be deter-
mined empirically. In other words, they’re
scientific laws.
But laws of what? This department is
about making computational devices such
as VCRs
human centered. But most of the
essays have focused on technologies used
in sociotechnical contexts. Complex and
Cognitive Systems are systems in which
multiple human and machine agents collab-
orate to conduct cognitive work. Cognitive
work is goal-directed activity that depends
on knowledge, reasoning, perceiving, and
communicating. Cognitive work involves
macrocognitive functions including knowl-
edge sharing, sense making, and collabora-
tion. Furthermore, Complex and Cognitive
Systems are distributed, in that cognitive
work always occurs in the context of multi-
ple parties and interests as moments of pri-
vate cognition punctuate flows of interac-
tion and coordination. Thus, cognitive work
is not private but fundamentally social and distributed.
The principles—we should now say
laws—are not just about HCC as a view-
point or paradigm or community of prac-
tice; they’re about Complex and Cogni-
tive Systems in general. We do not refer
to complex cognitive systems because that
sort of adjective string would involve an
ambiguity. Are they complex systems? Is
it the cognition that’s complex? Complex
and Cognitive Systems, as we intend,
uses the word “and” to express a neces-
sary conjunction. The designation would
embrace notions from “cognition in the wild” and from distributed systems.
So, we have a domain of discourse or
subject matter. But a science needs more
than that.
Steps toward a theory?
Salted throughout the essays have been
statements implying that the principles
hang together. An earlier essay on the Plea-
sure Principle stated that both the Saca-
gawea and the Lewis and Clark Principles
are suggestive of a state in which the practi-
tioner is directly perceiving meanings and
ongoing events, experiencing the problem
they are working on or the process they are controlling. The challenge is to live in and work
on the problem, not to have to always fiddle
with machines to achieve understanding.
This suggests that the principles resonate
with one another. The interplay of the prin-
ciples becomes meaningful.
The Envisioned World Principle and the
Moving Target Principle have a strong
entailment relation. New technologies are
hypotheses about how work will change,
yet the context of work is itself always
changing. The Envisioned World Principle
involves changes to cognitive constraints
(for example, task requirements) brought
about by changes in environmental con-
straints (that is, new technologies). The
Moving Target Principle asserts that cog-
nitive constraints are also dynamic (for
example, new methods of weather fore-
casting and new knowledge of weather
dynamics involve changes in forecaster
understanding). Thus, all three sources of
constraint can be in constant flux when-
ever new technologies are introduced into
the workplace.
Another criterion philosophers of sci-
ence hold out for postulates to be scien-
tific laws is that laws must have entailment
relations. They must hang together in nec-
essary, interesting, and useful ways. Thus,
what we seem to have been reaching for
is a theory.
But what is it for?
Why a theory?
We see two primary motivations for a
theory of Complex and Cognitive Systems.
The first lurks in previous essays in this
department such as the discussion of
kludges and work-arounds
and Kim
Vicente’s discussion of VCRs.
All sorts
of smart, well-intentioned people are out
there building new intelligent technologies,
and have been doing so for years. The
notion of user-friendliness has been around
for over two decades. Yet, we’re all con-
fronted daily with technologies that are
not only not user-friendly but also down-
right user-hostile. We’re even tempted to
assert this as another principle (law): The
road to user-hostile systems is paved with
user-centered intentions.
Confronted with the problems that new
technologies cause (apart from the prob-
lems they might solve) and new challenges
entail, sponsors of systems development
efforts have come to cognitive engineers
crying for guidance in designing informa-
tion technologies. Sponsors yearn for sys-
tems that will solve difficult problems in
knowledge acquisition, collaboration, and
so on, including such problems as how to
enable a single person to control multiple
robots or how to help weather forecasters
build rich, principled visualizations of their
mental models of atmospheric dynamics.
It would hardly do for cognitive engi-
neers to reply with cute mnemonics, or
cautionary tales, or cookbooks, or a disas-
sociated collection of empirical generaliza-
tions. Cognitive engineers must present a
coherent, empirically grounded, empiri-
cally testable scientific theory.
But aren’t there already theories out
there? Systems theory? A theory from cog-
nitive psychology? The second motivation
for a theory of Complex and Cognitive Sys-
tems is that the phenomena that occur in
sociotechnical contexts are emergent and
involve processes not adequately captured
in either cognitive science or systems sci-
ence. Explaining Complex and Cognitive
Systems, and understanding their behaviors,
will require more than the available per-
spectives and theories. Indeed, this is part of
the motivation for the distinction between
macrocognition and microcognition.
Cognitive theory might tell us about the
millisecond-to-millisecond process of
attentional shift, but it doesn’t say much
about situational awareness. It might tell us
about the processes of sentence comprehen-
sion, but it doesn’t say much about sense-
making in real-world, dynamic situations.
Systems notions and notions of com-
plexity are indeed critical for any under-
standing of Complex and Cognitive Sys-
tems. For instance, the Triples Rule (the
unit of analysis is the human-machine-context triple)
and the Aretha Franklin
Principle both involve systems concepts.
But while systems theory can tell us about
interactions and feedback, it doesn’t say
much about human collaboration or distrib-
uted cognition.
How do we extend the theory?
You might wonder whether the set of
laws constituting a particular theory is com-
plete. This is a high standard employed in
logical or axiomatic theories, to which we
might not be subject because the Theory of
Complex and Cognitive Systems isn’t a
theory of logic or mathematics. But we
might prefer, especially in light of the
reductive tendency, to simply assert that the
theory of Complex and Cognitive Systems
is incomplete. The laws (formerly, princi-
ples) that we have mentioned in this depart-
ment are certainly not all that there are—we
know of some two dozen more that haven’t
yet been essay topics. But beyond this fuller
list of laws, we assert that incompleteness is
in fact a feature of the theory.
To the mathematically inclined, we
might then be free to assert that the laws
constituting the theory of Complex and
Cognitive Systems are consistent. Rather
than doing so, however, we assert that the
theory’s consistency is indeterminate.
This affords one path to testability in the
form of “forced inconsistency.” If a Com-
plex and Cognitive System is designed and
initiated in accordance with any subset of
the laws, doing so shouldn’t force a viola-
tion of any other law. If that happens, the
theory might need fixing.
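As an illustration only (the essay offers no formalization, and every law name, feature flag, and function below is our own hypothetical encoding), the forced-inconsistency test can be sketched by treating each law as a predicate over a toy description of a work-system design and checking whether a design initiated to satisfy one subset of laws nevertheless violates another:

```python
# Toy sketch of the "forced inconsistency" test. A design is described by a
# few illustrative boolean features; each law is a predicate over a design.
# All names and features here are hypothetical, chosen only to show the shape
# of the check.

design = {
    "supports_exploration": True,   # toward the Sacagawea Principle
    "organized_by_goals": True,     # toward the Lewis and Clark Principle
    "integrates_learning": True,    # toward the Janus Principle
    "feels_engaging": False,        # the Pleasure Principle is unmet here
}

laws = {
    "Sacagawea": lambda d: d["supports_exploration"],
    "Lewis and Clark": lambda d: d["organized_by_goals"],
    "Janus": lambda d: d["integrates_learning"],
    "Pleasure": lambda d: d["feels_engaging"],
}

def forced_violations(design, satisfied_subset):
    """Return the laws outside the designed-for subset that the design
    violates anyway. Any hit is evidence either against the design or,
    if it recurs systematically, against the theory's mutual consistency."""
    return [name for name, law in laws.items()
            if name not in satisfied_subset and not law(design)]

print(forced_violations(design, {"Sacagawea", "Lewis and Clark"}))
# prints ['Pleasure']
```

Under this toy encoding, a design built to satisfy the Sacagawea and Lewis and Clark laws is flagged for leaving the Pleasure Principle unmet; an actual test of the theory would of course require operational definitions of each law rather than boolean stand-ins.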
Completeness, consistency, and testabil-
ity are just three of the outstanding issues
involved in creating a theory of Complex
and Cognitive Systems. Obviously, more
work is needed.
Numerous subtleties and
nuances must be sorted out, involving oper-
ational definitions of key concepts and
other paths to testability.
This essay aims to encourage people in
the intelligent systems and cognitive engi-
neering communities to reflect on the sci-
entific foundations of HCC.
How do we forge a scientific founda-
tion? Once forged, how do we use it? How
do we extend, refine, and empirically test
it? We invite you to correspond with this
department’s editors concerning more can-
didate laws and the challenges for a theory
of Complex and Cognitive Systems.
References

1. R.R. Hoffman et al., “The Triples Rule,” IEEE Intelligent Systems, May/June 2002.
2. M. Endsley and R.R. Hoffman, “The Sacagawea Principle,” IEEE Intelligent Systems, Nov./Dec. 2002, pp. 80–85.
3. S.W.A. Dekker, J.M. Nyce, and R.R. Hoffman, “From Contextual Inquiry to Designable Futures: What Do We Need to Get There?” IEEE Intelligent Systems, Mar./Apr. 2003.
4. R.R. Hoffman and L.F. Hanes, “The Boiled Frog Problem,” IEEE Intelligent Systems, July/Aug. 2003, pp. 68–71.
5. R.R. Hoffman and P.J. Hayes, “The Pleasure Principle,” IEEE Intelligent Systems, Jan./Feb. 2004, pp. 86–89.
6. R.R. Hoffman, G. Lintern, and S. Eitelman, “The Janus Principle,” IEEE Intelligent Systems, Mar./Apr. 2004, pp. 78–80.
7. G. Klein et al., “Ten Challenges for Making Automation a ‘Team Player’ in Joint Human-Agent Activity,” IEEE Intelligent Systems, Nov./Dec. 2004, pp. 91–95.
8. P.J. Feltovich, R.R. Hoffman, and D. Woods, “Keeping It Too Simple: How the Reductive Tendency Affects Cognitive Engineering,” IEEE Intelligent Systems, May/June 2004.
9. R.R. Hoffman, A. Roesler, and B.M. Moon, “What Is Design in the Context of Human-Centered Computing?” IEEE Intelligent Systems, July/Aug. 2004, pp. 89–95.
10. D.D. Woods and R.I. Cook, “Nine Steps to Move Forward from Error,” Cognition, Technology, and Work, vol. 4, 2002, pp. 137–144.
11. A. Kaplan, The Conduct of Inquiry, Chandler, 1964.
12. W. Weimer, Notes on Methodology of Scientific Research, Lawrence Erlbaum, 1979.
13. K.J.W. Craik, “Theory of the Operator in Control Systems: I. The Operator as an Engineering System,” British J. Psychology, vol. 38, 1947, pp. 56–61.
14. K.J. Vicente, “Crazy Clocks: Counterintuitive Consequences of ‘Intelligent’ Automation,” IEEE Intelligent Systems, Nov./Dec. 2001.
15. E. Hutchins, Cognition in the Wild, MIT Press, 1995.
16. G. Coulouris, J. Dollimore, and T. Kindberg, Distributed Systems: Concepts and Design, 3rd ed., Addison-Wesley, 2001.
17. P. Koopman and R.R. Hoffman, “Work-Arounds, Make-Work, and Kludges,” IEEE Intelligent Systems, Nov./Dec. 2003.
18. R.R. Hoffman, G. Klein, and K.R. Laughery, “The State of Cognitive Systems Engineering,” IEEE Intelligent Systems, Jan./Feb. 2002, pp. 73–75.
19. G. Klein et al., “Macrocognition,” IEEE Intelligent Systems, May/June 2003, pp. 81–85.
20. D.D. Woods and R. Hoffman, A Theory of Complex Cognitive Systems, tech. report, Inst. for Human and Machine Cognition, 2005.
Robert R. Hoffman is a senior research scientist at the Institute for Human and Machine Cognition. Contact him at IHMC, 40 So. Alcaniz St., Pensacola, FL 32502.

David D. Woods is a professor of industrial and systems engineering and the coordinator of the Cognitive Systems Engineering Laboratory at Ohio State University. Contact him at the Cognitive Systems Eng. Lab, 210 Baker Systems, Ohio State Univ., 1971 Neil Ave., Columbus, OH.