Steering the Reverberations of Technology Change on Fields of Practice:
Laws that Govern Cognitive Work
David D. Woods (woods.2@osu.edu)
Institute for Ergonomics
The Ohio State University
1971 Neil Ave
Columbus, OH 43210 USA
Plenary Address
Annual Meeting of the Cognitive Science Society
August 10, 2002
“Now all scientific prediction consists in
discovering in the data of the distant past and of
the immediate past (which we incorrectly call the
present), laws or formulae which apply also to
the future, so that if we act in accordance with
those laws our behavior will be appropriate to
the future when it becomes the present.”
Craik, 1947, p. 59
Abstract
Research on cognitive work in context has abstracted a
set of common patterns about cognitive work and
about the relationship of people and computers. I offer
four families of Laws that Govern Cognitive Work plus
Norbert’s Contrast as a synthesis of these findings to
guide future development of human-computer
cooperation. These Laws are one prong of a general
strategy to avoid repeats of past "automation
surprises".
1. Patterns of Reverberations
Observational studies of cognitive work in context
have built a body of work that describes how
technology and organizational change transforms
work in systems. Points of technology change push
cycles of transformation and adaptation (e.g., Carroll’s
task-artifact cycle; Carroll and Rosson, 1992;
Winograd and Flores, 1986; Flores, Graves, Hartfield,
and Winograd, 1988). The review of the impact of new
technology in one operational world effectively
summarizes the general pattern (Cordesman and
Wagner, 1996, p.25):
Much of the equipment deployed ... was designed to
ease the burden on the operator, reduce fatigue, and
simplify the tasks involved in operations. Instead, these
advances were used to demand more from the
operator. Almost without exception, technology did not
meet the goal of unencumbering the personnel
operating the equipment
... systems often required exceptional human expertise,
commitment, and endurance.
there is a natural synergy between tactics,
technology, and human factors ... effective leaders will
exploit every new advance to the limit. As a result,
virtually every advance in ergonomics was exploited to
ask personnel to do more, do it faster and do it in more
complex ways.
... one very real lesson is that new tactics and
technology simply result in altering the pattern of
human stress to achieve a new intensity and tempo of
operations. [edited to rephrase domain referents
generically]
This statement could have come from studies of the
impact of technological and organizational change in
health care or air traffic management or many other
areas undergoing change today (see Billings, 1997,
and Sarter and Amalberti, 2000, for the case of
cockpit automation). Overall, the studies show that
when “black box” new technology (and
accompanying organizational change) hits an ongoing
field of practice, the pattern of reverberation includes
(Woods and Dekker, 2000):
• New capabilities, which increase demands and create new complexities such as increased coupling across parts of the system and a higher tempo of operations;
• New complexities when technological possibilities are used clumsily;
• Adaptations by practitioners to exploit capabilities or work around complexities, because they are responsible to meet operational goals;
• Complexities and adaptations that are surprising, unintended side effects of the design intent;
• Failures that occasionally break through these adaptations, because of the inherent demands or because the adaptations are incomplete, poor, or brittle;
• Practitioner adaptations that hide the complexities from designers and after-the-fact reviewers, who judge failures to be due to human error.
The pattern illustrates a more general law of
adaptive systems that has been noted by many
researchers (e.g., Rasmussen, 1986; Hirschhorn,
1997)—
The law of stretched systems:
every system is stretched to operate at its capacity;
as soon as there is some improvement, for example
in the form of new technology, it will be exploited to
achieve a new intensity and tempo of activity.
Under pressure from performance and efficiency
demands, advances are consumed to ask operational
personnel “to do more, do it faster or do it in more
complex ways” (see NASA’s Mars Climate Orbiter
Mishap Investigation Board report, 2000, for an
example).
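A minimal toy simulation can make the dynamic behind the law of stretched systems concrete. The sketch below assumes a single capacity variable and a demand that adapts to consume a fixed fraction of any spare margin at each step; the variable names, the adaptation rate, and the one-time 20% capacity jump are illustrative assumptions, not measurements of any real system.

```python
# Illustrative toy model of the law of stretched systems (assumption-laden sketch):
# each capacity improvement is gradually consumed by rising operational demand,
# so the margin between capacity and demand returns toward zero.

def simulate_stretched_system(steps=30, adaptation_rate=0.5):
    capacity, demand = 100.0, 95.0
    history = []
    for t in range(steps):
        if t == 10:                          # a "new technology" step-change in capacity
            capacity *= 1.2
        margin = capacity - demand
        demand += adaptation_rate * margin   # demand adapts to consume spare capacity
        history.append((t, capacity, demand, capacity - demand))
    return history

if __name__ == "__main__":
    for t, cap, dem, margin in simulate_stretched_system():
        print(f"t={t:02d} capacity={cap:6.1f} demand={dem:6.1f} margin={margin:5.1f}")
```

Under these assumptions the margin reopens briefly after the capacity improvement and is then consumed again, which is the qualitative signature of the law: improvements are absorbed into a new intensity and tempo of activity.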
2. Watching People Engineer Cognitive
Work: Claims and Myths
People, as advocates for investment in and adoption
of new technology, make claims about how these
changes will affect cognitive work and the processes
and products of practice. Claims about the future of
practice if objects-to-be-realized are deployed
represent hypotheses about the dynamics of people,
technology and work (Woods, 1998). Observations at
points of technology change find that these
hypotheses can be, and often are, quite wrong—a kind
of second-order automation surprise (Sarter, Woods,
and Billings, 1997). Given the dynamic and adaptive
nature of the process, envisioning the future of
operations is fragile.
What patterns emerge from observations of people
engineering cognitive work or of people’s claims
about how various advances-in-process will enable
the re-engineering of cognitive work? Remarkably
consistently, we observe over-simplifications
(Feltovich et al., 1997) that claim the introduction of
new technology and systems into a field of practice
substitutes one agent for another: essentially,
computer capabilities as a substitute for erratic human
performance. Yes, the claims of opposition between
human and machine come cloaked in different and
often quite sophisticated forms, yet underneath, inter-
substitutability (Fitts’ List) remains the core—people
and machines are, or can be, equivalent, so that new
technology (with the right capabilities) can be
introduced as a simple substitution of machines for
people, preserving the system while improving the
results. This oversimplification fallacy is so persistent
it is best understood as a cultural myth—the
Substitution Myth (Woods and Tinapple, 1999).
The myth creates difficulties because it is empirically
wrong—adding or expanding the machine’s role
changes the cooperative architecture and human
roles, and introduces capabilities and complexities
that feed multiple adaptive cycles as human actors
and stakeholders jostle in pursuit of their goals.
Moreover, the myth is unproductive: it locks us into
cumbersome trial-and-error processes of
development, blocks understanding of the demands of
cognitive work in context and of how people in various
roles and groups adapt to those demands, and
channels energy away from innovating uses of the
continually expanding power of machine information
processing.
How can we better calibrate and ground claims
about the future of cognitive work to avoid past cycles
where change exacerbated clumsy use of technology
and limited adaptations from people responsible to
meet system goals? One possible tactic is to develop,
from the empirical base, generalizations or ‘laws’ that
govern cognitive work by any cognitive agent or any
set of cognitive agents. Such Laws could serve as a
guide to enhance the use of information processing
technology in a practice-centered R&D process
(Woods and Christoffersen, in press).
3. Predicting and Steering Change in
Cognitive Work
Based on patterns about cognitive work and about
the relationship of people and computers abstracted
from research on cognitive work in context, I offer four
families of Laws that Govern Cognitive Work as a
synthesis to guide future development of human-
computer cooperation (the approach is a deliberate
play off Conant’s 1976 laws of information that govern
systems). I also offer Norbert’s Contrast (Wiener,
1950) as an alternative conception of the relationship
between people and computers. The current draft set
of Laws is available from the author.
These laws are built on a foundation of agent-
environment mutuality. Agents' activities are
understandable only in relationship to the properties
of the environment within which they function, and an
environment is understood in terms of what it
demands and affords to potential actors in that world.
Each is mutually adapted to the other.
The Laws fall into four families plus Norbert's
Contrast. First, Laws of Adaptation build on original
insights of cybernetics and control (Ashby, 1957;
Conant, 1976). The driving force here is how cognitive
systems adapt to the potential for surprise in the
worlds of work, i.e., the foundational slogan for
Cognitive Systems Engineering from Jens Rasmussen:
adaptations directed at coping with complexity and
surprise (Rasmussen and Lind, 1981; Woods, 1988;
Woods and Christoffersen, in press).
Laws of Models are concerned with how we
understand and represent the processes we control
and the agents we interact with. The driving force here
is the mystery of how expertise is tuned to the future,
while, paradoxically, the data available is about the
past.
Laws of Collaboration address how cognitive work
is distributed over multiple agents and artifacts. The
driving force here is the fact that cognitive work
always occurs in the context of multiple parties and
interests as moments of private cognition punctuate
flows of interaction and coordination. The idea that
cognition is fundamentally social and interactive, not
private, radically shifts the basis for analyzing and
designing cognitive work and reconsidering the
relationship between people and computers.
Quite surprisingly, Laws of Responsibility are the
fourth family, driving home the point that in cognition
at work, whatever the artifacts and however
autonomous they may be under some conditions, people
create, operate, and modify these artifacts in human
systems for human purposes.
Fifth, based on these Laws, Norbert's Contrast goes
behind our fascination with increasing the power of
the computer to remind us of the limits of literal-
minded agents and the unique competences of
human cognition to handle the tradeoffs and
dilemmas of a changing, uncertain world of finite
resources (Wiener, 1950).
Norbert’s Contrast
Artificial agents are literal-minded and
disconnected from the world, while human
agents are context-sensitive and have a stake
in outcomes.
The key is that people and computers start from
opposite points and tend to fall back or
default to those points without the continued
investment of effort and energy from outside the
system.
Each of these families of Laws and Norbert's
Contrast is quite surprising, even shocking, given
conventional beliefs about cognition, organizations,
and computers. The Laws allow us to see past these
conventional beliefs to reconsider relationships
across people, computers, the goals of various
stakeholders and the complexities and variations in
the worlds of human activity as we envision and
create the future of operations.
Laws that Govern Cognitive Work have an odd
quality: they appear optional. Designers of systems
that perform cognitive work do not have to follow
them. In fact, we notice these laws through the
consequences that have followed repeatedly when
design breaks them in varying episodes of technology
change. The statements are law-like in that they
capture regularities of control and adaptation of
cognitive work, and they determine the dynamic
response, resilience, stability or instability of the
distributed cognitive system in question. While
developers may find following the laws optional, what
is not optional are the consequences that accrue
predictably from breaking these laws, consequences
that block achieving the performance goals that
developers and practitioners, technologists and
stakeholders set.
Respect for the Laws is essential, for in the final
analysis:
in design, we either hobble or support
people’s natural ability to express forms of
expertise.
Acknowledgments
This piece is a companion and follow-up to a previous
address to the Cognitive Science Society in 1994,
Observations from Studying Cognitive Systems in Context.
Many thanks to the various colleagues who in one way or
another helped identify how generalizations like these
operate in cognitive work.
Prepared in part through participation in the Advanced
Decision Architectures Collaborative Technology Alliance
sponsored by the Army Research Laboratory under
Cooperative Agreement DAAD 19-01-2-0009.
References
Ashby, W. R. (1957). An Introduction to Cybernetics.
Chapman and Hall, London.
Billings, C. E. (1997). Aviation Automation: The Search
For A Human-Centered Approach. Hillsdale, N.J.:
Lawrence Erlbaum Associates.
Carroll, J.M. & Rosson, M. B. (1992). Getting around
the task-artifact cycle: How to make claims and
design by scenario. ACM Transactions on
Information Systems. 10, 181-212.
Conant, R. C. (1976). Laws of information which
govern systems. IEEE Transactions on Systems,
Man, and Cybernetics, SMC-6, 240-255.
Cordesman, A. H. & Wagner, A. R. (1996). The
Lessons of Modern War, Vol.4: The Gulf War,
(Boulder, CO: Westview Press).
Craik, K. J. W. (1947) Theory of the operator in
control systems: I. The operator as an engineering
system. British Journal of Psychology, 38, 56-61.
Feltovich, P.J., Spiro, R.J., & Coulson, R.L. (1997).
Issues of expert flexibility in contexts characterized
by complexity and change. In P.J. Feltovich, K.M.
Ford, & R.R. Hoffman (eds.), Expertise in context:
Human and machine. Menlo Park, CA: AAAI/MIT
Press.
Flores, F., Graves, M., Hartfield, B. & Winograd, T.
(1988). Computer systems and the design of
organizational interaction. ACM Transactions on
Office Information Systems, 6, 153-172.
Hirschhorn, L. (1997). Quoted in Cook, R. I., Woods,
D. D. and Miller, C. (1998). A Tale of Two Stories:
Contrasting Views on Patient Safety. National
Patient Safety Foundation, Chicago IL, April 1998
(available at www.npsf.org).
NASA, Mars Climate Orbiter Mishap Investigation
Board. (2000). Report on Project Management at
NASA, March 13, 2000.
Rasmussen, J. (1986). Information processing and
human-machine interaction: An approach to
cognitive engineering. Amsterdam: North-Holland.
Rasmussen, J. & Lind M. (1981). Coping with
complexity (Risø-M-2293). Risø National Laboratory,
Roskilde, Denmark: Electronics Department.
Roesler, A., Feil, M. & Woods, D.D. (2002). Design is
Telling (Sharing) Stories about the Future. Draft
Working MediaPaper at url: http://csel.eng.ohio-
state.edu/animock
Sarter, N. & Amalberti, R., eds. (2000). Cognitive
Engineering in the Aviation Domain, Erlbaum,
Mahwah NJ.
Sarter, N., Woods, D.D. & Billings, C.E. (1997).
Automation Surprises. In G. Salvendy, editor,
Handbook of Human Factors/Ergonomics, second
edition, Wiley, New York.
Wiener, N. (1950). The Human Use of Human Beings:
Cybernetics and Society. New York: Doubleday.
Winograd, T. and Flores, F. (1986). Understanding
computers and cognition. Norwood, NJ, Ablex.
Woods, D.D. (1988). Coping with complexity: The
psychology of human behavior in complex systems.
In L.P. Goodstein, H.B. Andersen, and S.E. Olsen,
editors, Mental Models, Tasks and Errors, Taylor &
Francis, London, (p. 128-148).
Woods, D. D. (1998). Designs are hypotheses about
how artifacts shape cognition and collaboration.
Ergonomics, 41, 168-173.
Woods, D. D. & Christoffersen, K. (in press). Balancing
Practice-Centered Research and Design. In M.
McNeese and M. A. Vidulich (editors), Cognitive
Systems Engineering in Military Aviation Domains.
Wright-Patterson AFB, OH: Human Systems
Information Analysis Center.
Woods, D. D. & Dekker, S. W. A. (2000). Anticipating
the Effects of Technological Change: A New Era of
Dynamics for Human Factors. Theoretical Issues in
Ergonomics Science, 1(3).
Woods, D. D. & Tinapple, D. (1999). W3: Watching
Human Factors Watch People at Work. Presidential
Address, 43rd Annual Meeting of the Human
Factors and Ergonomics Society, September 28,
1999. Multimedia Production at http://csel.eng.ohio-
state.edu/hf99/