Posthuman Management
Creating Effective Organizations in an Age of
Social Robotics, Ubiquitous AI, Human
Augmentation, and Virtual Worlds
Second Edition
Matthew E. Gladden
Posthuman Management: Creating Effective Organizations in an Age of Social
Robotics, Ubiquitous AI, Human Augmentation, and Virtual Worlds
(Second Edition)
Copyright © 2016 Matthew E. Gladden
Published in the United States of America
by Defragmenter Media in association with West Pole & Larkspur,
imprints of Synthypnion Press LLC
Synthypnion Press LLC
Indianapolis, IN 46227
http://www.synthypnionpress.com
All rights reserved. No part of this publication may be reproduced, stored in a
retrieval system, or transmitted in any form or by any means, electronic, mechanical,
photocopying, recording, or otherwise, without the prior permission of the publisher,
apart from those fair-use purposes permitted by law.
Defragmenter Media
http://defragmenter.media
ISBN 978-1-944373-05-4 (print edition)
ISBN 978-1-944373-06-1 (ebook)
10 9 8 7 6 5 4 3
August 2016
Chapters One, Two, and Three of this volume were originally published as Parts
One (pp. 31-91), Two (pp. 93-131), and Three (pp. 133-201) of Gladden, Matthew E.,
Sapient Circuits and Digitalized Flesh: The Organization as Locus of Technological
Posthumanization. Indianapolis: Defragmenter Media, 2016. This volume’s Intro-
duction has been adapted from “The Posthumanized Organization as a Synergism
of Human, Synthetic, and Hybrid Agents,” published in that same text (pp. 17-30).
Chapter Five in this volume was originally published as Gladden, Matthew E.,
“Neural Implants as Gateways to Digital-Physical Ecosystems and Posthuman So-
cioeconomic Interaction,” in Digital Ecosystems: Society in the Digital Age, edited
by Łukasz Jonak, Natalia Juchniewicz, and Renata Włoch, pp. 85-98. Warsaw:
Digital Economy Lab, University of Warsaw, 2016.
Chapter Eight has been expanded and adapted from Gladden, Matthew E., “The
Social Robot as CEO: Developing Synthetic Charismatic Leadership for Human
Organizations,” in Sociable Robots and the Future of Social Relations: Proceedings
of Robo-Philosophy 2014, edited by Johanna Seibt, Raul Hakli, and Marco Nørskov,
pp. 329-339. Frontiers in Artificial Intelligence and Applications, vol. 273. Amster-
dam: IOS Press, 2014.
Chapter Nine is reprinted from Gladden, Matthew E., “The Diffuse Intelligent
Other: An Ontology of Nonlocalizable Robots as Moral and Legal Actors,” in So-
cial Robots: Boundaries, Potential, Challenges, edited by Marco Nørskov, pp. 177-
98. Farnham: Ashgate, 2016.
Chapter Ten was first published as Gladden, Matthew E., “Leveraging the Cross-
Cultural Capacities of Artificial Agents as Leaders of Human Virtual Teams,” in
Proceedings of the 10th European Conference on Management Leadership and Gov-
ernance, edited by Visnja Grozdanić, pp. 428-435. Reading: Academic Confer-
ences and Publishing International Limited, 2014.
Chapter Eleven was originally published as Gladden, Matthew E., “A Fractal Meas-
ure for Comparing the Work Effort of Human and Artificial Agents Performing
Management Functions,” in Position Papers of the 2014 Federated Conference on
Computer Science and Information Systems, edited by Maria Ganzha, Leszek
Maciaszek, and Marcin Paprzycki, pp. 219-226. Annals of Computer Science and
Information Systems vol. 3. Warsaw: Polskie Towarzystwo Informatyczne, 2014.
Chapter Twelve was originally published as Gladden, Matthew E., “Managerial
Robotics: A Model of Sociality and Autonomy for Robots Managing Human Beings
and Machines,” International Journal of Contemporary Management vol. 13, no. 3
(2014): 67-76.
Chapter Thirteen is a revised and expanded version of Gladden, Matthew E., “The
Artificial Life-Form as Entrepreneur: Synthetic Organism-Enterprises and the Re-
conceptualization of Business,” published in Proceedings of the Fourteenth Inter-
national Conference on the Synthesis and Simulation of Living Systems, edited by
Hiroki Sayama, John Rieffel, Sebastian Risi, René Doursat and Hod Lipson, pp.
417-18. Cambridge, MA: The MIT Press, 2014.
To all the managers, leaders, and executives
with whom I have had the opportunity to work and study –
who have given me an understanding and experience
of organizational life at its most meaningful and worthwhile
Brief Table of Contents
Preface ...................................................................................................... 13
Introduction: Technologization and the Evolution of Intelligent
Agency: Toward Posthuman Models of Organizational
Management ............................................................................................ 17
Part I: Creating the Human-Synthetic Organization: Background
and Theoretical Foundations ................................................................. 47
Chapter 1: A Typology of Posthumanism: A Framework for
Differentiating Analytic, Synthetic, Theoretical, and Practical
Posthumanisms .................................................................................................. 49
Chapter 2: Organizational Posthumanism ................................................... 109
Chapter 3: The Posthuman Management Matrix: Understanding the
Organizational Impact of Radical Biotechnological Convergence ............... 157
Chapter 4: Organization Development and the Robotic-Cybernetic-
Human Workforce: Humanistic Values for a Posthuman Future? ..............239
Part II: Human Augmentation: A Closer Look ....................................... 257
Chapter 5: Neural Implants as Gateways to Digital-Physical
Ecosystems and Posthuman Socioeconomic Interaction ............................. 259
Chapter 6: The Impacts of Human Neurocybernetic Enhancement
on Organizational Business Models ................................................................ 275
Chapter 7: Implantable Computers and Information Security: A
Managerial Perspective .....................................................................................285
Part III: Robotics and Artificial Intelligence: A Closer Look .................. 301
Chapter 8: The Social Robot as CEO: Developing Synthetic
Charismatic Leadership for Human Organizations ....................................... 303
Chapter 9: The Diffuse Intelligent Other: An Ontology of
Nonlocalizable Robots as Moral and Legal Actors ......................................... 345
Chapter 10: Leveraging the Cross-Cultural Capacities of Artificial
Agents as Leaders of Human Virtual Teams ................................................... 371
Chapter 11: A Fractal Measure for Comparing the Work Effort of
Human and Artificial Agents Performing Management Functions ............. 385
Chapter 12: Managerial Robotics: A Model of Sociality and
Autonomy for Robots Managing Human Beings and Machines ................. 407
Chapter 13: Developing a Non-anthropocentric Definition of
Business: A Cybernetic Model of the Synthetic Life-form as
Autonomous Enterprise .................................................................................... 417
Index ......................................................................................................428
About the Author ................................................................................... 438
Detailed Table of Contents
Preface ...................................................................................................... 13
Introduction: Technologization and the Evolution of Intelligent
Agency: Toward Posthuman Models of Organizational
Management ............................................................................................ 17
Part I: Creating the Human-Synthetic Organization: Background
and Theoretical Foundations ................................................................. 47
Chapter 1: A Typology of Posthumanism: A Framework for
Differentiating Analytic, Synthetic, Theoretical, and Practical
Posthumanisms .................................................................................................. 49
I. Introduction ........................................................................................................ 50
II. Established Definitions of Posthumanism ...................................................... 52
III. A Proposed Two-dimensional Typology of Posthumanism ......................... 56
IV. Classification and Analysis of Individual Forms of Posthumanism ............ 61
V. Conclusion........................................................................................................ 102
Chapter 2: Organizational Posthumanism ................................................... 109
I. Introduction ....................................................................................................... 110
II. Definition of Organizational Posthumanism................................................. 114
III. Classification of Organizational Posthumanism as a Type of
Posthumanism ....................................................................................................... 115
IV. Organizational Posthumanization as Reflected in Organizational
Elements ................................................................................................................ 120
V. Technological Changes Facilitating Organizational
Posthumanization ................................................................................................ 128
VI. Conclusion ....................................................................................................... 143
Chapter 3: The Posthuman Management Matrix: Understanding the
Organizational Impact of Radical Biotechnological Convergence ............... 157
I. Introduction .......................................................................................................158
II. Formulating the Posthuman Management Matrix........................................ 163
III. Using the Matrix to Analyze the Traditional Practice of
Organizational Management .............................................................................. 182
IV. Using the Matrix to Predict and Shape the Future Practice of
Organizational Management .............................................................................. 189
V. Conclusion........................................................................................................ 219
Chapter 4: Organization Development and the Robotic-Cybernetic-
Human Workforce: Humanistic Values for a Posthuman Future? ..............239
Introduction .......................................................................................................... 239
OD’s Humanistic Values ...................................................................................... 239
The Posthuman Future Confronting OD ........................................................... 241
OD’s Possible Stances toward Posthuman Technology.................................... 245
Conclusion ............................................................................................................. 251
Part II: Human Augmentation: A Closer Look ....................................... 257
Chapter 5: Neural Implants as Gateways to Digital-Physical
Ecosystems and Posthuman Socioeconomic Interaction ............................. 259
Introduction .......................................................................................................... 259
From Neuroprosthetic Devices to Posthuman Digital-Physical
Ecosystems ............................................................................................................ 260
Formulating our Model for an Ontology of Neuroprosthetics ........................ 263
Impacts Captured by Our Model ........................................................................ 263
Applying the Model: Toward a New Typology of Neuroprosthetics ................ 271
Conclusion ............................................................................................................ 272
Chapter 6: The Impacts of Human Neurocybernetic Enhancement
on Organizational Business Models ................................................................ 275
Introduction .......................................................................................................... 275
The Definition of a ‘Business Model’ .................................................................. 276
Circumstances Enabling (or Requiring) a New Business Model ..................... 277
Envisioning Posthuman Neuroprosthetics ........................................................ 278
The Impact of Neuroprosthetics on Business Models ...................................... 279
Example: The Impact of a Particular Kind of Sensory Playback Device ......... 281
Conclusion ............................................................................................................ 282
Chapter 7: Implantable Computers and Information Security: A
Managerial Perspective .....................................................................................285
Introduction .......................................................................................................... 286
The Fundamentals of Implantable Computers ................................................. 287
Information Security as Applied to Implantable Computers ........................... 288
An InfoSec Scenario: Securing a Sensory Neuroprosthetic .............................. 291
Avenues for Future Research ..............................................................................296
Conclusion ............................................................................................................ 297
Part III: Robotics and Artificial Intelligence: A Closer Look .................. 301
Chapter 8: The Social Robot as CEO: Developing Synthetic
Charismatic Leadership for Human Organizations ....................................... 303
I. Introduction ...................................................................................................... 304
II. The CEO as Leader: The Need to Inspire, Motivate, and Persuade ............ 306
III. Referent Power and the Exercise of Charismatic Authority by
Robots .................................................................................................................... 308
IV. Three Ways in Which We Envision Charismatic Robots as Leaders
of Human Beings ................................................................................................... 312
V. Charismatic Robot Leaders’ Appeal to Different Aspects of the
Human Person ...................................................................................................... 322
VI. Charismatic Robot Leaders as Effective CEOs............................................. 323
VII. The Process by Which We Will (Not) Choose Charismatic Robot
Leaders .................................................................................................................. 326
VIII. The Virtualization of Relationships: Will We Know Whether Our
CEO is a Robot? .................................................................................................... 330
IX. How Long Will Charismatic Robots Have the Desire and Ability to
Lead Us? ................................................................................................................ 332
X. Conclusion ........................................................................................................ 335
Chapter 9: The Diffuse Intelligent Other: An Ontology of
Nonlocalizable Robots as Moral and Legal Actors ......................................... 345
Introduction .......................................................................................................... 345
Developing an Ontological Framework ............................................................. 348
The Current Debate over Moral and Legal Responsibility for Robotic
Actors .................................................................................................................... 353
The Future of Moral and Legal Responsibility for Nonlocalizable
Robots .................................................................................................................... 357
Developing Legal and Ethical Frameworks for Autonomous
Nonlocalizable Robots ......................................................................................... 362
Conclusion ............................................................................................................ 365
Chapter 10: Leveraging the Cross-Cultural Capacities of Artificial
Agents as Leaders of Human Virtual Teams ................................................... 371
Background and Introduction ............................................................................. 372
Our Proposed Two-dimensional Model for Designing AA Team
Leaders’ Cultural Behaviors ................................................................................. 376
The Four Types of AA Leaders as Categorized According to Their
Cultural Behavior ................................................................................................. 377
Avenues for Empirical Research Regarding this Model.................................... 382
Discussion and Conclusion ................................................................................. 382
Chapter 11: A Fractal Measure for Comparing the Work Effort of
Human and Artificial Agents Performing Management Functions ............. 385
The Need for a Common Temporal Measure of Work Effort .......................... 385
Measures of Work Effort for Computer-based Systems ................................... 387
Measures of Work Effort for Human Employees .............................................. 389
Calculating the Fractal Dimension of Work Effort ........................................... 394
Formulating our Fractal Measure ....................................................................... 396
Applying our Measure to Particular Cases ......................................................... 398
Analysis and Discussion ..................................................................................... 400
Chapter 12: Managerial Robotics: A Model of Sociality and
Autonomy for Robots Managing Human Beings and Machines ................. 407
Introduction and Background ............................................................................. 407
Research Aims and Questions ........................................................................... 408
Methodology ........................................................................................................ 409
Findings .................................................................................................................. 411
Discussion and Conclusions ................................................................................ 414
Chapter 13: Developing a Non-anthropocentric Definition of
Business: A Cybernetic Model of the Synthetic Life-form as
Autonomous Enterprise .................................................................................... 417
Introduction .......................................................................................................... 418
Understanding Businesses and Organisms as Viable Systems ........................ 418
Beyond ‘Human Resources’: Replacing Anthropocentric Definitions of
Business ................................................................................................................. 420
Identifying the Core Dynamics of Business through the Viable Systems
Approach ............................................................................................................... 420
Execution of the Business Process Cycle as a Byproduct (or Purpose) of
Artificial Life ......................................................................................................... 422
Challenges to the Development of Synthetic Organism-Enterprises ............. 423
Evolution of Artificial Life through Competition in the Human
Economy................................................................................................................ 424
Specific Examples and Practical Implications of Artificial Life as
Enterprise .............................................................................................................. 425
Conclusion ............................................................................................................ 425
Index ......................................................................................................428
About the Author ................................................................................... 438
Preface
The purpose of this book is to offer a posthumanist perspective on organi-
zational management that is largely lacking in the current literature but
which, I believe, will grow increasingly important in the coming years. The
impact of accelerating technological change on contemporary organizations
is undeniable, and in recent decades posthumanist thought has developed a
rich array of approaches for identifying, understanding, and predicting the
future path of such forces of ‘technologization’ in other spheres of human ac-
tivity. However, there is a relative dearth of scholarship that seeks to make
posthumanist methodologies and insights relevant and accessible for theorists
and practitioners of contemporary organizational management. The leading
works on posthumanism have generally appeared within fields like literary
criticism or bioethics that have little direct application to organizational man-
agement. Conversely, the numerous management texts exploring the impact
of emerging technologies on organizations typically display a functional or
strategic perspective that is very practical; rarely do they robustly address (or
even acknowledge) the deep-seated transformation of the nature of human
agency and our relationship to our environment that is being driven by ongo-
ing technological change and whose ontological, phenomenological, psycho-
logical, aesthetic, ethical, legal, and political dimensions are being analyzed in
a constructive fashion by the many varieties of posthumanist thought.
In its own way, each of the texts gathered in this volume seeks to build a
bridge between posthumanism and the world of organizational theory and
management. The first three chapters originally appeared in my book Sapient
Circuits and Digitalized Flesh: The Organization as Locus of Technological
Posthumanization, and they provide a systematic theoretical framework that
undergirds and ties together this entire volume. The remainder of the book
comprises chapters that were originally published as articles in academic journals
or as chapters in edited volumes and other collections, or that were prepared
especially for this book. Each of these chapters employs a qualitative or quantitative
methodology to investigate some particular aspect of management in
a posthuman context. My hope is that this volume will prove useful for man-
agement scholars and students who are interested in the posthumanizing as-
pect of emerging technologies, as well as for managers whose decisions affect
the lives of countless people and who are seeking to better understand the
rapidly evolving sociotechnological dynamics that are reshaping their organ-
izations – and in particular, the advent of new forms of artificial agency, tech-
nologically augmented human agency, and hybrid human-synthetic agency.
The preparation of this volume would have been impossible were it not for
the many organizations and individuals who allowed me to share and receive
feedback on my research over the last couple of years by publishing texts in
journals and books and delivering lectures and conference presentations. I am
grateful to everyone who made possible those opportunities for dialogue. In
particular, I am thankful to the faculty, staff, and students of the universities
and other research institutions where ideas contained in this volume were first
presented, including those of Aarhus University, VERN' University of Applied
Sciences, the Jagiellonian University, the Facta Ficta Research Center, the
Warsaw University of Technology, the University of Silesia in Katowice, the
Centrum Informacji Naukowej i Biblioteka Akademicka (CINiBA) in Kato-
wice, the Faculty of Humanities of the AGH University of Science and Tech-
nology, the Institute of Computer Science of the Polish Academy of Sciences,
and the Digital Economy Lab of the University of Warsaw. I am also thankful
to the editors who have made possible the publication of my previous research
into these topics, including those at the MIT Press, IOS Press, Ashgate Pub-
lishing, Creatio Fantastica, the Annals of Computer Science and Information Sys-
tems, Informatyka Ekonomiczna / Business Informatics, Annales: Ethics in Eco-
nomic Life, the International Journal of Contemporary Management, and Fronda
Lux. In particular, I offer my deepest gratitude to Krzysztof Maj, Ksenia
Olkusz, Mateusz Zimnoch, Magdalena Szczepocka, Marco Nørskov, Johanna
Seibt, Helena Dudycz, Natalia Juchniewicz, Renata Włoch, Mateusz Matysz-
kowicz, and Jerzy Kopański.
I am also thankful to many individuals who have been involved with the
management and governance of organizations of diverse types from whom I
have learned so much, including Mariann M. Payne, Rita M. Rodriguez, Mat-
thew R. Hummer, Harry A. Rissetto, Anthony F. Essaye, John G. Quigley,
George R. Houston, Jr., Timothy Brown, S.J., Gerald J. Chojnacki, S.J., John C.
Haughey, S.J., Thomas E. Scheye, Nancy L. Swartz, Rob Eby, Jillian Gobbo,
Joseph A. McCartin, James T. Lamiell, Brad Kelly, Aisulu Raspayeva, Maria
Ferrara, Keelan Downton, Laura Michener, Marley Moynahan, Kathryn
Tucker, David Wilkins, John McKelvie, Kevin Flannery, Christian Lambert,
Tessa Pulaski, Madeline Howard, Pamela Paniagua, Tom Rijntjes, Paweł Ur-
bański, and Nathan Fouts. I am also deeply thankful to everyone at
Georgetown University’s School of Continuing Studies, especially Edwin
Schmierer and Douglas M. McCabe; to Serge Pukas, Paulina Krystosiak, Rob-
ert Pająk, Jacek Koronacki, and everyone affiliated with the Institute of Com-
puter Science of the Polish Academy of Sciences; to Nicole Cunningham for
her friendship and inspirational example as an author; and to Sarah Stern for
shaping me as a writer and editor.
I am especially thankful to those scholars who are fashioning innovative
links between posthumanist thought and various spheres of human existence
whom I have had the opportunity to hear present their research and with
whom I have enjoyed valuable conversations about these topics – especially
Bartosz Kłoda-Staniecko, Agata Kowalewska, Krzysztof Maj, and Magdalena
Szczepocka.
As always, I am deeply grateful to my parents, my brother, and other rela-
tives and friends for the support that they have provided me in my research,
and I offer my heartfelt thanks to my wife for her sound counsel and continual
encouragement. And, especially, I am grateful to Terry R. Armstrong and
Gasper Lo Biondo, S.J., without whom I would never have acquired the man-
agerial experience, the knowledge, or – most importantly – the inspiration
that made it possible for me to embark upon the work of preparing this text.
The individuals mentioned above – along with many other conference at-
tendees and anonymous reviewers who provided feedback on my papers –
have contributed greatly to developing whatever insights and value are to be
found within this book. Whatever flaws or limitations remain within the text
are due not to any failures on their part but are my responsibility alone.
Matthew E. Gladden
Pruszków, July 5, 2016
Introduction
Technologization and the Evolution
of Intelligent Agency:
Toward Posthuman Models of
Organizational Management
Abstract. We live in an era of accelerating technological posthumanization
in which the form and capacities of human and artificial agents are
converging in ways that might be understood as either exciting or
unsettling. Ongoing developments in fields like biocybernetics,
neuroprosthetics, wearable computing, virtual reality, and genetic
engineering are yielding technologically augmented human beings who
possess physical components and behaviors resembling those traditionally
found in electronic computers. Meanwhile, developments in artificial
intelligence, social robotics, artificial life, nanotechnology, and
ubiquitous computing are creating synthetic entities whose structures and
processes ever more closely resemble those of living organisms. Such human
and nonhuman agents exist and interact within increasingly sophisticated
digital-physical ecosystems in which entities shift continually between
actual and virtual worlds. Insofar as such agents constitute the
building-blocks of contemporary organizations, the processes of
technological posthumanization that are transforming them are also
reshaping the theory and practice of organizational management. Posthuman
Management provides a wide-ranging and systematic investigation of these
issues by collecting relevant texts recently published in academic journals
along with original content prepared for this volume. This introductory
chapter to Posthuman Management presents an overview of the major issues
explored within the volume’s three parts and the methodologies employed. In
Part I of the book, a general theoretical and practical framework for the
field of posthuman management is developed. Each chapter approaches this
task at a different level, moving from the more abstract sphere of a basic
exploration of the nature of posthumanism to the more concrete sphere of
formulating tools for posthumanized business analysis and considering the
implications of posthumanization for a specific management discipline (in
this case, organization development). The three chapters in Part II take a
closer look at the ways in which organizational management will be affected
by the posthumanizing augmentation of human beings through technologies
such as neuroprosthetics, virtual reality, and genetic engineering.
Finally, the six chapters of Part III explore in more depth the ways in
which increasingly advanced technologies for robotics, artificial
intelligence, and artificial life will impact organizational management.
1. This chapter is an expanded version of “The Posthumanized Organization as a Synergism of
Human, Synthetic, and Hybrid Agents,” originally published as the introduction to Gladden,
Matthew E., Sapient Circuits and Digitalized Flesh: The Organization as Locus of Technological
Posthumanization, pp. 17-30. Indianapolis: Defragmenter Media, 2016.
The realities of organizational life are quickly catching up with the visions
long explored by science fiction writers and futurists. Many of us can now
expect to experience during our working lives a world in which ‘ordinary’
human beings labor alongside artificial general intelligences, social robots, sa-
pient networks, nanorobotic swarms, and human beings with genetically en-
gineered capacities and neurocybernetic implants. What strange new features
might such a world contain? A robot boss embraced by its human subordi-
nates because it is more empathetic, fair, honest, intelligent, and creative than
its human predecessor. A customized product and marketing campaign de-
signed for a single human consumer by an AI that can deduce the consumer’s
deepest fears and desires. Artificial life-forms that function as self-contained
‘businesses’ by gathering resources from the environment, transforming them
into products, and selling them to consumers, all without the involvement of
any human beings. Intelligent, evolvable bioelectronic viruses that can infect
an organization’s information infrastructure by moving effortlessly back and
forth between human employees and their computers. Corporate espionage
conducted by hacking into the video stream of a rival CEO’s robotic eye. An
office building or manufacturing facility or orbiting satellite or tropical resort
where an organization’s employees gather every day to work but which exists
only as a persistent virtual world experienced using immersive multisensory
VR. Employees who engage their colleagues as avatars within virtual environ-
ments, without knowing or caring whether a particular coworker is a ‘normal’
human being, uploaded human mind, social robot, or artificial general intelli-
gence. Different classes and subclasses of ‘metahuman’ and ‘parahuman’ em-
ployees and customers who have been genetically engineered to possess rad-
ically nonhuman minds and bodies. Human workers who no longer control
the intellectual property rights to their own thoughts, dreams, or memories,
because they were produced with the assistance of neuroprosthetic implants
Introduction: Technologization and the Evolution of Intelligent Agency • 19
or cognitive enhancement software provided by their employer. Human be-
ings who are unable to quit their jobs because they rely on their employers
for a lifetime supply of antivirus updates, immunosuppressive drugs, or phys-
ical maintenance for their full cyborg body. Human workers whose invasive
neural interfaces allow them to dwell permanently within virtual worlds and
whose physical bodies must be cared for by their employer’s biomedical sup-
port division. Neurocybernetically linked human workers who lose their per-
sonal identity and merge to form a hive mind whose members share collective
sensations, emotions, and volitions. A vast, tangled, digital-physical ecosys-
tem in which an organization’s human and synthetic employees, buildings,
vehicles, manufacturing equipment, databases, products, and customers are all
cybernetically linked through their participation in the ‘Internet of Being.’
Such possibilities terrify some of us just as they exhilarate others. Because
of the ongoing rapid technological developments taking place in many fields,
these hypothetical scenarios present all who are involved with the study or
management of organizations with complex ethical, legal, and operational
questions whose thoughtful consideration cannot easily be further delayed.
The Ongoing Posthumanization of Organizations
It is widely acknowledged that the nature of human organizations is un-
dergoing a profound transformation. Historic approaches to long-term strate-
gic planning are increasingly being rendered obsolete by intensifying forces
of globalized competition, rising worker mobility, and the breathtaking pace
of technological change that is driving organizations of all types to devote
growing resources to activities like online commerce, social media, cloud com-
puting, data mining, and the development of artificially intelligent tools.[2] Rich
bodies of scholarship and best practices have been formulated to guide organ-
izations in grappling with such change. However, while such analyses are of
great practical value for informing decision-making in areas like marketing,
sales, logistics, and finance, they have barely begun to plumb the deeper forces
which – at an ontological and phenomenological level – are reshaping human
beings’ capacity and desire to join with one another in the organized pursuit
of shared goals.
[2] For a discussion of the ways in which multidimensional and synergistic ‘business models’ of
the sort pioneered by technology firms are now supplementing or supplanting previous types of
linear ‘business plans,’ see, e.g., Magretta, “Why Business Models Matter” (2002); Casadesus-
Masanell & Ricart, “How to Design a Winning Business Model” (2011); and DaSilva & Trkman,
“Business Model: What It Is and What It Is Not” (2014). Regarding the increasing difficulty – or
even futility – of attempting to secure a competitive advantage of a lasting structural nature for
an organization, see McGrath, The End of Competitive Advantage: How to Keep Your Strategy
Moving as Fast as Your Business (2013).
Among the more noteworthy forces driving such change are those which
can be collectively described as processes of posthumanization.[3] The dynamics
of posthumanization are rewriting long-accepted rules about the kinds of en-
tities that can serve as members of organizations, the sorts of structures that
organizations can adopt to facilitate and regulate their internal activities, and
the range of roles that organizations can play in their broader environment.
One critical manifestation of posthumanization is seen in the changing nature
of intelligent agency within our world. For millennia, organizations were fash-
ioned and led by intelligent agents in the form of human beings – sometimes
assisted by intelligent (though not sapient) agents in the form of dogs, horses,
and other kinds of domesticated animals that filled specialized roles in support
of their human caretakers. In many human societies, over the last century the
role of animals as intelligent agents participating in the work of organizations
has declined, while a new form of intelligent agent has emerged to take on
roles critical to organizations’ successful functioning: namely, computerized
systems that are capable of collecting and processing information and then
selecting and pursuing a course of action.
The conceptual and functional distinction between the sort of ‘bioagency’
exercised by human beings and the ‘synthetic agency’ exercised by such elec-
tronic computerized systems was originally quite clear.[4] However, the array
of intelligent agency present and active in organizations is now undergoing a
rapid evolution, thanks to the emergence of new technologies for social robot-
ics, artificial intelligence, artificial life, neurocybernetic augmentation, and ge-
netic engineering.[5] Through our increasingly intimate integration of comput-
erized devices into our cognitive processes and bodies, human agency is tak-
ing on aspects traditionally seen in artificial agents; the notion of the ‘cyborg’
is no longer simply a concept found in science fiction but – to a greater or
lesser degree – an accurate description of ourselves and the people we meet
[3] For an in-depth discussion of technological and nontechnological forms of posthumanization,
see Chapter One of this book, “A Typology of Posthumanism: A Framework for Differentiating
Analytic, Synthetic, Theoretical, and Practical Posthumanisms,” and Herbrechter, Posthumanism:
A Critical Analysis (2013).
[4] The ‘bioagency’ possessed by biological entities like human beings and ‘cyberagency’ demon-
strated by artificial entities are distinguished in Fleischmann, “Sociotechnical Interaction and
Cyborg–Cyborg Interaction: Transforming the Scale and Convergence of HCI” (2009).
[5] Such technologies are discussed in detail in Chapter Three of this book, “The Posthuman Man-
agement Matrix: Understanding the Organizational Impact of Radical Biotechnological Conver-
gence.”
around us every day.[6] At the same time, developments in the fields of robotics
and AI are creating synthetic systems that possess levels of sociality, imagi-
nation, emotion, legal and moral responsibility, and metabolic processes re-
sembling those that had previously been seen only in biological entities like
human beings.[7]
Within organizations, information will be gathered and communicated,
strategic decisions made, and actions undertaken by a kaleidoscopic web of
intelligent agents which together form a complex cybernetic network. Such
entities may include ‘natural’ human beings who have not been biotechnolog-
ically modified; human beings possessing neuroprosthetic implants that pro-
vide extensive sensory, motor, and cognitive enhancement;[8] human beings
whose physical structures and biological processes have been intentionally
sculpted through genetic engineering;[9] human beings who spend all of their
time dwelling in virtual worlds;[10] virtualized entities resulting from a process
[6] The ever-increasing aspects of ‘cyborgization’ reflected in the minds and bodies of typical hu-
man beings are discussed, e.g., in Haraway, Simians, Cyborgs, and Women: The Reinvention of
Nature (1991); Tomas, “Feedback and Cybernetics: Reimaging the Body in the Age of the Cy-
borg” (1995); Hayles, How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and
Informatics (1999); Clark, Natural-born cyborgs: Minds, Technologies, and the Future of Human
Intelligence (2004); and Fleischmann (2009).
[7] A comprehensive review of advances in developing sociality, emotions, and other cognitive
and biological capacities for robots is found in Friedenberg, Artificial Psychology: The Quest for
What It Means to Be Human (2008). For the ability of robots to bear responsibility for their actions,
see, e.g., Calverley, “Imagining a non-biological machine as a legal person” (2008); Coeckelbergh,
“From Killer Machines to Doctrines and Swarms, or Why Ethics of Military Robotics Is Not (Nec-
essarily) About Robots” (2011); and Gladden, “The Diffuse Intelligent Other: An Ontology of
Nonlocalizable Robots as Moral and Legal Actors” (2016).
[8] For anticipated growth in the use of implantable neuroprosthetic devices for purposes of human
enhancement, see, e.g., McGee, “Bioelectronics and Implanted Devices” (2008); Gasson, “Human
ICT Implants: From Restorative Application to Human Enhancement” (2012); and Gladden,
“Neural Implants as Gateways to Digital-Physical Ecosystems and Posthuman Socioeconomic
Interaction” (2016).
[9] See, e.g., Panno, Gene Therapy: Treating Disease by Repairing Genes (2005); Bostrom, “Human
Genetic Enhancements: A Transhumanist Perspective” (2012); De Melo-Martín, “Genetically
Modified Organisms (GMOs): Human Beings” (2015); and Nouvel, “A Scale and a Paradigmatic
Framework for Human Enhancement” (2015).
[10] Implications of long-term immersion in virtual reality environments are discussed in Bain-
bridge, The Virtual Future (2011); Heim, The Metaphysics of Virtual Reality (1993); Geraci, Apoc-
alyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality (2010); and
Koltko-Rivera, “The potential societal impact of virtual reality” (2005).
of ‘mind uploading’;[11] artificial general intelligences;[12] social robots;[13]
decentralized nanorobotic swarms;[14] artificial organic or electronic
life-forms,[15] including virtual or physical robots that evolve through
processes of mutation and natural selection;[16] sentient or sapient
networks;[17] and ‘hive minds’ comprising groups of diverse agents linked in
such a way that they can share collective sensory experiences, emotions, and
volitions.[18]
[11] For perspectives on ‘mind uploading’ (including issues that may render it impossible), see
Moravec, Mind Children: The Future of Robot and Human Intelligence (1990); Hanson, “If uploads
come first: The crack of a future dawn” (1994); Proudfoot, “Software Immortals: Science or
Faith?” (2012); Pearce, “The Biointelligence Explosion” (2012); Koene, “Embracing Competitive
Balance: The Case for Substrate-Independent Minds and Whole Brain Emulation” (2012); and
Ferrando, “Posthumanism, Transhumanism, Antihumanism, Metahumanism, and New Materi-
alisms: Differences and Relations” (2013), p. 27.
[12] Potential paths to the development of artificial general intelligence and obstacles to its creation
are discussed in, e.g., Artificial General Intelligence, edited by Goertzel & Pennachin (2007); The-
oretical Foundations of Artificial General Intelligence, edited by Wang & Goertzel (2012); and Ar-
tificial General Intelligence: 8th International Conference, AGI 2015: Berlin, Germany, July 22-25,
2015: Proceedings, edited by Bieger et al. (2015).
[13] Robots that can interact socially with human beings are discussed in, e.g., Breazeal, “Toward
sociable robots” (2003); Gockley et al., “Designing Robots for Long-Term Social Interaction”
(2005); Kanda & Ishiguro, Human-Robot Interaction in Social Robotics (2013); Social Robots and the
Future of Social Relations, edited by Seibt et al. (2014); Social Robots from a Human Perspective,
edited by Vincent et al. (2015); and Social Robots: Boundaries, Potential, Challenges, edited by Nør-
skov (2016).
[14] Swarm robotics are discussed in, e.g., Arkin & Hobbs, “Dimensions of communication and
social organization in multi-agent robotic systems” (1993); Barca & Sekercioglu, “Swarm robotics
reviewed” (2013); and Brambilla et al., “Swarm robotics: a review from the swarm engineering
perspective” (2013). Regarding nanorobotic swarms, see, e.g., Ummat et al., “Bionanorobotics: A
Field Inspired by Nature” (2005), and Pearce (2012).
[15] Artificial life-forms are discussed, e.g., in Andrianantoandro et al., “Synthetic biology: new
engineering rules for an emerging discipline” (2006); Cheng & Lu, “Synthetic biology: an emerg-
ing engineering discipline” (2012); and Gladden, “The Artificial Life-Form as Entrepreneur: Syn-
thetic Organism-Enterprises and the Reconceptualization of Business” (2014). For the relation-
ship of artificial life and evolutionary robotics, see Friedenberg (2008), pp. 201-16.
[16] Evolutionary robotics and evolvable robotic hardware are reviewed in Friedenberg (2008), pp.
206-10.
[17] For a self-aware future Internet that is potentially a sort of living entity, see Hazen, “What is
life?” (2006). Regarding a future Internet that is ‘self-aware’ even if not subjectively conscious,
see Galis et al., “Management Architecture and Systems for Future Internet Networks” (2009),
pp. 112-13. A sentient Internet is also discussed in Porterfield, “Be Aware of Your Inner Zom-
bie” (2010), p. 19. Regarding collectively conscious networks and a “post-internet sentient net-
work,” see Callaghan, “Micro-Futures” (2014).
[18] For detailed taxonomies and classification systems for potential kinds of hive minds, see
Chapter 2, “Hive Mind,” in Kelly, Out of control: the new biology of machines, social systems and
the economic world (1994); Kelly, “A Taxonomy of Minds” (2007); Kelly, “The Landscape of
Possible Intelligences” (2008); Yonck, “Toward a standard metric of machine intelligence” (2012);
and Yampolskiy, “The Universe of Minds” (2014). For discussion of systems whose behavior
resembles that of a hive mind without a centralized controller, see Roden, Posthuman Life:
Philosophy at the Edge of the Human (2014), p. 39. Hive minds are also discussed in Gladden,
“Utopias and Dystopias as Cybernetic Information Systems: Envisioning the Posthuman
Neuropolity” (2015). For critical perspectives on hive minds, see, e.g., Bendle, “Teleportation,
cyborgs and the posthuman ideology” (2002), and Heylighen, “The Global Brain as a New
Utopia” (2002).

At the forefront of efforts to understand and consciously shape this integration
of biological and artificial agents are those diverse bodies of thought and
practice that constitute the phenomenon of posthumanism. And yet, while insights
and methodologies from the field of posthumanism have been advantageously
applied to many other spheres of human activity, there have so far been very few
explicit links made between posthumanism and the work of integrating
posthumanized agents to form effective organizations. In this book, we endeavor
to inform and enhance contemporary approaches to the design and operation of
organizations by fashioning such a bridge between posthumanist thought and the
fields of organizational theory and management.

The book is divided into three main parts. In Part I, we examine topics that
provide a general theoretical framework for and introduction to management in a
posthumanized context. In Part II, we take a closer look at the ways in which
organizational management will be affected by the growing posthumanizing
augmentation of human beings through technologies such as neuroprosthetics,
implantable computing, virtual reality, and genetic engineering. In Part III, we
examine in more detail some of the ways in which increasingly advanced
technologies for robotics, artificial intelligence, and artificial life will
impact organizational management.

Part I: Posthuman Management: Background and
Theoretical Foundations

In Part I of the book, a general theoretical and practical framework for the
field of posthuman management is developed. Each chapter approaches this task at
a different level, moving from the more abstract sphere of basic investigations
into the nature of posthumanism to the more concrete sphere of formulating tools
for posthumanized business analysis and exploring the implications of
posthumanization for a specific management discipline (in this case,
organization development).

The Nature of Posthumanization and Posthumanism

In Chapter One, “A Typology of Posthumanism,” we consider the nature
of posthumanization and the many phenomena that have been described as
forms of ‘posthumanism,’ in order to situate organizational posthumanism
within a broader theoretical context. The array of activities that have been
described as ‘posthumanist’ is quite diverse, ranging from literary criticism of
Renaissance texts[19] and efforts by military research agencies to develop
futuristic technologies for human enhancement[20] to spiritual movements and
specific styles of performance art.[21]
The question thus arises of whether these
phenomena share anything in common at all – and if so, what is their shared
dynamic and what are the characteristics that distinguish these different
forms of posthumanism.
Much excellent work has been carried out by Ferrando, Herbrechter, Birn-
bacher, Miah, Miller, and others that explores the conceptual foundations of
posthumanism. However, among such studies it can be noted that those re-
search articles which are especially comprehensive and systematic in scope[22]
must often – due to space limitations – refrain from exploring any particular
form of posthumanism in depth. Meanwhile, the book-length analyses of
posthumanism that are exceptionally thorough and detailed in their approach
often focus on a single aspect of posthumanism rather than attempting to sur-
vey the phenomenon as a whole.[23]
Moreover, existing analyses of posthuman-
ism tend to emerge from fields such as critical theory, cultural studies, philos-
ophy of technology, and bioethics; from the perspective of one who is inter-
ested in organizational theory and management, it takes considerable work to
extract meaningful insights from such studies and reinterpret and apply them
in ways relevant to organizational life.[24]
[19] See, e.g., Posthumanist Shakespeares, edited by Herbrechter & Callus (2012).
[20] For examples of the term ‘posthuman’ being used to describe technologies whose development
is being pursued by DARPA and other military research and development agencies, see, e.g.,
Coker, “Biotechnology and War: The New Challenge” (2004); Graham, “Imagining Urban War-
fare: Urbanization and U.S. Military Technoscience” (2008), p. 36; and Krishnan, “Enhanced
Warfighters as Private Military Contractors” (2015).
[21] The spiritual aspects of some forms of transhumanism have been noted by numerous scholars;
see, e.g., Bostrom, “Why I Want to Be a Posthuman When I Grow Up” (2008), p. 108, and Her-
brechter (2013), pp. 103-04. The neohumanist spiritual movement developed by Sarkar might
also be considered a form of posthumanism; see Sarkar, “Neohumanism Is the Ultimate Shelter
(Discourse 11)” (1982), and the discussion of such neohumanism in Chapter One of this book, “A
Typology of Posthumanism.” The form of metahumanism developed by Del Val and Sorgner
applies posthumanist ideals to performance art; see Del Val & Sorgner, “A Metahumanist Mani-
festo” (2011), and Del Val et al., “Interview on the Metahumanist Manifesto with Jaime del Val
and Stefan Lorenz Sorgner” (2011).
[22] For example, see the insightful discussion in Ferrando (2013).
[23] Such an exposition and investigation of critical posthumanism is found, e.g., in Herbrechter
(2013).
[24] There are several forward-thinking works of management scholarship that consider the
impacts that posthumanizing technologies will have on future organizations; however, they do so
without describing posthumanizing technologies as such or drawing significantly on the
theoretical or methodological aspects of posthumanism. Such works might better be understood as
a form of ‘management futurology’ grounded solidly in the field of organizational management
rather than as a bridge between management and posthumanism. They include studies such as
Berner’s comprehensive review of the management implications of futuristic technologies in
Management in 20XX: What Will Be Important in the Future – A Holistic View (2004). Posthumanist
themes are considered more explicitly – although in a narrowly focused context – in, e.g., Mara
& Hawk, “Posthuman rhetorics and technical communication” (2009), and Barile, “From the
Posthuman Consumer to the Ontobranding Dimension: Geolocalization, Augmented Reality and
Emotional Ontology as a Radical Redefinition of What Is Real” (2013).

Chapter One attempts to synthesize and advance such existing analyses of
posthumanism in a way that lays a conceptual foundation for understanding the
varied processes of posthumanization that are relevant to specific topics in
organizational theory and management. We begin by formulating a comprehensive
typology that can be used to classify existing and future forms of
posthumanism. The framework suggests that a given form of posthumanism can be
classified either as analytic or synthetic and as either theoretical or
practical. An analytic posthumanism understands ‘posthumanity’ as a
sociotechnological reality that already exists in the contemporary world, such
as the nonanthropocentric outlook found among some present-day evolutionary
biologists, secular humanists, or animal-rights advocates that tends to minimize
the distinctions between human beings and other biological species. A synthetic
posthumanism is quite different: it understands ‘posthumanity’ as a collection
of hypothetical future entities – such as full-body cyborgs or genetically
engineered human beings – whose creation can either be intentionally realized or
prevented, depending on whether humanity decides to develop and deploy
particular technologies. A theoretical form of posthumanism is one that
primarily seeks to develop new knowledge or cultivate new ways of understanding
reality; posthumanist thought and study occurring on university campuses (and
especially within the humanities) are often of this sort. Finally, a practical
posthumanism seeks primarily to bring about some social, political, economic, or
technological change in the world: efforts to develop new cryonics technologies
or to engineer transhumanist genetic enhancements may be of this kind.

Arranging the properties of analytic/synthetic and theoretical/practical as two
orthogonal axes creates a grid that can be used to categorize a form of
posthumanism into one of four quadrants or as a hybrid posthumanism spanning all
quadrants. We argue that analytic theoretical forms of posthumanism can
collectively be understood as constituting a ‘posthumanism of critique’;
synthetic theoretical posthumanisms, a ‘posthumanism of imagination’; synthetic
practical posthumanisms, a ‘posthumanism of control’; analytic practical
posthumanisms, a ‘posthumanism of conversion’; and hybrid posthumanisms uniting
all four elements as a ‘posthumanism of production.’
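Purely as an illustrative aside (not drawn from the book itself), the quadrant scheme described above can be rendered as a simple lookup table. The quadrant labels come from the text, but the function and variable names below are invented for this sketch:

```python
# Sketch of the fourfold typology as a lookup table. Quadrant labels are
# taken from the text; all identifiers are invented for illustration.

QUADRANT_LABELS = {
    ("analytic", "theoretical"): "posthumanism of critique",
    ("synthetic", "theoretical"): "posthumanism of imagination",
    ("synthetic", "practical"): "posthumanism of control",
    ("analytic", "practical"): "posthumanism of conversion",
}

def classify_posthumanism(mode: str, orientation: str = "") -> str:
    """Label a form of posthumanism by its position on the two axes.

    mode: 'analytic', 'synthetic', or 'hybrid' (spanning all quadrants);
    orientation: 'theoretical' or 'practical' (ignored for 'hybrid').
    """
    if mode == "hybrid":
        return "posthumanism of production"
    return QUADRANT_LABELS[(mode, orientation)]
```

On this rendering, `classify_posthumanism("analytic", "theoretical")` yields `"posthumanism of critique"`, while the organizational posthumanism developed in this book, spanning all four quadrants, would be classified via `classify_posthumanism("hybrid")`.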
Having developed this framework, we employ it to sift through a wide
range of phenomena that have been identified as ‘posthumanist’ in the schol-
arly literature or popular discourse and to categorize them according to the
framework’s criteria. The phenomena thus classified include critical, cultural,
philosophical, sociopolitical, and popular (or ‘commercial’) posthumanism;
science fiction; techno-idealism; multiple forms of metahumanism and neohu-
manism; antihumanism; prehumanism; feminist new materialism; the posthu-
manities; and biopolitical posthumanism, including bioconservatism and
transhumanism.[25] Given its notable presence in the popular consciousness,
special attention is devoted to transhumanism, and three specialized sub-ty-
pologies are discussed for distinguishing different forms of transhumanism.
Chapter One concludes by considering the form of organizational posthuman-
ism developed in this book and classifying it as a form of hybrid posthuman-
ism that spans all four quadrants of the framework.
Applying Posthumanist Thought to Organizational Theory and
Management
In Chapter Two, “Organizational Posthumanism,” the manners in which
posthumanist insights can be applied to the theory and practice of organiza-
tional management are explored in more detail. We sketch out one way of
fashioning posthumanist methodologies into a coherent management ap-
proach and chart out the potential scope of such a field. At its heart, the or-
ganizational posthumanism formulated in this text is a pragmatic approach to
analyzing, understanding, creating, and managing organizations that is at-
tuned to the intensifying processes of technological posthumanization and
which employs a post-dualistic and post-anthropocentric perspective that can
aid in recognizing challenges caused by the forces of posthumanization and
developing innovative strategies for appropriately harnessing those forces.
Organizational posthumanism does not naïvely embrace all forms of
posthumanization; unlike some strains of transhumanist thought, it does not
presume that all emerging technologies for genetic engineering or nanorobot-
ics are inherently beneficial and free from grave dangers. But at the same time,
[25] Many of these forms of posthumanism are identified in Ferrando (2013); others are discussed
in Herbrechter (2013); Birnbacher, “Posthumanity, Transhumanism and Human Nature” (2008);
Miah, “A Critical History of Posthumanism” (2008); and Miller, “Conclusion: Beyond the Human:
Ontogenesis, Technology, and the Posthuman in Kubrick and Clarke’s 2001” (2012). Some forms,
such as sociopolitical posthumanism, are explicitly defined for the first time within the chapters
in this volume. Detailed descriptions of all of these types of posthumanism are presented in
Chapter One of this text, “A Typology of Posthumanism.”
organizational posthumanism does not directly join bioconservatism in at-
tempting to block the development of particular technologies deemed to be
hazardous or destructive. Instead, organizational posthumanism focuses on
analyzing posthumanizing technologies that are already available or whose
development is expected in order to assess their (potential) impact on organi-
zations and develop strategies for utilizing such technologies in ways that are
ethical, impactful, and efficient. Organizational posthumanism recognizes that
emerging technologies are likely to possess both benign and harmful applica-
tions, and the role of a manager as such is to identify and creatively exploit
the beneficial aspects of a technology within a particular organizational con-
text while simultaneously avoiding or ameliorating the technology’s more
detrimental effects.[26]
Indeed, like critical posthumanism and other forms of analytic posthuman-
ism, organizational posthumanism recognizes that to a certain degree the
world as a whole has already become ‘posthumanized’ through nontechno-
logical processes: for example, regardless of whether a particular organization
decides to acquire and exploit technologies for social robotics and neuroen-
hancement, the organization must account for the fact that its pool of (poten-
tial) employees, customers, and other stakeholders includes a growing number
of individuals who, in different fashions and for varying reasons, possess in-
creasingly nonanthropocentric and nondualistic ways of viewing reality. Thus
engaging the realities of posthumanization is something that every contem-
porary organization must do of necessity; the only question is the extent to
which an organization does so consciously and with a coherent strategy.
In order to develop an adequate framework for identifying the aspects of
organizations that our study must address, we turn to fields like organizational
architecture, enterprise architecture, and organizational design. When organ-
izations are viewed through the lens of these disciplines, the relevance of six
key elements becomes apparent: the forces of posthumanization are expected
to increasingly expand and transform the kinds of agent-members, personnel
structures, information systems, processes and activities, physical and virtual
[26] A human manager may simultaneously also be, for example, a follower of a particular religious
tradition, a consumer, a voter, a patient, and a parent. In those other capacities, he or she may
quite possibly work actively to spur or prevent the adoption of particular posthumanizing tech-
nologies, based on his or her adherence to posthumanist movements like bioconservatism or
transhumanism. Organizational posthumanism does not attempt to study or shape all of those
ways in which a human being may be related to posthumanizing forces and technologies; its
scope only includes those mechanisms and dynamics by which posthumanization impacts the
organization whose activities the manager is (co)responsible for directing.
spaces, and external ecosystems that organizations are able (or required) to utilize.[27]
We argue that in each of these six areas, three different kinds of posthu-
manizing technologies will create new opportunities, threats, and exigencies
that drive organizational change. The first kind is technologies for human aug-
mentation and enhancement, which include many forms of neuroprosthetics,
implantable computing, genetic engineering, and life extension.[28] The second
is technologies for synthetic agency, which include artificial intelligence, ar-
tificial life, and diverse forms of robotics such as social, nano-, soft, and evo-
lutionary robotics.[29] The third kind is technologies for digital-physical ecosys-
tems and networks, which create new kinds of environments that human, ar-
tificial, and hybrid agents can inhabit and infrastructure through which they
can interact. Such technologies might create persistent immersive virtual
worlds and cybernetic networks whose topologies allow their agent-members
to form collective hive minds.[30]
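As a hypothetical sketch (all identifiers below are my own, not the book's), the framework just outlined, with six organizational elements crossed against three families of posthumanizing technology, can be pictured as an eighteen-cell analysis grid in which the opportunities, threats, and exigencies for each combination are cataloged:

```python
# Hypothetical rendering of the analysis framework described above:
# six organizational elements crossed with three technology families.
from itertools import product

ORGANIZATIONAL_ELEMENTS = [
    "agent-members",
    "personnel structures",
    "information systems",
    "processes and activities",
    "physical and virtual spaces",
    "external ecosystems",
]

TECHNOLOGY_FAMILIES = [
    "human augmentation and enhancement",
    "synthetic agency",
    "digital-physical ecosystems and networks",
]

# One cell per (element, technology family) pair, each holding the
# opportunities, threats, and exigencies identified for that combination.
analysis_grid = {
    (element, family): {"opportunities": [], "threats": [], "exigencies": []}
    for element, family in product(ORGANIZATIONAL_ELEMENTS, TECHNOLOGY_FAMILIES)
}
```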
Managing the Transformation and Functional Convergence of
Human and Artificial Agents
Chapter Two thus sketches the contours of organizational posthumanism
as a field that can allow management theorists to understand the forces of
posthumanization that are impacting organizations and management practi-
tioners to anticipate and shape them. However, before attempting to apply
such insights to the task of creating organizational designs and enterprise ar-
chitectures for particular organizations, it would be helpful to have at one’s
disposal a more concrete guide for assessing the technological posthumaniza-
tion of particular groups of agents, such as those comprising the (potential)
stakeholders of an organization. To that end, in Chapter Three we formulate
“The Posthuman Management Matrix,” a conceptual tool for analyzing and
[27] For example, within the ‘congruence model’ of organizational architecture conceptualized by
Nadler and Tushman, structures, processes, and systems constitute the three main elements of
an organization that must be considered. See Nadler & Tushman, Competing by Design: The Power
of Organizational Architecture (1997), p. 47.
28. Biologically and nonbiologically based efforts at human life extension are compared in Koene (2012).
29. An overview of such topics can be found, e.g., in Friedenberg (2008) and Murphy, Introduction to AI Robotics (2000).
30. Regarding the ongoing evolution of the Internet to incorporate ever more diverse types of objects and entities, see Evans, “The Internet of Everything: How More Relevant and Valuable Connections Will Change the World” (2012). For a conceptual analysis of the interconnection between physical and virtual reality and different ways in which beings and objects can move between these worlds, see Kedzior, “How Digital Worlds Become Material: An Ethnographic and Netnographic Investigation in Second Life” (2014). Regarding the typologies of posthumanized cybernetic networks, see Gladden, “Utopias and Dystopias as Cybernetic Information Systems” (2015).
Introduction: Technologization and the Evolution of Intelligent Agency • 29
managing the behavior of agents within organizations where the boundaries
between human beings and computers are becoming increasingly blurred.
Within the schema of this Matrix, an organization’s employees and con-
sumers can include two different kinds of agents (human and artificial agents),
and the characteristics possessed by a specific agent belong to one of two sets
(‘anthropic’ or ‘computronic’ characteristics). The model thus defines four dif-
ferent types of possible entities that might serve as organizational participants
and stakeholders. The phrase ‘human agents possessing anthropic character-
istics’ is simply another way of describing the natural human beings who have
not been modified by posthumanizing technological processes such as neuro-
prosthetic enhancement or genetic engineering and who – from the dawn of
human history – have served as the backbone of all organizations on earth.
Disciplines like HR management and organization development offer many
time-tested approaches for optimizing the performance of such human beings
within an organizational context.
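The Matrix’s 2×2 schema lends itself to a simple formalization. In the sketch below, the agent types and characteristic sets are those defined in the text, while the function itself is merely an illustrative rendering of the four quadrants (the ‘cyborg’ and ‘bioroid’ labels anticipate the discussion later in this Introduction):

```python
from enum import Enum

class AgentType(Enum):
    HUMAN = "human"
    ARTIFICIAL = "artificial"

class Characteristics(Enum):
    ANTHROPIC = "anthropic"
    COMPUTRONIC = "computronic"

def matrix_quadrant(agent: AgentType, traits: Characteristics) -> str:
    """Map an agent to one of the Matrix's four types of possible entities."""
    quadrants = {
        (AgentType.HUMAN, Characteristics.ANTHROPIC): "natural human being",
        (AgentType.ARTIFICIAL, Characteristics.COMPUTRONIC): "conventional computerized agent",
        (AgentType.HUMAN, Characteristics.COMPUTRONIC): "cyborg",
        (AgentType.ARTIFICIAL, Characteristics.ANTHROPIC): "bioroid",
    }
    return quadrants[(agent, traits)]

print(matrix_quadrant(AgentType.HUMAN, Characteristics.COMPUTRONIC))  # cyborg
```

The first two quadrants name the familiar organizational actors discussed below; the remaining two name the novel entities with which the Matrix is chiefly concerned.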
The phrase ‘artificial agents with computronic characteristics’ is another
way of describing the ubiquitous electronic systems developed over the last
half-century in which a computer utilizing a conventional Von Neumann ar-
chitecture and running specialized software serves as an intelligent agent to
perform assignments like transporting materials within production facilities;[31]
wielding a robotic arm to perform assembly-line manufacturing tasks;[32] monitoring systems and facilities to detect physical or electronic intrusion attempts;[33]
automatically scheduling tasks and optimizing the use of physical and electronic resources;[34]
initiating financial transactions within online markets;[35]
mining data to evaluate an applicant’s credit risk or decide what personalized offers and advertisements to display to a website’s visitors;[36] interacting with customers through automated call centers, online chatbot interfaces, and physical kiosks to offer customer support;[37] or dispensing goods and services to customers.[38] The successful integration of such artificial agent technologies into organizational life is a major focus of contemporary management theory and practice.

However, the remaining two types of entities described by the Posthuman Management Matrix have historically been overlooked by the field of management – and understandably so, given that such entities have not existed as beings that could serve as workers, customers, and other organizational stakeholders. We argue, though, that such entities are now emerging as potential organizational actors, thanks to posthumanizing phenomena such as the development of increasingly powerful forms of neuroprosthetics, genetic engineering, virtual reality, robotics, and artificial intelligence. Human agents possessing computer-like physical and cognitive characteristics can be understood as real-world embodiments of the ‘cyborgs’ long envisioned in science fiction, while artificial agents possessing anthropic physical and cognitive characteristics will have very little in common with the desktop computers of earlier eras; they can be better understood as ‘bioroids’ whose form and behaviors resemble those of sophisticated biological entities like human beings.

We suggest that existing management approaches will prove ill-equipped to understand and shape the activities of such novel posthumanized entities. New approaches are expected to emerge that allow organizations to identify and address the serious operational, legal, and ethical issues that will arise as human employees and consumers become more like computers and computerized agents more like biological human beings. Such efforts can build on the foundations developed by disciplines like cybernetics, systems theory, xenopsychology, and exoeconomics, which employ a nonanthropocentric perspective and formulate genericized principles that are equally well-suited to explaining the forms and dynamics of all kinds of agents, regardless of whether they are human, artificial, or hybrid in nature.[39]

31. See, e.g., The Future of Automated Freight Transport: Concepts, Design and Implementation, edited by Priemus & Nijkamp (2005), and Ullrich, Automated Guided Vehicle Systems: A Primer with Practical Applications (2015).

32. For an overview of such technologies, see, e.g., Intelligent Production Machines and Systems, edited by Pham et al. (2006), and Perlberg, Industrial Robotics (2016).

33. Regarding the automation of intrusion detection and prevention systems, see Rao & Nayak, The InfoSec Handbook (2014), pp. 226, 235, 238.

34. For an overview of methods that can be employed for such purposes, see Pinedo, Scheduling: Theory, Algorithms, and Systems (2012), and Automated Scheduling and Planning: From Theory to Practice, edited by Etaner-Uyar et al. (2013).

35. See Schacht, “The Buzz about Robo-Advisers” (2015); Dhar, “Should You Trust Your Money to a Robot?” (2015); Scopino, “Do Automated Trading Systems Dream of Manipulating the Price of Futures Contracts? Policing Markets for Improper Trading Practices by Algorithmic Robots” (2015); and Turner, “The computers have won, and Wall Street will never be the same” (2016).

36. Regarding the role of automated systems in data mining, see, e.g., Giudici, Applied Data Mining: Statistical Methods for Business and Industry (2003); Provost & Fawcett, Data Science for Business (2013), p. 7; and Warkentin et al., “The Role of Intelligent Agents and Data Mining in Electronic Partnership Management” (2012), p. 13282.
37. Such technologies are described, e.g., in Perez-Marin & Pascual-Nieto, Conversational Agents and Natural Language Interaction: Techniques and Effective Practices (2011); McIndoe, “Health Kiosk Technologies” (2010); and Ford, Rise of the Robots: Technology and the Threat of a Jobless Future (2015).
38. See, e.g., the firsthand account of such technologies from the perspective of a potential consumer in Nazario, “I went to Best Buy and encountered a robot named Chloe – and now I’m convinced she’s the future of retail” (2015).
39. For a history of the use of ‘xeno-’ as a prefix to designate disciplines that study the forms or behaviors of intelligent agents other than human beings (and in particular, those of hypothetical extraterrestrial life-forms), see the “Preface and Acknowledgements for the First Edition” in Freitas, Xenology: An Introduction to the Scientific Study of Extraterrestrial Life, Intelligence, and Civilization (1979). For a similar use of ‘exopsychology’ in connection with the study of the cognitive mechanisms and processes of potential extraterrestrial intelligences, see Harrison & Elms, “Psychology and the search for extraterrestrial intelligence” (1990), p. 207. Regarding the work of exoeconomists, see Ames, “The Place of an Individual in an Economy” (1981), p. 37.

Envisioning the Posthumanization of a Management Discipline: Organization Development

Chapter Four (“Organization Development and the Robotic-Cybernetic-Human Workforce: Humanistic Values for a Posthuman Future?”) considers the implications of technological posthumanization for a concrete discipline within the field of management. Namely, we explore the extent to which organization development can and should help organizations grapple with the impacts of emerging posthuman technologies. Organization development (OD) is a management discipline whose theory and practice are firmly rooted in humanistic values, insofar as it seeks to create effective organizations by facilitating the empowerment and growth of their human members. However, a new posthuman era is dawning in which human beings will no longer be the only intelligent actors guiding the behavior of organizations; increasingly, social robots, AI programs, and cybernetically augmented human employees are taking on roles as collaborators and decision-makers in the workplace, and this transformation is only likely to accelerate. How should OD professionals react to the rise of these posthumanizing technologies? Several ways are suggested in which OD could act as a ‘Humanist OD for a posthuman world,’ providing an essential service to future organizations without abandoning its traditional humanist values. An alternative vision is then presented for a ‘Posthuman OD’ that reinterprets and expands its humanist vision to embrace the benefits that social robots, AI, and cyberization can potentially bring into the workplace. Finally, we discuss the extent to which OD can remain a single, unified discipline in light of the challenge to its traditional humanistic values that is presented by such emerging technologies.

Part II: Human Augmentation: A Closer Look

In Part II of the book, we take a closer look at the ways in which organizational management will be affected by the growing posthumanizing augmentation of human beings through technologies such as neuroprosthetics, virtual reality, and genetic engineering.

Neuroprosthetically Facilitated Socioeconomic Interaction

Chapter Five (“Neural Implants as Gateways to Digital-Physical Ecosystems and Posthuman Socioeconomic Interaction”) looks beyond current desktop, mobile, and wearable technologies to argue that work-related information
and communications technology (ICT) will increasingly move inside the hu-
man body through the use of neuroprosthetic devices that create employees
who are permanently connected to their workplace’s digital ecosystems. Such
persons may possess enhanced perception, memory, and abilities to manipu-
late physical and virtual environments and to link with human and synthetic
minds to form cybernetic networks that can be both ‘supersocial’ and ‘postso-
cial.’ However, such neuroprosthetics may also create a sense of inauthentic-
ity, vulnerability to computer viruses and hacking, financial burdens, and
questions surrounding ownership of intellectual property produced using im-
plants. Moreover, those populations who do and do not adopt neuroprostheses
may come to inhabit increasingly incompatible and mutually incomprehensi-
ble digital ecosystems. Here we propose a cybernetic model for understanding
how neuroprosthetics can either facilitate human beings’ participation in
posthuman informational ecosystems – or undermine their health, infor-
mation security, and autonomy.
The Possibility and Necessity of New Business Models
In Chapter Six, “The Impacts of Human Neurocybernetic Enhancement on
Organizational Business Models,” we consider ways in which the increased
use of neuroprosthetics – among both organizations’ employees and custom-
ers – may require companies to transform their business models. The social
and economic impact of such neuroprosthetics will be dramatic: these devices
are expected to reshape the ways in which human beings interact with one
another, enabling them to create novel forms of social structures and organi-
zations and to engage in new kinds of informational and economic exchange
that were never previously possible. As participants in the larger societies,
economies, and informational ecosystems within which they exist, businesses
will be impacted by the widening use of neuroprosthetic technologies. Com-
panies that are able to identify, understand, and anticipate these technological
and social changes and transform their business models accordingly may be
able to secure significant competitive advantages. On the other hand, compa-
nies that are not able to adapt their business models quickly enough to the
social, economic, political, cultural, and ethical changes driven by neuropros-
thetic technologies may find themselves unable to compete, grow, or even
survive. In this text, we briefly consider the concept of a ‘business model’ and
the situations that require a company to change its business model. We then
explore three main areas in which the rise of neuroprosthetics is expected to
transform humanity. Finally, we identify the impact that such changes will
have on companies’ business models and consider an example highlighting
the reasons why many companies will need to adopt new business models in
order to address these altered realities.
New Considerations for Organizational Information Security
Chapter Seven (“Implantable Computers and Information Security: A Man-
agerial Perspective”) considers the impact that the increased use of neuropros-
thetic devices and other implantable computers will have on information se-
curity, especially as understood within an organizational context. The inter-
disciplinary field of information security already draws significantly on the
biological and human sciences; for example, it relies on the knowledge of hu-
man physiology to design biometric authentication devices and utilizes in-
sights from psychology to predict users’ vulnerability to social engineering
techniques and to develop preventative measures. The growing use of com-
puters that are implanted within the human body for purposes of therapy or
augmentation will compel the field of information security to develop rela-
tionships with fields such as medicine and biomedical engineering that are
closer than and qualitatively different from those that have previously existed,
insofar as the technologies and practices that InfoSec implements for implant-
able computers must not only secure the information contained within such
devices but must also avoid creating biological or psychological harm (or even
the danger of such harm) for the human beings within whose organisms the
computers are embedded. In this text we identify unique issues and challenges
that implantable computers create for information security. By considering
the scenario of the computer contained within a sensory neuroprosthetic de-
vice in the form of an advanced retinal implant, we demonstrate the ways in
which information security’s traditional concepts of the confidentiality, integ-
rity, and availability of information and the use of physical, logical, and ad-
ministrative access controls become intertwined with issues of medicine and
biomedical engineering. Finally, we explore the idea of ‘celyphocybernetics’
as a paradigm for conceptualizing the relationship of information security to
medicine and biomedical engineering insofar as it relates to implantable com-
puters.
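To make that intertwining concrete, one can tabulate how each element of the classic confidentiality-integrity-availability triad might play out for such a device. The triad and the physical/logical/administrative control categories are standard InfoSec notions invoked by the text; every implant-specific risk, control, and constraint in the sketch below is a hypothetical invented for illustration, not drawn from the chapter:

```python
# Hypothetical illustration: for an advanced retinal implant, each classic
# InfoSec property acquires a biomedical dimension. All concrete risks,
# controls, and constraints below are assumptions made for this sketch.
IMPLANT_SECURITY = {
    "confidentiality": {
        "risk": "interception of the visual data stream leaving the implant",
        "control": ("logical", "encrypt the implant's wireless telemetry"),
        "biomedical_constraint": "crypto workload must respect the heat and "
                                 "power limits of hardware inside the eye",
    },
    "integrity": {
        "risk": "tampering that injects false imagery into the user's sight",
        "control": ("administrative", "signed, clinically vetted firmware updates"),
        "biomedical_constraint": "a failed update must fail safe rather than "
                                 "leave the user blind",
    },
    "availability": {
        "risk": "denial of service that switches off the user's vision",
        "control": ("physical", "shielded housing and redundant power supply"),
        "biomedical_constraint": "maintenance requires access to a device "
                                 "embedded in a living human organism",
    },
}

for prop, entry in IMPLANT_SECURITY.items():
    category, measure = entry["control"]
    print(f"{prop}: {category} control - {measure}")
```

The point of the table is structural: every security property carries a third column that a conventional InfoSec analysis would not need, which is precisely where medicine and biomedical engineering enter the picture.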
Part III: Robotics and Artificial Intelligence: A Closer Look
In Part III of the book, we examine in more detail some of the ways in
which increasingly advanced technologies for robotics, artificial intelligence,
and artificial life will impact organizational management.
Could a Robot Succeed as the CEO of a Human
Organization?
Chapter Eight, “The Social Robot as CEO: Developing Synthetic Charis-
matic Leadership for Human Organizations,” explores one aspect of the ques-
tion of whether a robot could ever serve successfully as the CEO of an organ-
ization that includes human personnel. Among the many important functions
that a CEO must perform is that of motivating and inspiring a company’s
workers and cultivating their trust in the company’s strategic direction and
leadership. It might appear that creating a robot that can successfully inspire
and win the trust of an organization’s human personnel is a major hurdle to
the development of a robot that can serve effectively as CEO of a company
that includes human workers. In this text, however, we argue that the development of social robots capable of manifesting the leadership traits needed to serve as CEO within an otherwise human organization is not only possible but likely inevitable.
We begin by analyzing what French and Raven refer to as ‘referent power’
and what Weber describes as ‘charismatic authority’ – two related character-
istics which if possessed by a social robot could allow that robot to lead human
personnel by inspiring and motivating them and securing their loyalty and
trust.[40]
By analyzing current robotic design efforts and literary depictions of robots, we suggest three ways in which human beings are striving to create charismatic robot leaders for themselves. We then consider the manner in
which particular robot leaders will acquire human trust, arguing that charis-
matic robot leaders for businesses and other kinds of institutions will emerge
naturally from within our world’s social fabric, without any rational decision
on our part. Finally, drawing on Abrams,[41] we suggest that the stability of
these leader-follower relations – and the extent to which charismatic social
robots can remain long-term fixtures in leadership roles such as that of CEO
– will hinge on a fundamental question of robotic intelligence and motivation
that currently stands unresolved.
Future Robots as Actors That Are More Powerful – and More
Elusive
In Chapter Nine, “The Diffuse Intelligent Other: An Ontology of Nonlocal-
izable Robots as Moral and Legal Actors,” we explore the assumption that the
physical forms and capacities displayed by future robots will resemble those
seen in the past, and we consider the implications that the changing form of
robots will have for the challenge of assigning moral and legal responsibility
for actions performed by such robots. Already much thought has been given
to the question of who bears moral and legal responsibility for actions per-
formed by robots. Some argue that responsibility could be attributed to a robot
if it possessed human-like autonomy and metavolitionality and that while
such capacities can potentially be possessed by a robot with a single spatially
compact body, they cannot be possessed by a spatially disjunct, decentralized
collective such as a robotic swarm or network. However, advances in ubiquitous robotics and distributed computing open the door to a new form of robotic entity that possesses a unitary intelligence, despite the fact that its cognitive processes are not confined within a single spatially compact, persistent, identifiable body. Such a ‘nonlocalizable’ robot may possess a body whose myriad components interact with one another at a distance and which is continuously transforming as components join and leave the body. Here we develop an ontology for classifying such robots on the basis of their autonomy, volitionality, and localizability. Using this ontology, we explore the extent to which nonlocalizable robots – including those possessing cognitive abilities that match or exceed those of human beings – can be considered moral and legal actors that are responsible for their own actions.

40. See, e.g., Forsyth, Group Dynamics (2010), p. 227, and Weber, Economy and Society: An Outline of Interpretive Sociology (1968), p. 215.

41. See Abrams, “Pragmatism, Artificial Intelligence, and Posthuman Bioethics: Shusterman, Rorty, Foucault” (2004).
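The three axes of Chapter Nine’s ontology – autonomy, volitionality, and localizability – can be given a schematic rendering in code. The axis names come from the chapter; the numeric scale, the boolean encodings, and the threshold below are illustrative assumptions only:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RobotProfile:
    """A robot's position on the chapter's three axes (the encodings are
    illustrative assumptions, not the book's actual scheme)."""
    autonomy: float       # 0.0 (fully teleoperated) .. 1.0 (fully autonomous)
    metavolitional: bool  # able to reflect on and revise its own volitions
    localizable: bool     # body is spatially compact, persistent, identifiable

def traditional_moral_actor(r: RobotProfile) -> bool:
    """The position the chapter challenges: responsibility requires human-like
    autonomy and metavolitionality housed in a localizable body."""
    return r.autonomy >= 0.9 and r.metavolitional and r.localizable

# The chapter's open question: an equally capable but nonlocalizable robot
# fails this traditional test solely because of its diffuse embodiment.
swarm_mind = RobotProfile(autonomy=0.95, metavolitional=True, localizable=False)
print(traditional_moral_actor(swarm_mind))  # False
```

The sketch makes the chapter’s tension visible: dropping or retaining the `localizable` condition is exactly the choice that determines whether a diffuse robotic entity can count as a responsible actor.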
Artificial Agents as Managers of Diverse Virtual Human Teams
Chapter Ten, “Leveraging the Cross-Cultural Capacities of Artificial
Agents as Leaders of Human Virtual Teams,” explores the possibility that ar-
tificial agents might be employed to manage cross-cultural virtual teams of
human and synthetic workers. The human beings who manage global virtual
teams regularly face challenges caused by factors such as the lack of a shared
language and culture among team members and coordination delay resulting
from spatial and temporal divisions between members of the team. As part of
the ongoing advances in artificial agent (AA) technology, artificial agents have
been developed whose purpose is to assist the human managers of virtual
teams. In this text, we move a step further by suggesting that new capabilities
being developed for artificial agents will eventually give them the ability to
successfully manage virtual teams whose other members are human beings.
In particular, artificial agents will be uniquely positioned to take on roles as
managers of cross-cultural, multilingual, global virtual teams, by overcoming
some of the fundamental cognitive limitations that create obstacles for human
beings serving in these managerial roles. In order to effectively interact with
human team members, AAs must be able to decode and encode the full spec-
trum of verbal and nonverbal communication used by human beings. Because
culture is so deeply embedded in all human forms of communication, AAs
cannot communicate in a way that is ‘non-cultural’; an AA that is capable of
communicating effectively with human team members will necessarily dis-
play a particular culture (or mix of cultures), just as human beings do. The
need for AA team leaders to display cultural behavior raises the key question
of which culture or cultures the AA leader of a particular human virtual team
should display. We argue that the answer to this question depends on both
the cultural makeup of a team’s human members and the methods used to
share information among team members. To facilitate the analysis of how an
AA team leader’s cultural behaviors can best be structured to fit the circum-
stances of a particular virtual team, we propose a two-dimensional model for
designing suites of cultural behaviors for AAs that will manage human virtual
teams. We consider examples of each type of AA described by the model, iden-
tify potential strengths and weaknesses of each type, suggest particular kinds
of virtual teams that are likely to benefit from being managed by AAs of the
different types, and discuss empirical studies that could test the validity and usefulness of this framework.
Quantitative Approaches to Comparing Human and Artificial
Agents as Managers
In Chapter Eleven, “A Fractal Measure for Comparing the Work Effort of
Human and Artificial Agents Performing Management Functions,” we con-
sider in detail one aspect of efforts to determine whether a robot would per-
form better, worse, or simply differently than a human worker in a particular
management role. Thanks to the growing sophistication of artificial agent
technologies, businesses will increasingly face decisions of whether to have a
human employee or artificial agent perform a particular function. This makes
it desirable to have a common temporal measure for comparing the work ef-
fort that human beings and artificial agents can apply to a role. Existing tem-
poral measures of work effort are formulated to apply either to human em-
ployees (e.g., FTE and billable hours) or computer-based systems (e.g., mean
time to failure and availability) but not both. In this paper we propose a new
temporal measure of work effort based on fractal dimension that applies
equally to the work of human beings and artificial agents performing man-
agement functions. We then consider four potential cases to demonstrate the
measure’s diagnostic value in assessing strengths (e.g., flexibility) and risks
(e.g., switch costs) reflected by the temporal work dynamics of particular man-
agers.
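While Chapter Eleven’s own measure is not reproduced here, the general idea of a fractal, scale-sensitive description of work time can be illustrated with the standard box-counting estimate of fractal dimension, applied to the set of time slots in which an agent was actively working. All parameter choices in the sketch below are illustrative assumptions:

```python
import math

def box_counting_dimension(active_slots, box_sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting dimension of the set of time slots during
    which an agent was actively working, via a least-squares fit of
    log N(s) against log(1/s), where N(s) counts occupied boxes of width s."""
    xs, ys = [], []
    for s in box_sizes:
        occupied = {slot // s for slot in active_slots}  # boxes of width s
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(occupied)))
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    return (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
            / sum((x - mean_x) ** 2 for x in xs))

# A solid block of uninterrupted effort fills time completely (dimension 1.0),
# while a bursty, gap-ridden work pattern yields a dimension below 1.0.
continuous = range(256)
bursty = [t for t in range(256) if (t // 16) % 2 == 0 and t % 3 != 0]
print(round(box_counting_dimension(continuous), 2))   # 1.0
print(box_counting_dimension(bursty) < 1.0)           # True
```

A measure of this general kind applies identically to a human manager’s billable hours and to an artificial agent’s uptime log, which is what makes it a candidate common yardstick for the two kinds of worker.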
Developing Managerial Robots with Capabilities Comparable
to Those of Human Managers
Chapter Twelve, “Managerial Robotics: A Model of Sociality and Auton-
omy for Robots Managing Human Beings and Machines,” investigates some
concrete characteristics of existing robots in an effort to better understand
whether and how such characteristics might be fundamentally intertwined
with one another and how they might be manifested in more advanced future
types of robots. The development of robots with increasingly sophisticated
decision-making and social capacities is opening the door to the possibility of
robots carrying out the management functions of planning, organizing, lead-
ing, and controlling the work of human beings and other machines.[42] In this text we study the relationship between two traits that impact a robot’s ability to effectively perform management functions: those of autonomy and sociality. Using an assessment instrument we evaluate the levels of autonomy and sociality of 35 robots that have been created for use in a wide range of industrial, domestic, and governmental contexts, along with several kinds of living organisms with which such robots can share a social space and which may provide templates for some aspects of future robotic design. We then develop a two-dimensional model that classifies the robots into 16 different types, each of which offers unique strengths and weaknesses for the performance of management functions. Our data suggest correlations between autonomy and sociality that could potentially assist organizations in identifying new and more effective management applications for existing robots and aid roboticists in designing new kinds of robots that are capable of succeeding in particular management roles.

42. Building on the classic management framework developed by Henri Fayol, Daft identifies the four essential functions of a manager as planning, organizing, leading, and controlling activities within an organization. See Daft, Management (2011).
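The 16-type structure of such a two-dimensional model can be pictured as a 4 × 4 grid. The sketch below is only a schematic stand-in: the chapter derives its types from an assessment instrument, whereas the numeric scale, level boundaries, and level labels here are invented for illustration:

```python
# Illustrative 4 x 4 autonomy/sociality grid; the 16-cell structure mirrors
# the chapter's model, but the scores, boundaries, and labels are assumptions.
LEVELS = ["minimal", "low", "moderate", "high"]

def level(score: float) -> str:
    """Map a score in [0.0, 1.0] to one of four discrete levels."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must lie in [0, 1]")
    return LEVELS[min(int(score * 4), 3)]

def robot_type(autonomy: float, sociality: float) -> str:
    """Place a robot in one of the 4 x 4 = 16 cells of the grid."""
    return f"{level(autonomy)} autonomy / {level(sociality)} sociality"

# An industrial arm might score high on autonomy but minimal on sociality:
print(robot_type(0.9, 0.2))  # high autonomy / minimal sociality
```

Each cell of such a grid then suggests a distinct class of management applications, from asocial autonomous machinery to highly social but closely supervised service robots.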
Artificial Life Forms Functioning Autonomously as Businesses
within the Real-world Economy
In Chapter Thirteen, “Developing a Non-anthropocentric Definition of
Business: A Cybernetic Model of the Synthetic Life-form as Autonomous En-
terprise,” it is noted that operating a business has traditionally been consid-
ered an exclusively human activity: while domesticated animals or desktop
computers, for example, might participate in the work of a business, they are
not in themselves capable of organizing or constituting a ‘business.’ However,
the increasing sophistication and capacities of social robots, synthetic life-
forms, and other kinds of artificial agents raise the question of whether some
such entities might be capable not only of leading a business but of directly
constituting one. In this text we argue that it is theoretically possible to create
artificial life-forms that function as autonomous businesses within the real-
world human economy and explore some of the implications of the develop-
ment of such beings. Building on the cybernetic framework of the Viable Sys-
tems Approach (VSA), we formulate the concept of an ‘organism-enterprise’
that exists simultaneously as both a life-form and a business. The possible ex-
istence of such entities both enables and encourages us to reconceptualize the
historically anthropocentric understanding of a ‘business’ in a way that allows
an artificial life-form to constitute a ‘synthetic’ organism-enterprise (SOE) just
as a human being acting as a sole proprietor constitutes a ‘natural’ organism-
enterprise. Such SOEs would exist and operate in a sphere beyond that of cur-
rent examples of artificial life, which produce goods or services within some
simulated world or play a limited role as tools or assistants within a human
business. Rather than competing against other artificial organisms in a virtual
world, SOEs could potentially survive and evolve through competition against
human businesses in our real-world economy. We conclude by briefly envi-
sioning particular examples of SOEs that elucidate some of the legal, eco-
nomic, and ethical issues that arise when a single economic ecosystem is
shared by competing human and artificial life. It is suggested that the theoret-
ical model of synthetic organism-enterprises developed in this text may pro-
vide a useful conceptual foundation for computer programmers, engineers,
economists, management scholars and practitioners, ethicists, policymakers,
and others who will be called upon in the coming years to grapple with the
realities of artificial agents that increasingly function as autonomous enter-
prises within our world’s complex economic ecosystem.
Conclusion
It is our hope that the questions raised, topics explored, and approaches
suggested in this book can draw attention to an important element that is
largely missing from contemporary debates surrounding emerging transformative technologies, which often focus on issues of economics (such as the question of whether increasing roboticization will produce mass human unemployment[43]) or bioethics (such as the question of whether neuroprosthetic devices that alter a user’s personality or memories are ethically permissible[44]).
Namely, we aim to highlight the fact that those posthumanizing technologies
that transform the nature of human and synthetic agency will necessarily also
transform the nature of the organizations for which agents serve as workers,
consumers, managers, investors, and other stakeholders. Given the fact that
almost every aspect of human existence is intimately connected with the ac-
tivity of human organizations, the forces of posthumanization that enable or
impel dramatic changes in such organizations will impact every corner of our
lives. The extent to which such radical change can be anticipated and con-
sciously shaped by organizations and those who manage them may largely
determine the quality of the world – or worlds – experienced by generations
of human beings to come.
43. See, for example, Sachs et al., “Robots: Curse or Blessing? A Basic Framework” (2015), and Ford (2015).

44. See, e.g., Maguire & McGee, “Implantable brain chips? Time for debate” (1999); Khushf, “The use of emergent technologies for enhancing human performance: Are we prepared to address the ethical and policy issues” (2005); Soussou & Berger, “Cognitive and Emotional Neuroprostheses” (2008); Kraemer, “Me, Myself and My Brain Implant: Deep Brain Stimulation Raises Questions of Personal Authenticity and Alienation” (2011); and Van den Berg, “Pieces of Me: On Identity and Information and Communications Technology Implants” (2012).

References

Abrams, J.J. “Pragmatism, Artificial Intelligence, and Posthuman Bioethics: Shusterman, Rorty, Foucault.” Human Studies vol. 27, no. 3 (2004): 241-58.

Ames, Edward. “The Place of an Individual in an Economy.” In Essays in Contemporary Fields of Economics: In Honor of Emanuel T. Weiler (1914-1979), edited by George Horwich and James P. Quirk, pp. 24-40. West Lafayette, IN: Purdue University Press, 1981.
Andrianantoandro, Ernesto, Subhayu Basu, David K. Karig, and Ron Weiss. “Syn-
thetic biology: new engineering rules for an emerging discipline.” Molecular Sys-
tems Biology 2, no. 1 (2006).
Arkin, Ronald C., and J. David Hobbs. “Dimensions of communication and social or-
ganization in multi-agent robotic systems.” In From Animals to Animats 2: Pro-
ceedings of the Second International Conference on Simulation of Adaptive Behavior,
edited by Jean-Arcady Meyer, H. L. Roitblat and Stewart W. Wilson, pp. 486-93.
Cambridge, MA: The MIT Press, 1993.
Artificial General Intelligence, edited by Ben Goertzel and Cassio Pennachin. Springer
Berlin Heidelberg, 2007.
Artificial General Intelligence: 8th International Conference, AGI 2015: Berlin, Germany,
July 22-25, 2015: Proceedings, edited by Jordi Bieger, Ben Goertzel, and Alexey
Potapov. Springer International Publishing, 2015.
Automated Scheduling and Planning: From Theory to Practice, edited by A. Şima
Etaner-Uyar, Ender Özcan, and Neil Urquhart. Springer Berlin Heidelberg, 2013.
Bainbridge, William Sims. The Virtual Future. London: Springer, 2011.
Barca, Jan Carlo, and Y. Ahmet Sekercioglu. “Swarm robotics reviewed.” Robotica 31,
no. 03 (2013): 345-59.
Barile, Nello. “From the Posthuman Consumer to the Ontobranding Dimension: Ge-
olocalization, Augmented Reality and Emotional Ontology as a Radical Redefini-
tion of What Is Real.” intervalla: platform for intellectual exchange, vol. 1 (2013).
http://www.fus.edu/intervalla/images/pdf/9_barile.pdf. Accessed May 18, 2016.
Barile, S., J. Pels, F. Polese, and M. Saviano. “An Introduction to the Viable Systems
Approach and Its Contribution to Marketing.” Journal of Business Market Man-
agement 5(2) (2012): 54-78.
Beer, Stafford. Brain of the Firm. Second edition. New York: John Wiley, 1981.
Bendle, Mervyn F. “Teleportation, cyborgs and the posthuman ideology.” Social Semi-
otics 12, no. 1 (2002): 45-62.
Berner, Georg. Management in 20XX: What Will Be Important in the Future – A Holis-
tic View. Erlangen: Publicis Corporate Publishing, 2004.
Birnbacher, Dieter. “Posthumanity, Transhumanism and Human Nature.” In Medical
Enhancement and Posthumanity, edited by Bert Gordijn and Ruth Chadwick, pp.
95-106. The International Library of Ethics, Law and Technology 2. Springer
Netherlands, 2008.
Bostrom, Nick. “Human Genetic Enhancements: A Transhumanist Perspective.” In
Arguing About Bioethics, edited by Stephen Holland, pp. 105-15. New York:
Routledge, 2012.
Bostrom, Nick. “Why I Want to Be a Posthuman When I Grow Up.” In Medical En-
hancement and Posthumanity, edited by Bert Gordijn and Ruth Chadwick, pp.
107-37. The International Library of Ethics, Law and Technology 2. Springer
Netherlands, 2008.
Brambilla, Manuele, Eliseo Ferrante, Mauro Birattari, and Marco Dorigo. “Swarm ro-
botics: a review from the swarm engineering perspective.” Swarm Intelligence 7,
no. 1 (2013): 1-41.
Breazeal, Cynthia. “Toward sociable robots.” Robotics and Autonomous Systems 42
(2003): 167-75.
Callaghan, Vic. “Micro-Futures.” Presentation at Creative-Science 2014, Shanghai,
China, July 1, 2014.
Calverley, D.J. “Imagining a non-biological machine as a legal person.” AI & SOCIETY
22, no. 4 (2008): 523-37.
Casadesus-Masanell, Ramon, and Joan E. Ricart. “How to Design a Winning Business
Model.” Harvard Business Review 89, no. 1-2 (2011): 100-07.
Cheng, Allen A., and Timothy K. Lu. “Synthetic biology: an emerging engineering
discipline.” Annual Review of Biomedical Engineering 14 (2012): 155-78.
Clark, Andy. Natural-born cyborgs: Minds, Technologies, and the Future of Human In-
telligence. Oxford: Oxford University Press, 2004.
Coeckelbergh, Mark. “From Killer Machines to Doctrines and Swarms, or Why Ethics
of Military Robotics Is Not (Necessarily) About Robots.” Philosophy & Technology
24, no. 3 (2011): 269-78.
Coker, Christopher. “Biotechnology and War: The New Challenge.” Australian Army
Journal vol. II, no. 1 (2004): 125-40.
Daft, Richard. Management. Mason, OH: South-Western, Cengage Learning, 2011.
DaSilva, Carlos M., and Peter Trkman. “Business Model: What It Is and What It Is
Not.” Long Range Planning 47 (2014): 379-89.
De Melo-Martín, Inmaculada. “Genetically Modified Organisms (GMOs): Human Be-
ings.” In Encyclopedia of Global Bioethics, edited by Henk ten Have. Springer Sci-
ence+Business Media Dordrecht. Version of March 13, 2015. doi: 10.1007/978-3-
319-05544-2_210-1. Accessed January 21, 2016.
Del Val, Jaime, and Stefan Lorenz Sorgner. “A Metahumanist Manifesto.” The Agonist IV, no. II (Fall 2011). http://www.nietzschecircle.com/AGONIST/2011_08/METAHUMAN_MANIFESTO.html. Accessed March 2, 2016.
Del Val, Jaime, Stefan Lorenz Sorgner, and Yunus Tuncel. “Interview on the Metahumanist Manifesto with Jaime del Val and Stefan Lorenz Sorgner.” The Agonist IV, no. II (Fall 2011). http://www.nietzschecircle.com/AGONIST/2011_08/Interview_Sorgner_Stefan-Jaime.pdf. Accessed March 2, 2016.
Dhar, Vasant. “Should You Trust Your Money to a Robot?” Big Data 3, no. 2 (2015):
55-58.
Evans, Dave. “The Internet of Everything: How More Relevant and Valuable Connec-
tions Will Change the World.” Cisco Internet Solutions Business Group: Point of
View, 2012. https://www.cisco.com/web/about/ac79/docs/innov/IoE.pdf. Accessed
December 16, 2015.
Ferrando, Francesca. “Posthumanism, Transhumanism, Antihumanism, Metahuman-
ism, and New Materialisms: Differences and Relations.” Existenz: An International
Journal in Philosophy, Religion, Politics, and the Arts 8, no. 2 (Fall 2013): 26-32.
Fleischmann, Kenneth R. “Sociotechnical Interaction and Cyborg–Cyborg Interac-
tion: Transforming the Scale and Convergence of HCI.” The Information Society
25, no. 4 (2009): 227-35. doi:10.1080/01972240903028359.
Ford, Martin. Rise of the Robots: Technology and the Threat of a Jobless Future. New
York: Basic Books, 2015.
Forsyth, D.R. Group Dynamics, fifth edition. Belmont, CA: Cengage Learning, 2010.
Freitas Jr., Robert A. “Preface and Acknowledgements for the First Edition.” In Xenology: An Introduction to the Scientific Study of Extraterrestrial Life, Intelligence, and Civilization. Sacramento: Xenology Research Institute, 1979. http://www.xenology.info/Xeno/PrefaceFirstEdition.htm, last updated October 22, 2009. Accessed January 30, 2016.
Friedenberg, Jay. Artificial Psychology: The Quest for What It Means to Be Human.
Philadelphia: Psychology Press, 2008.
The Future of Automated Freight Transport: Concepts, Design and Implementation, ed-
ited by Hugo Priemus and Peter Nijkamp. Cheltenham: Edward Elgar Publishing,
2005.
Galis, Alex, Spyros G. Denazis, Alessandro Bassi, Pierpaolo Giacomin, Andreas Berl,
Andreas Fischer, Hermann de Meer, J. Srassner, S. Davy, D. Macedo, G. Pujolle, J.
R. Loyola, J. Serrat, L. Lefevre, and A. Cheniour. “Management Architecture and
Systems for Future Internet Networks.” In Towards the Future Internet: A Euro-
pean Research Perspective, edited by Georgios Tselentis, John Domingue, Alex Ga-
lis, Anastasius Gavras, David Hausheer, Srdjan Krco, Volkmar Lotz, and Theo-
dore Zahariadis, pp. 112-22. IOS Press, 2009.
Gasson, M.N. “Human ICT Implants: From Restorative Application to Human En-
hancement.” In Human ICT Implants: Technical, Legal and Ethical Considerations,
edited by Mark N. Gasson, Eleni Kosta, and Diana M. Bowman, pp. 11-28. Infor-
mation Technology and Law Series 23. T. M. C. Asser Press, 2012.
Geraci, Robert M. Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence,
and Virtual Reality. New York: Oxford University Press, 2010.
Giudici, P. Applied Data Mining: Statistical Methods for Business and Industry. Wiley,
2003.
Gladden, Matthew E. “The Artificial Life-Form as Entrepreneur: Synthetic Organism-
Enterprises and the Reconceptualization of Business.” In Proceedings of the Four-
teenth International Conference on the Synthesis and Simulation of Living Systems,
edited by Hiroki Sayama, John Rieffel, Sebastian Risi, René Doursat and Hod Lip-
son, pp. 417-18. The MIT Press, 2014.
Gladden, Matthew E. “The Diffuse Intelligent Other: An Ontology of Nonlocalizable
Robots as Moral and Legal Actors.” In Social Robots: Boundaries, Potential, Chal-
lenges, edited by Marco Nørskov, pp. 177-98. Farnham: Ashgate, 2016.
Gladden, Matthew E. “Neural Implants as Gateways to Digital-Physical Ecosystems
and Posthuman Socioeconomic Interaction.” In Digital Ecosystems: Society in the
Digital Age, edited by Łukasz Jonak, Natalia Juchniewicz, and Renata Włoch, pp.
85-98. Warsaw: Digital Economy Lab, University of Warsaw, 2016.
Gladden, Matthew E. “Utopias and Dystopias as Cybernetic Information Systems: Envisioning the Posthuman Neuropolity.” Creatio Fantastica no. 3 (50) (2015).
Gockley, Rachel, Allison Bruce, Jodi Forlizzi, Marek Michalowski, Anne Mundell,
Stephanie Rosenthal, Brennan Sellner, Reid Simmons, Kevin Snipes, Alan C.
Schultz, and Jue Wang. “Designing Robots for Long-Term Social Interaction.” In
2005 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS
2005), pp. 2199-2204. 2005.
Graham, Stephen. “Imagining Urban Warfare: Urbanization and U.S. Military Tech-
noscience.” In War, Citizenship, Territory, edited by Deborah Cowen and Emily
Gilbert. New York: Routledge, 2008.
Hanson, R. “If uploads come first: The crack of a future dawn.” Extropy 6, no. 2
(1994): 10-15.
Haraway, Donna. Simians, Cyborgs, and Women: The Reinvention of Nature. New
York: Routledge, 1991.
Harrison, Albert A., and Alan C. Elms. “Psychology and the search for extraterres-
trial intelligence.” Behavioral Science 35, no. 3 (1990): 207-18.
Hayles, N. Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press, 1999.
Hazen, Robert. “What is life?” New Scientist 192, no. 2578 (2006): 46-51.
Heim, Michael. The Metaphysics of Virtual Reality. New York: Oxford University
Press, 1993.
Herbrechter, Stefan. Posthumanism: A Critical Analysis. London: Bloomsbury, 2013.
[Kindle edition.]
Heylighen, Francis. “The Global Brain as a New Utopia.” In Renaissance der Utopie.
Zukunftsfiguren des 21. Jahrhunderts, edited by R. Maresch and F. Rötzer. Frank-
furt: Suhrkamp, 2002.
Intelligent Production Machines and Systems, edited by Duc T. Pham, Eldaw E. Eldu-
khri, and Anthony J. Soroka. Amsterdam: Elsevier, 2006.
Kanda, Takayuki, and Hiroshi Ishiguro. Human-Robot Interaction in Social Robotics.
Boca Raton: CRC Press, 2013.
Kedzior, Richard. How Digital Worlds Become Material: An Ethnographic and
Netnographic Investigation in Second Life. Economics and Society: Publications of
the Hanken School of Economics Nr. 281. Helsinki: Hanken School of Economics,
2014.
Kelly, Kevin. “The Landscape of Possible Intelligences.” The Technium, September 10,
2008. http://kk.org/thetechnium/the-landscape-o/. Accessed January 25, 2016.
Kelly, Kevin. Out of Control: The New Biology of Machines, Social Systems and the Eco-
nomic World. Basic Books, 1994.
Kelly, Kevin. “A Taxonomy of Minds.” The Technium, February 15, 2007.
http://kk.org/thetechnium/a-taxonomy-of-m/. Accessed January 25, 2016.
Khushf, George. “The use of emergent technologies for enhancing human perfor-
mance: Are we prepared to address the ethical and policy issues.” Public Policy
and Practice 4, no. 2 (2005): 1-17.
Koene, Randal A. “Embracing Competitive Balance: The Case for Substrate-Inde-
pendent Minds and Whole Brain Emulation.” In Singularity Hypotheses, edited by
Amnon H. Eden, James H. Moor, Johnny H. Søraker, and Eric Steinhart, pp. 241-
67. The Frontiers Collection. Springer Berlin Heidelberg, 2012.
Koltko-Rivera, Mark E. “The potential societal impact of virtual reality.” Advances in
virtual environments technology: Musings on design, evaluation, and applications 9
(2005).
Kraemer, Felicitas. “Me, Myself and My Brain Implant: Deep Brain Stimulation Raises
Questions of Personal Authenticity and Alienation.” Neuroethics 6, no. 3 (May 12,
2011): 483-97. doi:10.1007/s12152-011-9115-7.
Krishnan, Armin. “Enhanced Warfighters as Private Military Contractors.” In Super
Soldiers: The Ethical, Legal and Social Implications, edited by Jai Galliott and Mi-
anna Lotz. London: Routledge, 2015.
Magretta, Joan. “Why Business Models Matter.” Harvard Business Review 80, no. 5
(2002): 86-92.
Maguire, Gerald Q., and Ellen M. McGee. “Implantable brain chips? Time for de-
bate.” Hastings Center Report 29, no. 1 (1999): 7-13.
Mara, Andrew, and Byron Hawk. “Posthuman rhetorics and technical communica-
tion.” Technical Communication Quarterly 19, no. 1 (2009): 1-10.
McGee, E.M. “Bioelectronics and Implanted Devices.” In Medical Enhancement and
Posthumanity, edited by Bert Gordijn and Ruth Chadwick, pp. 207-24. The Inter-
national Library of Ethics, Law and Technology 2. Springer Netherlands, 2008.
McGrath, Rita Gunther. The End of Competitive Advantage: How to Keep Your Strategy
Moving as Fast as Your Business. Boston: Harvard Business Review Press, 2013.
McIndoe, Robert S. “Health Kiosk Technologies.” In Ethical Issues and Security Monitoring Trends in Global Healthcare: Technological Advancements, edited by Steven A. Brown and Mary Brown, pp. 66-71. Hershey: Medical Information Science Reference, 2010.
Miah, Andy. “A Critical History of Posthumanism.” In Medical Enhancement and
Posthumanity, edited by Bert Gordijn and Ruth Chadwick, pp. 71-94. The Interna-
tional Library of Ethics, Law and Technology 2. Springer Netherlands, 2008.
Miller, Jr., Gerald Alva. “Conclusion: Beyond the Human: Ontogenesis, Technology,
and the Posthuman in Kubrick and Clarke’s 2001.” In Exploring the Limits of the
Human through Science Fiction, pp. 163-90. American Literature Readings in the
21st Century. Palgrave Macmillan US, 2012.
Moravec, Hans. Mind Children: The Future of Robot and Human Intelligence. Cam-
bridge: Harvard University Press, 1990.
Murphy, Robin. Introduction to AI Robotics. Cambridge, MA: The MIT Press, 2000.
Nadler, David, and Michael Tushman. Competing by Design: The Power of Organiza-
tional Architecture. Oxford University Press, 1997. [Kindle edition.]
Nazario, Marina. “I went to Best Buy and encountered a robot named Chloe – and
now I’m convinced she’s the future of retail.” Business Insider, October 23, 2015.
http://www.businessinsider.com/what-its-like-to-use-best-buys-robot-2015-10.
Accessed May 20, 2016.
Nouvel, Pascal. “A Scale and a Paradigmatic Framework for Human Enhancement.”
In Inquiring into Human Enhancement, edited by Simone Bateman, Jean Gayon,
Sylvie Allouche, Jérôme Goffette, and Michela Marzano, pp. 103-18. Palgrave
Macmillan UK, 2015.
Panno, Joseph. Gene Therapy: Treating Disease by Repairing Genes. New York: Facts
on File, 2005.
Pearce, David. “The Biointelligence Explosion.” In Singularity Hypotheses, edited by
A.H. Eden, J.H. Moor, J.H. Søraker, and E. Steinhart, pp. 199-238. The Frontiers
Collection. Berlin/Heidelberg: Springer, 2012.
Perez-Marin, D., and I. Pascual-Nieto. Conversational Agents and Natural Language
Interaction: Techniques and Effective Practices. Hershey, PA: IGI Global, 2011.
Perlberg, James. Industrial Robotics. Boston: Cengage Learning, 2016.
Pinedo, Michael L. Scheduling: Theory, Algorithms, and Systems, fourth edition. New
York: Springer Science+Business Media, 2012.
Porterfield, Andrew. “Be Aware of Your Inner Zombie.” Engineering & Science (Fall 2010): 14-19.
Posthumanist Shakespeares, edited by Stefan Herbrechter and Ivan Callus. Palgrave
Shakespeare Studies. Palgrave Macmillan UK, 2012.
Proudfoot, Diane. “Software Immortals: Science or Faith?” In Singularity Hypotheses,
edited by Amnon H. Eden, James H. Moor, Johnny H. Søraker, and Eric Steinhart,
pp. 367-92. The Frontiers Collection. Springer Berlin Heidelberg, 2012.
Provost, Foster, and Tom Fawcett. Data Science for Business. Sebastopol, CA: O’Reilly
Media, Inc., 2013.
Roden, David. Posthuman Life: Philosophy at the Edge of the Human. Abingdon:
Routledge, 2014.
Sachs, Jeffrey D., Seth G. Benzell, and Guillermo LaGarda. “Robots: Curse or Bless-
ing? A Basic Framework.” NBER Working Papers Series, Working Paper 21091.
Cambridge, MA: National Bureau of Economic Research, 2015.
Sarkar, Prabhat Rainjan. “Neohumanism Is the Ultimate Shelter (Discourse 11).” In
The Liberation of Intellect: Neohumanism. Kolkata: Ananda Marga Publications,
1982.
Schacht, Kurt N. “The Buzz about Robo-Advisers.” CFA Institute Magazine 26, no. 5
(2015): 49.
Scopino, G. “Do Automated Trading Systems Dream of Manipulating the Price of Fu-
tures Contracts? Policing Markets for Improper Trading Practices by Algorithmic
Robots.” Florida Law Review, vol. 67 (2015): 221-93.
Social Robots and the Future of Social Relations, edited by Johanna Seibt, Raul Hakli,
and Marco Nørskov. Amsterdam: IOS Press, 2014.
Social Robots from a Human Perspective, edited by Jane Vincent, Sakari Taipale, Bar-
tolomeo Sapio, Giuseppe Lugano, and Leopoldina Fortunati. Springer Interna-
tional Publishing, 2015.
Social Robots: Boundaries, Potential, Challenges, edited by Marco Nørskov. Farnham:
Ashgate Publishing, 2016.
Soussou, Walid V., and Theodore W. Berger. “Cognitive and Emotional Neuropros-
theses.” In Brain-Computer Interfaces, pp. 109-23. Springer Netherlands, 2008.
Theoretical Foundations of Artificial General Intelligence, edited by Pei Wang and Ben
Goertzel. Paris: Atlantis Press, 2012.
Tomas, David. “Feedback and Cybernetics: Reimaging the Body in the Age of the Cy-
borg.” In Cyberspace, Cyberbodies, Cyberpunk: Cultures of Technological Embodi-
ment, edited by Mike Featherstone and Roger Burrows, pp. 21-43. London: SAGE
Publications, 1995.
Turner, Matt. “The computers have won, and Wall Street will never be the same.” Business Insider, May 10, 2016. http://www.businessinsider.com/quant-funds-dominate-hedge-fund-rich-list-2016-5. Accessed June 5, 2016.
Ullrich, Günter. Automated Guided Vehicle Systems: A Primer with Practical Applica-
tions, translated by Paul A. Kachur. Springer Berlin Heidelberg, 2015.
Ummat, Ajay, Atul Dubey, and Constantinos Mavroidis. “Bionanorobotics: A Field Inspired by Nature.” In Biomimetics: Biologically Inspired Technologies, edited by Yoseph Bar-Cohen, pp. 201-26. Boca Raton: CRC Press, 2005.
Van den Berg, Bibi. “Pieces of Me: On Identity and Information and Communications
Technology Implants.” In Human ICT Implants: Technical, Legal and Ethical Con-
siderations, edited by Mark N. Gasson, Eleni Kosta, and Diana M. Bowman, pp.
159-73. Information Technology and Law Series 23. T. M. C. Asser Press, 2012.
Warkentin, Merrill, Vijayan Sugumaran, and Robert Sainsbury. “The Role of Intelli-
gent Agents and Data Mining in Electronic Partnership Management.” Expert Sys-
tems with Applications 39, no. 18 (2012): 13277-88.
Weber, Max. Economy and Society: An Outline of Interpretive Sociology. New York:
Bedminster Press, 1968.
Yampolskiy, Roman V. “The Universe of Minds.” arXiv preprint, arXiv:1410.0369
[cs.AI], October 1, 2014. http://arxiv.org/abs/1410.0369. Accessed January 25, 2016.
Yonck, Richard. “Toward a standard metric of machine intelligence.” World Future
Review 4, no. 2 (2012): 61-70.
Part I
Creating the Human-Synthetic
Organization: Background and
Theoretical Foundations
Chapter One
A Typology of Posthumanism:
A Framework for Differentiating Analytic,
Synthetic, Theoretical, and Practical
Posthumanisms[1]
Abstract. The term ‘posthumanism’ has been employed to describe a diverse array of phenomena ranging from academic disciplines and artistic movements to political advocacy campaigns and the development of commercial technologies. Such phenomena differ widely in their subject matter, purpose, and methodology, raising the question of whether it is possible to fashion a coherent definition of posthumanism that encompasses all phenomena thus labelled. In this text, we seek to bring greater clarity to this discussion by formulating a novel conceptual framework for classifying existing and potential forms of posthumanism. The framework asserts that a given form of posthumanism can be classified: 1) either as an analytic posthumanism that understands ‘posthumanity’ as a sociotechnological reality that already exists in the contemporary world or as a synthetic posthumanism that understands ‘posthumanity’ as a collection of hypothetical future entities whose development can be intentionally realized or prevented; and 2) either as a theoretical posthumanism that primarily seeks to develop new knowledge or as a practical posthumanism that seeks to bring about some social, political, economic, or technological change. By arranging these two characteristics as orthogonal axes, we obtain a matrix that categorizes a form of posthumanism into one of four quadrants or as a hybrid posthumanism spanning all quadrants. It is suggested that the five resulting types can be understood roughly as posthumanisms of critique, imagination, conversion, control, and production.

We then employ this framework to classify a wide variety of posthumanisms, such as critical, cultural, philosophical, sociopolitical, and popular (or ‘commercial’) posthumanism; science fiction; techno-idealism; metahumanism; neohumanism; antihumanism; prehumanism; feminist new materialism; the posthumanities; biopolitical posthumanism, including bioconservatism and transhumanism (with specialized objective and instrumental typologies offered for classifying forms of transhumanism); and organizational posthumanism. Of particular interest for our research is the classification of organizational posthumanism as a hybrid posthumanism combining analytic, synthetic, theoretical, and practical aspects. We argue that the framework proposed in this text generates a typology that is flexible enough to encompass the full range of posthumanisms while being discriminating enough to order posthumanisms into types that reveal new insights about their nature and dynamics.

[1] This chapter was originally published in Gladden, Matthew E., Sapient Circuits and Digitalized Flesh: The Organization as Locus of Technological Posthumanization, pp. 31-91. Indianapolis: Defragmenter Media, 2016.
I. Introduction
Terms such as ‘posthumanism,’ ‘posthumanity,’ and ‘the posthuman’ are being used to describe an increasingly wide and bewildering array of phenomena in both specialized scholarly and broader popular contexts. Spheres of human activity that have been described as ‘posthumanist’ include academic disciplines,[2] artistic movements,[3] spiritual movements,[4] commercial research and development programs designed to engineer particular new technologies,[5] works of science fiction,[6] and campaigns advocating specific legislative or regulatory action.[7]

[2] For examples, see the descriptions of critical, cultural, and philosophical posthumanism and the posthumanities later in this text.
[3] Examples include the works of performance art created by Del Val. See Del Val et al., “Interview on the Metahumanist Manifesto with Jaime del Val and Stefan Lorenz Sorgner” (2011).
[4] An instance is the form of neohumanism developed by Sarkar. See Sarkar, “Neohumanism Is the Ultimate Shelter (Discourse 11)” (1982).
[5] For examples of the term ‘posthuman’ being used to describe specific technologies that are being developed by DARPA and other military research and development agencies, see, e.g., Coker, “Biotechnology and War: The New Challenge” (2004); Graham, “Imagining Urban Warfare: Urbanization and U.S. Military Technoscience” (2008), p. 36; and Krishnan, “Enhanced Warfighters as Private Military Contractors” (2015).
[6] Posthumanist aspects of science fiction are discussed, for example, in Hayles, How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics (1999); Cyberculture, Cyborgs and Science Fiction: Consciousness and the Posthuman, edited by Haney (2006); and Goicoechea, “The Posthuman Ethos in Cyberpunk Science Fiction” (2008).
[7] Examples include some of the legislative and regulatory approaches proposed in Fukuyama, Our Posthuman Future: Consequences of the Biotechnology Revolution (2002), and Gray, Cyborg Citizen: Politics in the Posthuman Age (2002).
Running through many of these ‘posthumanisms’ is the common thread of
emerging technologies relating to neurocybernetic augmentation, genetic en-
gineering, virtual reality, nanotechnology, artificial life, artificial intelligence,
and social robotics which – it is supposed – are challenging, destabilizing, or
transforming our understanding of what it means to be ‘human.’ And yet
when posthumanist interpretations are also being offered for subjects like the Bible,[8] medieval alchemical texts,[9] Shakespeare,[10] and 1930s zombie fiction,[11] it
becomes apparent that directly equating posthumanism with an attitude to-
ward futuristic technologies is overly simplistic and even misleading.
And not only do different manifestations of posthumanism differ widely
from one another in their subject matter; even when two forms of posthuman-
ism consider the same object, they often oppose one another in their aims,
methodologies, and conclusions. For example, both transhumanists and bio-
conservatives attempt to foresee the extent to which genetic engineering will
allow the capacities of future human beings to be radically transformed; while
transhumanists conclude that the development of such technologies must be
pursued as a natural next step in the evolution of humanity, bioconservatives
conclude that pursuit of such technologies must be blocked in order to pre-
serve the integrity of the human species and the possibility of a politically and
economically just society.[12]
This mélange of meanings for the term ‘posthumanism’ raises important
questions. First, is it possible to develop a definition of posthumanism that
covers all of its uses? And second, assuming that this is theoretically possible,
would it be desirable? Or is it better to acknowledge that ‘posthumanism’ has
become too fragmented to possess a single coherent definition and that it is
better to develop separate definitions for the diverse phenomena which share
that appellation?
In this text, we seek to contribute to this debate by developing a conceptual
framework that presents one approach to clarifying the key characteristics of
different types of posthumanism and the relationships between them. Alt-
hough the structure and details of the proposed framework are novel, such a
framework can be understood as an appraisal, synthesis, and elaboration of
the work of thinkers such as Ferrando, Herbrechter, Birnbacher, Miah, Miller, and others who have not simply carried out posthumanist reflection on topics like genetic engineering or science fiction but have instead analyzed the nature of posthumanism itself and attempted to forge some conceptual order amidst the landscape of many conflicting ‘posthumanisms.’

[8] See, e.g., The Bible and Posthumanism, edited by Koosed (2014).
[9] See, e.g., Smith, Genetic Gold: The Post-human Homunculus in Alchemical and Visual Texts (2009).
[10] Examples include the texts collected in Posthumanist Shakespeares, edited by Herbrechter & Callus (2012).
[11] Instances of this can be found in Better Off Dead: The Evolution of the Zombie as Post-Human, edited by Christie & Lauro (2011).
[12] These issues are explored in more detail in the discussion of biopolitical posthumanism and bioconservatism later in this text.
Rather than presenting a simple catalogue of posthumanisms, the frame-
work developed in this text proposes that a given form of posthumanism can
be categorized on the basis of a pair of factors: its understanding of ‘posthu-
manity’ and the role or purpose for which the posthumanism has been devel-
oped. In this way, a posthumanism can be classified either as an analytic
posthumanism that understands posthumanity as a sociotechnological reality
that already exists in the contemporary world or as a synthetic posthumanism
that understands posthumanity as a collection of hypothetical future entities
whose development can be intentionally realized or prevented. Simultane-
ously, it can be classified either as a theoretical posthumanism that primarily
seeks to develop new knowledge or as a practical posthumanism that primarily
seeks to bring about some social, political, economic, or technological change.
By combining these factors, a two-dimensional typology is created that iden-
tifies a form of posthumanism with one of four quadrants or as a hybrid
posthumanism that spans all quadrants. After presenting this tool, the majority of this text will be devoted to employing it to classify a wide variety of posthumanisms that have been identified in the literature.
II. Established Definitions of Posthumanism
Before formulating our typology of posthumanism, it is useful to explore
the ways in which the concept of posthumanism is currently understood.
A multiplicity of posthumanisms. The term ‘posthuman’ has been used by
different authors to represent very different concepts;[13] while this has enriched the development of posthumanism, it has also introduced confusion.[14]
For example, Miller notes that the term has been given a variety of meanings
by theorists operating in the natural sciences; cybernetics; epistemology; on-
tology; feminist studies; film, literary, and cultural studies; animal studies; and
ecocriticism.[15] Herbrechter observes that the ‘post-’ in ‘posthumanism’ is not only ambiguous but even “radically open” in its meaning.[16]
For example, the
word can be understood either as ‘post-humanism,’ a critical response to and
deconstructive working-through of the assumptions of humanism, or as
‘posthuman-ism,’ a philosophy of future engineered beings whose capacities are expected to surpass those of contemporary human beings.[17]

[13] Bostrom, “Why I Want to Be a Posthuman When I Grow Up” (2008), p. 107.
[14] See Ferrando, “Posthumanism, Transhumanism, Antihumanism, Metahumanism, and New Materialisms: Differences and Relations” (2013), p. 26.
[15] Miller, “Conclusion: Beyond the Human: Ontogenesis, Technology, and the Posthuman in Kubrick and Clarke’s 2001” (2012), p. 163.
[16] Herbrechter, Posthumanism: A Critical Analysis (2013), p. 69.
Indeed, Birnbacher suggests that the term ‘posthumanity’ and related idea of ‘transhumanism’ have been utilized by so many different thinkers in such widely divergent fashions that they can be better understood “as slogans rather than as well-defined concepts.”[18]
Posthumanist terminology. In this text, we will refer often to the interre-
lated but distinct notions of ‘posthumanization,’ ‘posthumanity,’ ‘posthuman-
ism,’ and the ‘posthuman.’ Because each of these terms has been used to rep-
resent multiple concepts, it is difficult to offer authoritative definitions for
them. Nevertheless, they can be broadly differentiated:
Posthumanization can be understood as a process by which society
comes to include at least some intelligent personal subjects that are
not natural biological human beings and which leads to a nonanthro-
pocentric understanding of reality. At present, posthumanization of-
ten occurs as a result of the technologization of human beings, which
is spurred by phenomena such as our increasing physical integration
with electronic systems, our expanding interaction with and depend-
ence on robots and artificial intelligences, our growing immersion in
virtual worlds, and the use of genetic engineering to design human
beings as if they were consumer products.[19] However, processes of
posthumanization do not inherently require the use of modern tech-
nology: works of mythology or literature that present quasi-human
figures such as monsters, ghosts, and semidivine heroes can advance
the process of posthumanization by challenging the boundaries of our
concept of humanity and, in some sense, incorporating those figures
into the structures and dynamics of society.
20
Posthumanity refers either to a collection of intelligent beings – whether human, synthetic, or hybrid – that have been created or affected by a process of posthumanization or to the broader sociotechnological reality within which such beings exist.
Posthumanism is a coherent conceptual framework that takes the phenomenon of posthumanization or posthumanity as its object; it may be developed as part of an academic discipline, artistic or spiritual movement, commercial venture, work of fiction, or form of advocacy, among other possible manifestations.
17. Herbrechter (2013), p. 16.
18. Birnbacher, “Posthumanity, Transhumanism and Human Nature” (2008), p. 96.
19. The relationship of posthumanism to the commercialization of the human entity is discussed in Herbrechter (2013), pp. 42, 150-52.
20. For the role of such figures in nontechnological posthumanization, see, e.g., Herbrechter (2013), pp. 2-3, 106.
‘Posthuman’ can refer to any of the above: a process (posthumanization), collection of entities (posthumanity), or body of thought (posthumanism).
Tracing the origins of posthumanism. Some identify the birth of posthumanism as an explicit conceptual system with Wiener’s formulation of cybernetics in the 1940s; others suggest that posthumanism as an explicit discipline only appeared with Haraway’s analysis of cyborgs and the dissolution of human-machine boundaries in the 1990s.[21] While ongoing developments in robotics, artificial intelligence, biocybernetics, and genetic engineering are lending new urgency to questions surrounding posthumanism, Herbrechter argues that the phenomenon of posthumanism is at least as old as that of post-Enlightenment humanism – even if it has only recently been explicitly named.[22] The fact that the term ‘posthumanism’ is used to refer to such a diverse array of intellectual phenomena means that scholars can date its origins variously to the Renaissance, post-Enlightenment era, 1940s, or 1990s, depending on exactly which ‘posthumanism’ is being considered.
Attempts at defining posthumanism generically. Ideally, it would be possible to formulate a generic definition of ‘posthumanism’ broad enough to cover all such intellectual frameworks. And, indeed, scholars have attempted to identify elements that are shared across all varieties of posthumanism. For example, Miller contends that various strains of posthumanism agree that:

The posthuman subject is a multiple subject, not a unified one, and she or he (a distinction that also gets blurred in posthuman-ism) is not separate from his/her environment. Technologies become extensions of the self, and humans become only one type of individual in a vast ecosystem that includes digital as well as natural environmental forces. In other words, posthumanism is partly about leaving behind the old notions of liberal humanism. […] But it also begins to gesture toward a much more radical state, a state beyond the current human form.[23]
According to this view, the heart of posthumanism is a ‘post-anthropocentric’[24] perspective that looks beyond traditional human beings to identify other sources of intelligence, agency, subjectivity, and meaning within the world. Emphasizing this fact, Ferrando states that:

Posthumanism is often defined as a post-humanism and a post-anthropocentrism: it is “post” to the concept of the human and to the historical occurrence of humanism, both based […] on hierarchical social constructs and human-centric assumptions.[25]

21. Such perspectives on the genesis of posthumanism are offered, e.g., in Herbrechter (2013), p. 41, and its discussion of Gane, “Posthuman” (2006).
22. Herbrechter (2013), p. 77.
23. Miller (2012), p. 164.
24. See Herbrechter (2013), pp. 2-3.
Thus by way of offering a preliminary definition, Herbrechter suggests that posthumanism in its most general sense is “the cultural malaise or euphoria that is caused by the feeling that arises once you start taking the idea of ‘post-anthropocentrism’ seriously.”[26] Similarly, Birnbacher suggests that the different forms of posthumanism are united in studying already existing or potential future ‘posthumans’ whose nature is not constrained by human nature as previously understood and who lack at least some key characteristics that have historically been considered typical of the human species.[27]
Miah, meanwhile, finds “a range of posthumanisms” that are united by the fact that they “challenge the idea that humanness is a fixed concept.”[28] However, posthumanism’s challenge to the concept of the ‘human’ differs from the more nihilistic attacks waged by postmodernism: in their own unique ways – whether subtly or wholeheartedly – various kinds of posthumanism are willing to entertain the idea of restoring in an altered post-anthropocentric form some of the ‘grand narratives’ about humanity, agency, history, and other phenomena that had been wholly rejected by postmodernism.[29]
Problems with a generic definition of posthumanism. While such general definitions offer a useful starting point, they are hampered by the fact that ‘posthumanisms’ differ markedly with regard to their origins, purpose, and methodology. For example, as we have noted, some thinkers argue that technological progress is an essential aspect of posthumanism that will inevitably someday be harnessed to engineer a superior posthumanity.[30] Other thinkers argue that technology is not an inherent element of posthumanism at all and that posthumanity is a conceptual array of interrelated human, quasi-human, and nonhuman beings (such as ghosts, monsters, aliens, and robots) that have held a place within the human imagination for hundreds or thousands of years. Any definition of ‘posthumanism’ that is broad enough to describe all such conflicting perspectives may be so vague as to be of little practical value.
Existing frameworks for categorizing posthumanisms. Scholars have proposed a range of conceptual frameworks for classifying the many forms of posthumanism. For example, Miah distinguishes between the three different phenomena of biopolitical, cultural, and philosophical posthumanism.[31] Ferrando distinguishes three forms of posthumanism per se (i.e., critical, cultural, and philosophical posthumanism), while noting that the word ‘posthuman’ is also used more broadly to include related phenomena such as transhumanism, new materialism, antihumanism, metahumanism, and the posthumanities.[32]

25. Ferrando (2013), p. 29.
26. Herbrechter (2013), p. 3.
27. Birnbacher (2008), p. 104.
28. Miah, “A Critical History of Posthumanism” (2008), p. 83.
29. Differences between postmodernism and posthumanism can be observed, e.g., in Herbrechter (2013), p. 23.
30. For such broadly transhumanist perspectives, see, e.g., Bostrom (2008) and Kurzweil, The Singularity is Near: When Humans Transcend Biology (2005).
Finally, drawing on Rosenau, Herbrechter distinguishes two different strains of posthumanism. On one side is an affirmative posthumanism that includes ‘technoeuphorians’ (such as transhumanists) who wholeheartedly embrace posthumanizing technologies and ‘technocultural pragmatists’ who accept that posthumanizing technological change is inevitable and who attempt to strengthen its positive impacts while ameliorating any detrimental side-effects. On the other side is a skeptical posthumanism that includes ‘catastrophists’ (such as bioconservatives) who attempt to forestall the development of posthumanizing technology due to its perceived danger and ‘critical deconstructive posthumanists’ (such as Herbrechter) who accept that posthumanizing technological change is occurring and who are primarily interested not in identifying its potentially negative biological or social impacts but in analyzing the theoretical weaknesses, biases, and naïvety displayed by those who zealously advocate such technologization of humankind.[33]
III. A Proposed Two-dimensional Typology of Posthumanism
While such existing schemas for classifying posthumanisms offer valuable
insights, we contend that it would be useful to possess a more comprehensive
and systematic framework developed for this purpose. To that end, we would
suggest that a given form of posthumanism can be classified in two ways:
1) By its understanding of posthumanity. A form of posthumanism can be categorized either as an analytic posthumanism that understands posthumanity as a sociotechnological reality that already exists in the contemporary world and which needs to be analyzed or as a synthetic posthumanism that understands posthumanity as a collection of hypothetical future entities whose development can be either intentionally realized or intentionally prevented, depending on whether or not human society chooses to research and deploy certain transformative technologies.

2) By the purpose or role for which it was developed. A form of posthumanism can be categorized either as a theoretical posthumanism that primarily seeks to develop new knowledge and understanding or as a practical posthumanism that primarily seeks to bring about some social, political, economic, or technological change in the real world.

31. See Miah (2008).
32. Ferrando (2013), p. 26.
33. For this dichotomy of affirmative and skeptical perspectives, see Herbrechter (2013), pp. 23-24, and its analysis of Rosenau, Post-Modernism and the Social Sciences: Insights, Inroads, and Intrusions (1992).
By arranging these two characteristics as orthogonal axes, a matrix is obtained that categorizes a form of posthumanism into one of four quadrants or as a hybrid that spans all quadrants. Figure 1 depicts this matrix along with our proposed classification of numerous forms of posthumanism that will be investigated within this text. We can now discuss these two axes in more detail.
Analytic versus synthetic posthumanism. Analytic posthumanisms define ‘posthumanity’ as a sort of sociotechnological reality that already exists in the contemporary world and which calls out to be better understood. Such posthumanisms typically display a strong orientation toward the present and the past; they do not generally focus on the future, insofar as the exact form that the future will take has not yet become clear to us and thus cannot yet be the object of rigorous analysis.
Synthetic posthumanisms, on the other hand, define ‘posthumanity’ as a set of hypothetical future entities[34] (such as full-body cyborgs or artificial general intelligences) whose capacities differ from – and typically surpass – those of natural biological human beings and whose creation can either be intentionally brought about or intentionally blocked, depending on whether humanity decides to develop and implement certain transformative technologies such as those relating to genetic engineering, neuroprosthetics, artificial intelligence, or virtual reality. Such posthumanisms generally have a strong future orientation; they rarely give detailed attention to events of the distant past, and they explore power structures or trends of the current day only insofar as these offer some insight into how future processes of posthumanization might be directed.
34. An exception to this definition would be prehumanism, a form of synthetic theoretical posthumanism that is similar to science fiction but which imagines the characteristics of quasi-human beings in a hypothetical distant past rather than in the far future. While the directionality of the temporal reference-points is reversed in comparison to that of futurological science fiction, the (implicit or explicit) contrast of contemporary humanity with the intelligent beings of a chronologically distant but causally connected world remains intact. See the discussion of prehumanism later in this text.
Fig. 1: Our proposed two-dimensional typology of posthumanism, which classifies a form of posthumanism based on whether it understands posthumanity as a sociotechnological reality already existing in the contemporary world (‘analytic’) or as a set of hypothetical future entities whose capacities differ from those of natural biological human beings (‘synthetic’) and whether its purpose is primarily to expand the knowledge possessed by humanity (‘theoretical’) or to produce some specific political, economic, social, cultural, or technological change within the world (‘practical’). Classifications are suggested for numerous forms of posthumanism.
Theoretical versus practical posthumanism. Posthumanisms can also be classified according to the purpose for which they were developed or the role that they play.[35] Theoretical posthumanisms are those that mainly seek to enhance our understanding of issues and to expand the knowledge possessed by humanity – not primarily for the sake of effecting some specific change within the world but for the sake of obtaining a deeper, richer, more accurate, and more sophisticated understanding of human beings and the world in which we exist.

35. The distinction between theoretical and practical posthumanisms could be understood, for example, in light of the Aristotelian division of human activities into theoria, poiesis, and praxis.
Practical posthumanisms, on the other hand, are interested primarily in producing some specific political, economic, cultural, social, or technological change. While theoretical posthumanism often takes the form of analyses, critiques, or thought experiments, practical posthumanism may take the form of efforts to ensure or block the approval of proposed treaties, legislation, or regulations; secure or cancel funding for particular military, educational, or social programs; develop and test new technologies; design, produce, and market new kinds of goods or services; or influence the public to vote, spend their time and money, interact socially, tolerate particular corporate or governmental actions, or otherwise behave in specific ways. Practical posthumanisms may thus include elements of advocacy, engineering, and entrepreneurship.
Hybrid posthumanisms that combine all four aspects. There are at least three kinds of posthumanism which, we would argue, are simultaneously analytic, synthetic, theoretical, and practical. These will be explored in more depth later in this text. The first of these hybrid posthumanisms is the form of metahumanism formulated by Sorgner and Del Val.[36] Their metahumanist program possesses a strong theoretical component, insofar as it is grounded in and seeks to advance critiques developed by thinkers such as Nietzsche and Deleuze; however, it also displays a strong practical component in that it is geared toward generating works of performance art and other concrete products. Similarly, their metahumanism is analytic insofar as it reflects on the ‘metabodies’ of human beings as they exist today and synthetic insofar as it recognizes that new kinds of metabodies will be created in the future, largely through the ongoing technologization of humankind.
The second hybrid posthumanism is sociopolitical posthumanism. This is manifested, for example, in legal scholars’ efforts to update legal systems to reflect emerging deanthropocentrized realities such as the growing ability of robots to autonomously make complex ethical and practical decisions that impact the lives of human beings.[37] Such work is theoretical insofar as it flows from a sophisticated theory of law and practical insofar as it is geared toward reshaping real-world legal systems. Similarly, it is analytic insofar as it investigates the effects of posthumanization that are already reflected in the world today and synthetic insofar as it seeks to anticipate and account for different posthumanities that might appear in the future.

35. (cont.) Theoretical posthumanism is a kind of theoria, while practical posthumanism comprises praxis (as in the case of posthumanist political movements) and poiesis (as in the case of some posthumanist artistic movements).
36. They describe their form of metahumanism in Del Val & Sorgner, “A Metahumanist Manifesto” (2011).
37. A thoughtful example of this is found in Calverley, “Imagining a non-biological machine as a legal person” (2008).
Finally, the form of organizational posthumanism formulated later in this text also combines both analytic and synthetic as well as theoretical and practical aspects. Organizational posthumanism is theoretical insofar as it seeks to understand the ways in which the nature of organizations is being transformed by the technologization and posthumanization of our world and practical insofar as it seeks to aid management practitioners in creating and maintaining viable organizations within that posthumanized context. It is analytic insofar as it recognizes post-anthropocentric phenomena (such as the growing use of AI, social robotics, and virtualized interaction) that are already present within many organizations and synthetic insofar as it believes that such post-anthropocentrizing trends will continue to accelerate and will generate organizational impacts that can be shaped through the planning and execution of particular strategies.
The types of posthumanism delineated by our two-dimensional framework are generalizations. The phenomena that can be assigned to any one type may differ significantly from one another; it is thus hazardous to assign a broad-brush description to a type of posthumanism and expect it to apply equally well to all of the posthumanisms included within that type. Nevertheless, as a starting point for further discussion, we would suggest that it is possible to capture the fundamental dynamic of each type of posthumanism.
For example, analytic theoretical posthumanisms might collectively be understood as manifesting a ‘posthumanism of critique’ that employs posthumanist methodologies to identify hidden anthropocentric biases and posthumanist aspirations contained within different fields of human activity. Similarly, synthetic theoretical posthumanisms could be seen as exemplifying a ‘posthumanism of imagination’ that creatively envisions hypothetical future posthumanities so that their implications can be explored. Analytic practical posthumanisms manifest a ‘posthumanism of conversion’ aimed at changing hearts and minds and influencing the way in which human beings view the world around themselves. Synthetic practical posthumanisms exemplify a ‘posthumanism of control’ that seeks either to develop new technologies that give individuals control over their own posthumanization or to implement legal or economic controls to govern the development of such technologies. Finally, hybrid posthumanisms that span all four spheres can be understood as examples of a ‘posthumanism of production’ that develops a robust and rigorous theoretical framework that is then employed to successfully generate concrete products or services within the contemporary world. An overview of these five main types of posthumanism is reflected in Figure 2.
Fig. 2: The five types of posthumanism delineated by our two-dimensional model can be understood informally as posthumanisms of critique, imagination, conversion, control, and production.
IV. Classification and Analysis of Individual Forms of Posthumanism
A review of the literature reveals many different phenomena that have
been identified as forms of posthumanism or which more generally have been
described as ‘posthuman’ or ‘posthumanist’ in nature. Below we classify and
analyze many such phenomena utilizing our two-dimensional typology.
A. Analytic Theoretical Posthumanisms: Seeking to Understand the Posthumanized Present
Analytic theoretical posthumanisms can collectively be understood as constituting a ‘posthumanism of critique’ that employs posthumanist methodologies to uncover hidden anthropocentric biases and posthumanist aspirations that are concealed within different fields of human activity. Such forms of analytic theoretical posthumanism include critical posthumanism, cultural posthumanism, the posthumanities (or ‘digital humanities’), feminist new materialism, antihumanism, and some forms of metahumanism and neohumanism. We can consider each of these in more detail.
1. CRITICAL POSTHUMANISM
Critical posthumanism is an academic form of posthumanism developed primarily from within the disciplines of the humanities. It constitutes a form of analytic theoretical posthumanism in that it applies critical methodologies to challenge our contemporary conception of humanity and to spur the development of more appropriate theoretical frameworks. Critical posthumanism does not come ‘after’ humanism in a chronological sense but instead follows from humanism in a conceptual sense; Herbrechter explains this by stating that critical posthumanism “inhabits humanism deconstructively,”[38] critiquing historical binary conceptual oppositions between subject and object, biological and artificial, human and machine, human and animal, nature and nurture, and male and female.[39] Unlike many strains of postmodernism, such critical posthumanism is not nihilistic;[40] it is not about destroying the human subject but about recognizing a whole wealth of subjects that had never before been fully acknowledged or which – because of an absence of the necessary sociotechnological environment – could not previously exist in the real world.[41]
Assimilation of the nonhuman. Critical posthumanism seeks to create an account of the personal subject that is descriptive rather than normative and which does not consider ‘humanity’ as historically (and narrowly) defined but instead addresses a broader universe of entities that includes natural human beings as well as related entities like ghosts, angels, monsters, cyborgs, artificial intelligences, and extraterrestrial beings that have traditionally been considered quasi-human, parahuman, or nonhuman.[42] Critical posthumanism possesses an empathy for such excluded beings in part because it claims that we owe our humanity to them: while some humanists contend that the ‘human being’ is defined first and then all entities that fail to satisfy that definition are excluded as being ‘nonhuman,’ critical posthumanism argues that in reality it was our inherent understanding of the myriad forms of the ‘inhuman’ that first allowed us to define the ‘human’ in opposition to them.[43] In a sense, critical posthumanism is thus nothing new; it is an age-old, nontechnological, deconstructive process that continually challenges our understanding of (and exclusive identification with) the ‘human’ by bringing into our circle of awareness examples of the inhuman and nonhuman.[44] It has existed for as long as monsters, angels, mythic heroes, and the relationship of such entities to human beings have been pondered within works of art, literature, philosophy, and theology.

38. Herbrechter (2013), pp. 6-7.
39. The raising of such challenges to historical binary and dualistic thought is a hallmark of posthumanism. See, e.g., Herbrechter (2013), pp. 79, 90.
40. Regarding the positive aspects of critical posthumanism that distinguish it from more negational forms of postmodernism, see Herbrechter (2013), p. 196.
41. See Herbrechter (2013), p. 198.
42. Regarding the wide spectrum of entities that are important for critical posthumanism, see, e.g.,
Posthumanism with or without technology. In contrast with transhumanism – which is closely identified with particular technologies – critical posthumanism can thus take the form of a ‘posthumanism without technology’[45] that focuses on anthropological, linguistic, or aesthetic questions rather than issues of biomedical engineering. However, as a practical matter, critical posthumanism’s consideration of the ‘nonhuman other’ has taken on a new focus and urgency thanks to the accelerating processes of technologization that are now reshaping humankind. Critical posthumanism does not formulate a critique of technology per se but of the processes of technologization by which technological mechanisms, systems, and attitudes are consolidating their power over all aspects of human life. Critical posthumanism recognizes the fact that human beings are – and have always been – locked in a symbiotic relationship of coevolution with our technology; it analyzes and critiques this process, without condemning or embracing it a priori in the way that biopolitical posthumanism often does.[46]
Diagnosing ‘speciesism.’ Critical posthumanism considers the cases of nonhuman entities as a means of diagnosing what it sees as previously unnoticed forms of ‘speciesism’ or anthropocentric bias that have long permeated human political, economic, scientific, artistic, and religious activity.[47] For example, traditional cultural studies are highly anthropocentric, insofar as they assume that ‘humanity’ (or something closely mimicking it) is required in order for culture to exist; thus animals may have societies, but they do not possess culture. Critical posthumanism, on the other hand, does not assume as a starting point that culture logically requires humanity; indeed, it explicitly rejects this notion.[48] Critical posthumanism accepts the fact that human beings are no longer the only intelligent social actors within the world; we are increasingly only one of many kinds of individuals – both real and virtual, biological and electronic – that populate a rich and complex digital-physical environment and shape it through our interactions.[49] Critical posthumanism thus seeks to identify hidden assumptions that only human beings – and not, for example, social robots or genetically enhanced domesticated animals – are capable of filling particular roles within society or that human activity should be carried out with the sole purpose of benefitting human beings.

42. (cont.) Herbrechter (2013), pp. 2-3, 106.
43. For a discussion of the logical and practical priority of the ‘human’ or ‘nonhuman,’ see Herbrechter (2013), p. 55, and its reflections on Curtis, “The Inhuman” (2006), p. 434.
44. Herbrechter (2013), p. 44.
45. Regarding nontechnological forms of posthumanization, see Herbrechter (2013), p. 157.
46. For a discussion of our symbiotic relationship with technology and critical posthumanism’s attitude toward it, see Herbrechter (2013), pp. 90, 19.
47. Ferrando (2013), p. 29.
A critique of cybernetics, virtualization, and transhumanism. While critical posthumanism appreciates the value of robots and AIs in helping us to better understand the nature of human intelligence and agency, it does not share transhumanism’s zeal for attempting to literally transform human beings into virtualized or robotic entities. Indeed, a major aim of critical posthumanism is to resist the defining of ‘mind’ as a disembodied collection of information in the manner promoted by many forms of transhumanism and some of the more techno-idealistic branches of cybernetics.[50] As envisioned by Haraway, for example, critical posthumanism is not simply an approach bent on destroying traditional anthropocentric presumptions; it also displays a positive element that seeks to formulate a new understanding of human beings as ‘embodied selves.’[51]
Similarly, Hayles foresees a danger that the growing cultural fascination with virtual reality might encourage a false belief that information can exist in a disembodied form; her critical posthumanism thus aims to ensure that processes of posthumanization do not result in the dematerialization of human beings but in our rematerialization – in a recognition that we are networked corporalities, material-digital beings, and not pure information as some transhumanists might claim.[52] Critical posthumanism also challenges transhumanism by devoting attention to questions of power and privilege; Ferrando notes that critical posthumanism explicitly analyzes such issues, while transhumanism is singularly ‘non-critical’ in its lack of interest in the historical development of humanity and its naïve presentation of a generic ‘human being’ that exists without reference to social or economic class, sex, race, ethnicity and nationality, interpersonal relationships, or religion and spirituality.[53]

48. Regarding the conceptual relationship of humanity to culture, see Badmington, “Cultural Studies and the Posthumanities” (2006), p. 270, and its discussion in Herbrechter (2013), p. 174.
49. Miller (2012), p. 164. For a philosophical analysis of posthumanized digital-physical ecosystems and the interdependencies existing among their human and nonhuman actors that advances and refines conventional Actor-Network Theory (ANT), see Kowalewska, “Symbionts and Parasites – Digital Ecosystems” (2016).
50. For critical posthumanism as a challenge to techno-idealism and transhumanism, see Herbrechter (2013), p. 94.
51. Regarding critical posthumanism’s efforts to fashion a positive concept of the embodied self, see Haraway, Simians, Cyborgs, and Women: The Reinvention of Nature (1991), and Herbrechter (2013), pp. 99-100.
Creating a concept of humanity that can endure. It is possible to argue that far from ‘destroying’ the concept of humanity in a postmodernist sense, critical posthumanism is actually aimed at saving the concept of humanity; critical posthumanism accomplishes this by transforming our notion of ‘humanity’ into a broader concept of ‘posthumanity’ that does not require the continued survival of human beings in some mythically pristine, unengineered, untechnologized, and ‘natural’ biological form but which instead welcomes into the family of (post-)humanity a wider range of biological, artificial, and hybrid subjects. According to this view, even if ‘humanity’ in the narrow humanist sense were to someday suffer extinction, a more broadly understood ‘posthumanity’ would be likely to survive. Indeed, some have suggested that by insisting on a definition of humanity that is so rigidly anthropocentric, it is humanism itself that has created the risk of the eventual ‘dehumanization’ of the universe through the elimination of biological humankind. Critical posthumanism might thus be understood as a sort of conceptual lifeboat that opens the door to the long-term persistence of a world of sapient (if not ‘naturally human’) posthuman persons and subjects.[54]
Humanism, rehumanism, or alterhumanism? Rather than continuing recent postmodernist trends of disparaging humanism, critical posthumanism might be seen as constituting a renaissance of a transformed and deanthropocentrized humanist thought.[55] Indeed, Herbrechter suggests that posthumanism might be understood as a sort of autoimmune response generated by the larger humanistic culture that can serve to liberate contemporary human beings from the more oppressive and problematic aspects of humanism, thereby leading to the first full flowering of true humanism. However, critical posthumanism attempts to counteract the more dehumanizing aspects of posthumanization not through a strategy of nostalgic ‘rehumanization’ that restores classical humanism to an authoritative role but through a form of ‘alterhumanism’ that expands itself to encompass entities and perspectives previously dismissed as inhuman.[56]

52. For the critical posthumanist rejection of an understanding of the human entity as pure information, see Hayles (1999) and its discussion in Herbrechter (2013), pp. 185-86.
53. Ferrando (2013), p. 28.
54. For the notion that humanism may be the true threat to humanity and posthumanism its rescuer, see Herbrechter (2013), pp. 123-24, 187, and its commentary on Hayles (1999), p. 290.
55. Regarding posthumanism as the refinement and fulfillment of humanism, see Herbrechter (2013), p. 106.
Critical posthumanism as a bridge between posthumanisms. Herbrechter’s efforts to fashion a “critical but open-minded posthumanism”[57] are
suggestive of the fact that critical posthumanism is well-positioned to serve as
an impartial mediator and translator between conflicting posthumanist posi-
tions. For example, Herbrechter draws on Thacker’s attempts to describe the
growing informatization of human beings and conversion of the human body
into ‘biomedia’ in a way that is critical but value-neutral and does not inherently support transhumanist or bioconservative positions.[58]
Similarly, Herbrechter argues that critical posthumanism represents a sort
of reversible methodological process that can translate between the two
spheres or levels of the human being as personal subject and human being as
viable system. Taking the human subject as its starting point, critical posthu-
manism can draw on the insights of postmodernism to deconstruct that sub-
ject and move to the atomic realm of processes and relations that constitute
what is referred to as a ‘human being.’ Conversely, by drawing on insights
from cybernetics and systems theory, critical posthumanism can begin with a
collection of discrete processes and relations and correlate them to show how
their interactions create a system that constitutes a human (or posthuman)
subject. Critical posthumanism might thus serve as a bridge between postmodernism and cybernetics.[59]
Posthuman Realism
One form of critical posthumanism sometimes referred to by its own name
is the strain formulated by Hayles known as ‘posthuman realism.’ As de-
scribed above, it emphasizes the embodiment of the human being within a
finite and nonexchangeable biological substrate, which contrasts with techno-
idealist and transhumanist visions of the human mind as a virtualized entity
or collection of disembodied data that can be shifted from one body to another (and between biological and electronic substrates) without imperiling its consciousness or personal identity.[60]

[56] For critical posthumanism’s ability as an ‘alterhumanism’ to critique the detrimental effects of posthumanization without resorting to naïve humanism, see Herbrechter (2013), pp. 76-77, 70.
[57] Herbrechter (2013), p. 171.
[58] For such more or less value-neutral analyses of posthumanization, see Thacker, “What Is Biomedia?” (2003), p. 52, and the discussion of it in Herbrechter (2013), pp. 191-92.
[59] Regarding critical posthumanism as a mediator between postmodernist understandings of the subject and cybernetics, see Herbrechter (2013), pp. 198-99.

Chapter One: A Typology of Posthumanism • 67
2. CULTURAL POSTHUMANISM
Miah places the origins of cultural posthumanism in Posthuman Bodies, ed-
ited by Halberstam and Livingstone in 1995. Other formative figures identified
by Miah include Haraway, Hayles, Badmington, and Graham.[61] As a form of analytic theoretical posthumanism, cultural posthumanism understands
‘posthumanity’ to be a state that already exists within our contemporary
world. It argues that the nature of posthumanity can be diagnosed by applying
the tools of cultural studies to analyze elements of contemporary culture, in-
cluding works of literature, film, television, music, painting, sculpture, archi-
tecture, fashion, computer games, tabletop roleplaying games, and religious
and political speech.
Affinity with critical posthumanism. Some authors treat cultural posthu-
manism and critical posthumanism as though they were the same discipline;
other scholars classify critical posthumanism as a subset of cultural posthu-
manism or vice versa. Indeed, the overlap between cultural and critical posthu-
manism is significant, and many thinkers have worked to advance both forms
of posthumanism. Like critical posthumanism, cultural posthumanism can
take the form of a ‘posthumanism without technology’: rather than awaiting
or building a future of technologized beings, cultural posthumanism focuses
on the present in which humanity already “collapses into sub-, inter-, trans-,
pre-, anti-.”[62] Cultural posthumanism also shares with critical posthumanism
a strong second-order element, in that it seeks to understand the cognitive and
social dynamics by which cultural posthumanism is generated. In fact, Miah
argues that the most coherent and explicit theories of posthumanism have
been developed from within the fields of cultural and literary studies and communications.[63]
Differences from critical posthumanism. Despite the links between cul-
tural and critical posthumanism, differences can be discerned between the two
fields. For example, in exploring posthumanism’s origins in the 1990s, Fer-
rando distinguishes the critical posthumanism that emerged within the sphere
of literary criticism and which was driven primarily by feminist theorists from
the cultural posthumanism that emerged simultaneously within the field of cultural studies.[64]

[60] See Hayles (1999), p. 5, and Herbrechter (2013), p. 43.
[61] Miah (2008), pp. 76, 78.
[62] See Posthuman Bodies, edited by Halberstam & Livingstone (1995), p. viii, and the commentary in Miah (2008), p. 76.
[63] Miah (2008), pp. 75-76.
Unlike critical posthumanism (and biopolitical posthuman-
ism), cultural posthumanism does not privilege issues relating to subjectivity,
ethics, politics, and power relations but seeks to develop a broader analysis of
posthumanization processes that gives equal weight to their aesthetic, artistic,
and theological facets. Beyond highlighting deficiencies in existing bodies of
thought, cultural posthumanism can also play a proactive role in building the
‘posthumanities’ that will increasingly become the focus of study at universities.[65]
Cultural visions of a posthumanized future as diagnoses of the posthu-
manized present. Both critical and cultural posthumanism analyze the state
of posthumanity as it exists in the present moment; however, while critical
posthumanism typically focuses on the effects of posthumanization that have
already impacted human beings, cultural posthumanism also studies cultural
depictions of future social and technological change (e.g., as presented in
works of science fiction), insofar as they reflect a current desire for or fear of
posthumanization. However, depictions of breakdowns in the binary opposi-
tion of human and inhuman can be found not only in science fiction but in all
types of cultural texts, from ancient to contemporary works; thus cultural
posthumanism has a vast field of objects for study.[66]
Cultural products as harbingers of posthuman oppression or liberation.
As previously noted, critical posthumanism does not take an a priori stance in
favor of either technoeuphoric transhumanism or technoparanoid bioconservatism; it instead tries to honestly understand and critique both positions.[67]
Nevertheless, in practice critical posthumanism injects itself into such biopo-
litical discourses in a way meant to expose perceived biases and shift the pro-
cesses of posthumanization in a direction of greater justice and equity. Miah
argues that despite its supposed neutrality regarding the value of posthuman-
ization, cultural posthumanism, too, often reflects an implicit concern that
revolutionary new technologies will be appropriated by the powerful in a way
that thwarts the realization of social justice for the less privileged. Cultural
posthumanism documents the ways in which cultural products explore the
power of the posthumanization process to either liberate or oppress human
beings.[68] Miah suggests that this investigation of the meaning of justice and ethics in a posthumanized world represents a common interest of both cultural and philosophical posthumanism.[69]

[64] Ferrando (2013), p. 29.
[65] Herbrechter (2013), p. 143.
[66] Regarding the broad range of cultural artifacts that may reflect posthumanist themes, see Herbrechter (2013), p. 143.
[67] See Herbrechter (2013), p. 84.
[68] Regarding this dual potential of the forces of posthumanization, see Herbrechter (2013), p. 85.
3. THE POSTHUMANITIES (OR DIGITAL HUMANITIES)
Ferrando notes that while the word ‘posthumanities’ can refer to a collec-
tion of future posthumanized species, it can also denote a set of academic dis-
ciplines that are in the process of succeeding the historical disciplines of the
humanities.[70] The nature of such ‘posthumanities’ is as diverse and ambiguous
as that of posthumanism itself. On the one hand, the posthumanities can in-
clude disciplines like critical and cultural posthumanism that explicitly incor-
porate posthuman realities into their subject matter or posthumanist concep-
tual frameworks and techniques into their methodologies; such posthumani-
ties offer a skeptical assessment of posthumanizing and technologizing trends.
On the other hand, the term ‘posthumanities’ is sometimes used as a synonym
for the ‘digital humanities,’ a group of fields that are on the vanguard of the
technologization of academia. Displaying a techno-enthusiasm similar to that
of transhumanism, posthumanities of the latter sort advocate the replacement
of “analog or literacy-based knowledge structures” with virtualized digital collections of data.[71]
Human nature and the posthumanities. Herbrechter notes that simply
because critical posthumanism considers ‘human nature’ to be a cultural arti-
fact, it is not obligated to claim that human nature is unworthy of study. In-
deed, the critical posthumanities will be well-positioned to investigate human
nature in a way that expands the scope of such a ‘nature’ in a deanthropocentrizing manner.[72] With its insights into the history, structure, and practices of
various spheres of culture, cultural posthumanism can play a role in taking
the critical methodologies developed within critical posthumanism and apply-
ing them across the current range of the humanities to develop nonanthropo-
centric and nonbinary posthumanities that can survive and thrive despite their
loss of the concept of human nature that has historically served as the anchor of the humanities.[73]
Counteracting the forces of scientism. From the perspective of critical
posthumanism, one important aim of the posthumanities is to ensure that disciplines such as philosophy, theology, history, and the arts continue to play a role in shaping our understanding of human nature and that fields such as neuroscience, biology, chemistry, and computer science do not appropriate for themselves the sole privilege and responsibility of defining what is and is not human. In this way, Herbrechter suggests that the posthumanities can help guarantee that binary and anthropocentric historical humanism is succeeded by a nondualistic and nonanthropocentric posthumanism rather than by a ‘scientistic’ posthumanism that simply replaces the transcendental idol of the human with a new transcendental idol of science.[74]

[69] Miah (2008), p. 79.
[70] Ferrando (2013), p. 32.
[71] For the posthumanities as a possible driver (rather than critic) of digitalization, see Herbrechter (2013), p. 179.
[72] Herbrechter (2013), p. 168.
[73] This is similar to the previously discussed notion that posthumanism might serve as the rescuer of a faltering humanism. See Herbrechter (2013), p. 143.
4. FEMINIST NEW MATERIALISM
Ferrando cites a range of ‘new materialisms’ that have arisen as a largely
feminist response to late postmodernism; they represent a pushback against
those forms of postmodernism that had resolved the historic ‘nature versus
nurture’ debate by strongly emphasizing the importance of culture and education while downplaying the role of biology and matter in shaping human existence.[75] New materialism’s link to posthumanism lies in the fact that rather than resolving such a binary question in one direction or the other, it
dissolves the dualism that pits language and culture against biology and mat-
ter. As Ferrando explains, within new materialist thought “biology is cultur-
ally mediated as much as culture is materialistically constructed,” and matter
cannot be separated from the dynamic and performative process of its ongoing materialization.[76]
Herbrechter offers a similar account of the neovitalism that arises from a
“feminist materialist, life-affirming tradition” which offers a critique of the
more death-centered philosophy of, for example, Derrida. For Herbrechter, the
posthumanist aspect of new materialism can be seen in its effort “to reposition
the notion of ‘life’ outside propriety or impropriety, namely by ‘de-anthropocentring’ and ‘de-ontologizing’ it.”[77]
He also notes that strong feminist ele-
ments have long been found within mainstream critical posthumanism; Har-
away, for example, suggests that the posthumanizing dissolution of the
boundary between human being and machine resulting from the technologi-
zation and cyborgization of our lives can also be exploited to dissolve other
boundaries such as those relating to gender.[78]
[74] For the posthumanities as a bulwark against scientism, see Herbrechter (2013), p. 169.
[75] Ferrando (2013), pp. 30-31.
[76] Ferrando (2013), p. 31.
[77] Herbrechter (2013), p. 212.
[78] The recognition of such blurring boundaries has long been at the core of posthumanism. See Haraway (1991) and Herbrechter (2013), pp. 99-100.
5. ANTIHUMANISM
The term ‘antihumanism’ has been used to describe an array of phenomena
that bear some relationship to posthumanism. Some forms of antihumanism
are directly identified with posthumanism; for example, Miah characterizes
Pepperell’s theory of posthumanism – in which the technological tools that
once gave humankind dominance over nature now threaten to claim dominance over us – as a form of “anti-humanism, which is re-enlightened by modern science.”[79] Other forms of antihumanism are described as diverging from
posthumanism in key respects. For example, Ferrando conceptualizes ‘antihu-
manism’ as sharing a central tenet with posthumanism: namely, a radical cri-
tique of “modern rationality, progress and free will” that constitutes a “decon-
struction of the notion of the human.” However, the deconstruction offered
by posthumanism argues that simple binaries such as ‘human versus nonhu-
man’ are no longer meaningful and that human beings are not (any longer)
the only kinds of personal subjects that constitute our society. Antihumanism,
on the other hand, claims that the binary of ‘life versus death’ is still mean-
ingful – and that the human being, as such, is dead. Ferrando argues that while
posthumanism draws much from the deconstructive approach of Derrida, antihumanism has more in common with the ‘death of Man’ propounded by Foucault.[80]
Drawing on Badmington, Herbrechter suggests that antihumanism is fre-
quently just a well-disguised form of humanism, insofar as it does not develop
its own independent perspective but instead simply defines itself as the nega-
tion of all that humanism stands for. However, denying the exclusive central-
ity of the ‘human’ is not the same thing as embracing the joint centrality of
the ‘human and nonhuman’; from the perspective of critical posthumanism,
antihumanism thus presents an insufficient challenge to the fundamentally
anthropocentric doctrines of humanism. While antihumanism remains locked
into the binary patterns that characterize humanist thought, critical posthu-
manism makes a concentrated effort to break down those historical binaries,
replacing them with richer and more sophisticated conceptual schemas.[81]
While the relationship of antihumanism to posthumanism is thus complex,
building on Ferrando’s analysis we would suggest that at least some forms of
antihumanism have evolved to take on characteristics indicative of posthu-
manist thought. We would argue that such antihumanism is most naturally
classified as a form of analytic theoretical posthumanism. While such antihumanism differs from critical posthumanism in its attitude toward binary frameworks and post-anthropocentrism, it shares critical posthumanism’s rejection of simplistic post-Enlightenment humanism, its goal of developing a more accurate understanding of the nature of humanity, and an emphasis on analyzing the state of humanity as it has come to exist rather than in some engineered form that it might take in the distant future.

[79] See Miah (2008), p. 75, and Pepperell, The Posthuman Condition: Consciousness Beyond the Brain (2003).
[80] Ferrando (2013), pp. 31-32.
[81] Herbrechter (2013), p. 126.
6. METAHUMANISM AS REHUMANISM
There have arisen at least three independent uses of the term ‘metahuman-
ism.’ These are: 1) metahumanism understood as a form of ‘rehumanism,’ as
formulated by Sanbonmatsu; 2) metahumanism as an activist movement in
support of those who have been subject to metahumanizing mutation, as for-
mulated in numerous works of science fiction and fantasy; and 3) metahuman-
ism as a philosophical and artistic approach and movement of relational
‘metabodies,’ as formulated by Del Val and Sorgner. We would argue that the
first form of metahumanism constitutes a type of analytic theoretical posthu-
manism; it will thus be considered in more detail here. The second form of
metahumanism will be discussed later as a form of synthetic practical posthu-
manism, and the third will be explored as a type of hybrid posthumanism that
spans theoretical, practical, analytic, and synthetic spheres.
Writing in 2004, Sanbonmatsu formulated a concept of ‘metahumanism’
not as a form of posthumanism but rather as a critical response to and explicit
rejection of it. He argues that within our contemporary world,
[…] in the Western academy, cultural studies theorists and other academic intellectuals hold conferences celebrating our so-called
post-human times, singing the virtues of cyborgs, prosthetics, and
bioengineering. Post-humanism is merely the latest in a string of
commodity concepts spun off by academic industrialists to shore
up the crumbling appearance of use value in their work.[82]
In this view, posthumanism is presented as perhaps the most degenerate iter-
ation of a disintegrating Western critical tradition, while metahumanism is
proposed as a form of thought that can rescue the critical tradition by con-
fronting and vanquishing posthumanism. In its contents, such metahumanism
would essentially appear to be a reborn humanism operating under a different
name. Thus Sanbonmatsu argues that “If critical thought is to survive this im-
plosion of theory” represented by posthumanism, posthumanist thought must
be challenged by a metahumanism that constitutes “a return to ontology and
the grounding of thought in a meaningful account of human being” and which does not hesitate “to declare itself to be in defense of this being that we are – or that we might become.”[83]

[82] Sanbonmatsu, The Postmodern Prince: Critical Theory, Left Strategy, and the Making of a New Political Subject (2004), p. 207.
Herbrechter considers Sanbonmatsu to be pursuing the “renewal of a leftist
radical humanism in the name of a Kantian cosmopolitan tradition.”[84] However, such metahumanism could instead arguably be understood as an idiosyncratic example of analytic theoretical posthumanism, insofar as it does not
simply propose for adoption a naïve 19th-Century humanism that is unaware
of the processes of technologization and posthumanization that have occurred
during recent centuries. Rather than ignoring the rise of posthumanist
thought, Sanbonmatsu’s metahumanism explicitly critiques and seeks to learn
from what it perceives as the errors of earlier posthumanist accounts. While
such metahumanism can thus be viewed as an ‘anti-posthumanism,’ we would
argue that it can alternatively be understood as a ‘rehumanism’ informed by
posthumanist insights.
7. NEOHUMANISM AS THE EMBRACING OF HUMAN SUBJECTIVITY
As is true for ‘posthumanism’ and ‘metahumanism,’ the term ‘neohuman-
ism’ has been used to describe a divergent array of phenomena. For example,
Herbrechter refers broadly to the discourse that pits “transhumanists versus
neohumanists.”[85] In that context, neohumanists can be understood as thinkers
who disagree both with the postmodernist annihilation of the notion of hu-
manity and the transhumanist idolization of a reengineered humanity; neohu-
manists seek to salvage the positive elements of humanism but in a manner
that acknowledges ongoing processes of posthumanization. Similarly, Wolin
employs the term when arguing that in his later works Foucault distanced
himself from his earlier post-structuralist critique of modernity and formu-
lated a new ‘neohumanist’ approach in which the existence of a free and
thinking human subject is at least implicitly embraced.[86] If considered a form
of posthumanism, such neohumanisms would take their place alongside criti-
cal posthumanism as a form of analytic theoretical posthumanism.
B. Synthetic Theoretical Posthumanisms: Seeking to
Understand a Future Posthumanity
Synthetic theoretical posthumanisms manifest a ‘posthumanism of imagination’ that creatively envisions hypothetical future posthumanities so that their implications can be explored.[87] Such forms of synthetic theoretical posthumanism include philosophical posthumanism, science fiction, prehumanism, and techno-idealism. We can consider each of these in more detail.

[83] Sanbonmatsu (2004), p. 207.
[84] For this critique of Sanbonmatsu’s metahumanism, see Herbrechter (2013), p. 71.
[85] Herbrechter (2013), p. 40.
[86] See Wolin, “Foucault the Neohumanist?” (2006), and Nealon, Foucault Beyond Foucault (2008), pp. 10-11.
1. PHILOSOPHICAL POSTHUMANISM
Philosophical posthumanism combines critical posthumanism’s academic
rigor with science fiction’s practice of imagining possible future paths for the
processes of posthumanization. It is a synthetic theoretical posthumanism in-
sofar as it constructs scenarios of future posthumanities and its goal is to
deepen human knowledge rather than to generate some economic, political,
or technological impact.
Philosophical posthumanism draws on the insights of critical and cultural
posthumanism, integrating them into traditional methodologies of philosoph-
ical inquiry in order to reassess earlier philosophical claims with a new aware-
ness of the ways in which philosophy has been suffused with “anthropocentric
and humanistic assumptions” that limit its scope, comprehensiveness, and effectiveness.[88]
Moreover, as philosophy reflects on processes of posthumaniza-
tion to envision the ways in which they will reshape ontology, epistemology,
and ethics, this generates a new process of ‘philosophical posthumanization’
that takes its place alongside other technological and social forms of posthumanization.[89]
Origins in critical and cultural posthumanism. Ferrando recounts that
during the 1990s feminists within the field of literary criticism developed critical posthumanism, which interacted with cultural posthumanism to give rise to philosophical posthumanism by the end of the decade.[90] Similarly, Miah
considers the cyborg expositions of Haraway and Gray, the posthumanism of
Hayles and Fukuyama, and Bostrom’s transhumanism to have contributed to
the development of philosophical posthumanism.[91]
Philosophical posthuman-
ism can be understood either as a form of philosophy that has adopted ele-
ments of posthumanist thought or as a new form of critical and cultural
posthumanism that has chosen to focus its attention on traditional philosoph-
ical questions.
[87] As previously noted, an exception to this temporal pattern is prehumanism, which considers fictional or hypothetical beings of the far-distant past as an alternative to positioning them in the far-distant future.
[88] Ferrando (2013), p. 29.
[89] Herbrechter (2013), p. 176.
[90] Ferrando (2013), p. 29.
[91] See Miah (2008), p. 80; Haraway, “A Manifesto for Cyborgs: Science, Technology, and Socialist Feminism in the 1980s” (1985); Gray, “The Ethics and Politics of Cyborg Embodiment: Citizenship as a Hypervalue” (1997); Gray (2002); Hayles (1999); Fukuyama (2002); and Bostrom, “A History of Transhumanist Thought” (2005).
The differences between philosophical and cultural posthumanism, in par-
ticular, are frequently blurred. Even Miah, who clearly distinguishes philo-
sophical posthumanism from its biopolitical and cultural siblings, notes that
the analyses offered by philosophical posthumanism are often “inextricable
from other cultural critiques.” However, it is possible to identify differences
between the two fields; for example, Miah suggests that while cultural posthu-
manism (as represented by Haraway and Hayles) is “intended to disrupt uni-
form ideas about what it means to be human and the social and political enti-
tlements this might imply,” philosophical posthumanism typically focuses on
ontological, phenomenological, and epistemological questions surrounding
scenarios of future technologization.[92]
Envisioning future posthumanity. Like cultural posthumanism, philosoph-
ical posthumanism contemplates not only current processes of technologiza-
tion but also hypothetical futuristic technologies that do not yet exist but
which have been envisioned in works of science fiction. While cultural
posthumanism analyzes such fictional future technologies as a means of diag-
nosing current humanity’s desire for or fear of further posthumanization, phil-
osophical posthumanism uses hypothetical technologies as the bases for
thought experiments that explore the ontological, epistemological, ethical, le-
gal, and aesthetic implications of such future posthumanization. By exploiting
philosophical methodologies and a knowledge of science and technology, such
thought experiments allow philosophical posthumanists to understand the
ways in which human nature may be transformed or superseded through fu-
ture posthumanization – without necessarily advocating or opposing such
transformations in the way that a biopolitical posthumanist would.[93]
The phenomenon of environmental posthumanization. As conceptual-
ized by Miah, a notable characteristic of philosophical posthumanism is that
it does not focus on changes to human beings per se as the primary manifestation of posthumanization.[94]
Instead, philosophical posthumanism posits a
broader phenomenon in which posthumanization is occurring throughout the
world as a whole. For example, the proliferation of social robots, artificial gen-
eral intelligences, artificial life-forms, virtual worlds, ubiquitous computing,
and the Internet of Things is expected to create a rich digital-physical ecosys-
tem in which human beings are no longer the only – or perhaps even the most
significant – intelligent actors. Such a post-anthropocentric and post-dualistic
world would already possess a strongly posthuman character regardless of whether human beings undergo processes of biotechnological transformation or choose to remain in their ‘natural’ biological form.

[92] Miah (2008), pp. 79-80.
[93] Regarding philosophical posthumanism’s dispassionate analysis of processes of posthumanization, see, e.g., Miah (2008), p. 79.
[94] Miah (2008), pp. 80-81.
Some strains of philosophical posthumanism effectively update historical
Darwinian biological materialism for the age of artificial life, viewing the
posthuman world as a place in which the differences between human beings
and animals, human beings and robots, and human beings and electronic information systems are increasingly ones of degree rather than kind.[95] The relationship between the human and machine is explored especially by considering entities such as cyborgs in which those two realms have become physically and behaviorally fused.[96]
It also addresses the ontological and ethical im-
plications of new kinds of entities such as artificial general intelligences that
have not yet been created in practice but for whose development much theo-
retical groundwork has been laid; this gives philosophical posthumanism a
stronger future orientation than critical posthumanism, which is more con-
cerned with ethical and social realities of our current day.
2. SCIENCE FICTION
Herbrechter suggests that true science fiction is “the most posthumanist of
all genres,” as it takes seriously – and often advances – the ongoing “dissolution of ontological foundations like the distinction between organic and inorganic, masculine and feminine, original and copy, natural and artificial, human and nonhuman.”[97]
In its most representative form, science fiction at-
tempts to construct coherent visions of a near- or far-future posthumanized
world so that its nature and implications can be investigated; for this reason,
science fiction can be categorized as a synthetic theoretical posthumanism.[98]
Science fiction versus posthumanist reflection on science fiction. It is
important to distinguish science fiction itself from scholarly analysis of sci-
ence fiction. While science fiction typically constitutes a form of synthetic
theoretical posthumanism, the reflection on science fiction that is carried out,
for example, by cultural posthumanists is often a form of analytic theoretical posthumanism. From the perspective of cultural posthumanism, science fiction’s relevance does not depend on its portraying future technologies that are in fact strictly realizable; rather, it is relevant because it reflects society’s current ‘cultural imaginary’ and can thus be used to diagnose humanity’s attitude toward the processes of technologization and posthumanization.[99] In a related fashion, when transhumanism draws inspiration from works of science fiction to spur the real-world pursuit of particular futuristic technologies, it constitutes a form of synthetic practical rather than synthetic theoretical posthumanism.

[95] For philosophical posthumanism’s consideration of evolutionary processes in biological and nonbiological entities, see Miah (2008), p. 82.
[96] Miah (2008), pp. 80-81.
[97] Herbrechter (2013), pp. 115-17.
[98] Building on Poster and Hayles, Herbrechter notes that the cyberpunk genre in particular – which attempts to construct realistic and realizable visions of a near-future technologized posthumanity – has most explicitly grappled with the nature of human beings as embodied informational processes and the ramifications of posthumanizing technologies that are expected to break down traditional humanist binaries and reshape the experience of human existence within the coming decades. See Goicoechea (2008); Poster, What’s the Matter with the Internet? (2001); Hayles, My Mother Was a Computer: Digital Subjects and Literary Texts (2005); Hayles (1999); and Herbrechter (2013), p. 187.
Science fiction and the genesis of posthumanism. From its birth, the
field of posthumanism has been tied to the world of science fiction. Indeed,
the work generally considered to contain the earliest allusion to a critical
posthumanism, Hassan’s 1977 text “Prometheus as Performer: Toward a
Posthumanist Culture? A University Masque in Five Scenes,” explicitly cites
the film 2001: A Space Odyssey and dawning questions about artificial intelligence as being relevant to understanding the “emergent […] posthumanist culture.”[100]
If posthumanism has always drawn on certain forms of science fic-
tion, Miller suggests that – in complementary fashion – science fiction has
always constituted a form of posthumanism. While ‘posthumanism’ as such
may only have been labelled and defined during the last few decades, science
fiction had already existed for centuries as an unrecognized form of posthu-
manism; only recently has critical theory begun to follow science fiction’s ex-
ample of radically reassessing the limits of human nature and the social and
technological structures that circumscribe the meaning of ‘the human.’[101]
Distinguishing science fiction from popular (‘commercial’) posthu-
manism. In places, Herbrechter writes of science fiction as though it were
essentially a commercial enterprise whose contents are formulated by large
corporations with the goal of maximizing revenue and profits – rather than a
serious literary and artistic endeavor whose contents are crafted by individual
authors, filmmakers, and game designers as a means of exploring difficult
philosophical, political, and social issues facing humanity. Thus he emphasizes
the “rather close ‘co-operation’ between science fiction, the film industry and
its lobbies and the discourse on posthumanity in general.”[102]
However, such a
view appears to be an oversimplification. We would argue that in the context
of posthumanism, the phrase ‘science fiction’ is frequently used to refer to two
spheres of human activity which are so qualitatively different in nature that
they are better classified as two entirely different forms of posthumanism.
[99] Herbrechter (2013), p. 117.
[100] See Hassan, “Prometheus as Performer: Toward a Posthumanist Culture? A University Masque in Five Scenes” (1977), and its discussion in Herbrechter (2013), p. 33.
[101] This point is made in Miller (2012), p. 164.
[102] Herbrechter (2013), p. 39.
We would suggest that the term ‘science fiction’ be reserved for the first
of these two types of posthumanism, which involves the construction of fic-
tional scenarios (often set in the future) as a means of exploring the profound
ontological, biological, ethical, social, and cultural implications of posthuman-
ization. Works of science fiction are, in a sense, thought experiments similar
to those utilized within philosophical posthumanism. However, while philo-
sophical posthumanism employs the rigorous methodologies and critical ap-
paratus of philosophy, science fiction exploits the freedom to draw on more
artistic and less formally academic methodologies. Works such as paintings,
sculpture, or music with science-fiction themes can explore the ‘mood’ or
‘ethos’ of posthumanization in a general sense. Artistic forms such as films or
novels can present more detailed diegetic content but are consumed in a man-
ner that is still largely passive. However, interactive media such as computer
games and tabletop roleplaying games can put their human players in situa-
tions in which they face complex ethical dilemmas and must actively confront
challenges associated with new posthumanized ways of being. As noted
above, because of its emphasis on imagining future posthumanities and the fact that it is geared primarily toward deepening human knowledge, science fiction can best be understood as a form of synthetic theoretical posthumanism.
The second kind of posthumanism that is sometimes described as a type of
‘science fiction’ (and which Herbrechter indeed takes to be the most repre-
sentative form of science fiction) is what we would refer to as ‘popular’ (or
‘commercial’) posthumanism to distinguish it from science fiction proper. Ex-
amples of popular posthumanism include films, television series, and other
works that are created either to generate maximum profits by engaging mass
audiences or to condition the public to accept certain future actions by gov-
ernments, corporations, or other institutions. Like posthumanist science fic-
tion, popular posthumanism often employs storylines that are set in the future
and which feature cyborgs, androids, artificial general intelligences, genetic
engineering, virtual reality, and other posthumanizing technologies. How-
ever, rather than attempting to confront and thoughtfully explore the philo-
sophical implications of such phenomena, popular posthumanism exploits
posthuman themes instrumentally as a means of achieving some practical goal
– such as generating revenue from movie ticket sales.
Some artistic products function simultaneously as works of both posthu-
manist science fiction and popular posthumanism; in practice, the division be-
tween these two types is rarely absolute. Nevertheless, the divergence in the
goals of posthumanist science fiction and popular posthumanism can often be
seen, for example, in the difference between complex original literary works
and their later adaptations into Hollywood blockbuster films that feature a
drastic simplification of the works’ philosophical content coupled with more
frequent explosions and a happy ending in which the protagonist defeats the
(often technologically facilitated) threat to humanity.[103]
Popular posthuman-
ism will be considered in more detail later as a form of synthetic practical
posthumanism.
3. PREHUMANISM
While some works of science fiction envision the extremely far future,
other forms of theoretical posthumanism envision the extremely distant past.
For example, some proponents of cultural materialism emphasize the billions
of years that passed before intelligent life appeared on earth. These vast bygone eons are highlighted not because the events that occurred within them
are of direct interest to posthumanism but because they contextualize and
deanthropocentrize our present moment; they emphasize the fact that the uni-
verse is not dependent on humanity for its existence or meaning and that the
whole era of humankind’s flourishing is only a fleeting instant in comparison
to the lifespan of the cosmos as a whole.[104]
Practitioners of what might be
called ‘prehumanism’ are not interested in performing a literal scientific re-
construction of the biological or anthropological characteristics of the precur-
sors of modern human beings but rather in imagining such prehistoric beings
from a metaphorical or hypothetical perspective in order to better appreciate
the relationship of contemporary humanity to the timescale of the universe.
‘Prehumanist’ approaches generally constitute forms of synthetic theoreti-
cal posthumanism, insofar as they are grounded in imagination rather than
critique. Herbrechter notes, for example, that the world of posthumanist spec-
ulative fiction includes not only works that explore future spaces but also ones
that explore “fictional pasts or verfremdet (defamiliarized) presents.”[105]
As a
posthumanist approach that looks back imaginatively to the past, prehuman-
ism thus constitutes a mirror image of the posthumanist science fiction that
looks ahead imaginatively to the future.[106]
Works such as the cosmic horror
literature of H.P. Lovecraft that feature alien entities that have existed for millions of years (or in a timeless parallel dreamworld) can be understood as examples of such prehumanism.[107]

[103] For example, consider Asimov’s Robot series of stories and novels as compared with the 2004 Will Smith cinematic vehicle, I, Robot.
[104] See Herbrechter (2013), pp. 9-10.
[105] Such products are by no means limited to science fiction but can include works of any genre and theme that disorient and challenge their characters and readers. See Herbrechter (2013), p. 116.
[106] As described here, prehumanism is thus not ‘pre-humanist’ in the sense of considering the world that existed before the appearance of humanism but rather ‘prehuman-ist’ in the sense of considering the world that existed before the appearance of human beings. The usage described here thus differs from the way in which the terms ‘prehumanism’ and ‘prehumanist’ are employed in, e.g., Berrigan, “The Prehumanism of Benzo d’Allesandria” (1969), and Witt, “Francesco Petrarca and the Parameters of Historical Research” (2012), to refer to time periods that preceded and concepts that foreshadowed those of Renaissance humanism.
Other works such as 2001: A Space Odyssey
simultaneously constitute both: 1) prehumanism that uses the distant past as
a setting for imagining a ‘quasi-human’ that already was; and 2) posthumanist
science fiction that looks into the future to imagine a ‘quasi-human’ that has
not yet been.[108]
4. TECHNO-IDEALISM
Techno-idealism is a form of posthumanist thought closely linked to but
distinct from transhumanism. It involves the belief that the sole essential part
of a human being is the mind and that this ‘mind’ consists of a particular pat-
tern of information. Because only a mind’s pattern of information – and not
the physical substrate in which the information is stored – is relevant, all of a
brain’s biological neurons can be replaced one by one with electronic replicas,
and as long as the pattern of interactions found within the brain’s neural net-
work is preserved intact, the person’s mind, consciousness, and identity would
continue to exist within its new (and undying) robotic shell. From the perspec-
tive of techno-idealism, human beings’ physical biological bodies are ulti-
mately interchangeable and replaceable with physical robotic bodies or poten-
tially even virtualized ones.
Contrast with critical posthumanism. Herbrechter portrays techno-ideal-
ists as yearning for ‘technoscientific utopias’ in which human engineers will
someday unravel the mysteries of genetics, thereby allowing biological life to
finally be transformed into pure, disembodied information; in this way, virtu-
ality becomes a means to immortality as human beings “gain control over the
‘book of life’.”[109]
He contrasts techno-idealism’s naïve understanding of the
nature of the human mind with the more thoughtful and incisive analyses
conducted within critical and philosophical posthumanism. Indeed, Her-
brechter suggests that critical posthumanism can largely be understood as an
effort to defend the material anchoring of humanity against those techno-idealists who seek to virtualize and disembody everything – as manifested, for example, in their advocacy of mind uploading.[110]
Complementarity to transhumanism. The ‘posthumanity’ envisioned by
techno-idealism is one of hypothetical future entities like full-body cyborgs
and uploaded minds. Techno-idealism does not, in itself, actively seek to en-
gineer such beings but rather to develop conceptual frameworks for exploring
their nature, capacities, and behavior; it can thus be understood as a form of
[107] See, e.g., Lovecraft, The Dunwich Horror and Others (1983) and At the Mountains of Madness and Other Novels (1985).
[108] See Kubrick’s 2001: A Space Odyssey (1968).
[109] Herbrechter (2013), pp. 103, 171.
[110] Herbrechter (2013), p. 95.
synthetic theoretical posthumanism. However, in practice techno-idealist
frameworks are often formulated by committed transhumanists seeking an
intellectual justification for their concrete practical endeavors. Drawing on
Krüger, Herbrechter traces the development of a ‘radical techno-idealism’
from Wiener’s cybernetics, the futurology of the incipient Space Age, and the
cryonics movement to figures such as More, Minsky, Moravec, Kurzweil, and
contemporary transhumanist performance artists.[111]
For many such individu-
als, the techno-idealism which says that human beings can achieve immortal-
ity through the development of transformative technologies is paired with a
technological determinism which says that humanity inevitably will create
and implement such technologies.[112]
It is not necessary, however, for transhumanists to hold techno-idealist be-
liefs. For example, one could conceivably deny that an uploaded mind is a
‘true’ human mind – while simultaneously arguing that such artificial intelli-
gences should nonetheless be developed to serve as successors to humanity
and a next step in the evolution of sapient intelligence within our world.
Someone holding such a view would be a transhumanist but not a techno-
idealist. Conversely, a person could conceivably accept the claim that a bio-
logical human brain can be gradually replaced by an electronic brain without
destroying its owner’s ‘mind’ – but without feeling the slightest inclination to
see any human being undergo such a procedure. Indeed, such a person might
feel a sense of revulsion at the idea that causes him or her to oppose the de-
velopment of such technologies, even while accepting their efficacy on an in-
tellectual level. Such an individual would be a techno-idealist but not a trans-
humanist.
C. Analytic Practical Posthumanisms: Seeking to Reshape the
Posthumanized Present
Analytic practical posthumanisms seek to reshape an already-existing
posthumanized world. They can be understood as constituting a ‘posthuman-
ism of conversion’ that is aimed at changing hearts and minds and influencing
the way in which human beings view and interact with their contemporary
environment. Such forms of analytic practical posthumanism include some
forms of metahumanism and neohumanism, which we describe in more detail
below.
[111] See Krüger, Virtualität und Unsterblichkeit [Virtuality and Immortality] (2004), as discussed in Herbrechter (2013), p. 103.
[112] On this frequent pairing of theoretical and practical posthumanism, see Herbrechter (2013), p. 103.
1. METAHUMANISM AS ADVOCACY FOR THOSE WHO HAVE EXPERIENCED
METAHUMANIZING MUTATION
Since the 1980s, the term ‘metahuman’ has been used within a range of
science-fiction, superhero, and fantasy literature and roleplaying games to re-
fer to a human being who has undergone a mutation or transformation that
grants the individual a new physical form or altered sensory, cognitive, or
motor capacities; the mechanics of the transformation may be portrayed as
technological, magical, or otherwise preternatural in nature.[113]
The term ‘me-
tahumanity’ is employed within such a fictional world to describe either its
typically diverse collection of metahuman beings or the state of being a me-
tahuman. Within the context of such a fictional world, ‘metahumanism’ can
describe either: 1) the condition of possessing metahuman characteristics
(which can be viewed by different individuals as a blessing or a curse); or 2) a
political or social movement that works to promote the safety, welfare, and
basic rights of metahumans, who often suffer discrimination as a result of the
radical otherness that can terrify or appall ‘normal’ human beings.
a. Anti-metahumanism as Discrimination against Metahumans
Within such a fictional context, ‘anti-metahumanism’ describes an opposing political, social, or religious movement that views metahumans as a lesser form of being whose activities must be supervised, as a threat to the welfare of regular human beings, or as inherently evil.[114]
Such oppression is typically
described as being inflicted by natural, non-metahumanized human beings,
although metahumans themselves are capable of displaying anti-metahuman
attitudes and behaviors.
b. Classifying Metahumanism within the Fictional and Real Worlds
When classifying them as forms of posthumanism, metahumanism and
anti-metahumanism can be understood from two perspectives, namely: 1) as
they function within the fictional world in which they appear; and 2) as de-
vices created by authors, filmmakers, or game designers and consumed by au-
diences within our contemporary real world. Within the fictional worlds in
which they exist as political and social movements, metahumanism and anti-
metahumanism depict a form of analytic practical posthumanism, insofar as
[113] See Ferrando (2013), p. 32. Perhaps the earliest published use of the term ‘metahuman’ in this sense (in particular, as an adjective referring to superhuman powers or abilities gained as a result of infection by an extraterrestrial virus) was in the anthology set in the shared Wild Cards superhero universe published in 1986. See, e.g., Milán, “Transfigurations” (p. 264) and “Appendix: The Science of the Wild Card Virus: Excerpts from the Literature” (p. 403), in Wild Cards, edited by Martin (1986).
[114] For a depiction of anti-metahumanism, e.g., within the fictional universe of the Shadowrun roleplaying game, see the Sixth World Almanac, edited by Hardy & Helfers (2010), pp. 23, 35, 49, 54, 57, 79, 142.
they focus on an already existing (within the work’s fictional timeline) posthu-
manity and either advocate for the adoption of particular policies or work di-
rectly to empower or suppress metahumanity.
However, within our real world, such fictional depictions of metahuman-
ism and anti-metahumanism play a broader range of roles. Some creators of
fictional works employ metahumans (and the reactions to them) as a means
of critiquing our real-world presumptions and encouraging audiences to probe
their own understanding of what it means to be human. In these cases, it is
not being claimed by an author that posthumanized beings displaying those
exact characteristics might someday come to exist; rather, metahumanity is
being used as a device to compel contemporary audiences to consider their
own humanity. Such metahumanism and anti-metahumanism serve as a form
of analytic posthumanism that is either theoretical or practical, depending on
whether it fills the role of a thought experiment or is intended to alter the way
that audiences treat other human beings (or animals, artificial intelligences,
and other nonhuman beings).
Other fictional works may feature metahumanism and anti-metahuman-
ism in order to help audiences explore the many possible forms that future
posthumanity might take and understand the interrelationships between
posthumanizing technologies such as genetic engineering, neuroprosthetics,
and artificial intelligence. Such works are often forms of synthetic theoretical
posthumanism;[115]
however, they may also display aspects of synthetic practical
posthumanism, if designed to foster attitudes of acceptance toward future me-
tahuman beings.
3. NEOHUMANISM AS ADVOCACY FOR ENGINEERED BEINGS
One variety of ‘neohumanism’ was described in an earlier section as a type
of analytic theoretical posthumanism. The term ‘neohuman’ has also been
used within the context of science fiction to describe genetically engineered
human beings who possess a genotype derived from and similar to that of
natural human beings but who have been given enhanced sensory, motor, and
cognitive capacities. While some fictional neohumans are presented as relish-
ing the engineered capacities that make them ‘superior’ to natural human be-
ings, others resent these traits that they never chose to possess and which
[115] This is especially true of works featuring future worlds in which metahumans can choose at least some of their ‘nonhuman’ traits, such as characters who acquire neuroprosthetic enhancements or study magic within the Shadowrun universe. Similarly, in many tabletop roleplaying games and computer games, a game’s contemporary human player must invest significant time and care in selecting his or her character’s metahuman characteristics from among a complex system of physical and cognitive attributes, advantages, disadvantages, skills, and equipment and possessions. See, e.g., the Shadowrun: Core Rulebook 5, edited by Killiany & Monasterio (2013).
cause them to be seen as something other than fully human. Rather than em-
phasizing the engineered characteristics that set them apart, such neohumans
may instead accentuate those shared genetic traits that link them with (the
rest of) humanity.[116]
In such a context, ‘neohumanism’ would involve advocacy for the devel-
opment of such engineered beings or defense of the rights and welfare of such
persons, thus resembling metahumanism in its form of support for those who
have experienced metahumanizing mutation. Such neohumanism would be a
form of analytic practical posthumanism within the fictional worlds in which
it is depicted, but it could be either analytic or synthetic and either theoretical
or practical if evaluated according to the real-world reasons for which a crea-
tor of fiction decided to include it in his or her work.
4. NEOHUMANISM AS SPIRITUAL ULTRAHUMANISM
Another application of the term ‘neohumanism’ is in describing a holistic
and universalist philosophy developed by Sarkar that is grounded in Tantric
spiritual principles[117]
and manifested in particular religious practices, works
of art and literature, humanitarian and animal-rights initiatives, and a global
network of schools guided by “a transcivilizational global pedagogy.”[118]
The
goal of such a neohumanism is:
[…] to relocate the self from ego (and the pursuit of individual maximization), from family (and the pride of genealogy), from geo-sentiments (attachments to land and nation), from socio-sentiments (attachments to class, race and religious community), from humanism (the human being as the centre of the universe) to Neohumanism (love and devotion for all, inanimate and animate, beings of the universe).[119]
This nominal dislocation of the human being from its historical position as the
‘center of the universe’ appears to have much in common with the post-an-
thropocentric attitude that is developed, for example, within critical posthu-
manism. However, that similarity is arguably superficial. Elsewhere, Sarkar
writes that:
Neohumanism will give new inspiration and provide a new interpre-
tation for the very concept of human existence. It will help people
understand that human beings, as the most thoughtful and intelli-
gent beings in this created universe, will have to accept the great
[116] See Interface Zero 2.0: Full Metal Cyberpunk, developed by Jarvis et al. (2013), p. 107.
[117] See the “Foreword” to Neohumanist Educational Futures: Liberating the Pedagogical Intellect, edited by Inayatullah et al. (2006).
[118] “Foreword,” Neohumanist Educational Futures (2006).
[119] “Foreword,” Neohumanist Educational Futures (2006).
responsibility of taking care of the entire universe – will have to accept that the responsibility for the entire universe rests on them.[120]
Ferrando argues that some forms of transhumanism can actually be under-
stood as an ‘ultrahumanism’ that seeks to advance post-Enlightenment ration-
ality and scientific progress to its logical conclusion, thereby consummating
humanism rather than superseding it.[121]
A similar account might be offered of
Sarkar’s neohumanism: rather than rejecting the humanist vision of human
beings as the supreme intelligent agents charged with exercising dominion
over nature, neohumanism seeks to cement the position of human beings as
the ‘center of the universe’ – albeit a center that serves as a loving caretaker
for the rest of creation.[122]
Such neohumanism is analytic, insofar as it focuses its attention on the
human beings who already exist today and the sociotechnological reality
within which they are embedded. While such neohumanism possesses many
elements that are explicitly philosophical in nature, the neohumanist project
is geared primarily toward creating a movement whose adherents alter their
daily lives to incorporate particular spiritual practices and who establish and
operate schools, charitable institutions, and other organizations that embody
the movement’s philosophy; in this sense, neohumanism can be understood
as a practical rather than theoretical posthumanism.
D. Synthetic Practical Posthumanisms: Seeking to Control the
Processes Generating a Future Posthumanity
Synthetic practical posthumanisms reflect a ‘posthumanism of control’
that seeks to initiate, accelerate, guide, limit, or block future processes of
posthumanization – typically through regulating the development of new
technologies or through other political, economic, or social mechanisms. Such
forms of synthetic practical posthumanism include biopolitical posthumanism
(which itself includes bioconservatism and transhumanism) and popular or
‘commercial’ posthumanism. We can consider these in more detail.
1. BIOPOLITICAL POSTHUMANISM
Biopolitical posthumanism encompasses a range of posthumanisms that all
envision the engineering of a future ‘posthumanity’ but which differ in their
assessment of whether such a development is desirable or undesirable. Biopo-
litical posthumanisms manifest a strong future orientation: they attempt to
predict the long-term impact of pursuing particular new biotechnologies and
– based on such predictions – work to actively facilitate or impede the creation
[120] Sarkar (1982).
[121] Ferrando (2013), p. 27.
[122] Indeed, Sarkar claims explicitly that “Neohumanism is humanism of the past, humanism of the present and humanism – newly explained – of the future.” See Sarkar (1982).
of such technologies by spurring political or regulatory action, influencing
public opinion, advancing scientific research and technology commercializa-
tion, or through other means. Such biopolitical posthumanisms are synthetic
insofar as they understand posthumanity to be a collection of future beings
whose creation can be purposefully brought about or avoided, and they are
practical insofar as they seek to actively accomplish or block the advent of
such posthuman beings.
Contrasting attitudes toward posthumanity. Different forms of biopolit-
ical posthumanism are distinguished by their attitude toward biotechnological
posthumanization. For Miah, biopolitical posthumanism can be divided fairly
neatly into the opposing camps of ‘bioconservative’ thinkers like Fukuyama
and ‘technoprogressive’ or transhumanist thinkers like Stock. Bioconserva-
tives see the advent of posthumanity as a negative or retrogressive step – a
loss of human dignity and a destruction of the characteristic essence that
makes human beings unique – while technoprogressives see the arrival of
posthumanity as an advance by which human nature is beneficially enhanced
or its limits transcended.[123]
Birnbacher argues that the concept of ‘posthumanity’ is in itself value-neutral;[124]
however, one could contend that for biopolitical posthumanists,
‘posthumanity’ is in fact an intensely value-laden term – but one whose ‘au-
thentic’ value is disputed by two opposed ideological groups. Such an inter-
pretation is consistent with Miah’s observation that for some bioconserva-
tives, the very word ‘posthumanism’ is presumed to represent a world so ob-
viously horrific and morally bankrupt that little need is seen to offer specific
arguments about why the creation of a ‘posthuman’ world should be
avoided.[125]
Having reviewed biopolitical posthumanism in general, it is worth explor-
ing in more depth its two most prominent forms: bioconservatism and trans-
humanism.
a. Bioconservatism
Bioconservatism is a form of posthumanism that came into existence
largely as a rejection of the tenets of another form of posthumanism – namely,
[123] See Miah (2008), pp. 73-74. ‘Factor X’ is the term used by Fukuyama to describe the essence of humanity that is vulnerable to being corrupted through the unrestrained application of biomedical technology. This can be compared and contrasted, e.g., with the idea of ‘essence loss’ within the fictional Shadowrun universe. See Fukuyama (2002) and Shadowrun: Core Rulebook 5 (2013), pp. 52-55, 396-97.
[124] Birnbacher (2008), p. 95.
[125] Miah (2008), pp. 74-75.
transhumanism.[126]
For bioconservatives, the arrival of the posthumanity envisioned by transhumanism would bring about the ‘dehumanization’ of the human species.[127]
Fukuyama is frequently cited as an eminent bioconservative as
a result of his writing and public debating in opposition to transhumanism
during his time as a member of the U.S. President’s Council on Bioethics in
the early 2000s. Habermas is also often cited as a leader in the world of bio-
conservative thought: while much of his work is highly theoretical, it includes
a call to action that points toward practical applications, and the critiques and
conceptual frameworks that he has developed provide a philosophical foundation for bioconservatism.[128]
Bioconservatism is a synthetic posthumanism insofar as it focuses its atten-
tion on hypothetical and emerging technologies that can potentially be used
to engineer new quasi-human biological species or cyborgs that differ greatly
from human beings as they exist today. It is a practical posthumanism insofar
as it attempts to block the creation of such future posthumanized beings by
rallying public opinion to support particular political and social initiatives;
developing and promoting treaties, legislation, regulations, and policies for
adoption by governments; pressuring companies, universities, and other in-
stitutions engaged in transhumanist programs to curtail such activities; and
encouraging individual consumers to change the ways in which they spend
their money and time.
Concerns regarding the social impact of posthumanization. Typical bi-
oconservatism does not focus on the psychological, phenomenological, or on-
tological consequences of posthumanization for the individual posthumanized
being. Instead, it sketches out the broad negative impacts that biotechnologi-
cal posthumanization will supposedly have for human society as a whole – for
example, by weakening government protections for human rights, lowering
the ethical standards of corporations, creating economic injustice, pressuring
entire social classes of human beings to modify themselves in order to com-
pete economically, and perhaps even sparking civil war between those trans-
human beings who have been genetically and cybernetically ‘enriched’ and
those natural human beings who, comparatively speaking, are genetically and
cybernetically ‘deprived.’[129]
This emphasis on broad social concerns is re-
flected in Bostrom’s characterization of the five main objections that biocon-
servatism offers to the purposeful creation of posthumanized beings – namely,
that: 1) “It can’t be done”; 2) “It is too difficult/costly”; 3) “It would be bad for
[126] Herbrechter (2013), pp. 36-37.
[127] Birnbacher (2008), p. 97.
[128] Herbrechter (2013), pp. 161-62.
[129] Miah (2008), pp. 73-74; Herbrechter (2013), pp. 45, 162.
society”; 4) “Posthuman lives would be worse than human lives”; and 5) “We couldn’t benefit.”[130]
b. Transhumanism
Transhumanism shares with analytic posthumanism its origins in the late 1980s and early 1990s and a “perception of the human as a non-fixed and mutable condition”; in other ways, though, the two perspectives are quite different.[131]
Transhumanism does not look back into humanity’s past to diagnose
the social and technological legacy that we have inherited; instead it looks
ahead to the future – and in particular, to the ‘enhanced’ human, quasi-hu-
man, or parahuman species that can be fashioned through the intentional ap-
plication of genetic engineering, nanotechnology, cryonics, ‘mind uploading,’
and other emerging or hypothetical technologies.
132
Understanding of posthumanity. Bostrom uses the word ‘posthuman’ in
a concrete functional sense to refer to an engineered being that possesses at
least one ‘posthuman capacity’ exceeding what is possible for natural human
beings.[133]
In Bostrom’s conception of posthumanity, posthuman beings will
not necessarily constitute the entirety – or even a large percentage – of future
human society. Indeed, because of the cost and difficulty of the bioengineering
equipment and techniques that are needed to create posthuman beings, it is
likely that such beings will at least initially represent only a small portion of
human society. This synthetic understanding differs from analytic forms of
posthumanism in which all human beings are already considered to be posthu-
manized, insofar as we live in a world that is posthuman.
Attitude toward posthumanity. The attitude toward posthumanity ex-
pressed by Bostrom can be taken as typical of transhumanists more generally.
Bostrom makes a nominal effort at suggesting that he is neutral regarding the
question of whether posthumanity represents a step forwards or backwards
in human development; he acknowledges that while transhumanism is only
concerned with creating forms of posthumanity that are “very good,” there
are undoubtedly other “possible posthuman modes of being” that would be
"wretched and horrible."[134] Elsewhere, however, Bostrom appears to define posthumanity in such a way that it can only be a beneficial phenomenon. For example, he defines a 'posthuman being' not merely as one that has been technologically engineered to possess characteristics differing from those naturally possessed by human beings but as one who has been technologically engineered to possess either: 1) an enhanced "capacity to remain fully healthy, active, and productive, both mentally and physically"; 2) enhanced "general intellectual capacities […], as well as special faculties such as the capacity to understand and appreciate music, humor, eroticism, narration, spirituality, mathematics, etc."; or 3) an enhanced "capacity to enjoy life and to respond with appropriate affect to life situations and other people."[135] Bostrom's view of 'posthumanity' is thus not value-neutral but strongly value-laden, as it would automatically exclude from being considered 'posthumanizing' any future technology that results in injury to human beings' health, a degradation of their cognitive capacities, or an impairment to their ability to enjoy social interactions – even if the technology were developed as part of a transhumanist bioengineering project whose explicit goal was to bring about the creation of posthumanity and its negative impacts were an unintended effect.[136]

[130] Bostrom (2008), p. 109.
[131] For an account of the origins of such forms of posthumanism, see Ferrando (2013), p. 26.
[132] Ferrando (2013), p. 27.
[133] Bostrom (2008), p. 108.
[134] This passing acknowledgement is found within an otherwise vigorous defense of the goal of engineering posthumanity. See Bostrom (2008), p. 108.

Chapter One: A Typology of Posthumanism • 89
Transhumanism as activism and project. In the understanding described
above, ‘posthumanity’ is positioned as though it were a new form of space
travel or nuclear power whose costs and benefits can be carefully weighed by
a government panel that then decides whether to appropriate funds to bring
such technology into existence or to ban the technology and prevent its de-
velopment. This understanding is quite different from that of analytic posthu-
manism, which believes that posthumanity is inevitable because it is already
here, and that the fundamental question is not whether one should seek to
actively bring about or prevent the world’s posthumanization but how to in-
terpret it.
Critique from the perspective of critical and cultural posthumanism.
Transhumanism involves efforts to intentionally engineer a new human spe-
cies through the use of emerging biotechnologies. It thus typically focuses on
the technological posthumanization of humanity and ignores the many non-
technological ways in which posthumanization has been occurring for centu-
ries. Ferrando notes that cultural and critical posthumanism are inclined to
negatively assess such an approach. From their perspective, transhumanism
appears to possess an overly simplistic conceptualization of the world: it is
willing to perpetuate a post-Enlightenment vision of ‘human exceptionalism’
that places human beings in a hierarchy over nonhuman animals and nature
– and indeed, transhumanism further expands this stratification of being by creating a new 'hierarchy of hierarchies' in which a soon-to-be-engineered posthumanity will peer down from its superior vantage point outside of the natural order. But transhumanism often glosses naïvely over the fact that such frameworks have historically been used to place some human beings (such as slaves) in positions of inhuman subjugation, that such injustices widely exist even today, and that the development of transhumanist technologies could easily exacerbate rather than solve such problems.[137] Thus Herbrechter positions the critical posthumanism of Hayles as being steadfastly opposed to transhumanism and its goal of achieving the radical disembodiment and dematerialization of the human intellect.[138]

[135] Bostrom (2008), p. 108.
[136] Identifying posthumanity with an 'enhanced' humanity reflects an optimistic assumption that all posthumanizing bioengineering efforts will be driven by a well-intentioned (and effective) vision of 'improving' human nature and not, for example, by a desire to produce quasi-human workers, test subjects, toys, or personal companions that possess a diminished human nature and whose creation is driven by the self-interest of particular governments, corporations, or individual consumers.
Transhumanism as commercialization of the human being. Anders and
Herbrechter suggest that at least some strains of transhumanism could be
viewed as outgrowths of the West’s hyper-commercialized culture of con-
sumer technology. Members of society have been conditioned to covet the
newest models of products – whether smartphones or televisions or automo-
biles – that possess the most innovative features and best specifications and
are ostensibly far superior to last year's models; all 'sophisticated' and 'successful' members of society participate in a cycle of continuous product upgrades. According to this view, transhumanism laments – and is even ashamed of – the fact that the human mind and body are not a purposefully engineered consumer product that can be upgraded; through the application of biotechnologies and a reconceptualization of the nature of humanity, it seeks to transform the human being into just such a consumer product.[139]
Although trans-
humanism envisions itself as a positive movement that seeks to exalt human-
ity by transcending the limits of human nature, it could thus alternatively be
understood as a negative movement that is embarrassed by the messy imper-
fections inherent in human beings’ biological nature and which seeks to sup-
press that reality beneath a patina of technological enhancement.
Not all technologists are transhumanists. Not all (or even many) scien-
tists, engineers, and entrepreneurs doing cutting-edge work in the fields of
genetic engineering, neuroprosthetics, nanorobotics, and artificial intelligence
are transhumanists; many individuals involved with developing new technol-
ogies for the engineering and augmentation of human beings are content to
focus on the very concrete next steps involved with advancing the ‘evolution’
of humanity. For transhumanists, though, such incremental progress is a necessary but only preliminary step toward the creation of fully disembodied posthuman entities that can slip effortlessly between biological and electronic modes of being, between actual and virtual substrates.[140]

[137] See Ferrando (2013), pp. 27-28.
[138] See Hayles (1999) and Herbrechter (2013), p. 94.
[139] See Anders, Die Antiquiertheit des Menschen. Band 1: Über die Seele im Zeitalter der zweiten industriellen Revolution (1992), pp. 31ff., as analyzed in Herbrechter (2013), p. 170.
Religious aspects of transhumanism. Transhumanism frequently takes
on aspects of a religious movement, formulating visions of “techno-transcend-
ence and digital cities of god in cyberspace, of the overcoming of the flesh”; it
thus cannot be understood simply from a technological perspective but also
requires insights from the field of theology.[141]
Some would even contend that
transhumanism’s conceptual origins lie in (arguably misguided) interpreta-
tions of the work of Catholic theologian Pierre Teilhard de Chardin and his
idea of the ‘noosphere’ of shared digital information that would someday
come to surround the globe.[142]
Building on Le Breton’s analysis, Herbrechter suggests that from the per-
spective of critical posthumanism, transhumanism can be understood as a sort
of ‘neognostic’ hatred of the body that privileges the mind over its vessel of
flesh that continuously degrades and decays.[143]
Such conceptual objections to
transhumanism, however, are very different from bioconservatives’ objections
regarding the expected negative real-world impacts of transhumanist projects.
c. Three Typologies of Transhumanism
There are at least three ways of classifying different forms of transhuman-
ism: from political, objective, and instrumental perspectives.
A political typology of transhumanism. Ferrando identifies three distinct
strains within transhumanism:[144]
1) Libertarian transhumanism argues that the free market – and not
governmental oversight – can best ensure that technologies for hu-
man enhancement are efficiently and effectively developed and
made accessible within human society.
2) Democratic transhumanism seeks to ensure – for example, by means of government regulation – that technologies for human enhancement do not simply become privileges for the powerful and wealthy but are made freely accessible to all human beings regardless of their social or economic status.
3) Extropianism is a movement founded by More and others that advocates the development of genetic engineering, nanotechnology, cryonics, mind uploading, and other technologies that can supposedly allow human lives to be extended indefinitely and spent in pursuit of intellectual fulfillment.

[140] See Herbrechter (2013), p. 101.
[141] Herbrechter (2013), p. 103.
[142] See Teilhard de Chardin, Le Phénomène humain (1955), and its discussion in Herbrechter (2013), p. 104. The revolutionary nature of Teilhard's scientific, philosophical, and theological investigations opens them to many possible interpretations; his thought has frequently been appropriated by transhumanist groups that disconnect it from its ultimate grounding in the orthodox Catholic intellectual tradition and thus interpret it in ways that do not necessarily reflect its original import or context.
[143] See Le Breton, David, L'Adieu au corps (1999), pp. 49, 219-223, as discussed in Herbrechter (2013), pp. 96-97.
[144] Ferrando (2013), p. 27.
This model for categorizing transhumanisms might be understood as consti-
tuting a ‘political’ typology of transhumanism, as it largely distinguishes
transhumanisms according to their view of the role of governments in steering
the development and deployment of transhumanist technologies.
An objective typology of transhumanism. Significant variations also ex-
ist between different forms of transhumanism regarding the kinds of entities
that are objects of the process of biotechnological posthumanization. Another
typology can thus be formulated by classifying strains of transhumanism ac-
cording to their objects:
1) Biotransformative transhumanism seeks to employ transformative technologies to allow particular human beings who are already alive to transcend the limits of human nature through manipulation or augmentation of their existing biological organisms – for example, through somatic cell gene therapy, cryonics, or neuroprosthetic enhancement.
2) Biogenerative transhumanism seeks to purposefully design the characteristics of future beings who have not yet been conceived or born (e.g., through the use of germline gene therapy (GGT) or synthetic biology to engineer a new superhuman species).
3) Mimetic transhumanism seeks to transcend the limits of human nature by creating superior and transcendent beings that are wholly artificial and do not represent a continuation of humanity in an organic, biological sense but which in some conceptual sense might nevertheless be considered our 'offspring' – and perhaps even more so than our biological offspring, insofar as they would be consciously designed by human beings to embody our highest aspirations, rather than being the non-designed products of randomized biological reproductive processes. Such beings might include artificial superintelligences, sapient robot networks, or 'uploaded' human minds that are in fact artificial replicas rather than continuations of their human models.
Herbrechter agrees with Le Breton that for the group we refer to as bio-
transformative transhumanists, the most relevant power relationship is not
that which allows other members of society to control (or be controlled by) an
individual but that which allows the individual to control his or her own
body.[145]
For example, Herbrechter notes that for transhumanists like Warwick,
transhumanism is about a rational humanist subject making a free choice be-
tween ‘good’ and ‘evil’ (or perhaps between ‘good’ and ‘better’) and choosing
the path that will result in the most happiness and independence.[146] Biotransformative transhumanism might thus be understood as a form of extreme humanism.
On the other hand, some forms of radical mimetic transhumanism seek to
actively break all connections with humanistic values. Building on McLuhan’s
notion of the ‘global electric village,’ Herbrechter observes that some trans-
humanists see it as humanity’s role (and even responsibility) to give birth to
our nonanthropic, artificially intelligent successors.[147]
Similarly, drawing on
Truong’s analysis, Herbrechter notes that some transhumanists look forward
with hope to the day when human beings will be replaced by the AIs that
represent the next stage in the evolution of consciousness within our corner
of the universe. It is anticipated that such artificial intelligences would even-
tually become fundamentally ‘inhuman’ as they evolve beyond the shackles
created by human-like sociality, rationality, and knowledge; while ‘conscious-
ness’ might thus continue to exist long after the demise of humanity, ‘human-
like consciousness’ would not long survive the biological beings who provided
its template.[148]
An instrumental typology of transhumanism. Distinctions also exist be-
tween the technologies advocated by different transhumanists for creating
posthumanized entities. There are correlations between the goals held by par-
ticular transhumanists and the technologies used to pursue those goals; how-
ever, the alignment between goals and instruments is not absolute. Some
transhumanists first choose the goal that they wish to accomplish and then
seek to develop technologies to accomplish that goal. For them, achievement
of their selected goal is paramount and the means used to achieve it are sec-
ondary and subject to change. On the other hand, some transhumanists work
as scientists, engineers, entrepreneurs, ethicists, policy experts, or advocates
specializing in a particular type of technology, such as artificial intelligence,
neuroprosthetics, or germline gene therapy. For them, their paramount desire
is discovering new avenues for improving humanity through the use of that
particular technology; the specific ways in which that technology can be employed to create enhanced, transcendent, posthumanized beings are secondary. Such transhumanism can perhaps best be understood using the instrumental typology described here. For example, a scientist who specializes in developing new techniques for synthetic biology and who possesses transhumanist inclinations might pursue the use of such methods for biotransformative, biogenerative, and mimetic transhumanism, while a transhumanist researcher in the field of artificial intelligence might similarly pursue ways of applying AI to advance all three objective types of transhumanism.

[145] See Herbrechter (2013), p. 96, and its analysis of Le Breton (1999), p. 49.
[146] Warwick's views on human enhancement can be found, e.g., in Warwick, "The Cyborg Revolution" (2014). Such perspectives are analyzed in Herbrechter (2013), p. 102.
[147] Herbrechter (2013), p. 50.
[148] See Truong, Jean-Michel, Totalement inhumaine (2001), pp. 49, 207, as translated and analyzed in Herbrechter (2013), p. 172. See also Gladden, "The Social Robot as 'Charismatic Leader': A Phenomenology of Human Submission to Nonhuman Power" (2014).
2. POPULAR (OR ‘COMMERCIAL’) POSTHUMANISM
Herbrechter distinguishes between “a fashionable and popular posthuman-
ism” and a more “serious and philosophical one.” Occasionally, he seems to
suggest that science fiction falls within the sphere of popular and faddish
posthumanism – such as when he speaks of the intimate collaboration be-
tween science fiction and the commercial film industry and notes that the im-
portance of science fiction for posthumanism is “most visible” when science
fiction is considered "in its Hollywood blockbuster incarnation."[149]
However,
as noted earlier, we would argue that in its best and truest form, science fiction
takes its place alongside philosophical posthumanism as a form of synthetic
theoretical posthumanism that seeks to deepen our understanding of future
posthumanities. While we would agree that for many members of the general
public, Hollywood blockbusters represent the most visible presentations of ex-
plicitly posthumanist themes, they are typically not the most insightful, in-
depth, or coherent presentations. By focusing on Hollywood blockbusters,
Herbrechter minimizes the role of other forms of science fiction (such as nov-
els, short stories, roleplaying and computer games, manga and anime, and in-
dependent films) that present more well-thought-out and incisive analyses of
posthumanist themes. We would suggest that the more popular (if not popu-
list) and commercially oriented works of speculative fiction – such as Holly-
wood blockbusters – can be better understood as a form of synthetic practical
posthumanism that is geared specifically at generating particular economic,
social, or political outcomes and which we will discuss here under the title of
popular (or ‘commercial’) posthumanism. Works of popular posthumanism
are typically aimed either at generating maximum profits for their producers,
influencing public opinion to create a demand for new posthumanizing tech-
nologies, or preparing the public to accept changes to daily life that are being
planned by government policymakers, corporations, or other powers.
[149] Herbrechter (2013), pp. 22, 39, 107.

Many of the criticisms directed broadly at the world of 'science fiction' can more accurately be understood as targeting the products and methods of commercial posthumanism. In discussing Best and Kellner's analysis of posthumanism, Herbrechter notes the claim that "Economic neoliberalism, free market ideology and late capitalist individualism can no longer be separated from
the various technological and cultural posthumanization processes."[150] According to that view, popular posthumanism can be seen as simply the most
extreme manifestation of the link between commercial and political interests
and the ongoing infusion of posthumanist themes into contemporary culture.
Similarly, Herbrechter suggests that just as neuroscientists are exploring ways
to exploit the plasticity of the human brain, so, too, "Global virtual hypercapitalism needs an equally plastic and flexible individual subject";[151] popular
posthumanist narratives that emphasize the pliability, dissolubility, and re-
configurability of the human being support the development of subjects that
are ready-made for control by corporate interests.
Indeed, Herbrechter notes the cynical argument that the apparent pro-
cesses of posthuman technologization might simply be a ruse and distraction
foisted cleverly on the public by the forces of neoliberal hypercapitalism that
draw attention away from the “ever-increasing gap between rich and poor and
the further concentration of power and capital” by subduing the masses with
the hope or fear of a radically different future.[152] If such intentionally fabricated posthumanism exists, we would suggest that it takes the form not of
critical or philosophical posthumanism (whose proponents are constitution-
ally on guard against such efforts at manipulation) but of techno-idealism,
transhumanism, and the sort of commercial posthumanism described here. In-
deed, Herbrechter alludes to the fact that complex, long-term, resource-inten-
sive programs for developing new technologies for virtualization, miniaturi-
zation, surveillance, cyborgization, and artificial intelligence are being funded
and led not primarily by philosophers who are interested in exploring the
boundaries of human nature but by powerful commercial and governmental
institutions (including banks, insurance companies, marketing firms, Internet
and technology companies, and military and police organizations) that are
seeking to develop such instruments for their own concrete ends. Such tech-
nologies not only give governments new tools for fighting crime and terrorism
but also facilitate the invention of new forms of crime and terrorism (such as memory-hacking or the development of hybrid bioelectronic viruses[153]) that were never previously possible.[154]

[150] Herbrechter (2013), p. 55.
[151] Herbrechter (2013), p. 25.
[152] Herbrechter notes the substantiveness of this argument without necessarily fully endorsing it; see Herbrechter (2013), p. 23.
Just as popular posthumanism can be employed as an instrument by cor-
porations and governments to aid in their technoscientific consolidation of
profits and power, so, too, can critical and sociopolitical posthumanism – with
support from science fiction – play an important role in identifying these tech-
nologically facilitated efforts to gain hegemony and in developing creative
new ways of conceptualizing the nature of citizenship in a posthuman world
that guarantee a more democratic basis for political and economic power.[155]
E. Posthumanisms That Join the Analytic, Synthetic, Theoretical, and Practical
Hybrid posthumanisms that include strong analytic, synthetic, theoretical,
and practical aspects can be understood as examples of a ‘posthumanism of
production’ that develops a robust and rigorous theoretical framework which
is then utilized to successfully generate concrete products or services within
the contemporary world. At least three forms of posthumanism display hybrid
traits to such an extent that it would be arbitrary to attempt to force them to
fit into just one quadrant of our framework. These forms of posthumanism are
the metahumanism developed by Del Val and Sorgner, sociopolitical posthu-
manism, and organizational posthumanism. We can consider each of these
posthumanisms in turn.
1. METAHUMANISM AS A MOVEMENT OF RELATIONAL METABODIES
Ferrando cites a form of 'metahumanism' originally formulated by Del Val and Sorgner in 2010[156] and grounded in the thought of Nietzsche, Deleuze, Haraway, Hayles, and others.[157] Such metahumanism draws explicitly on such diverse fields as neuroscience, chaos theory, quantum physics, ecology, and Eastern philosophy.[158] Sorgner explains that this metahumanism attempts to build on the best insights from both Anglo-American transhumanist and Continental posthumanist thought. On the one hand, metahumanism adopts critical posthumanism's "attempt to transcend dualisms" and cultivation of a "this-worldly understanding of human beings"; yet rather than assuming the materialist perspective attributed to posthumanism, metahumanism adopts an intensely relational outlook.[159] At the same time, metahumanism is compatible with the transhumanist desire to create transcendent beings. However, metahumanism holds that while it is acceptable for individuals to desire such a transformation and to pursue that goal by applying advanced biotechnologies to themselves (i.e., as a form of biotransformative transhumanism), driving the evolution of human beings into a superior species cannot be claimed to be a necessary goal for humanity as a whole – because the transhumanist ideal is only one of many aims present within the "radical plurality of concepts of the good."[160]

[153] See Gladden, The Handbook of Information Security for Advanced Neuroprosthetics (2015), for a discussion of such possibilities.
[154] Herbrechter (2013), p. 190; see also Gladden, The Handbook of Information Security for Advanced Neuroprosthetics (2015).
[155] See Gray (2002), p. 29, and its discussion in Herbrechter (2013), p. 190.
[156] Ferrando (2013), p. 32.
[157] Del Val et al. (2011), pp. 1-2, 6-9.
[158] Del Val et al. (2011), p. 9.
Sorgner positions metahumanism as an outgrowth of philosophical
posthumanism rather than cultural or critical posthumanism, insofar as me-
tahumanism’s key dynamic is its focus on consistently applying a particular
philosophical methodology that Sorgner describes as a ‘procedural attitude’
which “brings together Adorno’s negative dialectics and Vattimo’s radical her-
meneutics such that it is a particular procedure or a method which can get
applied to various discourses.” This method is employed by entering into the
discourses of other thinkers (such as utilitarian bioethicists) and helping them
develop their own paradigms by challenging, undermining, and breaking
apart those positions that they take for granted – thereby transforming their
thought into something that is "more fluid and multiperspectival."[161]
Metahumanism represents a form of ‘radical relationalism,’ insofar as it
suggests that physical or social bodies which appear to be discrete entities can
instead best be understood as the effects of contingent relations (such as
movement) and that such seemingly discrete bodies can be transformed by
altering the relations in which they participate. This notion is formalized in
the idea of a ‘metabody,’ which “is not a fixed entity but a relational body.”
Such metabodies are both 'metasexual' and post-anatomical.[162] Metahumanism emphasizes that "Monsters are promising strategies for performing this development away from humanism"[163] and its understanding of the human
body. In the recognition that the depiction of quasi-human monsters might
aid us to think about humanity in a new way, a concrete link exists between
the philosophical metahumanism proposed by Del Val and Sorgner and the form of fictional metahumanism that we discussed in an earlier section.

[159] Del Val et al. (2011), pp. 2-3.
[160] Such a position has connections with both postmodernism and posthumanism. See Del Val et al. (2011), p. 3.
[161] Metahumanism thus inherently possesses a strong outward orientation that reaches out to engage thinkers who work in other disciplines and possess other perspectives. See Del Val et al. (2011), pp. 3-4.
[162] Del Val et al. (2011), pp. 5, 14, 8.
[163] Del Val & Sorgner (2011), p. 1.
Unlike biopolitical posthumanism, metahumanism does not have a strong
future orientation; it shares with cultural and critical posthumanism the fact
that “it is non-utopian, it does not see the metahuman as a future, but as a
strategy in the present."[164]
However, while metahumanism contains strong
analytic aspects, it is also a form of synthetic posthumanism, insofar as it envi-
sions a new kind of posthumanized being that does not yet fully exist but
which is only now in the process of appearing. Likewise, metahumanism
spans theoretical and practical posthumanism in that it not only seeks to better
understand human nature but also to give birth to concrete new forms of ar-
tistic expression and social and political interaction. This is done partly by
enacting “new strategies of resistance” to human beings’ subjugation to rep-
resentation and language; such strategies may take the form of “amorphous
becomings” manifested through the motion of dance and other forms of artis-
tic performance.
165
2. SOCIOPOLITICAL POSTHUMANISM
Sociopolitical posthumanism can be understood as a form of what Herbrechter (building on Rosenau) describes as 'techno-cultural pragmatism.'[166]
Sociopolitical posthumanism accepts that posthumanizing technological
change is gaining in speed and intensity and – given the fact that the yearning
for technological advancement is a fundamental aspect of human nature – any
efforts to completely block such technologization are misguided and futile.
Instead, sociopolitical posthumanism seeks to steer the processes of technolo-
gization and posthumanization in a way that maximizes their positive impacts
while ameliorating or avoiding their detrimental side-effects.
Sociopolitical posthumanism frequently initiates new debates among sub-
ject-matter experts and the broader public on such topics and, insofar as pos-
sible, proposes solutions. The analytic and theoretical aspects of sociopolitical
posthumanism are evident when, for example, scholars explore how estab-
lished definitions of a ‘legal person’ are challenged by an increasingly dean-
thropocentrized environment in which some artificially intelligent systems al-
ready display human-like decision-making capacities and fill societal roles
previously restricted to human beings. The synthetic and practical aspects are
manifested when scholars draw on such theoretical investigations to propose
the implementation of new legislation, regulations, or financial systems not because they are needed to account for a reality that exists today but to address the activities of posthumanized beings expected to appear in the future.

[164] Del Val et al. (2011), p. 6.
[165] Del Val himself has pioneered such forms of artistic expression. For the role of practical action in metahumanism, see Del Val et al. (2011), pp. 5-6, 12.
[166] See Herbrechter (2013), pp. 23-24, and its discussion of Rosenau (1992).
However, sociopolitical posthumanism differs from the synthetic practical
posthumanisms of transhumanism and bioconservatism, whose adherents
may manufacture theoretical frameworks to justify the pursuit or condemna-
tion of processes of technologization that they already instinctively find ap-
pealing or repellent. For practitioners of sociopolitical posthumanism, a seri-
ous and in-depth exploration of theoretical questions is generally the starting
point, and any resulting proposals for practical change emerge from a well-
developed theoretical framework of the sort commonly found within philo-
sophical or critical posthumanism.
Such sociopolitical posthumanism can be found, for example, within the
field of law, where Braman argues that the traditional “assumption that the
law is made by humans for humans” is no longer tenable; as the roles played
by computers in society’s decision-making processes grow, we are beginning
to witness “a transformation in the legal system so fundamental that it may
be said that we are entering a period of posthuman law."[167] Another example
would be the theoretically grounded ‘Cyborg Bill of Rights’ proposed by Gray
as an attempt to ensure that the increasing technological capacity for cy-
borgization will result in beneficial new forms of posthumanized political or-
ganization and engagement and not simply the production of new military
instruments.[168]
3. ORGANIZATIONAL POSTHUMANISM
Organizational posthumanism applies posthumanist insights and method-
ologies to the study and management of organizations including businesses,
nonprofit organizations, schools, religious groups, professional associations,
political parties, governments, and military organizations. Insofar as ongoing
technological and social change is reshaping the capacities and relationality
of the human beings who belong to organizations – and creating new kinds
of entities like social robots that can enter into goal-directed social relationships with human beings and one another[169] – the nature of organizations is
itself changing. Organizational posthumanism can aid us in making sense of
and, ideally, anticipating such changes. While a scattered assortment of works by management theorists and practitioners has begun to explore the implications of posthumanism for organizational life, these investigations are still in their incipient stages;[170] the explicit formulation within this book of organizational posthumanism as an emerging discipline thus represents a novel development within the fields of posthumanism and organizational management.

167. Braman, “Posthuman Law: Information Policy and the Machinic World” (2002).
168. See Gray (2002) and the discussion of that work in Herbrechter (2013), p. 105. For a further sociopolitical posthumanist discussion of ways in which, e.g., the use of posthuman neuroprosthetic technologies could give rise to new forms of utopian or dystopian societies, see Gladden, “Utopias and Dystopias as Cybernetic Information Systems: Envisioning the Posthuman Neuropolity” (2015).
169. See, e.g., Gladden, “The Social Robot as ‘Charismatic Leader’” (2014).
Organizational posthumanism can be defined as an approach to analyzing,
understanding, creating, and managing organizations that employs a post-an-
thropocentric and post-dualistic perspective and which recognizes that
emerging technologies that complement traditional biological human beings
with new kinds of intelligent actors also transform the structures, membership, dynamics, and roles available to organizations.[171] From this description,
it can be seen that – like sociopolitical posthumanism and the metahumanism
of Del Val and Sorgner – organizational posthumanism incorporates elements
of both analytic and synthetic and both theoretical and practical posthuman-
ism.
Analytic and synthetic elements. Organizational posthumanism is ana-
lytic in that it is not simply interested in imagining the radically novel forms
that organizations might take ten or twenty or fifty years from now, after on-
going trends of roboticization, cyborgization, digitalization, and virtualization
will have transformed organizations wholly beyond recognition; it is also in-
terested in understanding and shaping the dynamics of organizations that ex-
ist today to the extent that they have already been affected by technological
and nontechnological processes of posthumanization. Although the impact
that artificial intelligence, social robotics, nanorobotics, artificial life, genetic
engineering, neurocybernetics, and virtual reality have had on organizations
to date is relatively small when compared to biopolitical posthumanists’ vi-
sions of the sociotechnological changes that loom on the horizon, even those
modest impacts already realized are transforming the ways that organizations
can and must operate, rendering many previous best practices increasingly
obsolete.
170. For examples of such works, see, e.g., Gephart, “Management, Social Issues, and the Postmodern Era” (1996); Berner, Management in 20XX: What Will Be Important in the Future – A Holistic View (2004); Mara & Hawk, “Posthuman rhetorics and technical communication” (2009); Barile, “From the Posthuman Consumer to the Ontobranding Dimension: Geolocalization, Augmented Reality and Emotional Ontology as a Radical Redefinition of What Is Real” (2013); and Gladden, “Neural Implants as Gateways to Digital-Physical Ecosystems and Posthuman Socioeconomic Interaction” (2016).
171. For an in-depth discussion of this topic, see Part Two of this volume, “Organizational Posthumanism.”
Chapter One: A Typology of Posthumanism • 101
At the same time, organizational posthumanism is synthetic insofar as ef-
fective strategic management demands that organizations anticipate the con-
tours of new phenomena that may appear in the future and understand their
potential implications for an organization. For example, the frequently employed PESTLE analysis requires organizations to envision the short-, medium-, and long-term political, economic, social, technological, legal, and environmental impacts that will result either from internal organizational decisions or future changes in the organization’s external ecosystem.[172] In order to
anticipate such potential impacts and develop contingency plans for respond-
ing to them (or strategies to proactively shape them), organizations must at-
tempt to project as accurately as possible the future directions of posthuman-
ization processes and the new kinds of beings, organizational structures, in-
teractions, physical and virtual spaces, and ecosystems that they might pro-
duce. This demands a rigorous and imaginative futurology similar to that em-
ployed in philosophical posthumanism and the more thoughtful forms of sci-
ence fiction.
Theoretical and practical elements. Organizational posthumanism is
theoretical insofar as it attempts to identify and understand the manner in
which organizations are being affected by existing or potential processes of
posthumanization. This involves analyzing the ways in which organizations’
members, structures, processes, information systems, physical and virtual
spaces, and external environments are being changed through the action of
supplementing or replacing their natural biological human workers with ad-
vanced AIs, social robots, neuroprosthetically augmented human beings, and
other posthumanized beings. In this regard, organizational posthumanism
builds on existing lines of inquiry within philosophical posthumanism. For
example, Miah notes that posthumanist thought has long studied the growing
fusion of human beings with the technological devices that we use to interact
with one another and with our environment and to perform work-related
tasks. As such tools grow increasingly sophisticated, they acquire ever subtler
and more efficacious ways of liberating and empowering human beings, even
as they subjugate and oppress. Much of this ambivalent dynamic results from
our tools’ deepening integration into the mechanisms of organizations of
which we are members.[173] The theoretical component of organizational
posthumanism attempts to develop coherent conceptual frameworks to ex-
plain and anticipate such phenomena.
At the same time, organizational posthumanism is also practical in that its
goal is not simply to understand the ways in which posthuman realities are affecting organizations but also to aid management practitioners in proactively designing, creating, and maintaining organizations that can subsist within such a complex and novel competitive environment. Organizational posthumanism seeks to intentionally bring about the creation of a particular type of near-future ‘posthumanity’ (i.e., a world of organizations that survive as viable systems within a nonanthropocentric context of radical technological change and convergence) and to purposefully block the creation of a different type of near-future ‘posthumanity’ (i.e., a world of organizations that become unproductive, inefficient, unsustainable, dehumanizing, and even dystopian as a result of their inability to deal with the emerging nonanthropocentric context).[174]

172. See Cadle et al., Business Analysis Techniques: 72 Essential Tools for Success (2010), pp. 3-6.
173. See Miah (2008), p. 82, and its analysis of Mazlish, The Fourth Discontinuity: The Co-Evolution of Humans and Machines (1993).
V. Conclusion
The term ‘posthumanism’ is employed within an increasingly wide array
of contexts to describe phenomena which, in one way or another, focus on a
change in the traditional understanding of the human being. Some forms of
posthumanism argue that the historical definition of humanity has always
been problematic, others that it is now fracturing and becoming obsolete as a
result of ongoing technological change. Still other forms of posthumanism ar-
gue that our traditional understanding of the ‘human’ must be expanded or
replaced as a next step in the development of sapient society. As we have seen,
posthumanisms include such diverse phenomena as new academic disciplines,
artistic and spiritual movements, research and development programs for new
technologies, works of science fiction, social advocacy campaigns, and legis-
lative lobbying efforts.
By grouping posthumanisms into a handful of basic types and clarifying
the similarities and differences between them, the two-dimensional concep-
tual framework formulated in this text attempts to create a more orderly and
comprehensive foundation for the investigation of posthumanism than has
previously existed. The first type considered in detail was analytic theoretical
posthumanism, which includes such fields as critical and cultural posthuman-
ism and can be understood roughly as a posthumanism of critique. Synthetic
theoretical posthumanism, which includes phenomena like philosophical
posthumanism, science fiction, and techno-idealism, can be generally under-
stood as a posthumanism of imagination. Analytic practical posthumanism,
which includes various forms of metahumanism and neohumanism, can be
seen as a posthumanism of conversion of hearts and minds. Synthetic practical
posthumanism, which includes transhumanism, bioconservatism, and popular or commercial posthumanism, can be understood as a posthumanism of control over the actions of societies and individuals. Finally, the hybrid posthumanism that combines both analytic and synthetic as well as theoretical and practical aspects – as exemplified by the metahumanism of Sorgner and Del Val, sociopolitical posthumanism, and organizational posthumanism – can be understood as a posthumanism of production.

174. In the case of, e.g., commercial enterprises and military organizations, the theory and practice of organizational posthumanism might be employed not only to maximize the efficiency and productivity of one’s own posthumanized organization but also to degrade the efficiency and productivity of competing or opposing organizations, to the extent that such actions are legally and ethically permissible.
As posthumanist perspectives continue to be adapted and applied to new
fields – such as that of organizational management – the work of developing
conceptual frameworks that can coherently account for the full spectrum of
posthumanisms is only beginning. It is hoped that the typology formulated in
this text can contribute to such endeavors by highlighting areas of definitional
ambiguity, building new conceptual bridges between different forms of
posthumanism, and formulating terminological reference points that can be
relied upon both by those who embrace various forms of posthumanism and
those who wish to challenge the principles of posthumanist thought.
References
2001: A Space Odyssey. Directed by Stanley Kubrick. 1968. Warner Home Video, 2001.
DVD.
Anders, Günther. Die Antiquiertheit des Menschen. Band 1: Über die Seele im Zeitalter
der zweiten industriellen Revolution, Munich: Beck, 1956 [1992].
Asimov, Isaac. I, Robot. New York: Gnome Press, 1950.
Badmington, Neil. “Cultural Studies and the Posthumanities.” In New Cultural Stud-
ies: Adventures in Theory, edited by Gary Hall and Claire Birchall, pp. 260-72. Ed-
inburgh: Edinburgh University Press, 2006.
Barile, Nello. “From the Posthuman Consumer to the Ontobranding Dimension: Ge-
olocalization, Augmented Reality and Emotional Ontology as a Radical Redefini-
tion of What Is Real.” intervalla: platform for intellectual exchange, vol. 1 (2013).
http://www.fus.edu/intervalla/images/pdf/9_barile.pdf. Accessed May 18, 2016.
Braman, Sandra. “Posthuman Law: Information Policy and the Machinic World.” First
Monday 7, no. 12 (2002). http://www.ojphi.org/ojs/index.php/fm/article/view/1011/932. Accessed March 15, 2016.
Berner, Georg. Management in 20XX: What Will Be Important in the Future – A Holis-
tic View. Erlangen: Publicis Corporate Publishing, 2004.
Berrigan, Joseph R. “The Prehumanism of Benzo d’Allesandria.” Traditio (1969): 249-
63.
Better Off Dead: The Evolution of the Zombie as Post-Human, edited by Deborah
Christie and Sarah Juliet Lauro. New York: Fordham University Press, 2011.
The Bible and Posthumanism, edited by Jennifer L. Koosed. Atlanta: Society of Biblical
Literature, 2014.
Birnbacher, Dieter. “Posthumanity, Transhumanism and Human Nature.” In Medical
Enhancement and Posthumanity, edited by Bert Gordijn and Ruth Chadwick, pp.
95-106. The International Library of Ethics, Law and Technology 2. Springer
Netherlands, 2008.
Bostrom, Nick. “A History of Transhumanist Thought.” Journal of Evolution and
Technology vol. 14, no. 1 (2005). http://jetpress.org/volume14/bostrom.html.
Bostrom, Nick. “Why I Want to Be a Posthuman When I Grow Up.” In Medical En-
hancement and Posthumanity, edited by Bert Gordijn and Ruth Chadwick, pp.
107-37. The International Library of Ethics, Law and Technology 2. Springer
Netherlands, 2008.
Cadle, James, Debra Paul, and Paul Turner. Business Analysis Techniques: 72 Essential
Tools for Success. Swindon: British Informatics Society Limited, 2010.
Calverley, D.J. “Imagining a non-biological machine as a legal person.” AI & SOCIETY
22, no. 4 (2008): 523-37.
Coker, Christopher. “Biotechnology and War: The New Challenge.” Australian Army
Journal vol. II, no. 1 (2004): 125-40.
Curtis, Neal. “The Inhuman.” Theory, Culture & Society 23, 2-3 (2006): 434-36.
Cyberculture, Cyborgs and Science Fiction: Consciousness and the Posthuman, edited by
William S. Haney II. Amsterdam: Rodopi, 2006.
Del Val, Jaime, and Stefan Lorenz Sorgner. “A Metahumanist Manifesto.” The Agonist
IV no. II (Fall 2011). http://www.nietzschecircle.com/AGONIST/2011_08/METAHUMAN_MANIFESTO.html. Accessed March 2, 2016.
Del Val, Jaime, Stefan Lorenz Sorgner, and Yunus Tuncel. “Interview on the Me-
tahumanist Manifesto with Jaime del Val and Stefan Lorenz Sorgner.” The Agonist
IV no. II (Fall 2011). http://www.nietzschecircle.com/AGONIST/2011_08/Interview_Sorgner_Stefan-Jaime.pdf. Accessed March 2, 2016.
Ferrando, Francesca. “Posthumanism, Transhumanism, Antihumanism, Metahuman-
ism, and New Materialisms: Differences and Relations.” Existenz: An International
Journal in Philosophy, Religion, Politics, and the Arts 8, no. 2 (Fall 2013): 26-32.
Fukuyama, Francis. Our Posthuman Future: Consequences of the Biotechnology Revolu-
tion. New York: Farrar, Straus, and Giroux, 2002.
Gane, Nicholas. “Posthuman.” Theory, Culture & Society 23, 2-3 (2006): 431-34.
Gephart, Jr., Robert P. “Management, Social Issues, and the Postmodern Era.” In Post-
modern Management and Organization Theory, edited by David M. Boje, Robert P.
Gephart, Jr., and Tojo Joseph Thatchenkery, pp. 21-44. Thousand Oaks, CA: Sage
Publications, 1996.
Gladden, Matthew E. The Handbook of Information Security for Advanced Neuropros-
thetics, Indianapolis: Synthypnion Academic, 2015.
Gladden, Matthew E. “Neural Implants as Gateways to Digital-Physical Ecosystems
and Posthuman Socioeconomic Interaction.” In Digital Ecosystems: Society in the
Digital Age, edited by Łukasz Jonak, Natalia Juchniewicz, and Renata Włoch, pp.
85-98. Warsaw: Digital Economy Lab, University of Warsaw, 2016.
Gladden, Matthew E. “The Social Robot as ‘Charismatic Leader’: A Phenomenology
of Human Submission to Nonhuman Power.” In Sociable Robots and the Future of
Social Relations: Proceedings of Robo-Philosophy 2014, edited by Johanna Seibt,
Raul Hakli, and Marco Nørskov, pp. 329-39. Frontiers in Artificial Intelligence and
Applications 273. IOS Press, 2014.
Gladden, Matthew E. “Utopias and Dystopias as Cybernetic Information Systems: En-
visioning the Posthuman Neuropolity.” Creatio Fantastica nr 3 (50) (2015).
Goicoechea, María. “The Posthuman Ethos in Cyberpunk Science Fiction.” CLCWeb:
Comparative Literature and Culture 10, no. 4 (2008): 9. http://docs.lib.purdue.edu/cgi/viewcontent.cgi?article=1398&context=clcweb. Accessed May 18, 2016.
Graham, Stephen. “Imagining Urban Warfare: Urbanization and U.S. Military Tech-
noscience.” In War, Citizenship, Territory, edited by Deborah Cowen and Emily
Gilbert. New York: Routledge, 2008.
Gray, Chris Hables. Cyborg Citizen: Politics in the Posthuman Age, London: Routledge,
2002.
Gray, Chris Hables. “The Ethics and Politics of Cyborg Embodiment: Citizenship as a
Hypervalue.” Cultural Values 1, no 2 (1997): 252-58.
Haraway, Donna. “A Manifesto for Cyborgs: Science, Technology, and Socialist Fem-
inism in the 1980s.” Socialist Review vol. 15, no. 2 (1985), pp. 65-107.
Haraway, Donna. Simians, Cyborgs, and Women: The Reinvention of Nature. New
York: Routledge, 1991.
Hassan, Ihab. “Prometheus as Performer: Toward a Posthumanist Culture? A Univer-
sity Masque in Five Scenes.” The Georgia Review vol. 31, no. 4 (1977): 830-50.
Hayles, N. Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics, Liter-
ature, and Informatics, Chicago: University of Chicago Press, 1999.
Hayles, N. Katherine. My Mother Was a Computer: Digital Subjects and Literary Texts.
Chicago: University of Chicago Press, 2005.
Herbrechter, Stefan. Posthumanism: A Critical Analysis. London: Bloomsbury, 2013.
[Kindle edition.]
I, Robot. Directed by Alex Proyas. 2004. 20th Century Fox, 2005. DVD.
Interface Zero 2.0: Full Metal Cyberpunk, developed by David Jarvis, Curtis Lyons, Sa-
rah Lyons, Thomas Shook, David Viars, and Peter J. Wacks, Gun Metal Games,
2013.
Kowalewska, Agata. “Symbionts and Parasites – Digital Ecosystems.” In Digital Eco-
systems: Society in the Digital Age, edited by Łukasz Jonak, Natalia Juchniewicz,
and Renata Włoch, pp. 73-84. Warsaw: Digital Economy Lab, University of War-
saw, 2016.
Krishnan, Armin. “Enhanced Warfighters as Private Military Contractors.” In Super
Soldiers: The Ethical, Legal and Social Implications, edited by Jai Galliott and Mi-
anna Lotz. London: Routledge, 2015.
Krüger, Oliver. Virtualität und Unsterblichkeit: Die Visionen des Posthumanismus [Vir-
tuality and Immortality]. Freiburg: Rombach, 2004.
Kurzweil, Ray. The Singularity is Near: When Humans Transcend Biology. New York:
Viking Penguin, 2005.
Le Breton, David. L’Adieu au corps. Paris: Métaillé, 1999.
Lovecraft, Howard Phillips. At the Mountains of Madness and Other Novels. Sauk City,
Wisconsin: Arkham House, 1985.
Lovecraft, Howard Phillips. The Dunwich Horror and Others. Sauk City, Wisconsin:
Arkham House, 1983.
Mara, Andrew, and Byron Hawk. “Posthuman rhetorics and technical communica-
tion.” Technical Communication Quarterly 19, no. 1 (2009): 1-10.
Mazlish, Bruce. The Fourth Discontinuity: The Co-Evolution of Humans and Machines.
New Haven: Yale University Press, 1993.
Miah, Andy. “A Critical History of Posthumanism.” In Medical Enhancement and
Posthumanity, edited by Bert Gordijn and Ruth Chadwick, pp. 71-94. The Interna-
tional Library of Ethics, Law and Technology 2. Springer Netherlands, 2008.
Milán, Victor. “Transfigurations” and “Appendix: The Science of the Wild Card Vi-
rus: Excerpts from the Literature.” In Wild Cards, edited by George R. R. Martin.
Bantam Books, 1986.
Miller, Jr., Gerald Alva. “Conclusion: Beyond the Human: Ontogenesis, Technology,
and the Posthuman in Kubrick and Clarke’s 2001.” In Exploring the Limits of the
Human through Science Fiction, pp. 163-90. American Literature Readings in the
21st Century. Palgrave Macmillan US, 2012.
Nealon, Jeffrey T. Foucault Beyond Foucault. Stanford, CA: Stanford University Press,
2008.
Neohumanist Educational Futures: Liberating the Pedagogical Intellect, edited by Sohail
Inayatullah, Marcus Bussey, and Ivana Milojević. Tansui, Taipei: Tamkang University Press, 2006. http://www.metafuture.org/Books/nh-educational-futures-foreword.htm. Accessed March 16, 2016.
Pepperell, Robert. The Posthuman Condition: Consciousness Beyond the Brain. Bristol, UK:
Intellect Books, 2003.
Poster, Mark. What’s the Matter with the Internet? Minneapolis: University of Minne-
sota Press, 2001.
Posthuman Bodies, edited by Judith Halberstam and Ira Livingstone. Bloomington, IN:
Indiana University Press, 1995.
Posthumanist Shakespeares, edited by Stefan Herbrechter and Ivan Callus. Palgrave
Shakespeare Studies. Palgrave Macmillan UK, 2012.
Rosenau, Pauline Marie. Post-Modernism and the Social Sciences: Insights, Inroads, and
Intrusions. Princeton, NJ: Princeton University Press, 1992.
Sanbonmatsu, John. The Postmodern Prince: Critical Theory, Left Strategy, and the
Making of a New Political Subject. New York: Monthly Review Press, 2004.
Sarkar, Prabhat Rainjan. “Neohumanism Is the Ultimate Shelter (Discourse 11).” In
The Liberation of Intellect: Neohumanism. Kolkata: Ananda Marga Publications,
1982.
Shadowrun: Core Rulebook 5, edited by Kevin Killiany and Katherine Monasterio.
Lake Stevens, Washington: Catalyst Game Labs, 2013.
Sixth World Almanac, edited by Jason Hardy and John Helfers. The Topps Company,
2010.
Smith, Andrew James. Genetic Gold: The Post-human Homunculus in Alchemical and
Visual Texts. M.A. Thesis. University of Pretoria, 2009.
Teilhard de Chardin, Pierre. Le Phénomène humain. Paris: Seuil, 1955.
Thacker, Eugene. “What Is Biomedia?” Configurations 11 (2003): 47-79.
Truong, Jean-Michel. Totalement inhumaine. Paris: Seuil, 2001.
Warwick, Kevin. “The Cyborg Revolution.” Nanoethics 8 (2014): 263-73.
Witt, Ronald. “Francesco Petrarca and the Parameters of Historical Research.” Reli-
gions 3, no. 3 (2012): 699-709.
Wolin, Richard. “Foucault the Neohumanist?” The Chronicle of Higher Education. The
Chronicle Review (September 1, 2006).
Chapter Two
Organizational Posthumanism[1]

Abstract. Building on existing forms of critical, cultural, biopolitical, and sociopolitical posthumanism, in this text a new framework is developed for understanding and guiding the forces of technologization and posthumanization that are reshaping contemporary organizations. This ‘organizational posthumanism’ is an approach to analyzing, creating, and managing organizations that employs a post-dualistic and post-anthropocentric perspective and which recognizes that emerging technologies will increasingly transform the kinds of members, structures, systems, processes, physical and virtual spaces, and external ecosystems that are available for organizations to utilize. It is argued that this posthumanizing technologization of organizations will especially be driven by developments in three areas: 1) technologies for human augmentation and enhancement, including many forms of neuroprosthetics and genetic engineering; 2) technologies for synthetic agency, including robotics, artificial intelligence, and artificial life; and 3) technologies for digital-physical ecosystems and networks that create the environments within which and the infrastructure through which human and artificial agents will interact.

Drawing on a typology of contemporary posthumanism, organizational posthumanism is shown to be a hybrid form of posthumanism that combines analytic, synthetic, theoretical, and practical elements. Like analytic forms of posthumanism, organizational posthumanism recognizes the extent to which posthumanization has already transformed businesses and other organizations; it thus occupies itself with understanding organizations as they exist today and developing strategies and best practices for responding to the forces of posthumanization. On the other hand, like synthetic forms of posthumanism, organizational posthumanism anticipates that intensifying and accelerating processes of posthumanization will create future realities quite different from those seen today; it thus attempts to develop conceptual schemas to account for such potential developments, both as a means of expanding our theoretical knowledge of organizations and of enhancing the ability of contemporary organizational stakeholders to conduct strategic planning for a radically posthumanized long-term future.

1. This chapter was originally published in Gladden, Matthew E., Sapient Circuits and Digitalized Flesh: The Organization as Locus of Technological Posthumanization, pp. 93-131, Indianapolis: Defragmenter Media, 2016.
I. Introduction
‘Posthumanism’ can be defined briefly as an intellectual framework for understanding reality that is post-anthropocentric and post-dualistic; for posthumanism, the ‘natural’ biological human being as traditionally understood becomes just one of many intelligent subjects acting within a complex ecosystem.[2]
Some forms of posthumanism focus on the ways in which our notion of
typical human beings as the only members of society has been continuously
challenged over the centuries through the generation of cultural products like
myths and works of literature that feature quasi-human beings such as mon-
sters, ghosts, angels, anthropomorphic animals, cyborgs, and space aliens (i.e.,
through processes of nontechnological ‘posthumanization’).[3]
Other forms of
posthumanism address the ways in which the circle of persons and intelligent
agents dwelling within our world is being transformed and expanded through
the engineering of new kinds of entities such as human beings possessing neu-
roprosthetic implants, genetically modified human beings, social robots, sen-
tient networks, and other advanced forms of artificial intelligence (i.e.,
through processes of technological posthumanization).[4] The development of
sound and discerning forms of posthumanist thought is becoming increasingly important as society grapples with the ontological, ethical, legal, and cultural implications of emerging technologies that are generating new forms of posthumanized existence.

2. This definition builds on the definitions formulated by scholars of posthumanism such as Ferrando, Miller, Herbrechter, Miah, and Birnbacher, as well as on our own typology of posthumanism found in Part One of this volume, “A Typology of Posthumanism: A Framework for Differentiating Analytic, Synthetic, Theoretical, and Practical Posthumanisms.” See Ferrando, “Posthumanism, Transhumanism, Antihumanism, Metahumanism, and New Materialisms: Differences and Relations” (2013), p. 29; Miller, “Conclusion: Beyond the Human: Ontogenesis, Technology, and the Posthuman in Kubrick and Clarke’s 2001” (2012), p. 164; Herbrechter, Posthumanism: A Critical Analysis (2013), pp. 2-3; Miah, “A Critical History of Posthumanism” (2008), p. 83; and Birnbacher, “Posthumanity, Transhumanism and Human Nature” (2008), p. 104.
3. Such forms of posthumanism include the critical and cultural posthumanism pioneered by Haraway, Halberstam and Livingstone, Hayles, Badmington, and others. See, e.g., Haraway, “A Manifesto for Cyborgs: Science, Technology, and Socialist Feminism in the 1980s” (1985); Haraway, Simians, Cyborgs, and Women: The Reinvention of Nature (1991); Posthuman Bodies, edited by Halberstam & Livingstone (1995); Hayles, How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics (1999); Graham, Representations of the Post/Human: Monsters, Aliens and Others in Popular Culture (2002); Badmington, “Cultural Studies and the Posthumanities” (2006); and Herbrechter (2013).
4. Such forms of posthumanism include philosophical posthumanism, bioconservatism, and transhumanism, which are analyzed in Miah (2008), pp. 73-74, 79-82, and Ferrando (2013), p. 29. Such approaches can be seen, for example, in Fukuyama, Our Posthuman Future: Consequences of the Biotechnology Revolution (2002); Bostrom, “Why I Want to Be a Posthuman When I Grow Up” (2008); and other texts in Medical Enhancement and Posthumanity, edited by Gordijn & Chadwick (2008).

The establishing of conceptual links between organizational management and the idea of the ‘posthuman’ is nothing new. As early as 1978, management scholars Bourgeois, McAllister, and Mitchell had written that “Much of the organization theory literature from the posthuman relations era concentrates on defining which organizational structures, management styles, et cetera are most appropriate (effective) for different technologies and/or environmental contingencies.”[5] Writing in 1996, Gephart drew on fictional depictions of cyborgs to envision an emerging ‘Postmanagement Era’ in which an organization’s complex network of computerized systems – with its own synthetic values and logic – would become the true manager of an organization that no longer exists and acts for the sake of human beings. Although a human being might still appear to function as a ‘manager’ within such an organization, in reality she would be neither a manager nor a natural, biological human being; instead she would possess the form of a cyborg who has been permanently integrated into her employer’s operational, financial, and technological systems and who has been weaponized for commercial ends – a being whose human agency has been dissolved until she becomes little more than a cold and lethally efficient “posthuman subject, ripping at flesh as part of her job.”[6]

More recently, scholars have explored potential relationships between posthumanism and particular specialized fields within organizational theory and management. For example, Mara and Hawk consider the relationship of posthumanism to the technical communication that constitutes an important
form of information flow within contemporary organizations that are so dependent on technology. They note the evolving roles that organizations’ human and nonhuman actors play in change management, organizational culture, human-computer interaction (HCI), and the integration of technology into the workplace within the context of a complex posthuman organizational ecology in which “it is no longer tenable to divide the world into human choice and technological or environmental determinism.”[7]

5. Bourgeois et al., “The Effects of Different Organizational Environments upon Decisions about Organizational Structure” (1978), pp. 508-14. This allusion to the posthuman is not elaborated upon elsewhere in the text. The article describes an empirical study that was conducted to test hypotheses relating to the default behavior of managers when their organizations encounter “turbulent and threatening business environments” (p. 508).
6. See Gephart, “Management, Social Issues, and the Postmodern Era” (1996), pp. 36-37, 41. Strictly speaking, Gephart’s approach is more postmodernist than posthumanist. While there are areas of overlap between postmodernism and posthumanism, postmodernism generally posits a more nihilistic deconstruction of the notion of ‘humanity,’ while posthumanism seeks to transform and expand the historically anthropocentric concepts of personal agency and subjectivity to incorporate quasi-human, parahuman, and nonhuman entities. See Part One of this volume, “A Typology of Posthumanism: A Framework for Differentiating Analytic, Synthetic, Theoretical, and Practical Posthumanisms,” and Herbrechter (2013).
Barile, meanwhile, explores the role that technologies for augmented reality play in creating ‘posthuman consumers’ by breaking down boundaries between the virtual
and the actual and supplanting previous forms of HCI with “a new kind of
interaction where the machines become softer and immaterial, emotions become contents, and places become media.”[8]
Other scholars have sought to identify the ultimate drivers of the processes
of posthumanization that are expected to increasingly impact organizations of
all types. For example, Herbrechter notes the ongoing and intensifying ‘tech-
nologization’ of humanity, by which technoscientific forces that had previ-
ously constituted just one element of society attempt to gain economic and
political power over all aspects of human culture.9 Insofar as all organizations
exist within human cultures, utilize technology, and are subject to economic
and political forces, they become participants in these dynamics of technologization and posthumanization. However, while the forces of technologization
are undoubtedly real, they may not fully explain the rising prominence of
posthuman dynamics and motifs within organizational life. Indeed, it has even
been suggested that the popular notion of posthumanism may have been en-
gineered as a sort of ruse generated by the power structures of postmodern
neoliberal capitalism to pacify the masses with the hope or fear (or both) of a
radically different future that looms just over the horizon.10 According to that
view, posthumanist imagery, themes, and philosophies are a mechanism em-
ployed by some organizations in order to facilitate the achievement of their
strategic objectives.
While a diverse array of connections between posthumanism and organi-
zational management has thus been hinted at for some time, it has not been
7 Mara & Hawk, “Posthuman rhetorics and technical communication” (2009), pp. 1-3.
8 Barile, “From the Posthuman Consumer to the Ontobranding Dimension: Geolocalization, Augmented Reality and Emotional Ontology as a Radical Redefinition of What Is Real” (2013), p. 101.
9 See Herbrechter (2013), p. 19.
10 See the discussion of such cynical interpretations of posthumanism in Herbrechter (2013), p. 80.
Chapter Two: Organizational Posthumanism • 113
comprehensively or systematically explored. Much scholarship has been dedicated to understanding fields such as literature,11 film,12 computer games,13 biomedical engineering,14 and politics and economics15 in light of posthumanist thought. However, efforts to apply posthumanist methodologies and insights to organizational management have remained relatively underdeveloped. This is striking, given that many of the issues of interest to posthumanism have strong organizational repercussions.
In this text, we attempt to address this lacuna by presenting one approach
to developing a comprehensive ‘organizational posthumanism.’ After formu-
lating a definition for organizational posthumanism, we compare it to estab-
lished forms of post-dualistic and post-anthropocentric posthumanist
thought, arguing that it constitutes a type of ‘hybrid posthumanism’ that incorporates analytic, synthetic, theoretical, and practical aspects. We then
consider six organizational elements that will increasingly be impacted by the
forces of posthumanization: namely, an organization’s members, personnel
structures, information systems, processes, physical and virtual spaces, and
external environment. Finally, three main types of technologies that facilitate
the development of organizational posthumanity are described; these are tech-
nologies for human augmentation and enhancement (including implantable
computers, neuroprosthetic devices, virtual reality systems, genetic engineer-
ing, new forms of medicine, and life extension); technologies for synthetic
agency (including social robotics, artificial intelligence, and artificial life); and
technologies for building digital-physical ecosystems and networks (such as
the Internet of Things). It is our hope that the questions raised and the frame-
work formulated within this text can offer a useful starting point for those
scholars and management practitioners who will address in an ever more ex-
plicit manner the increasingly important intersection of organizational life
and posthumanist thought.
11 See posthumanist analyses of literature in, e.g., Hayles (1999); Posthumanist Shakespeares, edited by Herbrechter & Callus (2012); and Thomsen, The New Human in Literature: Posthuman Visions of Change in Body, Mind and Society after 1900 (2013).
12 Examples can be found in the articles relating to cinema in Posthuman Bodies (1995); Short, Cyborg Cinema and Contemporary Subjectivity (2005); and Miller (2012).
13 For such studies, see, e.g., Schmeink, “Dystopia, Alternate History and the Posthuman in Bioshock” (2009); Krzywinska & Brown, “Games, Gamers and Posthumanism” (2015); and Boulter, Parables of the Posthuman: Digital Realities, Gaming, and the Player Experience (2015).
14 See, e.g., Medical Enhancement and Posthumanity (2008); Thacker, “Data made flesh: biotechnology and the discourse of the posthuman” (2003); and Lee, “Cochlear implantation, enhancements, transhumanism and posthumanism: some human questions” (2016).
15 Examples of such analyses include Gray, Cyborg Citizen: Politics in the Posthuman Age (2002); Fukuyama (2002); and Cudworth & Hobden, “Complexity, ecologism, and posthuman politics” (2013).
II. Definition of Organizational Posthumanism
Having considered the nature of posthumanism and some links that have
been suggested between posthumanism and the theory and management of
organizations, we are in a position to explicitly formulate a systematic ap-
proach that applies posthumanist insights and methodologies to the study and
management of organizations. This approach can be described as organiza-
tional posthumanism.
Lune defines an organization as “a group with some kind of name, purpose,
and a defined membership” that possesses “a clear boundary between its inside
and its outside” and which can take the form of either a formal organization
with clearly defined roles and rules, an informal organization with no explicitly defined structures and processes, or a semi-formal organization that possesses nominal roles and guidelines that in practice are not always observed.16 Meanwhile, Daft et al. define organizations as “(1) social entities that (2) are goal-directed, (3) are designed as deliberately structured and coordinated activity systems, and (4) are linked to the external environment.”17 Such organizations include businesses, nonprofit organizations, schools, religious groups,
professional associations, political parties, governments, and military organi-
zations. Other collections of human beings – such as cities, families, or the
proponents of a particular philosophical perspective – share some of the char-
acteristics of organizations but are not generally classified as such.
The very nature of organizations is changing as ongoing technological and
social change reshapes the capacities and relationality of the human beings
who belong to organizations and creates new kinds of entities (like social ro-
bots) that can engage in goal-directed social interaction with human beings
and one another. Organizational posthumanism can aid us in making sense of
– and, ideally, anticipating and controlling – such changes. By way of a formal
definition, we would suggest that:
Organizational posthumanism is an approach to analyzing, under-
standing, creating, and managing organizations that employs a
post-anthropocentric and post-dualistic perspective; it recognizes
that the emerging technologies which complement traditional bio-
logical human beings with new types of intelligent actors also trans-
form the kinds of members, structures, dynamics, and roles that are
available for organizations.
As we shall see, while organizational posthumanism shares elements in com-
mon with established disciplines such as philosophical posthumanism, critical
posthumanism, and biopolitical posthumanism, it also possesses unique and
contrasting elements that prevent it from being understood simply as a sub-
field of one of those disciplines. Rather, we would argue that as defined above,
16 Lune, Understanding Organizations (2010), p. 2.
17 Daft et al., Organization Theory and Design (2010), p. 10.
organizational posthumanism is better viewed as an independently conceptu-
alized body of thought within posthumanism. When understood in the context
of organizational and management theory, organizational posthumanism does
not represent a new discipline, insofar as it still addresses historical topics of
organizational structures, systems, and processes; however, it does constitute
an entirely new perspective and set of methodologies – a new approach.
III. Classification of Organizational Posthumanism as a
Type of Posthumanism
It is possible to categorize different forms of posthumanism into general
types by employing a two-dimensional conceptual framework that classifies a
form of posthumanism based on its understanding of posthumanity and the
role or purpose for which the posthumanism was developed. With regard to
its perspective on posthumanity, a form of posthumanism may be: 1) an ana-
lytic posthumanism that understands posthumanity as a sociotechnological re-
ality that already exists in the contemporary world and which needs to be
analyzed; or 2) a synthetic posthumanism that understands posthumanity as a
collection of hypothetical future entities whose development can be either in-
tentionally realized or prevented, depending on whether or not human society
chooses to research and deploy certain transformative technologies. With re-
gard to the purpose or role for which it was created, a form of posthumanism
can be: 1) a theoretical posthumanism that seeks primarily to develop new
knowledge and understanding; or 2) a practical posthumanism that seeks pri-
marily to bring about some social, political, economic, or technological change
in the world.18 This framework yields five general types of posthumanism:
Analytic theoretical posthumanisms seek to understand the posthu-
manized present and include fields like critical and cultural posthu-
manism. Such disciplines can collectively be understood as constitut-
ing a ‘posthumanism of critique’ that employs posthumanist method-
ologies to diagnose hidden anthropocentric biases and posthumanist
aspirations contained within different fields of human activity.19
Synthetic theoretical posthumanisms envision hypothetical forms
of posthumanity and include such pursuits as philosophical posthu-
manism and many forms of science fiction. Such fields could be seen
18 For a more detailed discussion of the distinctions between analytic, synthetic, theoretical, and practical posthumanisms, see Part One of this book, “A Typology of Posthumanism.”
19 For an example, see the critical posthumanism described in Herbrechter (2013).
as representing a ‘posthumanism of imagination’ that creatively con-
ceptualizes future (or otherwise inexistent) posthumanities so that
their implications can be explored.20
Analytic practical posthumanisms seek to reshape the posthuman-
ized present and include some forms of metahumanism and neohu-
manism. Such movements can be understood as constituting a
‘posthumanism of conversion’ that is aimed at changing hearts and
minds and influencing the way in which human beings view and treat
the world around themselves.21
Synthetic practical posthumanisms seek to steer the processes that
can generate a future posthumanity; they include such movements as
transhumanism and bioconservatism. Such programs can be viewed
as representing a ‘posthumanism of control’ that seeks to develop new
technologies that give individuals control over their own posthumanization or to implement legal or economic controls to block the development of such technologies.22
Hybrid posthumanisms that span all four spheres of the analytic, syn-
thetic, practical, and theoretical include such phenomena as sociopo-
litical posthumanism and the metahumanism of Del Val and Sorgner.
Such ventures can be understood as examples of a ‘posthumanism of
production’ that develops a robust and rigorous theoretical frame-
work that is then utilized to successfully generate concrete products
or services within the contemporary world.23
By applying this framework, organizational posthumanism can be classified
as a form of hybrid posthumanism that integrates strong analytic, synthetic,
theoretical, and practical elements. We can consider each of these elements of
organizational posthumanism in more detail.
20 Regarding, e.g., posthumanist aspects of science fiction, see Short (2005); Goicoechea, “The Posthuman Ethos in Cyberpunk Science Fiction” (2008); Miller (2012); and Herbrechter (2013), pp. 115-17.
21 Regarding different forms of metahumanism, see Ferrando (2013), p. 32. For the form of neohumanism developed by Sarkar, see Sarkar, “Neohumanism Is the Ultimate Shelter (Discourse 11)” (1982). A classification of different forms of metahumanism and neohumanism is found in Part One of this volume, “A Typology of Posthumanism.”
22 For examples, see Fukuyama (2002); Bostrom, “A History of Transhumanist Thought” (2005); and Bostrom (2008).
23 For an instance of sociopolitical posthumanism as it relates to law, see Berman, “Posthuman Law: Information Policy and the Machinic World” (2002). For the form of metahumanism developed by Sorgner and Del Val, see Del Val & Sorgner, “A Metahumanist Manifesto” (2011), and Del Val et al., “Interview on the Metahumanist Manifesto with Jaime del Val and Stefan Lorenz Sorgner” (2011).
A. Theoretical Aspects
Organizational posthumanism is theoretical insofar as it involves efforts to
understand the ways in which organizations’ form and dynamics are being
affected by (and are shaping) processes of posthumanization. Such work in-
volves developing new conceptual frameworks that can explain and predict
the unique ways in which organizations will become agents and objects of
posthumanization and will exist as elements of a larger posthumanized eco-
system.
For example, scholars can explore the ways in which organizations’ mem-
bers, personnel structures, processes, information systems, physical and vir-
tual spaces, and external environment will be altered by the integration of
artificial general intelligences, sentient robotic swarms, sapient networks,
neuroprosthetically augmented cyborgs, genetically engineered human be-
ings, and other posthumanized entities into organizations whose membership
was previously the exclusive domain of unmodified, ‘natural’ biological hu-
man beings. Such posthumanization may allow the creation of new organiza-
tional forms that were previously impossible while simultaneously rendering
some traditional organizational forms ineffective or obsolete.
In its theoretical aspects, organizational posthumanism draws on and can
inform fields such as organizational theory, systems theory, and cybernetics.
It can work in parallel with sociopolitical posthumanism, which explores at a
theoretical level the impact of posthumanization on legal, political, and eco-
nomic systems and institutions. Similarly, organizational posthumanism can
take up many existing lines of theoretical inquiry within fields such as philo-
sophical, critical, and biopolitical posthumanism and science fiction and ad-
vance them in a way that is informed by a deeper concern for and insight into
their implications at the organizational level.
For example, Miah notes posthumanism’s longstanding interest in the blur-
ring physical and cognitive boundaries between human beings and the tools
that we use to accomplish work. Drawing on Mazlish, Miah notes that tools
have historically served to extend human beings’ capacities and freedom while
simultaneously subjugating human beings to the organizational systems required for the tools’ production and effective use.24
Whereas tools can serve
as an ‘artificial skin’ that mediates our relationship with our environment and
offers us protection, they have also facilitated the creation of large, impersonal
organizations in which human beings are reduced to functional bodies that
provide some economic value. The creation of new tools such as neuropros-
thetic devices is serving to make human beings “more machine-like, physically
24 See Miah (2008), p. 82, and its discussion of Mazlish, The Fourth Discontinuity: The Co-Evolution
of Humans and Machines (1993).
and cognitively,” while the creation of increasingly autonomous tools such as
artificial intelligences threatens to replace human beings altogether as components of some organizational systems.25
Organizational posthumanism can
develop new theoretical frameworks that shed light on such relationships be-
tween agent and instrument, between human ‘employee’ and nonhuman
‘tool,’ within the evolving context of posthumanized organizations.
B. Practical Aspects
Organizational posthumanism is also practical, insofar as its goal is not
simply to understand at an abstract level the ways in which posthuman reali-
ties are affecting organizations but also to aid managers in proactively design-
ing, creating, and maintaining organizations that can survive and thrive
within novel competitive environments such as those emerging as a result of
the posthumanization of our world. Just as sociopolitical posthumanism works
to produce new legal, political, and economic systems that are adapted to
emerging posthuman realities, so organizational posthumanism works to pro-
duce successfully posthumanized organizations – and, through them, to pro-
duce the goods, services, and other resources that such organizations release
into the wider ecosystem. In its more practical aspects, organizational posthu-
manism draws on, shapes, and acts through disciplines like organizational de-
sign, organizational architecture, enterprise architecture, organization devel-
opment, management cybernetics, and strategic management.
Research has already begun to explore the practical implications of tech-
nological posthumanization (though without necessarily naming the phenom-
enon as such) for areas such as strategic planning, business models, entrepre-
neurship, marketing, knowledge management, and customer relationship
management (CRM);26 change management, organizational culture, and organizational HCI;27 potential roles for artificial intelligences in leading teams of human workers;28 and the creation of neurocybernetically linked organizational systems.29
25 Miah (2008), p. 82.
26 See the thoughtful overview of the impacts of posthumanizing technologies on such areas in Berner, Management in 20XX: What Will Be Important in the Future – A Holistic View (2004).
27 See Mara & Hawk (2009).
28 See Gladden, “Leveraging the Cross-Cultural Capacities of Artificial Agents as Leaders of Human Virtual Teams” (2014); Gladden, “The Social Robot as ‘Charismatic Leader’: A Phenomenology of Human Submission to Nonhuman Power” (2014); and Gladden, “Managerial Robotics: A Model of Sociality and Autonomy for Robots Managing Human Beings and Machines” (2014).
29 See Gladden, “Neural Implants as Gateways to Digital-Physical Ecosystems and Posthuman Socioeconomic Interaction” (2016).
C. Analytic Aspects
The fact that processes of posthumanization are expected to accelerate and
expand in the future does not diminish the posthumanizing impacts that have
already been felt and which every day are creating new opportunities and
challenges for organizations. Organizational posthumanism is analytic, inso-
far as it strives to understand the changes to organizations that have already
occurred as a result of such previous and ongoing processes of posthumaniza-
tion. On the basis of such knowledge, managers and other organizational
stakeholders can develop strategies and best practices to optimize the func-
tioning of real-world organizations today.
For example, researchers in the field of organizational posthumanism
might attempt to anticipate the implications of employing artificial general intelligences (AGIs) to fill roles as senior executives within otherwise human organizations.30
Such efforts to imagine the eventual impacts of
radically posthumanized far-future technological systems complement organ-
izational posthumanism’s efforts to analyze the impact that is already being
felt on organizations by more rudimentary technologies for artificial intelligence, such as those that control industrial robots for assembly-line manufacturing,31 automated systems for resource scheduling and planning,32 web-based chatbots for basic interactions with customers,33 and robotic sales associates for dispensing goods and services to customers.34
D. Synthetic Aspects
In addition to analyzing the kinds of posthumanized organizations that al-
ready exist today, organizational posthumanism seeks to envision the kinds
of even more radically posthumanized organizations that may be able to exist
in the future thanks to accelerating forces of technologization and other an-
ticipated sociotechnological change.
In a sense, all long-term organizational decision-making involves a sort of
‘futurology,’ as stakeholders make decisions on the basis of their empirically
30 See, e.g., Gladden, “The Social Robot as ‘Charismatic Leader’” (2014).
31 For an overview of such technologies, see, e.g., Perlberg, Industrial Robotics (2016).
32 See, e.g., Automated Scheduling and Planning: From Theory to Practice, edited by Etaner-Uyar et al. (2013).
33 Such technologies are described, e.g., in Perez-Marin & Pascual-Nieto, Conversational Agents and Natural Language Interaction: Techniques and Effective Practices (2011).
34 See, e.g., the account from a consumer’s perspective of interactions with such technologies in Nazario, “I went to Best Buy and encountered a robot named Chloe – and now I’m convinced she’s the future of retail” (2015).
grounded projections, estimates, or intuitions about how an organization’s external context is likely to evolve over time (e.g., as captured in a PESTLE analysis35) and how the impact of a decision is likely to reshape the organization’s
internal form and dynamics. Organizational posthumanism involves a special-
ized form of organizational futurology that attempts to conceptualize and pre-
dict the ways in which organizations in general (or one organization in par-
ticular) will be transformed by the dynamics of posthumanization or will be
able to exploit those dynamics for their own strategic purposes.
Within organizational posthumanism, the analytic and theoretical effort to
understand effective posthumanized organizations and the synthetic and prac-
tical effort to design and create them are thus joined as two sides of a single
coin.
IV. Organizational Posthumanization as Reflected in
Organizational Elements
One aspect of posthumanization is the emergence of a world in which nat-
ural human beings are joined by other kinds of entities such as cyborgs, social
robots, AGIs, sapient networks, and artificial life-forms in serving as employ-
ees, collaborators, and consumers. This posthuman reality will increasingly be
reflected in various aspects of organizational life. Particular implications of
such posthumanization can be identified in the kinds of members, structures,
systems, processes, spaces, and external ecosystems that organizations will possess.36 Below we consider each of these elements.
A. Posthumanized Members
Traditionally, the members of organizations have been ‘natural’ biological
human beings who have not been engineered or extensively enhanced with
the aid of biomedical technologies. The membership of future organizations
will comprise a much more diverse array of entities. It is expected that increasingly the members of organizations will, for example, also include:37
35 See Cadle et al., Business Analysis Techniques: 72 Essential Tools for Success (2010), pp. 3-6, for a description of various versions of this analytic tool.
36 Structures, processes, and systems constitute the three main elements within the ‘congruence model’ of organizational architecture as conceptualized by Nadler and Tushman. See Nadler & Tushman, Competing by Design: The Power of Organizational Architecture (1997), p. 47.
37 For an overview of the roles that such beings may play in future organizations, see Berner (2004). Discussions of specific types of posthumanized organizational members are found, e.g., in Bradshaw et al., “From Tools to Teammates: Joint Activity in Human-Agent-Robot Teams” (2009); Samani et al., “Towards Robotics Leadership: An Analysis of Leadership Characteristics and the Roles Robots Will Inherit in Future Human Society” (2012); Wiltshire et al., “Cybernetic Teams: Towards the Implementation of Team Heuristics in HRI” (2013); Gladden, “The Social Robot as ‘Charismatic Leader’” (2014); Gladden, “The Diffuse Intelligent Other: An Ontology of
• Human beings possessing implantable computers (such as devices resembling subcutaneous smartphones)
• Human beings equipped with sensory, cognitive, or motor neuroprosthetics, including human beings who possess full cyborg bodies
• Genetically engineered human beings
• Human beings who are long-term users of virtual reality systems and whose interaction with other persons and their environment takes place largely within virtual worlds
• Social robots
• Artificial general intelligences
• Artificial life-forms
• Sapient networks
• Human and synthetic beings whose thoughts and volitions have been cybernetically linked to create ‘hive minds’
Such members will be discussed in more detail later in this text, in our analysis
of technological changes facilitating organizational posthumanization. From
an organizational perspective, the capacities, vulnerabilities, needs, and forms
of interaction demonstrated by such entities can differ radically from those of
the natural human beings who have historically constituted an organization’s
membership. The use of posthuman entities (including artificial beings) to fill
organizational roles as senior executives, product designers, or the providers
of sensitive goods or services (such as health care or military activities) raises
a range of complex ethical, legal, and information security questions.38 Organizational posthumanism can investigate the theoretical constraints and possibilities for creating organizations that include such posthumanized members
Nonlocalizable Robots as Moral and Legal Actors” (2016); and Gladden, “Neural Implants as Gate-
ways” (2016).
38 For a discussion of questions that can arise when entrusting organizational roles and responsibilities to robots and AIs, see, e.g., Stahl, “Responsible Computers? A Case for Ascribing Quasi-
Responsibility to Computers Independent of Personhood or Agency” (2006); Sparrow, “Killer
Robots” (2007); Calverley, “Imagining a non-biological machine as a legal person” (2008);
Grodzinsky et al., “Developing Artificial Agents Worthy of Trust: ‘Would You Buy a Used Car
from This Artificial Agent?’” (2011); Coeckelbergh, “Can We Trust Robots?” (2012); Datteri, “Pre-
dicting the Long-Term Effects of Human-Robot Interaction: A Reflection on Responsibility in
Medical Robotics” (2013); Gladden, “The Social Robot as ‘Charismatic Leader’” (2014); and Glad-
den, “The Diffuse Intelligent Other” (2016). Regarding questions that arise in the case of neuro-
cybernetically enhanced human workers, see, e.g., McGee, “Bioelectronics and Implanted De-
vices” (2008); Koops & Leenes, “Cheating with Implants: Implications of the Hidden Information
Advantage of Bionic Ears and Eyes” (2012); and Gladden, “Neural Implants as Gateways” (2016).
and can develop practical approaches for the management of organizations
that incorporate them.
B. Posthumanized Structures
The types of internal and external structures that are available for use by
organizations are expected to be reshaped and expanded by emerging posthuman realities. When managing contemporary organizations, possible organizational forms identified by Horling and Lesser include hierarchies (which can
be either simple, uniform, or multi-divisional), holarchies (or ‘holonic organi-
zations’), coalitions, teams, congregations, societies, federations (or ‘federated
systems’), matrix organizations, compound organizations, and sparsely con-
nected graph structures (which may either possess statically defined elements
or be an ‘adhocracy’).39 Such structures have been developed over time to suit
the particular characteristics of the members that constitute contemporary or-
ganizations – i.e., natural biological human beings. As organizations evolve to
include members that possess radically different physical and cognitive capac-
ities and novel ways of interacting with one another, the kinds of structures
that are available to organize the work of these groups of members will
change, and novel organizational structures are expected to become feasible
and even necessary.40
For example, an organization composed of neuroprosthetically augmented
human members may be able to link them through a decentralized network
that enables the direct sharing of thoughts and sentiments between members’
minds, allowing information to be disseminated in an instantaneous fashion
and decisions to be made in a distributed and collective manner that is impossible for conventional human organizations.41
The reporting and decision-
39 Horling & Lesser, “A Survey of Multi-Agent Organizational Paradigms” (2004).
40 For the sake of convenience, it is possible to refer to such developments as ‘novel personnel structures’ – however, it must be kept in mind that the ‘personnel’ constituting such future organizations will not necessarily be human ‘persons’ but may include, e.g., such radically different types of entities as nanorobot swarms or sapient networks of computerized devices.
41 Regarding the prospect of creating hive minds and neuroprosthetically facilitated collective intelligence, see, e.g., McIntosh, “The Transhuman Security Dilemma” (2010); Roden, Posthuman Life: Philosophy at the Edge of the Human (2014), p. 39; and Gladden, “Utopias and Dystopias as Cybernetic Information Systems: Envisioning the Posthuman Neuropolity” (2015). For a classification of different kinds of potential hive minds, see Chapter 2, “Hive Mind,” in Kelly, Out of Control: The New Biology of Machines, Social Systems and the Economic World (1994); Kelly, “A Taxonomy of Minds” (2007); Kelly, “The Landscape of Possible Intelligences” (2008); Yonck, “Toward a standard metric of machine intelligence” (2012); and Yampolskiy, “The Universe of Minds” (2014). For critical perspectives on hive minds, see, e.g., Maguire & McGee, “Implantable brain chips? Time for debate” (1999); Bendle, “Teleportation, cyborgs and the posthuman ideology” (2002); and Heylighen, “The Global Brain as a New Utopia” (2002).
making structures of such an organization might reflect multidimensional cybernetic network topologies that were previously possible only for computerized systems (or some nonhuman animal species) but which could not be effectively employed within human organizations.42
Organizational posthuman-
ism can conceptualize such new possibilities and develop concrete recommen-
dations regarding organizational structures that are especially well- or poorly
suited for organizations comprising posthumanized members.
C. Posthumanized (Information) Systems
The word ‘system’ is used with different meanings in different organizational contexts. From the perspective of management cybernetics, an organization as a whole can be considered a ‘viable system,’ as can each of its constituent subsystems.43 On the other hand, within the context of contemporary
42 See, e.g., Gladden, “Utopias and Dystopias as Cybernetic Information Systems” (2015). Efforts
by organizational posthumanists to envision and implement new kinds of posthumanized or-
ganizational structures should be distinguished from management approaches such as the Ho-
lacracy movement, which abolishes job titles and hierarchical structures for decision-making
and authority and replaces them with largely self-organizing, self-guiding circles of employees.
From the perspective of Holacracy, an organization can essentially be viewed as though it were
a conventional electronic computer and each of the organization’s human members were com-
ponents of that computer. The Holacracy Constitution provides an organization with a complex
set of decision-making rules and procedures that constitute the organization’s ‘operating system’
and which – after this ‘OS’ has become sufficiently engrained in employees’ interactions and
decision-making patterns – allow new business processes to be implemented in the form of
‘apps’ which, in theory, can be downloaded and installed in the minds and behaviors of the or-
ganization’s human employees in a manner similar to that of installing a new program on a
desktop computer. See Robertson, Holacracy: The New Management System for a Rapidly Chang-
ing World (2015), pp. 9-14, and the Holacracy Constitution v4.1 (2015).
Superficially, Holacracy shares some elements in common with posthumanism, insofar as it
recognizes the fact that innovative new organizational structures that draw inspiration from
sources other than traditional human institutions are increasingly becoming possible and even
necessary. However, Holacracy diverges from the principles of organizational posthumanism by
declining to acknowledge that the circle of intelligent actors within organizations is expanding
to include entities other than natural biological human beings. Holacracy is essentially anthro-
pocentric, insofar as it presumes that natural biological human beings are and will continue to
be the lone relevant actors within organizations; it simply attempts to induce such human beings
to behave as if they were electronic computer components rather than human persons. Such an
approach may prove more effective in the future, if implantable computers, neurocybernetics,
long-term immersive virtual environments, and other technologizing phenomena lead to the de-
velopment of human workers that display sufficiently ‘computronic’ characteristics. (See Part
Three of this volume, “The Posthuman Management Matrix: Understanding the Organizational
Impact of Radical Biotechnological Convergence,” for a discussion of such phenomena.) How-
ever, current attempts at implementing approaches such as Holacracy would appear to signifi-
cantly underestimate the fundamental structural and behavioral differences that presently exist
between human and synthetic agents.
43
For cybernetic accounts of viable systems from a management perspective, see, e.g., Beer,
Brain of the Firm (1981); Barile et al., “An Introduction to the Viable Systems Approach and Its
124 • Posthuman Management
organizational architecture, ‘systems’ are typically computerized information
systems such as manufacturing systems that govern and constitute a physical
assembly line, an internally hosted accounting database, a cloud-based HR
management system, a public-facing website for handling retail transactions,
or a social media platform for use in marketing and public relations.
Traditionally, the relationship of human employees to such systems has
been relatively straightforward: human workers serve as the designers, pro-
grammers, data-entry specialists, and end users of the information systems,
while the systems themselves are assigned the role of receiving, storing, and
transmitting data securely and manipulating it in an efficient and accurate
fashion, as instructed by human employees. However, the boundary between
the electronic systems that store and process information and the human
workers that use them are expected to increasingly blur as implantable com-
puters, neuroprosthetic devices, and persistent virtual reality environments
integrate human workers ever more intimately into organizational infor-
mation systems at both the physical and cognitive levels.
44
Moreover, the
growing sophistication of artificial intelligence platforms for use in data min-
ing and other applications
45
is expected to increasingly create information sys-
tems that are self-organizing, self-analyzing, and even self-aware. Through
the use of such systems, organizations may move beyond the era of Big Data
and Smart Data and into an era of ‘Sapient Data’ in which information systems
utilize human workers as tools rather than being utilized by them. Organiza-
tional posthumanism can offer critical perspectives regarding both the onto-
logical and ethical aspects of such human-electronic systems as well as their
practical implementation.
D. Posthumanized Processes
The essential processes found within an organization do not simply include
those by which it directly generates the end products for which the organiza-
tion is known – such as the actions used to physically assemble some device
on an assembly line (for a consumer electronics company) or to generate
sounds from musical instruments during a concert (for a symphony orches-
Contribution to Marketing” (2012); and Gladden, “The Artificial Life-Form as Entrepreneur: Syn-
thetic Organism-Enterprises and the Reconceptualization of Business” (2014).
44
For an in-depth analysis of the ways in which such historical barriers between human workers
and electronic information systems are being dissolved, see Part Three of this text, “The Posthu-
man Management Matrix.”
45
Regarding the prospects of developing autonomous AI systems for data mining, see, for ex-
ample, Warkentin et al., “The Role of Intelligent Agents and Data Mining in Electronic Partner-
ship Management” (2012); Bannat et al., “Artificial Cognition in Production Systems” (2011), pp.
152-55; and Wasay et al., “Queriosity: Automated Data Exploration” (2015).
Chapter Two: Organizational Posthumanism • 125
tra). An organization’s fundamental processes also include all of those behav-
iors and dynamics through which resources (including human resources, fi-
nancial resources, material resources, and information)
46
are acquired from the
external environment, created internally, transmitted between different parts
of the organization, combined or transformed, or released into the external
environment – as well as all of the second-order processes by which those
behaviors and dynamics are planned, led, organized, and controlled.
47
Such
second-order processes include the use of the three key mechanisms of pro-
gramming, feedback, and hierarchical supervision to coordinate the activities
of an organization’s members.
48
They also include compensation and incentive
schemes that are used to reward and motivate desired behaviors on the part
of an organization’s members, as well as processes of career advancement
which ensure that an organization’s most talented and effective workers move
into positions in which their abilities can be employed to their fullest poten-
tial.
49
In the case of contemporary organizations that include only traditional biological human members, there exists a rich body of theory and best practices relating to the design and implementation of such processes. However, it is clear that the nature of these processes can change dramatically within a radically posthumanized organizational context. For example, some kinds of advanced robots and AIs may require no compensation at all – other than ‘compensation’ in the form of an electric power supply, physical maintenance and software upgrades, and other resources needed to ensure their continued operation. By contrast, very sophisticated AGIs whose cognitive dynamics are based on those of human beings might request – and, as a practical matter, require – compensation in the form of intellectual stimulation, self-fulfillment, and generic financial resources (i.e., a paycheck) that an entity can spend as it sees fit to pursue its own personal goals or objectives in its spare time.[50] Similarly, neurocybernetically augmented human employees may be able to instantly acquire new skills or capacities in ways that render traditional professional advancement schemes outdated and irrelevant, and such employees might demand new forms of compensation (such as lifetime technical support for neuroprosthetic devices that have been implanted to enable the fulfillment of their official organizational responsibilities[51]). Organizational posthumanism can develop theoretical accounts of such posthumanized processes as well as best practices to facilitate their management.

[46] For the role of such resources in organizational dynamics, see, e.g., Pride et al., Foundations of Business (2014), p. 8, and Gladden, “The Artificial Life-Form as Entrepreneur” (2014).

[47] Planning, organizing, leading, and controlling are considered to be the four primary functions that must be performed by managers. See Daft, Management (2011).

[48] For a review of the scholarship on such mechanisms and their role in organizations, see Puranam et al., “Organization Design: The Epistemic Interdependence Perspective” (2012), p. 431.

[49] See Brickley et al., “Corporate Governance, Ethics, and Organizational Architecture” (2003), p. 43; Puranam et al. (2012); and Nadler & Tushman (1997), loc. 862, 1807.

[50] For an in-depth analysis of the prospects of developing AGIs with human-like cognitive capacities and psychological needs, see Friedenberg, Artificial Psychology: The Quest for What It Means to Be Human (2008).
E. Posthumanized Spaces
The physical spaces in which an organization’s members come together to plan and execute its activities have historically included venues such as factories, office buildings, warehouses, retail stores, farms, campuses, military bases, and other specialized locations. As organizations evolve and expand to include nonhuman members such as sapient networks or robotic swarms, the range of physical spaces in which such organizational members can (or need to) work will be similarly transformed. Moreover, building on the use of technologies such as telephony, email, instant messaging, and videoconferencing, even the traditional biologically human members of organizations will find themselves interacting in new posthumanized venues such as persistent virtual worlds. Within such new physical and virtual organizational spaces, a member of an organization may not always know whether the other intelligent members with which he or she is interacting socially are natural biological human beings, neurocybernetically enhanced human beings, robots, AIs, or other kinds of entities.[52] Organizational posthumanism can engage with practitioners in the fields of architecture, facilities design, ergonomics, operations management, and logistics to create and operate posthumanized physical facilities for organizations functioning in such a deanthropocentrized context. With regard to the development and use of posthumanized virtual spaces, organizational posthumanism can provide a conceptual bridge by seeking out insights from fields as diverse as biocybernetics, HCI, psychology, anthropology, communications, philosophy of mind, computer game design, science fiction, and film and television studies to develop immersive multisensory worlds that serve as effective venues for organizational life.
F. Posthumanized External Environments and Ecosystems
An organization can be understood as a viable system that operates within a broader ecosystem (or ‘suprasystem’) that includes other competing or collaborating organizations as well as natural resources, potential consumers, and other external environmental features.[53] These ecosystems are expected to take on an increasingly posthumanized nature. For example, new environmental elements might include other organizations that consist entirely of intelligent nonhuman members such as robotic swarms and societies of AIs. Similarly, a highly interconnected Internet of Things might be filled with informational resources that are no longer simply passive sets of data but which – through their integration with AI platforms – become intelligent, volitional, and potentially even sapient collections of data that act to pursue their own goals and interests.[54] The world’s increasingly rich and complex digital-physical ecosystems might be populated by self-generating, self-propagating, highly adaptable memes in the form of evolvable computer worms or viruses that shape human popular culture as a whole and the thoughts and memories of individual human beings in particular, either through traditional forms of communication and social interaction or through the targeted reprogramming or technological manipulation of, for example, neurocybernetically augmented human beings.[55] The emergence of such new posthuman ecosystems is expected to significantly reshape the kinds of resources that organizations are able to obtain from their environments, the nature of collaboration and competition with external organizations, the types of consumers available to utilize the goods and services produced by an organization, and the organization’s definition of long-term viability and success.

[51] See Gladden, “Neural Implants as Gateways” (2016).

[52] See Grodzinsky et al. (2011) and Gladden, “The Social Robot as ‘Charismatic Leader’” (2014).

[53] Regarding viable systems and their environments, see, e.g., Beer (1981) and Gladden, “The Artificial Life-Form as Entrepreneur” (2014).
The roles that individual organizations play within societies may also be radically reshaped. For example, if future AIs and robotic systems are able to efficiently perform all of the functions of food production and preparation, health care, education, construction, transportation, energy production, retail sales, accounting, security, and other tasks that are needed for human beings and societies to thrive, there will no longer be a financial or operational need for organizations to employ human beings as workers in such roles. In that case, governments might take on the role of coordinating their human citizens’ access to such superabundant resources, perhaps offering a ‘universal basic income’ redeemable in goods or services. The societal roles of governmental and commercial organizations would thus be dramatically transformed. On the other hand, widespread roboticization resulting in mass unemployment could potentially yield a loss of purpose for human beings, social unrest, violent revolution, and the oppression of the human species by automated systems; in this case, processes of posthumanization might result in ‘dystopian’ rather than ‘utopian’ organizational outcomes.[56] Organizational posthumanism can provide a theoretical bridge that links the consideration of posthumanization at an organizational level with that at a broader social or environmental level (as considered by fields such as economics, political science, sociology, evolutionary biology, or environmental science), while also developing concrete practices to aid organizations in optimizing their use of resources from – and their contribution of products to – a posthumanized external environment.

[54] For discussions of the theoretical and practical possibilities for and obstacles to the emergence of such systems, see, e.g., Gladden, “From Stand Alone Complexes to Memetic Warfare: Cultural Cybernetics and the Engineering of Posthuman Popular Culture” (2016), and Gladden, “The Artificial Life-Form as Entrepreneur” (2014).

[55] Regarding the growing possibilities that ideas and other forms of information might exist as actors that can propagate themselves through interaction with other nonhuman or human actors within complex posthumanized digital-physical ecosystems, see, e.g., Gladden, “From Stand Alone Complexes to Memetic Warfare” (2016), and Kowalewska, “Symbionts and Parasites – Digital Ecosystems” (2016).
V. Technological Changes Facilitating Organizational Posthumanization
While advanced technologies play an essential role in contemporary processes of posthumanization, they are not the only mechanisms through which such processes operate. As noted earlier, there exist many forms of ‘posthumanism without technology.’[57] Such nontechnological critical or cultural posthumanism might focus, for example, on historical references to ghosts, angels, monsters, and semidivine heroes in theology and the arts and the ways in which they have long encouraged human beings to expand the boundaries of society to include a nonhuman ‘other.’[58]
Posthumanized beings have always been part of organizations. Even if only tangentially, human organizations have always incorporated such quasi-human, parahuman, or nonhuman others. For example, the decision-making processes of Ancient Roman governmental and military organizations relied on augurs that were supposed by their practitioners to reveal the will of the gods.[59] According to the Catholic Church’s traditional teaching on the Communion of Saints, the organization of the Church incorporates human members who are presently living on earth, members who have died but are still undergoing a purification, and members who have died and now contemplate God in His heavenly glory.[60] In a metaphorical sense, the ‘ghost’ of a company’s beloved founder can continue to guide the company’s actions even after his or her death, gazing watchfully from framed portraits on office walls and inspiring new generations of employees through aphorisms quoted reverently in the company’s mission statement or employee handbook. And nonhuman others in the form of dogs, horses, and other animals have long been incorporated into human military organizations and businesses (e.g., family farms or circuses) in important roles as intelligent – if not sapient – agents.

[56] For the debate on whether mass roboticization and the end of human employment as we know it is likely to generate utopian, dystopian, or less extreme social impacts, see, e.g., Sachs et al., “Robots: Curse or Blessing? A Basic Framework” (2015); Nourbakhsh, “The Coming Robot Dystopia” (2015); and Ford, Rise of the Robots: Technology and the Threat of a Jobless Future (2015). For longer-term interdisciplinary perspectives, see the texts in Singularity Hypotheses, edited by Eden et al. (2012).

[57] Herbrechter (2013), p. 157.

[58] Herbrechter (2013), pp. 2-3, 106. See also Graham (2002).

[59] See Hamilton, “What Is Roman Ornithomancy? A Compositional Analysis of an Ancient Roman Ritual” (2007), and Green, “Malevolent gods and Promethean birds: Contesting augury in Augustus’s Rome” (2009).

[60] See the Catechism of the Catholic Church, Second Edition (2016), pp. 249-250.
Technologization is changing the nature of posthumanization. However, even critical posthumanists who argue that the processes of posthumanization have historically taken many forms unrelated to technological change will acknowledge that in today’s world, the accelerating and intensifying technologization of humanity has become an essential – if not the most essential – driver of posthumanization.[61] Herbrechter notes that from the time of its prehistoric origins, humanity has always utilized technology. Indeed, it was only through the creation of techniques and technologies for performing such tasks as making fire, hunting animals, and communicating information symbolically that humankind as such was able to develop; “Culture in a sense is therefore always ‘technoculture’, namely achieved and transmitted by technics.”[62] However, the manner and extent of our integration with workplace technologies is now undergoing a qualitative transformation. Herbrechter suggests that the human operators of equipment are increasingly merging with their tools in order to manipulate them more effectively, thereby undergoing a process of cyborgization. But just as we are becoming more dependent on our technology, our technology is becoming less dependent on us – thanks to the growing sophistication of artificial intelligence and automated systems that can make decisions without any need for human input. Human agency is thus being attenuated by technology at the same time that the world of ‘smart objects’ is gaining its own agency.[63]
The new kinds of posthumanized beings produced through such technologization will become incorporated into human organizations in novel fashions. A ghost or saint or animal can indeed be ‘incorporated’ into the life and behaviors of an organization in meaningful ways – but not, for example, as an employee of the organization. The ‘ghost’ of a company’s founder might offer vague principles to guide decision-making but cannot determine which of three smartphone models to offer for sale in a particular country. A horse can transport a company’s goods from place to place but cannot formulate the company’s long-term business strategy. However, posthuman beings in the form of artificial intelligences, social robots, sentient (and even sapient) networks, and cyborgs will be able to do such things. Increasingly, such posthumanized entities will not simply operate at the fringes of an organization or in supporting roles that aid the decision-making of the organization’s natural human members; such posthuman beings will instead increasingly fill critical roles as designers, producers, strategists, and decision-makers within organizations.[64]

[61] See Herbrechter (2013), pp. 15, 6-7.

[62] Herbrechter (2013), p. 152.

[63] For a discussion of these simultaneous trends, see Herbrechter (2013), p. 150.
While processes such as roboticization, cyborgization, and virtualization have not created the phenomenon of posthumanization, they are making its dynamics visible in new and more vivid ways.[65] Hayles suggests that some forms of ‘uncritical’ posthumanism (including strains of transhumanism and cybernetics) possess a naïvely technologized interpretation of these processes: such a perspective understands the human body as merely a prosthesis or computational substrate and the mind as a collection of informational patterns; it considers the biological organism of a human being, a social robot resembling a human being, and a computer simulation of a human being to be just three interchangeable manifestations of the same sort of viable system.[66] Critical posthumanists such as Hayles and Herbrechter reject such simplistic ‘technoeuphoria’ and argue that more rigorous critical posthumanist thought is necessary in order to understand, anticipate, and guide the processes of sociotechnological transformation that are challenging our concept of humanity and altering humanity’s role in the world.[67] Organizational posthumanism is well-positioned to explore such questions of technological posthumanization in a way that marries the circumspectness of critical posthumanism with a strategic awareness of the fact that the ability to generate and embrace radical new forms of technological transformation is growing ever more important to the survival of organizations.
Three categories of posthumanizing technologies. For the purposes of this text, there are three broad categories of ongoing or anticipated technological developments that are contributing to posthumanization in especially relevant ways: 1) technologies for human augmentation and enhancement, which include many forms of neuroprosthetics and genetic engineering; 2) technologies for synthetic agency, which include robotics, artificial intelligence, and artificial life; and 3) technologies for digital-physical ecosystems and networks that help create the environments within which and the infrastructure through which human and artificial agents will interact.[68] We can consider these three types of technologies in turn.

[64] An exploration of these possibilities can be found, e.g., in Samani et al. (2012) and Gladden, “The Social Robot as ‘Charismatic Leader’” (2014).

[65] See Herbrechter (2013), p. 77.

[66] See Hayles (1999), pp. 2-3, and its discussion in Herbrechter (2013), p. 42.

[67] Herbrechter (2013), p. 200.
A. Technologies for Human Augmentation and Enhancement
Technologies that are expected to alter the sensory, motor, and cognitive capacities of human beings include implantable computers, advanced neuroprosthetics, genetic engineering, and the use of immersive virtual reality systems.[69] The implementation of such technologies will result in a posthumanization of organizations’ members (e.g., as an organization purposefully hires cyborgs to fill particular roles or the organization’s current employees acquire cybernetic enhancements on their own initiative), structures (e.g., as implantable computers and communication devices allow workers to engage in new types of decision-making and reporting relationships), systems (e.g., by giving human workers new abilities to control, be controlled by, and otherwise interface with an organization’s technological infrastructure), processes (e.g., by facilitating direct brain-to-brain communication and providing workers with in-body access to organizational databases), spaces (e.g., by allowing cyborg workers to operate in areas dangerous or inaccessible to natural human beings), and external ecosystems (e.g., by creating cyborg consumers that need new kinds of goods and services and external cyborg partners and consultants that can provide them). We can consider such posthumanizing technologies in more detail.
1. IMPLANTABLE COMPUTERS
The universe of contemporary information and communications technology (ICT) includes a wide range of implantable devices such as passive RFID tags that are not in themselves computers but which can interact with computers and serve as elements of computerized systems. However, an increasing number of implantable devices indeed constitute full-fledged computers that possess their own processor, memory, software, and input/output mechanisms and whose programming can be updated after they are implanted into the body of their human host. Among these are many implantable medical devices (IMDs) such as pacemakers, defibrillators, neuroprostheses including retinal and cochlear implants, deep brain stimulation (DBS) devices, body sensor networks (BSNs), and even some of the more sophisticated implantable RFID transponders.[70]

A growing number of these implantable computers utilize sophisticated biocybernetic control loops that allow the physiological and cognitive activity of their host to be detected, processed, and interpreted for use in exercising real-time computer control.[71]

[68] For a discussion of the role of such technologies in posthumanization, see Herbrechter (2013), pp. 90-91, and its analysis of Graham (2002) and Graham, “Post/Human Conditions” (2004). Note that while we focus in this text on three kinds of posthumanizing technologization that have a particular impact on the form and dynamics of organizations, they are by no means the only kinds of technologization that will contribute to posthumanization. Technological developments in other fields such as agriculture, transportation, energy, space exploration, and the military will also likely contribute to the posthumanization of our world and the organizations within it.

[69] Such technologies are reviewed, e.g., in Bostrom (2008); Fukuyama (2002); Gray (2002); and Herbrechter (2013), pp. 90-91.
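The closed-loop structure of such a biocybernetic system (detect a host signal, interpret it, and adjust the device's behavior in real time) can be illustrated with a minimal sketch. All function names, state labels, and threshold values below are illustrative assumptions for exposition, not features of any actual implantable device:

```python
# Minimal sketch of a biocybernetic control loop: a normalized host
# signal is sensed, interpreted as a state, and used to adjust a device
# output in real time. All names, states, and thresholds are illustrative.

def classify(signal_level, threshold=0.7):
    """Interpret a normalized physiological reading as a host state."""
    return "stressed" if signal_level > threshold else "calm"

def adjust_output(current_output, state, step=0.1):
    """Raise or lower the device's output based on the interpreted state."""
    if state == "stressed":
        return min(1.0, current_output + step)
    return max(0.0, current_output - step)

def control_loop(readings, initial_output=0.5):
    """Run detect -> interpret -> act once per incoming reading."""
    output = initial_output
    history = []
    for level in readings:
        state = classify(level)                 # detect + interpret
        output = adjust_output(output, state)   # act on the device
        history.append((state, round(output, 2)))
    return history

print(control_loop([0.2, 0.8, 0.9, 0.4]))
```

In a real device, the interpretation step would be a signal-processing or machine-learning pipeline operating on neural or physiological data, but the loop structure – sense, interpret, act, repeat – is the same.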
The implantable computers that have been developed to date typically serve a restorative or therapeutic medical purpose: they are used to treat a particular illness or restore to their user a sensory, motor, or cognitive ability that has been lost through illness or injury. Increasingly, though, implantable computers will be developed not to restore some regular human capacity that has been lost but to augment their users’ physical or intellectual capacities in ways that exceed typical human abilities.[72] For example, implantable computers resembling miniaturized subcutaneous smartphones might provide their users with wireless communication capacities including access to cloud-based services.[73] The elective use of implantable computers for physical and cognitive augmentation will expand the market for such devices to broader segments of the population beyond those who currently rely on them to address medical conditions.[74]
2. ADVANCED NEUROPROSTHETICS
Drawing on definitions offered by Lebedev and others, we can define a neuroprosthesis as a technological device that is integrated into the neural circuitry of a human being; such devices are often categorized as being sensory, motor, bidirectional sensorimotor, or cognitive.[75] While there is much overlap between implantable computers and neuroprosthetic devices, not all implantable computers interface directly with their host’s neural circuitry and not all neuroprosthetic devices are implantable.[76]

[70] See Gasson et al., “Human ICT Implants: From Invasive to Pervasive” (2012); Gasson, “ICT Implants” (2008); and Gladden, The Handbook of Information Security for Advanced Neuroprosthetics (2015), pp. 19-20.

[71] See Fairclough, “Physiological Computing: Interfacing with the Human Nervous System” (2010), and Park et al., “The Future of Neural Interface Technology” (2009).

[72] Regarding the anticipated increasing use of implantable computers for purposes of human enhancement, see, e.g., Warwick & Gasson, “Implantable Computing” (2008); Berner (2004), p. 17; and Gladden, The Handbook of Information Security for Advanced Neuroprosthetics (2015), p. 28.

[73] For discussion of such a device, see Gladden, The Handbook of Information Security for Advanced Neuroprosthetics (2015), p. 93.

[74] See McGee (2008) and Gasson et al. (2012).

[75] Such a classification is discussed in Lebedev, “Brain-Machine Interfaces: An Overview” (2014), and Gladden, The Handbook of Information Security for Advanced Neuroprosthetics (2015), pp. 21-22.
The power and potential applications of neuroprosthetic devices are expected to grow significantly in the coming years. For example, it is anticipated that current types of retinal implants that demonstrate very limited functionality will be supplanted by future sensory neuroprosthetics such as artificial eyes[77] that give their human hosts the capacity to experience their environments in dramatic new ways, such as through the use of telescopic or night vision[78] or by presenting an augmented reality that overlays actual sense data with supplemental information from a neuroprosthetic device’s computer.[79] A neuroprosthetic device could also allow all of the sense data experienced by a human mind to be recorded as a stream of digital data that can be played back on demand by other human beings, enabling them to vicariously experience the world as though they were temporarily occupying the body of the device’s host. Similar technologies might allow a person to play back any of his or her own earlier sensory experiences with perfect fidelity or replace the sense data generated by his or her actual external environment with sense data depicting some fictional virtual world.[80]
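The recording-and-playback scenario described above is speculative, but the underlying data structure – a timestamped stream of digitized sense data with time-windowed replay – can be sketched in a purely illustrative way (all class and field names here are hypothetical assumptions, not any proposed device format):

```python
# Illustrative model of recording and replaying a timestamped stream of
# digitized 'sense data'. All structures are hypothetical, for exposition.
from dataclasses import dataclass, field

@dataclass
class SenseFrame:
    timestamp: float   # seconds since recording began
    modality: str      # e.g., "vision", "hearing"
    payload: bytes     # digitized sense data for this instant

@dataclass
class SenseRecorder:
    frames: list = field(default_factory=list)

    def record(self, frame: SenseFrame) -> None:
        self.frames.append(frame)

    def playback(self, start: float = 0.0, end: float = float("inf")):
        """Yield recorded frames within a time window, in recorded order."""
        for f in self.frames:
            if start <= f.timestamp <= end:
                yield f

rec = SenseRecorder()
rec.record(SenseFrame(0.0, "vision", b"\x01"))
rec.record(SenseFrame(1.0, "hearing", b"\x02"))
rec.record(SenseFrame(2.0, "vision", b"\x03"))
replayed = [f.modality for f in rec.playback(start=0.5)]
print(replayed)  # frames from 0.5 s onward
```

A real system of this kind would of course raise questions of data rates, neural encoding, and fidelity far beyond this toy structure; the sketch shows only the record/replay pattern itself.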
Meanwhile, cognitive neuroprosthetic devices may offer their user the ability to create, delete, or otherwise edit memories stored within his or her brain’s biological neural network; such abilities could be used, for example, to acquire new knowledge or skills or to erase existing fears.[81] Some scholars envision the development of ingestible ‘knowledge pills’ whose contents (perhaps a swarm of networked nanorobots[82]) travel to the brain, where they manipulate neurons to create engrams containing particular memories.[83] Other researchers foresee the possibility of being able to simply download new skills or knowledge onto a memory chip implanted within the brain.[84] Cognitive neuroprosthetic devices might also be used to provide their human hosts with enhanced levels of intelligence[85] and creativity,[86] more desirable emotional dynamics and behavior,[87] enhanced conscious awareness (e.g., by reducing the need for sleep),[88] a strengthened or modified conscience,[89] and real-time assistance with decision-making to mitigate the impact of cognitive biases.[90]

[76] For this distinction, see Gladden, The Handbook of Information Security for Advanced Neuroprosthetics (2015), p. 32.

[77] Regarding such possibilities, see Berner (2004), p. 17, and Koops & Leenes (2012).

[78] Such enhanced forms of vision are discussed, e.g., in Gasson et al. (2012) and Merkel et al., “Central Neural Prostheses” (2007).

[79] See Koops & Leenes (2012) and Gladden, The Handbook of Information Security for Advanced Neuroprosthetics (2015), pp. 32-33.

[80] Regarding such sensory playback and virtual reality systems, see Gladden, The Handbook of Information Security for Advanced Neuroprosthetics (2015), pp. 33, 156-57; Koops & Leenes (2012), pp. 115, 120, 126; Merkel et al. (2007); Robinett, “The Consequences of Fully Understanding the Brain” (2002); and McGee (2008), p. 217.

[81] Such possibilities build on experimental techniques and technologies that are currently being tested in mice. See Han et al., “Selective Erasure of a Fear Memory” (2009); Ramirez et al., “Creating a False Memory in the Hippocampus” (2013); McGee (2008); Warwick, “The Cyborg Revolution” (2014), p. 267; and Gladden, The Handbook of Information Security for Advanced Neuroprosthetics (2015), p. 148.
Similarly, a motor neuroprosthetic device might grant its user enhanced
control over his or her existing biological body, expand the user’s body to
82
See Pearce, “The Biointelligence Explosion” (2012).
83
For such possibilities, see Spohrer, “NBICS (Nano-Bio-Info-Cogno-Socio) Convergence to Im-
prove Human Performance: Opportunities and Challenges” (2002).
84
See McGee (2008) and Gladden, The Handbook of Information Security for Advanced Neuropros-
thetics (2015), p. 33.
85
Berner (2004), p. 17.
86
Increases in creativity have been anecdotally reported to occur after the use of neuroprosthet-
ics for deep brain stimulation. See Cosgrove, “Session 6: Neuroscience, brain, and behavior V:
Deep brain stimulation” (2004); Gasson, “Human ICT Implants: From Restorative Application to
Human Enhancement” (2012); Gladden, The Handbook of Information Security for Advanced Neu-
roprosthetics (2015), p. 149; Gladden, “Neural Implants as Gateways” (2016); and Gasson (2012),
pp. 23-24.
87
Regarding the intentional creation of emotional neuroprosthetics, see, e.g., Soussou & Berger,
“Cognitive and Emotional Neuroprostheses” (2008). Effects on emotion have already been ob-
served, for example, with devices used for deep brain stimulation. See Kraemer, “Me, Myself and
My Brain Implant: Deep Brain Stimulation Raises Questions of Personal Authenticity and Alien-
ation” (2011).
88
Regarding efforts by the DARPA military research agency and others to develop neurotech-
nologies that can increase soldiers’ alertness and reduce their need for sleep, see, e.g., Falconer,
“Defense Research Agency Seeks to Create Supersoldiers” (2003); Moreno, “DARPA On Your
Mind” (2004); Clancy, “At Military's Behest, Darpa Uses Neuroscience to Harness Brain
Power” (2006); Wolf-Meyer, “Fantasies of extremes: Sports, war and the science of sleep” (2009);
Kourany, “Human Enhancement: Making the Debate More Productive” (2013), pp. 992-93; and
Gladden, The Handbook of Information Security for Advanced Neuroprosthetics (2015), p. 151.
89
The conscience can be understood as a set of metavolitions, or desires about the kinds of
volitions that a person wishes to possess. See Calverley (2008) and Gladden, The Handbook of
Information Security for Advanced Neuroprosthetics (2015), pp. 151-52. To the extent that a neu-
roprosthetic device enhances processes of memory and emotion that allow for the development
of the conscience, it may enhance one’s ability to develop, discern, and follow one’s conscience.
90
Regarding the potential use of neuroprosthetic devices for such purposes, see Gladden, “Neural
Implants as Gateways” (2016). For a description of common cognitive biases and their impact on
organizational decision-making, see Kinicki & Williams, Management: A Practical Introduction
(2010), pp. 217-19.
Chapter Two: Organizational Posthumanism • 135
incorporate new devices (such as an exoskeleton or robotic vehicle) through
body schema engineering, replace most of the user’s existing biological body
with electromechanical components to turn the individual into a cyborg,
91
al-
low the user to control external networked physical systems such as drones
or 3D printers, or provide the host with a radically nonhuman body for use in
sensing and manipulating a virtual environment.
92
3. VIRTUAL REALITY
In principle, a virtual reality system may be capable of creating a fully im-
mersive visual, auditory, olfactory, gustatory, and tactile environment that its
human user would find impossible to qualitatively distinguish from the real
world, if the system is capable of presenting either roughly 200 Gbps of raw
sense data to the body’s sensory organs (such as the retina, hair cells in the
ear, and taste buds) through their external stimulation or roughly 250 Mbps of
already-processed sense data in the form of direct electrochemical stimulation
either of the nerves (such as the optic and cochlear nerves) that carry such
data to the brain or of the relevant brain regions themselves.
93
Such fully im-
mersive – and potentially continuous and long-term – virtual reality experi-
ences could be facilitated through the use of advanced neuroprosthetic devices
that provide a human brain with all of its sense data, perhaps aided by the use
of genetic engineering to make the brain or sensory organs better suited to
receive input from such devices.
94
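The scale implied by these data rates can be illustrated with a back-of-the-envelope calculation (a sketch only: the 200 Gbps and 250 Mbps figures are the estimates cited above, while the 16-hour recording window and the helper function are illustrative assumptions):

```python
# Back-of-the-envelope storage estimate for recording the two sense-data
# streams discussed above: ~200 Gbps of raw stimulation delivered to the
# sensory organs versus ~250 Mbps of already-processed sense data carried
# on the sensory nerves.

def storage_terabytes(rate_bits_per_sec: float, hours: float) -> float:
    """Terabytes needed to record a continuous stream for `hours` hours."""
    bits = rate_bits_per_sec * hours * 3600  # seconds in the window
    return bits / 8 / 1e12                   # bits -> bytes -> terabytes

RAW_RATE = 200e9        # ~200 Gbps (raw external stimulation)
PROCESSED_RATE = 250e6  # ~250 Mbps (processed electrochemical data)

# Recording one 16-hour waking day:
raw_tb = storage_terabytes(RAW_RATE, 16)             # 1,440 TB
processed_tb = storage_terabytes(PROCESSED_RATE, 16)  # 1.8 TB
```

The three-orders-of-magnitude gap between the two figures suggests why direct neural stimulation, rather than external stimulation of the sensory organs, is the more plausible channel for the long-term sensory recording and playback scenarios described above.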
There is no logical necessity for these fully immersive virtual worlds to
resemble our real world in all respects: within a virtual world, human beings
might be given new kinds of sensory capacities
95
or even radically nonhuman
bodies.
96
Moreover, the laws of physics and biology that hold sway within the
real world need not apply in a virtual world; the designers of such worlds
could formulate their own cultural, social, biological, physical, and even logi-
cal and ontological principles that govern or mediate the interactions of sub-
jects and objects within a virtual world. For example, a world designer might
decide that within a particular virtual world all human beings, all computers
91
See Lebedev (2014) and Berner (2004), p. 16.
92
Gladden, “Cybershells, Shapeshifting, and Neuroprosthetics: Video Games as Tools for Posthu-
man ‘Body Schema (Re)Engineering’” (2015).
93
See Berner (2004), pp. 37-38, 45-47.
94
On implantable systems for augmented or virtual reality, see Sandor et al., “Breaking the Bar-
riers to True Augmented Reality” (2015), pp. 5-6. Regarding the theoretical possibilities and limits
of such technologies, taking into account human physiological and psychological constraints,
see Gladden, “Cybershells, Shapeshifting, and Neuroprosthetics” (2015).
95
See Merkel et al. (2007).
96
Such possibilities are explored in Gladden, “Cybershells, Shapeshifting, and Neuroprosthetics”
(2015).
possessing artificial general intelligence, and some of the more intelligent
forms of animals represented within it are able to instantaneously share their
thoughts and emotions with one another through a form of ‘telepathy,’
thereby creating new kinds of communal creativity, thought, and agency.
97
Such technologies could potentially have significant negative conse-
quences; for example, particularly immersive and stimulating virtual environ-
ments may become addictive, with their users unable or unwilling to leave
them.
98
Moreover, if a user possesses a permanently implanted virtual reality
device that is able to alter or replace its host’s sensory perceptions, it may be
impossible for the user to know which (if any) of the sense data that he or she
is experiencing corresponds to some actual element of an external physical
environment and which is ‘virtual’ or simply ‘false’; such an individual may
lose the ability (and perhaps desire) to distinguish between real and virtual
experiences and worlds.
99
4. GENETIC ENGINEERING, MEDICINE, AND LIFE EXTENSION
Notwithstanding the many serious questions about whether such applica-
tions are ontologically coherent and ethically acceptable, as a practical matter
scholars expect that new techniques for genetic engineering will eventually
be used, for example, to produce a continually refreshed inventory of person-
alized replacement organs that can be implanted when their human host’s
previous organs ‘wear out’ – or even organs that regenerate themselves
97
Such options available to the designers of virtual worlds in immersive and long-term multi-
sensory VR environments are discussed in Gladden, “Cybershells, Shapeshifting, and Neuropros-
thetics” (2015), and Gladden, “‘Upgrading’ the Human Entity: Cyberization as a Path to Posthu-
man Utopia or Digital Annihilation?” (2015).
98
Regarding the ramifications of long-term immersion in virtual reality environments, see, e.g.,
Heim, The Metaphysics of Virtual Reality (1993); Koltko-Rivera, “The potential societal impact of
virtual reality” (2005); and Bainbridge, The Virtual Future (2011). Regarding the danger of ‘toxic
immersion’ in a virtual world, see Castronova, Synthetic Worlds: The Business and Culture of
Online Games (2005). See also Berner (2004), p. 16, and Gladden, The Handbook of Information
Security for Advanced Neuroprosthetics (2015), pp. 55-56.
99
For the possibility that a device designed to receive raw data from an external environment
could have that data replaced with other data transmitted from some external information sys-
tem, see Koops & Leenes (2012). Regarding the possibility of neuroprosthetic devices being used
to provide false data or information to their hosts or users, see McGee (2008), p. 221, and Gladden,
The Handbook of Information Security for Advanced Neuroprosthetics (2015). For an analysis of the
relationship between physical and virtual reality and ways in which entities can move between
these worlds, see Kedzior, “How Digital Worlds Become Material: An Ethnographic and
Netnographic Investigation in Second Life” (2014). For more general analyses of the phenome-
non of virtual reality, see, e.g., Communication in the Age of Virtual Reality, edited by Biocca &
Levy (1995); Cybersociety 2.0: Revisiting Computer-Mediated Communication and Community, ed-
ited by Jones (1998); and Lyon, “Beyond Cyberspace: Digital Dreams and Social Bodies” (2001).
within their host’s body.
100
It is also anticipated that gene therapy will be em-
ployed not simply to replace damaged body components with healthy replicas
but to modify the form and functioning of an individual’s body or to create
new human beings who possess particularly desirable characteristics.
101
Some scholars expect that the use of medical technologies for radical life
extension will become more widespread even as the availability of such tech-
nologies remains restricted for legal, ethical, financial, or cultural reasons.
Those individuals who have access to such technologies may be allowed to
extend their life indefinitely (in whatever form such a life might take) and may
be permitted and expected to choose the time of their own death.
102
Genetic engineering may also be used to create new forms of sensory, mo-
tor, or computing devices within the human body. For example, a neuropros-
thetic device need not be electronic in nature: ongoing developments in fields
such as genetic engineering, synthetic biology, bionanotechnology, and bio-
molecular computing are expected to make possible the creation of neuropros-
thetic devices that are partially or wholly composed of biological material
(perhaps based on the DNA of the device’s host) or other non-electronic com-
ponents.
103
Other advances in medical technology may involve the use of more
traditional electronics and robotics. For example, a swarm of nanorobots that
has been injected or ingested may travel to a specific location within the body
to perform surgery, clean clogged arteries, or modify or stimulate neurons to
create new information within neural networks.
104
Ingestible robotic pills
100
See Berner (2004), p. 61, and Ferrando (2013), p. 27.
101
For a range of perspectives on such possibilities, see, e.g., Berner (2004), p. 17; Panno, Gene
Therapy: Treating Disease by Repairing Genes (2005); Mehlman, Transhumanist Dreams and Dys-
topian Nightmares: The Promise and Peril of Genetic Engineering (2012); Bostrom, “Human Genetic
Enhancements: A Transhumanist Perspective” (2012); Lilley, Transhumanism and Society: The
Social Debate over Human Enhancement (2013); and De Melo-Martín, “Genetically Modified Or-
ganisms (GMOs): Human Beings” (2015).
102
For a discussion of various approaches to human life extension, see Koene, “Embracing Com-
petitive Balance: The Case for Substrate-Independent Minds and Whole Brain Emulation” (2012).
See also Berner (2004), pp. 16-17, and Ferrando (2013), p. 27.
103
Such technologies are discussed, e.g., in Ummat et al., “Bionanorobotics: A Field Inspired by
Nature” (2005); Andrianantoandro et al., “Synthetic biology: new engineering rules for an emerg-
ing discipline” (2006); Cheng & Lu, “Synthetic biology: an emerging engineering disci-
pline” (2012); Lamm & Unger, Biological Computation (2011); and Berner (2004), pp. 15, 18, 31,
61-62. For a hybrid biological-electronic interface device that includes a network of cultured
neurons, see Rutten et al., “Neural Networks on Chemically Patterned Electrode Arrays: Towards
a Cultured Probe” (2007). Hybrid biological-electronic interface devices are also discussed by
Stieglitz in “Restoration of Neurological Functions by Neuroprosthetic Technologies: Future Pro-
spects and Trends towards Micro-, Nano-, and Biohybrid Systems” (2007).
104
Medical and other applications of such technologies are discussed in Spohrer (2002); Berner
(2004), pp. 18, 76; Pearce (2012); and Ferrando (2013), p. 27.
might be used to evaluate an individual’s internal biological processes and to
administer precise dosages of drugs according to complex criteria.
105
More futuristic and contentious is the concept of ‘mind uploading’ as a
means of extending the life (or if not the life, then in some sense the ‘agency’)
of a particular human being by somehow copying or transferring the struc-
tures and processes of his or her mind from their original biological substrate
to a new electronic form – for example, by gradually replacing all of a brain’s
original biological neurons with electronic artificial neurons. Many scholars
argue that while it may, for example, be possible to copy the data that com-
prise the contents of a mind’s memories to some external system, it is impos-
sible to transfer or extend the conscious awareness of the mind itself in such
a fashion. Nevertheless, some transhumanist proponents of mind uploading
argue that such a process would not truly destroy the consciousness or es-
sence of its human host – and that even if it did, they would be willing to
transform their own bodies in this fashion, insofar as it might provide a bridge
that would allow them to duplicate their memories and patterns of mental
activity in a robotic or computerized body that could survive indefinitely.
106
B. Technologies for Synthetic Agency: Robotics, AI, and
Artificial Life
Ongoing rapid developments are expected in those fields such as robotics,
artificial intelligence, and artificial life that involve the creation of entities that
possess artificial agency and which are able to receive data from their envi-
ronment, process information, select a course of action, and act to influence
their world. For example, research within the field of artificial intelligence is
expected to yield artificial agents that possess human-like levels of intelli-
gence, creativity, learning capacity, sociality, and cultural knowledge and
which will eventually claim to possess consciousness and their own spiritual-
ity.
107
Such artificial agents might be capable of serving as charismatic leaders
105
Berner (2004), p. 76.
106
For different perspectives on techniques such as the use of artificial neurons to gradually
replace the natural biological neurons within a living human brain as a means of effecting ‘mind
uploading,’ see Moravec, Mind Children: The Future of Robot and Human Intelligence (1990); Han-
son, “If uploads come first: The crack of a future dawn” (1994); Proudfoot, “Software Immortals:
Science or Faith?” (2012); Koene (2012); Pearce (2012); and Ferrando (2013), p. 27.
107
Regarding the prospect of robots and AIs that possess truly human-like cognitive capacities,
see Friedenberg (2008) and Berner (2004), pp. 16-17, 38. For discussion of robots that interact
socially with human beings, see Breazeal, “Toward sociable robots” (2003); Kanda and Ishiguro,
Human-Robot Interaction in Social Robotics (2013); Social Robots and the Future of Social Relations,
edited by Seibt et al. (2014); Social Robots from a Human Perspective, edited by Vincent et al.
(2015); and Social Robots: Boundaries, Potential, Challenges, edited by Nørskov (2016). Regarding
elements that must be present in order for a computerized device to develop its own spirituality,
see, e.g., Geraci, “Spiritual robots: Religion and our scientific view of the natural world” (2006);
of human beings by utilizing their powers of persuasion, inspiration, and in-
terpersonal attractiveness,
108
and they may be able to draw on their social ca-
pacities and cultural knowledge to serve, for example, as the managers of vast
global virtual teams of human workers.
109
Significant changes are also expected regarding the physical substrates
upon which robots and AI platforms are based, as it becomes possible to design
systems utilizing components that are increasingly miniaturized, spatially dis-
persed, and biological; no longer will an artificially intelligent software-based
system be chained to the electronic physical substrate found in traditional
computers.
110
Entirely new kinds of robots and AI systems may become possi-
ble thanks to emerging technologies for physical neural networks,
111
photonic
computing, quantum computing, the use of DNA for digital data storage and
computing, and other kinds of biocomputing.
112
Thanks to advances in nano-
robotics, robots will come to outnumber human beings and become truly ubiq-
uitous: through the use of piezoelectric components, nanoscale switches and
sensors can be created that require no electrical power source, allowing clouds
of nanorobots to float on the air and fill the space around us with an invisible
mesh of sensors, actuators, and information-processors.
113
Such swarms of
customized nanorobots might be sent into dangerous environments to aid
with disaster relief or to conduct military operations,
114
and moving beyond
today’s relatively simple 3D printing systems, portable (perhaps even
Nahin, “Religious Robots” (2014); and Section 6.2.3.2 on “Religion for Robots” in Yampolskiy,
Artificial Superintelligence: A Futuristic Approach (2015).
108
See Gladden, “The Social Robot as ‘Charismatic Leader’” (2014).
109
Regarding potential managerial roles for robots and AIs, see Samani & Cheok, “From human-
robot relationship to robot-based leadership” (2011); Samani et al. (2012); and Gladden, “Lever-
aging the Cross-Cultural Capacities of Artificial Agents” (2014). Regarding the possibility of ‘su-
persocial’ AIs that can simultaneously maintain social relations with massive numbers of human
colleagues or subordinates, see, e.g., Gladden, “Managerial Robotics” (2014).
110
Regarding the evolving physical form of robots, see, e.g., Gladden, “The Diffuse Intelligent
Other” (2016), and Berner (2004), p. 16.
111
Regarding AIs that utilize physical neural networks rather than running as an executable
software program on a conventional computer employing a Von Neumann architecture, see, e.g.,
Snider, “Cortical Computing with Memristive Nanodevices” (2008); Versace & Chandler, “The
Brain of a New Machine” (2010); and Advances in Neuromorphic Memristor Science and Applica-
tions, edited by Kozma et al. (2012).
112
For discussion of DNA-based and biological computing, see, e.g., Berner (2004), pp. 15, 18, 31,
61-62; Ummat et al. (2005); Andrianantoandro et al. (2006); Lamm & Unger (2011); Church et al.,
“Next-generation digital information storage in DNA” (2012); and Cheng & Lu (2012).
113
Berner (2004), pp. 16, 18, 38, 40-41.
114
See Coeckelbergh, “From Killer Machines to Doctrines and Swarms, or Why Ethics of Military
Robotics Is Not (Necessarily) About Robots” (2011), and Berner (2004), pp. 16-17.
handheld) manufacturing facilities could be created that employ specialized
swarms of nanorobots to produce highly sophisticated physical goods.
115
Ongoing developments in the fields of synthetic biology, bionanotechnol-
ogy, biologically inspired robotics, soft robotics, evolutionary robotics, and ar-
tificial life are expected to result in robotic systems whose structures and dy-
namics resemble those of living organisms and ecosystems or are even com-
posed of biological material. For example, researchers envision the develop-
ment of robotic systems controlled not by a traditional CPU-based computer
but by a synthetic brain;
116
autonomous robots that can learn, adapt, reproduce
themselves, and evolve through competition for resources within a digital-
physical ecosystem;
117
autonomous computer networks that function as a liv-
ing entity
118
that possesses its own immune system and whose remaining net-
worked components are able to automatically take over the work of a member
computer that has been disconnected or destroyed;
119
and software programs
that can repair damage to themselves or even reprogram themselves to ac-
complish a new purpose, as well as computer chips or entire robots that can
intentionally repair or automatically heal damage to themselves.
120
Emerging
technologies are expected to eventually allow the development of ‘biological
operating systems’ for groups of cells and entire organisms as well as the de-
sign of entirely new species
121
that could be understood alternatively as either
artificial biological organisms or biological robots.
Together, technologies that create advanced synthetic agents such as social
robots, artificial general intelligences, and artificial life-forms are expected to
drive an ongoing posthumanization of organizations’ members (e.g., by allow-
ing such nonhuman entities to serve as organizational members alongside or
instead of human beings), structures (e.g., by allowing optimized decision-
making and reporting structures designed through genetic algorithms that are
free from human cognitive biases and limitations), systems (e.g., by allowing
115
Berner (2004), p. 17.
116
See Warwick (2014) and Berner (2004), p. 17.
117
See Gladden, “The Artificial Life-Form as Entrepreneur” (2014), and Berner (2004), pp. 16, 18.
118
Regarding collectively conscious computer networks, see Callaghan, “Micro-Futures” (2014).
For a future Internet that is technically ‘self-aware’ (if not subjectively conscious), see Galis et
al., “Management Architecture and Systems for Future Internet Networks” (2009), pp. 112-13. A
sentient Internet is also discussed in Porterfield, “Be Aware of Your Inner Zombie” (2010), p. 19.
For a future Internet whose degree of self-awareness resembles that of a living entity, see Hazen,
“What is life?” (2006). See also Gladden, “The Artificial Life-Form as Entrepreneur” (2014).
119
See Berner (2004), pp. 17, 31.
120
Berner (2004), pp. 17-18. Regarding self-maintenance and self-healing as one capacity that
robotic systems must possess in order to be fully autonomous, see Gladden, “The Diffuse Intel-
ligent Other” (2016).
121
Berner (2004), pp. 16, 61. See also the discussion in Friedenberg (2008), pp. 201-03, of essential
elements that must be present in order for an artificial entity to be ‘alive,’ which are based on
the criteria for biological life presented in Curtis, Biology (1983).
the development of organizational systems that are operated by synthetic be-
ings with high speed and accuracy, without the need for human workers to
enter data or access information through the slow and error-prone processes
of reading printed text), processes (e.g., by allowing an organization’s syn-
thetic members to analyze data and make decisions faster, more accurately, or
more imaginatively than is possible for human beings), spaces (e.g., by elimi-
nating the need for physical facilities whose atmosphere, temperature, radia-
tion levels, and other characteristics can sustain human life), and external eco-
systems (e.g., by creating external resource-providers and consumers that are
synthetic beings whose needs and capacities differ widely from those of hu-
man beings).
C. Technologies for Digital-Physical Ecosystems and Networks:
Connectivity, Relationships, and Knowledge
Many technological changes are either underway or expected that do not
relate exclusively to human or artificial agents but which instead shape the
larger networks and ecosystems within which all intelligent agents interact.
Through the incorporation into the Internet of all public knowledge that has
been generated by the human species, the expansion of the Internet of Things
to encompass a growing variety and number of networked devices (including
ubiquitous sensors conducting real-time surveillance),
122
and the use of RFID
or other technologies to assign a unique identifier to any physical object, cy-
berspace can in effect become a virtual representation of the entire world.
123
Successor networks to the current-day Internet may serve as a mesh that cre-
ates a digital-physical ecosystem tying together all kinds of intelligent agents
that are able to access the network through biological, electronic, or other
means, including unmodified ‘natural’ human beings, genetically engineered
human beings, human beings with extensive cybernetic augmentations, hu-
man minds that dwell permanently within virtual realities, social robots, arti-
ficially intelligent software, nanorobot swarms, and sapient networks.
124
Within such vast, complex digital ecosystems, most communication will no
longer involve human beings but will take place between networked de-
vices,
125
as real-time data mining is performed by automated systems to con-
tinually unearth new theoretical, historical, and predictive knowledge.
126
Some
122
This evolution in the Internet of Things is discussed in Evans, “The Internet of Everything:
How More Relevant and Valuable Connections Will Change the World” (2012).
123
See Berner (2004), pp. 18, 35, and Gladden, “Utopias and Dystopias as Cybernetic Information
Systems” (2015).
124
Cybernetic networks that can link such entities are discussed in Gladden, “Utopias and Dys-
topias as Cybernetic Information Systems” (2015).
125
See Berner (2004), p. 18, and Evans (2012).
126
See Berner (2004), p. 32. Existing semi-automated data-mining processes are described, e.g.,
researchers expect that so close will be the symbiotic
127
integration of comput-
erized networks with their natural environment that it may be possible to ‘re-
boot’ entire ecosystems as needed, in order to save or improve the lives of
their inhabitants.
128
In particular, neuroprosthetic devices may serve as gateways that unite the
human and electronic inhabitants of a digital-physical ecosystem, allowing
their human hosts to participate in new kinds of technologically mediated so-
cial relations and structures that were previously impossible – perhaps includ-
ing new forms of merged agency
129
or cybernetic networks that display uto-
pian (or dystopian) characteristics that are not possible for non-neuroprosthe-
tically-enabled societies.
130
Neuroprosthetic devices may also link hosts or us-
ers in ways that form communication and information systems
131
that can gen-
erate greater collective knowledge, skills, and wisdom than are possessed by
any individual member of the system.
132
Because this ubiquitous digital-phys-
ical mesh of networked neuroprosthetic devices, sensors, actuators, data
pools, and servers will allow human and synthetic minds to exchange
thoughts with one another in a manner that seems direct, instantaneous, and
unmediated and to control physical systems and objects and virtual environ-
ments, it will create what is, for practical purposes, a ‘quasi-magical’ world in
which beings demonstrate functional telepathy and telekinesis.
133
Such technological change will not only result in a posthumanization of
the larger external ecosystems within which organizations exist; it will also
in Giudici, Applied Data Mining: Statistical Methods for Business and Industry (2003), and Provost
& Fawcett, Data Science for Business (2013), p. 7. Regarding the prospects of developing more
fully autonomous AI systems for data mining, see, for example, Warkentin et al. (2012); Bannat
et al. (2011), pp. 152-55; and Wasay et al. (2015).
127
For a philosophical exploration (drawing on Actor-Network Theory) of ways in which non-
human and human actors coexisting within digital-physical ecosystems might enter into ‘sym-
bioses’ that are not simply metaphorical but are instead true symbiotic relationships, see Kow-
alewska (2016).
128
This possibility is raised in Berner (2004), p. 16.
129
See McGee (2008), p. 216, and Koops & Leenes (2012), pp. 125, 132.
130
Different forms that such societies might take are discussed in Gladden, “Utopias and Dysto-
pias as Cybernetic Information Systems” (2015).
131
The intentional or ad hoc creation of such systems is discussed, e.g., in McGee (2008), p. 214;
Koops & Leenes (2012), pp. 128-29; Gasson (2012), p. 24; and Gladden, “‘Upgrading’ the Human
Entity” (2015).
132
The dynamics through which this can occur are discussed, e.g., in Wiener, Cybernetics: Or
Control and Communication in the Animal and the Machine (1961), loc. 3070ff., 3149ff.; Gladden,
“Utopias and Dystopias as Cybernetic Information Systems” (2015); and Gladden, The Handbook
of Information Security for Advanced Neuroprosthetics (2015), pp. 160-61.
133
See Berner (2004), pp. 16-17, 38; Gladden, “Cybershells, Shapeshifting, and Neuroprosthetics”
(2015); and the potential indistinguishability of advanced technology and magic, as famously
discussed in Clarke, “Hazards of Prophecy: The Failure of Imagination” (1973), p. 36.
spur an ongoing posthumanization of organizations’ members (e.g., by in-
creasing or decreasing members’ sensory input, span of motor control, and
social interaction with other intelligent nodes within the environment), struc-
tures (e.g., by allowing decision-making and reporting relations to be overlaid
on top of naturally existing cybernetic relationships created between members
within the environment), systems (e.g., by providing free or fee-based public
information systems that can be utilized by an organization), processes (e.g.,
by allowing an organization to develop its own customized processes or exploit SaaS-based approaches that utilize the environment’s publicly accessible cloud infrastructure), and spaces (e.g., by creating ready-made physical
and virtual spaces that an organization can move into and adapt for its own
ends).
VI. Conclusion
The relationship of posthumanist thought to organizational studies and
management is a topic that is increasingly worth exploring, thanks largely to
the ongoing acceleration and intensification of technological change that is
fashioning a new organizational context which can appropriately be described
as ‘posthuman.’ Within this text, we have attempted to advance the develop-
ment of this new sphere of academic inquiry and management practice by
presenting one approach to formulating a systematic organizational posthu-
manism.
We began by noting that established forms of posthumanism could be di-
vided into analytic types that view posthumanity as an existing sociotechno-
logical reality that is best understood from a post-dualist and post-anthropo-
centric perspective and synthetic types that view posthumanity as a kind of
future entity whose creation can either be intentionally brought about or
avoided. Similarly, established forms of posthumanism can be understood as
either theoretical or practical in nature, depending on whether their goal is to
expand human knowledge or generate some concrete impact in the world. We
have argued that organizational posthumanism combines analytic, synthetic,
theoretical, and practical elements as a type of hybrid posthumanism. It is an-
alytic and theoretical insofar as it attempts to identify and understand the
ways in which contemporary organizations’ structures and dynamics are be-
ing affected by emerging sociotechnological realities, and it is synthetic and
practical insofar as its goal is to fashion a new ‘posthuman entity’ not in the
form of a genetically or neuroprosthetically augmented human being but in
the form of organizations that can survive and thrive within a rapidly evolving
posthumanized ecosystem. Building on concepts from the field of organizational
architecture, we identified six particular aspects of organizations that
are likely to be impacted by ongoing posthumanization: namely, an organization’s
members, structures, information systems, processes, physical and virtual
spaces, and external environment. Finally, we explored the manner in
which technologies for human augmentation and enhancement, synthetic
agency, and the construction of digital-physical ecosystems and networks are
expected to increasingly drive the development of organizational posthuman-
ity. It is our hope that this investigation of the ways in which a current and
emerging posthumanity is transforming the shape, dynamics, and roles of or-
ganizations will both raise new questions and offer a path to developing cre-
ative insights that can inform the work of those who seek to understand the
nature of organizations and those who are charged with managing them now
and in the future.
References
Advances in Neuromorphic Memristor Science and Applications, edited by Robert
Kozma, Robinson E. Pino, and Giovanni E. Pazienza. Dordrecht: Springer Sci-
ence+Business Media, 2012.
Andrianantoandro, Ernesto, Subhayu Basu, David K. Karig, and Ron Weiss. “Syn-
thetic biology: new engineering rules for an emerging discipline.” Molecular Sys-
tems Biology 2, no. 1 (2006).
Automated Scheduling and Planning: From Theory to Practice, edited by A. Şima
Etaner-Uyar, Ender Özcan, and Neil Urquhart. Springer Berlin Heidelberg, 2013.
Badmington, Neil. “Cultural Studies and the Posthumanities.” In New Cultural Stud-
ies: Adventures in Theory, edited by Gary Hall and Claire Birchall, pp. 260-72. Ed-
inburgh: Edinburgh University Press, 2006.
Bainbridge, William Sims. The Virtual Future. London: Springer, 2011.
Bannat, Alexander, Thibault Bautze, Michael Beetz, Juergen Blume, Klaus Diepold,
Christoph Ertelt, Florian Geiger, et al. “Artificial Cognition in Production Sys-
tems.” IEEE Transactions on Automation Science and Engineering 8, no. 1 (2011):
148-74.
Barile, Nello. “From the Posthuman Consumer to the Ontobranding Dimension: Ge-
olocalization, Augmented Reality and Emotional Ontology as a Radical Redefini-
tion of What Is Real.” intervalla: platform for intellectual exchange, vol. 1 (2013).
http://www.fus.edu/intervalla/images/pdf/9_barile.pdf. Accessed May 18, 2016.
Barile, S., J. Pels, F. Polese, and M. Saviano. “An Introduction to the Viable Systems
Approach and Its Contribution to Marketing.” Journal of Business Market Man-
agement 5, no. 2 (2012): 54-78.
Beer, Stafford. Brain of the Firm. 2nd ed. New York: John Wiley, 1981.
Bendle, Mervyn F. “Teleportation, cyborgs and the posthuman ideology.” Social Semi-
otics 12, no. 1 (2002): 45-62.
Berman, Sandra. “Posthuman Law: Information Policy and the Machinic World.” First
Monday 7, no. 12 (2002). http://www.ojphi.org/ojs/index.php/fm/article/view/1011/932. Accessed March 15, 2016.
Berner, Georg. Management in 20XX: What Will Be Important in the Future – A Holis-
tic View. Erlangen: Publicis Corporate Publishing, 2004.
Birnbacher, Dieter. “Posthumanity, Transhumanism and Human Nature.” In Medical
Enhancement and Posthumanity, edited by Bert Gordijn and Ruth Chadwick, pp.
95-106. The International Library of Ethics, Law and Technology 2. Springer
Netherlands, 2008.
Bostrom, Nick. “A History of Transhumanist Thought.” Journal of Evolution and
Technology vol. 14, no. 1 (2005). http://jetpress.org/volume14/bostrom.html.
Bostrom, Nick. “Human Genetic Enhancements: A Transhumanist Perspective.” In
Arguing About Bioethics, edited by Stephen Holland, pp. 105-15. New York:
Routledge, 2012.
Bostrom, Nick. “Why I Want to Be a Posthuman When I Grow Up.” In Medical En-
hancement and Posthumanity, edited by Bert Gordijn and Ruth Chadwick, pp.
107-37. The International Library of Ethics, Law and Technology 2. Springer
Netherlands, 2008.
Boulter, Jonathan. Parables of the Posthuman: Digital Realities, Gaming, and the Player
Experience. Detroit: Wayne State University Press, 2015.
Bourgeois, III, L.J., Daniel W. McAllister, and Terence R. Mitchell. “The Effects of Dif-
ferent Organizational Environments upon Decisions about Organizational Struc-
ture.” Academy of Management Journal, vol. 21, no. 3 (1978), pp. 508-14.
Bradshaw, Jeffrey M., Paul Feltovich, Matthew Johnson, Maggie Breedy, Larry
Bunch, Tom Eskridge, Hyuckchul Jung, James Lott, Andrzej Uszok, and Jurriaan
van Diggelen. “From Tools to Teammates: Joint Activity in Human-Agent-Robot
Teams.” In Human Centered Design, edited by Masaaki Kurosu, pp. 935-44. Lec-
ture Notes in Computer Science 5619. Springer Berlin Heidelberg, 2009.
Breazeal, Cynthia. “Toward sociable robots.” Robotics and Autonomous Systems 42
(2003): 167-75.
Brickley, James A., Clifford W. Smith, and Jerold L. Zimmerman. “Corporate Govern-
ance, Ethics, and Organizational Architecture.” Journal of Applied Corporate Fi-
nance 15, no. 3 (2003): 34-45.
Cadle, James, Debra Paul, and Paul Turner. Business Analysis Techniques: 72 Essential
Tools for Success. Swindon: British Informatics Society Limited, 2010.
Callaghan, Vic. “Micro-Futures.” Presentation at Creative-Science 2014, Shanghai,
China, July 1, 2014.
Calverley, D.J. “Imagining a non-biological machine as a legal person.” AI & SOCIETY
22, no. 4 (2008): 523-37.
Castronova, Edward. Synthetic Worlds: The Business and Culture of Online Games.
Chicago: The University of Chicago Press, 2005.
Catechism of the Catholic Church, Second Edition. Washington, DC: United States
Conference of Catholic Bishops, 2016.
Cheng, Allen A., and Timothy K. Lu. “Synthetic biology: an emerging engineering
discipline.” Annual Review of Biomedical Engineering 14 (2012): 155-78.
Church, George M., Yuan Gao, and Sriram Kosuri. “Next-generation digital infor-
mation storage in DNA.” Science 337, no. 6102 (2012): 1628.
Clancy, Frank. “At Military's Behest, Darpa Uses Neuroscience to Harness Brain
Power.” Neurology Today 6, no. 2 (2006): 4-8.
Clarke, Arthur C. “Hazards of Prophecy: The Failure of Imagination.” In Profiles of
the Future: An Inquiry into the Limits of the Possible, revised edition. New York:
Harper & Row, 1973.
Coeckelbergh, Mark. “Can We Trust Robots?” Ethics and Information Technology 14
(1) (2012): 53-60. doi:10.1007/s10676-011-9279-1.
Coeckelbergh, Mark. “From Killer Machines to Doctrines and Swarms, or Why Ethics
of Military Robotics Is Not (Necessarily) About Robots.” Philosophy & Technology
24, no. 3 (2011): 269-78.
Communication in the Age of Virtual Reality, edited by Frank Biocca and Mark R.
Levy. Hillsdale, NJ: Lawrence Erlbaum Associates, Publishers, 1995.
Cosgrove, G.R. “Session 6: Neuroscience, brain, and behavior V: Deep brain stimula-
tion.” Meeting of the President’s Council on Bioethics. Washington, DC, June 24-
25, 2004. https://bioethicsarchive.georgetown.edu/pcbe/transcripts/june04/session6.html. Accessed June 12, 2015.
Cudworth, Erika, and Stephen Hobden. “Complexity, ecologism, and posthuman pol-
itics.” Review of International Studies 39, no. 03 (2013): 643-64.
Curtis, H. Biology, 4th edition. New York: Worth, 1983.
Cybersociety 2.0: Revisiting Computer-Mediated Communication and Community, ed-
ited by Steven G. Jones. Thousand Oaks: Sage Publications, 1998.
Daft, Richard L. Management. Mason, OH: South-Western / Cengage Learning, 2011.
Daft, Richard L., Jonathan Murphy, and Hugh Willmott. Organization Theory and De-
sign. Andover, Hampshire: Cengage Learning EMEA, 2010.
Datteri, E. “Predicting the Long-Term Effects of Human-Robot Interaction: A Reflec-
tion on Responsibility in Medical Robotics.” Science and Engineering Ethics vol. 19,
no. 1 (2013): 139-60.
De Melo-Martín, Inmaculada. “Genetically Modified Organisms (GMOs): Human Be-
ings.” In Encyclopedia of Global Bioethics, edited by Henk ten Have. Springer Sci-
ence+Business Media Dordrecht. Version of March 13, 2015. doi: 10.1007/978-3-
319-05544-2_210-1. Accessed January 21, 2016.
Del Val, Jaime, and Stefan Lorenz Sorgner. “A Metahumanist Manifesto.” The Agonist
IV no. II (Fall 2011). http://www.nietzschecircle.com/AGONIST/2011_08/METAHUMAN_MANIFESTO.html. Accessed March 2, 2016.
Del Val, Jaime, Stefan Lorenz Sorgner, and Yunus Tuncel. “Interview on the Me-
tahumanist Manifesto with Jaime del Val and Stefan Lorenz Sorgner.” The Agonist
IV no. II (Fall 2011). http://www.nietzschecircle.com/AGONIST/2011_08/Interview_Sorgner_Stefan-Jaime.pdf. Accessed March 2, 2016.
Evans, Dave. “The Internet of Everything: How More Relevant and Valuable Connec-
tions Will Change the World.” Cisco Internet Solutions Business Group: Point of
View, 2012. https://www.cisco.com/web/about/ac79/docs/innov/IoE.pdf. Accessed
December 16, 2015.
Fairclough, S.H. “Physiological Computing: Interfacing with the Human Nervous
System.” In Sensing Emotions, edited by J. Westerink, M. Krans, and M.
Ouwerkerk, pp. 1-20. Philips Research Book Series 12. Springer Netherlands,
2010.
Falconer, Bruce. “Defense Research Agency Seeks to Create Supersoldiers,” Govern-
ment Executive, November 10, 2003. http://www.govexec.com/defense/2003/11/defense-research-agency-seeks-to-create-supersoldiers/15386/. Accessed May 22, 2016.
Ferrando, Francesca. “Posthumanism, Transhumanism, Antihumanism, Metahuman-
ism, and New Materialisms: Differences and Relations.” Existenz: An International
Journal in Philosophy, Religion, Politics, and the Arts 8, no. 2 (Fall 2013): 26-32.
Ford, Martin. Rise of the Robots: Technology and the Threat of a Jobless Future. New
York: Basic Books, 2015.
Friedenberg, Jay. Artificial Psychology: The Quest for What It Means to Be Human.
Philadelphia: Psychology Press, 2008.
Fukuyama, Francis. Our Posthuman Future: Consequences of the Biotechnology Revolu-
tion. New York: Farrar, Straus, and Giroux, 2002.
Galis, Alex, Spyros G. Denazis, Alessandro Bassi, Pierpaolo Giacomin, Andreas Berl,
Andreas Fischer, Hermann de Meer, J. Srassner, S. Davy, D. Macedo, G. Pujolle, J.
R. Loyola, J. Serrat, L. Lefevre, and A. Cheniour. “Management Architecture and
Systems for Future Internet Networks.” In Towards the Future Internet: A Euro-
pean Research Perspective, edited by Georgios Tselentis, John Domingue, Alex Ga-
lis, Anastasius Gavras, David Hausheer, Srdjan Krco, Volkmar Lotz, and Theo-
dore Zahariadis, pp. 112-22. IOS Press, 2009.
Gasson, M.N. “Human ICT Implants: From Restorative Application to Human En-
hancement.” In Human ICT Implants: Technical, Legal and Ethical Considerations,
edited by Mark N. Gasson, Eleni Kosta, and Diana M. Bowman, pp. 11-28. Infor-
mation Technology and Law Series 23. T. M. C. Asser Press, 2012.
Gasson, M.N. “ICT implants.” In The Future of Identity in the Information Society, ed-
ited by S. Fischer-Hübner, P. Duquenoy, A. Zuccato, and L. Martucci, pp. 287-95.
Springer US, 2008.
Gasson, M.N., Kosta, E., and Bowman, D.M. “Human ICT Implants: From Invasive to
Pervasive.” In Human ICT Implants: Technical, Legal and Ethical Considerations,
edited by Mark N. Gasson, Eleni Kosta, and Diana M. Bowman, pp. 1-8. Infor-
mation Technology and Law Series 23. T. M. C. Asser Press, 2012.
Gephart, Jr., Robert P. “Management, Social Issues, and the Postmodern Era.” In Post-
modern Management and Organization Theory, edited by David M. Boje, Robert P.
Gephart, Jr., and Tojo Joseph Thatchenkery, pp. 21-44. Thousand Oaks, CA: Sage
Publications, 1996.
Geraci, Robert M. “Spiritual robots: Religion and our scientific view of the natural
world.” Theology and Science 4, issue 3 (2006). doi: 10.1080/14746700600952993.
Giudici, P. Applied Data Mining: Statistical Methods for Business and Industry. Wiley,
2003.
Gladden, Matthew E. “The Artificial Life-Form as Entrepreneur: Synthetic Organism-
Enterprises and the Reconceptualization of Business.” In Proceedings of the Four-
teenth International Conference on the Synthesis and Simulation of Living Systems,
edited by Hiroki Sayama, John Rieffel, Sebastian Risi, René Doursat and Hod Lip-
son, pp. 417-18. The MIT Press, 2014.
Gladden, Matthew E. “Cybershells, Shapeshifting, and Neuroprosthetics: Video
Games as Tools for Posthuman ‘Body Schema (Re)Engineering’.” Keynote presen-
tation at the Ogólnopolska Konferencja Naukowa Dyskursy Gier Wideo, Facta
Ficta / AGH, Kraków, June 6, 2015.
Gladden, Matthew E. “The Diffuse Intelligent Other: An Ontology of Nonlocalizable
Robots as Moral and Legal Actors.” In Social Robots: Boundaries, Potential, Chal-
lenges, edited by Marco Nørskov, pp. 177-98. Farnham: Ashgate, 2016.
Gladden, Matthew E. “From Stand Alone Complexes to Memetic Warfare: Cultural
Cybernetics and the Engineering of Posthuman Popular Culture.” Presentation at
the 50 Shades of Popular Culture International Conference. Facta Ficta / Uniwer-
sytet Jagielloński, Kraków, February 19, 2016.
Gladden, Matthew E. The Handbook of Information Security for Advanced Neuropros-
thetics. Indianapolis: Synthypnion Academic, 2015.
Gladden, Matthew E. “Leveraging the Cross-Cultural Capacities of Artificial Agents
as Leaders of Human Virtual Teams.” Proceedings of the 10th European Conference
on Management Leadership and Governance, edited by Visnja Grozdani, pp. 428-
35. Reading: Academic Conferences and Publishing International Limited, 2014.
Gladden, Matthew E. “Managerial Robotics: A Model of Sociality and Autonomy for
Robots Managing Human Beings and Machines.” International Journal of Contem-
porary Management 13, no. 3 (2014), pp. 67-76.
Gladden, Matthew E. “Neural Implants as Gateways to Digital-Physical Ecosystems
and Posthuman Socioeconomic Interaction.” In Digital Ecosystems: Society in the
Digital Age, edited by Łukasz Jonak, Natalia Juchniewicz, and Renata Włoch, pp.
85-98. Warsaw: Digital Economy Lab, University of Warsaw, 2016.
Gladden, Matthew E. “The Social Robot as ‘Charismatic Leader’: A Phenomenology
of Human Submission to Nonhuman Power.” In Sociable Robots and the Future of
Social Relations: Proceedings of Robo-Philosophy 2014, edited by Johanna Seibt,
Raul Hakli, and Marco Nørskov, pp. 329-39. Frontiers in Artificial Intelligence and
Applications 273. IOS Press, 2014.
Gladden, Matthew E. “‘Upgrading’ the Human Entity: Cyberization as a Path to
Posthuman Utopia or Digital Annihilation?” Lecture in the Arkana Fantastyki lec-
ture cycle, Centrum Informacji Naukowej i Biblioteka Akademicka (CINiBA), Ka-
towice, May 27, 2015.
Gladden, Matthew E. “Utopias and Dystopias as Cybernetic Information Systems: En-
visioning the Posthuman Neuropolity.” Creatio Fantastica nr 3 (50) (2015).
Goicoechea, María. “The Posthuman Ethos in Cyberpunk Science Fiction.” CLCWeb:
Comparative Literature and Culture 10, no. 4 (2008): 9. http://docs.lib.purdue.edu/cgi/viewcontent.cgi?article=1398&context=clcweb. Accessed May 18, 2016.
Graham, Elaine. “Post/Human Conditions.” Theology & Sexuality 10:2 (2004), pp. 10-
32.
Graham, Elaine. Representations of the Post/Human: Monsters, Aliens and Others in
Popular Culture. Manchester: Manchester University Press, 2002.
Gray, Chris Hables. Cyborg Citizen: Politics in the Posthuman Age. London: Routledge,
2002.
Green, Steven J. “Malevolent gods and Promethean birds: Contesting augury in Au-
gustus's Rome.” In Transactions of the American Philological Association, vol. 139,
no. 1, pp. 147-167. The Johns Hopkins University Press, 2009.
Grodzinsky, F.S., K.W. Miller, and M.J. Wolf. “Developing Artificial Agents Worthy
of Trust: ‘Would You Buy a Used Car from This Artificial Agent?’” Ethics and In-
formation Technology 13, no. 1 (2011): 17-27.
Hamilton, S. “What Is Roman Ornithomancy? A Compositional Analysis of an An-
cient Roman Ritual.” Proceedings of the Multidisciplinary Graduate Research Con-
ference. Lethbridge: University of Lethbridge Graduate Students Association,
2007.
Han, J.-H., S.A. Kushner, A.P. Yiu, H.-W. Hsiang, T. Buch, A. Waisman, B. Bontempi,
R.L. Neve, P.W. Frankland, and S.A. Josselyn. “Selective Erasure of a Fear
Memory.” Science 323, no. 5920 (2009): 1492-96.
Hanson, R. “If uploads come first: The crack of a future dawn.” Extropy 6, no. 2
(1994): 10-15.
Haraway, Donna. “A Manifesto for Cyborgs: Science, Technology, and Socialist Fem-
inism in the 1980s.” Socialist Review vol. 15, no. 2 (1985), pp. 65-107.
Haraway, Donna. Simians, Cyborgs, and Women: The Reinvention of Nature. New
York: Routledge, 1991.
Hayles, N. Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics, Liter-
ature, and Informatics. Chicago: University of Chicago Press, 1999.
Hazen, Robert. “What is life?” New Scientist 192, no. 2578 (2006): 46-51.
Heim, Michael. The Metaphysics of Virtual Reality. New York: Oxford University
Press, 1993.
Herbrechter, Stefan. Posthumanism: A Critical Analysis. London: Bloomsbury, 2013.
[Kindle edition.]
Heylighen, Francis. “The Global Brain as a New Utopia.” In Renaissance der Utopie.
Zukunftsfiguren des 21. Jahrhunderts, edited by R. Maresch and F. Rötzer. Frank-
furt: Suhrkamp, 2002.
Holacracy Constitution v4.1. Spring City, PA: HolacracyOne, 2015. http://www.holacracy.org/wp-content/uploads/2015/07/Holacracy-Constitution-v4.1.pdf. Accessed May 21, 2016.
Horling, Bryan, and Victor Lesser. “A Survey of Multi-Agent Organizational Para-
digms.” The Knowledge Engineering Review 19, no. 04 (2004): 281-316.
Kanda, Takayuki, and Hiroshi Ishiguro. Human-Robot Interaction in Social Robotics.
Boca Raton: CRC Press, 2013.
Kedzior, Richard. How Digital Worlds Become Material: An Ethnographic and
Netnographic Investigation in Second Life. Economics and Society: Publications of
the Hanken School of Economics Nr. 281. Helsinki: Hanken School of Economics,
2014.
Kelly, Kevin. “The Landscape of Possible Intelligences.” The Technium, September 10,
2008. http://kk.org/thetechnium/the-landscape-o/. Accessed January 25, 2016.
Kelly, Kevin. Out of Control: The New Biology of Machines, Social Systems and the Eco-
nomic World. Basic Books, 1994.
Kelly, Kevin. “A Taxonomy of Minds.” The Technium, February 15, 2007.
http://kk.org/thetechnium/a-taxonomy-of-m/. Accessed January 25, 2016.
Kinicki, Angelo, and Brian Williams. Management: A Practical Introduction, 5th edi-
tion. New York: McGraw Hill, 2010.
Koene, Randal A. “Embracing Competitive Balance: The Case for Substrate-Inde-
pendent Minds and Whole Brain Emulation.” In Singularity Hypotheses, edited by
Amnon H. Eden, James H. Moor, Johnny H. Søraker, and Eric Steinhart, pp. 241-
67. The Frontiers Collection. Springer Berlin Heidelberg, 2012.
Koltko-Rivera, Mark E. “The potential societal impact of virtual reality.” Advances in
virtual environments technology: Musings on design, evaluation, and applications 9
(2005).
Koops, B.-J., and R. Leenes. “Cheating with Implants: Implications of the Hidden In-
formation Advantage of Bionic Ears and Eyes.” In Human ICT Implants: Technical,
Legal and Ethical Considerations, edited by Mark N. Gasson, Eleni Kosta, and Di-
ana M. Bowman, pp. 113-34. Information Technology and Law Series 23. T. M. C.
Asser Press, 2012.
Kourany, J.A. “Human Enhancement: Making the Debate More Productive.” Erkennt-
nis 79, no. 5 (2013): 981-98.
Kowalewska, Agata. “Symbionts and Parasites – Digital Ecosystems.” In Digital Eco-
systems: Society in the Digital Age, edited by Łukasz Jonak, Natalia Juchniewicz,
and Renata Włoch, pp. 73-84. Warsaw: Digital Economy Lab, University of War-
saw, 2016.
Kraemer, Felicitas. “Me, Myself and My Brain Implant: Deep Brain Stimulation Raises
Questions of Personal Authenticity and Alienation.” Neuroethics 6, no. 3 (May 12,
2011): 483-97. doi:10.1007/s12152-011-9115-7.
Krzywinska, Tanya, and Douglas Brown. “Games, Gamers and Posthumanism.”
In The Palgrave Handbook of Posthumanism in Film and Television, pp. 192-201.
Palgrave Macmillan UK, 2015.
Lamm, Ehud, and Ron Unger. Biological Computation. Boca Raton: CRC Press, 2011.
Lebedev, M. “Brain-Machine Interfaces: An Overview.” Translational Neuroscience 5,
no. 1 (March 28, 2014): 99-110.
Lee, Joseph. “Cochlear implantation, enhancements, transhumanism and posthuman-
ism: some human questions.” Science and Engineering Ethics 22, no. 1 (2016): 67-92.
Lilley, Stephen. Transhumanism and Society: The Social Debate over Human Enhance-
ment. Springer Science & Business Media, 2013.
Lune, Howard. Understanding Organizations. Cambridge: Polity Press, 2010.
Lyon, David. “Beyond Cyberspace: Digital Dreams and Social Bodies.” In Education
and Society, third edition, edited by Joseph Zajda, pp. 221-38. Albert Park: James
Nicholas Publishers, 2001.
Maguire, Gerald Q., and Ellen M. McGee. “Implantable brain chips? Time for de-
bate.” Hastings Center Report 29, no. 1 (1999): 7-13.
Mara, Andrew, and Byron Hawk. “Posthuman rhetorics and technical communica-
tion.” Technical Communication Quarterly 19, no. 1 (2009): 1-10.
Mazlish, Bruce. The Fourth Discontinuity: The Co-Evolution of Humans and Machines.
New Haven: Yale University Press, 1993.
McGee, E.M. “Bioelectronics and Implanted Devices.” In Medical Enhancement and
Posthumanity, edited by Bert Gordijn and Ruth Chadwick, pp. 207-24. The Inter-
national Library of Ethics, Law and Technology 2. Springer Netherlands, 2008.
McIntosh, Daniel. “The Transhuman Security Dilemma.” Journal of Evolution and
Technology 21, no. 2 (2010): 32-48.
Medical Enhancement and Posthumanity, edited by Bert Gordijn and Ruth Chadwick.
The International Library of Ethics, Law and Technology 2. Springer Netherlands,
2008.
Mehlman, Maxwell J. Transhumanist Dreams and Dystopian Nightmares: The Promise
and Peril of Genetic Engineering. Baltimore: The Johns Hopkins University Press,
2012.
Merkel, R., G. Boer, J. Fegert, T. Galert, D. Hartmann, B. Nuttin, and S. Rosahl. “Cen-
tral Neural Prostheses.” In Intervening in the Brain: Changing Psyche and Society,
117-60. Ethics of Science and Technology Assessment 29. Springer Berlin Heidel-
berg, 2007.
Miah, Andy. “A Critical History of Posthumanism.” In Medical Enhancement and
Posthumanity, edited by Bert Gordijn and Ruth Chadwick, pp. 71-94. The Interna-
tional Library of Ethics, Law and Technology 2. Springer Netherlands, 2008.
Miller, Jr., Gerald Alva. “Conclusion: Beyond the Human: Ontogenesis, Technology,
and the Posthuman in Kubrick and Clarke’s 2001.” In Exploring the Limits of the
Human through Science Fiction, pp. 163-90. American Literature Readings in the
21st Century. Palgrave Macmillan US, 2012.
Moravec, Hans. Mind Children: The Future of Robot and Human Intelligence. Cam-
bridge: Harvard University Press, 1990.
Moreno, Jonathan. “DARPA On Your Mind.” Cerebrum vol. 6 issue 4 (2004): 92-100.
Nadler, David, and Michael Tushman. Competing by Design: The Power of Organiza-
tional Architecture. Oxford University Press, 1997. [Kindle edition.]
Nahin, Paul J. “Religious Robots.” In Holy Sci-Fi!, pp. 69-94. Springer New York, 2014.
Nazario, Marina. “I went to Best Buy and encountered a robot named Chloe – and
now I’m convinced she’s the future of retail.” Business Insider, October 23, 2015.
http://www.businessinsider.com/what-its-like-to-use-best-buys-robot-2015-10.
Accessed May 20, 2016.
Nourbakhsh, Illah Reza. “The Coming Robot Dystopia.” Foreign Affairs 94, no. 4
(2015): 23.
Panno, Joseph. Gene Therapy: Treating Disease by Repairing Genes. New York: Facts
on File, 2005.
Park, M.C., M.A. Goldman, T.W. Belknap, and G.M. Friehs. “The Future of Neural In-
terface Technology.” In Textbook of Stereotactic and Functional Neurosurgery, ed-
ited by A.M. Lozano, P.L. Gildenberg, and R.R. Tasker, 3185-3200. Heidelberg/Ber-
lin: Springer, 2009.
Pearce, David. “The Biointelligence Explosion.” In Singularity Hypotheses, edited by
A.H. Eden, J.H. Moor, J.H. Søraker, and E. Steinhart, pp. 199-238. The Frontiers
Collection. Berlin/Heidelberg: Springer, 2012.
Perez-Marin, D., and I. Pascual-Nieto. Conversational Agents and Natural Language
Interaction: Techniques and Effective Practices. Hershey, PA: IGI Global, 2011.
Perlberg, James. Industrial Robotics. Boston: Cengage Learning, 2016.
Porterfield, Andrew. “Be Aware of Your Inner Zombie.” Engineering & Science
(Fall 2010): 14-19.
Posthuman Bodies, edited by Judith Halberstam and Ira Livingstone. Bloomington, IN:
Indiana University Press, 1995.
Posthumanist Shakespeares, edited by Stefan Herbrechter and Ivan Callus. Palgrave
Shakespeare Studies. Palgrave Macmillan UK, 2012.
Pride, W., R. Hughes, and J. Kapoor. Foundations of Business, 4e. Stamford, CT: Cen-
gage Learning, 2014.
Proudfoot, Diane. “Software Immortals: Science or Faith?” In Singularity Hypotheses,
edited by Amnon H. Eden, James H. Moor, Johnny H. Søraker, and Eric Steinhart,
pp. 367-92. The Frontiers Collection. Springer Berlin Heidelberg, 2012.
Provost, Foster, and Tom Fawcett. Data Science for Business. Sebastopol, CA: O’Reilly
Media, Inc., 2013.
Puranam, Phanish, Marlo Raveendran, and Thorbjørn Knudsen. “Organization De-
sign: The Epistemic Interdependence Perspective.” Academy of Management Re-
view 37, no. 3 (July 1, 2012): 419-40.
Ramirez, S., X. Liu, P.-A. Lin, J. Suh, M. Pignatelli, R.L. Redondo, T.J. Ryan, and S.
Tonegawa. “Creating a False Memory in the Hippocampus.” Science 341, no. 6144
(2013): 387-91.
Robertson, Brian J. Holacracy: The New Management System for a Rapidly Changing
World. New York: Henry Holt, 2015.
Robinett, W. “The consequences of fully understanding the brain.” In Converging
Technologies for Improving Human Performance: Nanotechnology, Biotechnology,
Information Technology and Cognitive Science, edited by M.C. Roco and W.S. Bain-
bridge, pp. 166-70. National Science Foundation, 2002.
Roden, David. Posthuman Life: Philosophy at the Edge of the Human. Abingdon:
Routledge, 2014.
Rutten, W. L. C., T. G. Ruardij, E. Marani, and B. H. Roelofsen. “Neural Networks on
Chemically Patterned Electrode Arrays: Towards a Cultured Probe.” In Operative
Neuromodulation, edited by Damianos E. Sakas and Brian A. Simpson, pp. 547-54.
Acta Neurochirurgica Supplements 97/2. Springer Vienna, 2007.
Sachs, Jeffrey D., Seth G. Benzell, and Guillermo LaGarda. “Robots: Curse or Bless-
ing? A Basic Framework.” NBER Working Papers Series, Working Paper 21091.
Cambridge, MA: National Bureau of Economic Research, 2015.
Samani, Hooman Aghaebrahimi, and Adrian David Cheok. “From human-robot rela-
tionship to robot-based leadership.” In 2011 4th International Conference on Hu-
man System Interactions (HSI), pp. 178-81. IEEE, 2011.
Samani, Hooman Aghaebrahimi, Jeffrey Tzu Kwan Valino Koh, Elham Saadatian, and
Doros Polydorou. “Towards Robotics Leadership: An Analysis of Leadership
Characteristics and the Roles Robots Will Inherit in Future Human Society.” In
Intelligent Information and Database Systems, edited by Jeng-Shyang Pan, Shyi-
Ming Chen, and Ngoc Thanh Nguyen, pp. 158-65. Lecture Notes in Computer Sci-
ence 7197. Springer Berlin Heidelberg, 2012.
Sandor, Christian, Martin Fuchs, Alvaro Cassinelli, Hao Li, Richard Newcombe,
Goshiro Yamamoto, and Steven Feiner. “Breaking the Barriers to True Aug-
mented Reality.” arXiv preprint, arXiv:1512.05471 [cs.HC], December 17, 2015.
http://arxiv.org/abs/1512.05471. Accessed January 25, 2016.
Sarkar, Prabhat Rainjan. “Neohumanism Is the Ultimate Shelter (Discourse 11).” In
The Liberation of Intellect: Neohumanism. Kolkata: Ananda Marga Publications,
1982.
Schmeink, Lars. “Dystopia, Alternate History and the Posthuman in Bioshock.” Cur-
rent Objectives of Postgraduate American Studies 10 (2009). http://copas.uni-regensburg.de/article/viewArticle/113/137. Accessed May 19, 2016.
Short, Sue. Cyborg Cinema and Contemporary Subjectivity. New York: Palgrave Mac-
millan, 2005.
Singularity Hypotheses, edited by A.H. Eden, J.H. Moor, J.H. Søraker, and E. Steinhart.
The Frontiers Collection. Berlin/Heidelberg: Springer, 2012.
Snider, Greg S. “Cortical Computing with Memristive Nanodevices.” SciDAC Review
10 (2008): 58-65.
Social Robots and the Future of Social Relations, edited by Johanna Seibt, Raul Hakli,
and Marco Nørskov. Amsterdam: IOS Press, 2014.
Social Robots from a Human Perspective, edited by Jane Vincent, Sakari Taipale, Bar-
tolomeo Sapio, Giuseppe Lugano, and Leopoldina Fortunati. Springer Interna-
tional Publishing, 2015.
Social Robots: Boundaries, Potential, Challenges, edited by Marco Nørskov. Farnham:
Ashgate Publishing, 2016.
Soussou, Walid V., and Theodore W. Berger. “Cognitive and Emotional Neuropros-
theses.” In Brain-Computer Interfaces, pp. 109-23. Springer Netherlands, 2008.
Sparrow, R. “Killer Robots.” Journal of Applied Philosophy vol. 24, no. 1 (2007): 62-77.
Spohrer, Jim. “NBICS (Nano-Bio-Info-Cogno-Socio) Convergence to Improve Human
Performance: Opportunities and Challenges.” In Converging Technologies for Im-
proving Human Performance: Nanotechnology, Biotechnology, Information Technol-
ogy and Cognitive Science, edited by M.C. Roco and W.S. Bainbridge, pp. 101-17.
Arlington, Virginia: National Science Foundation, 2002.
Stahl, B. C. “Responsible Computers? A Case for Ascribing Quasi-Responsibility to
Computers Independent of Personhood or Agency.” Ethics and Information Tech-
nology 8, no. 4 (2006): 205-13.
Stieglitz, Thomas. “Restoration of Neurological Functions by Neuroprosthetic Tech-
nologies: Future Prospects and Trends towards Micro-, Nano-, and Biohybrid Sys-
tems.” In Operative Neuromodulation, edited by Damianos E. Sakas, Brian A.
Simpson, and Elliot S. Krames, 435-42. Acta Neurochirurgica Supplements 97/1.
Springer Vienna, 2007.
Thacker, Eugene. “Data made flesh: biotechnology and the discourse of the posthu-
man.” Cultural Critique 53, no. 1 (2003): 72-97.
Thomsen, Mads Rosendahl. The New Human in Literature: Posthuman Visions of
Change in Body, Mind and Society after 1900. London: Bloomsbury, 2013.
Ummat, Ajay, Atul Dubey, and Constantinos Mavroidis. “Bionanorobotics: A Field
Inspired by Nature.” In Biomimetics: Biologically Inspired Technologies, edited by
Yoseph Bar-Cohen, pp. 201-26. Boca Raton: CRC Press, 2005.
Versace, Massimiliano, and Ben Chandler. “The Brain of a New Machine.” IEEE Spec-
trum 47, no. 12 (2010): 30-37.
Warkentin, Merrill, Vijayan Sugumaran, and Robert Sainsbury. “The Role of Intelli-
gent Agents and Data Mining in Electronic Partnership Management.” Expert Sys-
tems with Applications 39, no. 18 (2012): 13277-88.
Chapter Two: Organizational Posthumanism • 155
Warwick, K. “The Cyborg Revolution.” Nanoethics 8 (2014): 263-73.
Warwick, K., and M. Gasson. “Implantable Computing.” In Digital Human Modeling, edited by Y. Cai, pp. 1-16. Lecture Notes in Computer Science 4650. Berlin/Heidelberg: Springer, 2008.
Wasay, A., M. Athanassoulis, and S. Idreos. “Queriosity: Automated Data Exploration.” In 2015 IEEE International Congress on Big Data (BigData Congress), pp. 716-19, 2015. doi:10.1109/BigDataCongress.2015.116.
Wiener, Norbert. Cybernetics: Or Control and Communication in the Animal and the Machine, second edition. Cambridge, MA: The MIT Press, 1961. [Quid Pro ebook edition for Kindle, 2015.]
Wiltshire, Travis J., Dustin C. Smith, and Joseph R. Keebler. “Cybernetic Teams: Towards the Implementation of Team Heuristics in HRI.” In Virtual Augmented and Mixed Reality. Designing and Developing Augmented and Virtual Environments, edited by Randall Shumaker, pp. 321-30. Lecture Notes in Computer Science 8021. Springer Berlin Heidelberg, 2013.
Wolf-Meyer, Matthew. “Fantasies of extremes: Sports, war and the science of sleep.” BioSocieties 4, no. 2 (2009): 257-71.
Yampolskiy, Roman V. “The Universe of Minds.” arXiv preprint, arXiv:1410.0369 [cs.AI], October 1, 2014. http://arxiv.org/abs/1410.0369. Accessed January 25, 2016.
Yampolskiy, Roman V. Artificial Superintelligence: A Futuristic Approach. Boca Raton: CRC Press, 2015.
Yonck, Richard. “Toward a standard metric of machine intelligence.” World Future Review 4, no. 2 (2012): 61-70.
Chapter Three

The Posthuman Management Matrix: Understanding the Organizational Impact of Radical Biotechnological Convergence [1]
Abstract. In this text we present the Posthuman Management Matrix, a model for understanding the ways in which organizations of the future will be affected by the blurring – or even dissolution – of boundaries between human beings and computers. In this model, an organization’s employees and consumers can include two different kinds of agents (human and artificial) who may possess either of two sets of characteristics (anthropic or computer-like); the model thus defines four types of possible entities. For millennia, the only type of relevance for management theory and practice was that of human agents who possess anthropic characteristics – i.e., natural human beings. During the 20th Century, the arrival of computers and industrial robots made relevant a second type: that of artificial agents possessing computer-like characteristics.

Management theory and practice have traditionally overlooked the remaining two types of possible entities – human agents possessing computer-like physical and cognitive characteristics (which can be referred to as ‘cyborgs’) and artificial agents possessing anthropic physical and cognitive characteristics (which for lack of a more appropriate term might be called ‘bioroids’) – because such agents did not yet exist to serve as employees or consumers for organizations. However, in this text we argue that ongoing developments in neuroprosthetics, genetic engineering, virtual reality, robotics, and artificial intelligence are indeed giving rise to such types of agents and that new spheres of management theory and practice will be needed to allow organizations to understand the operational, legal, and ethical issues that arise as their pools of potential workers and customers evolve to include human beings whose bodies and minds incorporate ever more computerized elements and artificial entities that increasingly resemble biological beings.

By analyzing the full spectrum of human, computerized, and hybrid entities that will constitute future organizations, the Posthuman Management Matrix highlights ways in which established disciplines such as cybernetics, systems theory, organizational design, and enterprise architecture can work alongside new disciplines like psychological engineering, AI resource management, metapsychology, and exoeconomics to help organizations anticipate and adapt to posthumanizing technological and social change.

[1] This chapter was originally published in Gladden, Matthew E., Sapient Circuits and Digitalized Flesh: The Organization as Locus of Technological Posthumanization, pp. 133-201, Indianapolis: Defragmenter Media, 2016.
I. Introduction
Facilitated by ongoing technological developments in fields like neuroprosthetics, genetic engineering, social robotics, nanorobotics, and artificial intelligence, a growing convergence between sapient biological entities like human beings and electronic computerized systems is underway. Looking beyond the current reality in which human beings interact with technological instruments that mediate so many of our daily activities, researchers anticipate a future in which human persons themselves become technological instruments. Human beings who display carefully engineered architectures,[2] electromechanical physical components,[3] software-guided cognitive processes,[4] and digitally mediated interactions[5] will increasingly resemble computers – and they will share digital-physical ecosystems with computerized systems whose biological or biomimetic components,[6] evolutionary processes,[7] unpredictable neural networks,[8] and physically mediated social relations[9] cause them to ever more closely resemble human beings.

[2] See, e.g., Canton, “Designing the future: NBIC technologies and human performance enhancement” (2004); De Melo-Martín, “Genetically Modified Organisms (GMOs): Human Beings” (2015); Nouvel, “A Scale and a Paradigmatic Framework for Human Enhancement” (2015); and Bostrom, “Human Genetic Enhancements: A Transhumanist Perspective” (2012). Regarding ‘brain engineering,’ see Gross, “Traditional vs. modern neuroenhancement: notes from a medico-ethical and societal perspective” (2011).
[3] Regarding expected future growth in the use of implantable electronic neuroprosthetic devices for purposes of human enhancement, see, e.g., McGee, “Bioelectronics and Implanted Devices” (2008), and Gasson, “Human ICT Implants: From Restorative Application to Human Enhancement” (2012).
[4] For the potential use of an electronic ‘brain pacemaker’ to regulate cognitive activity, see Naufel, “Nanotechnology, the Brain, and Personal Identity” (2013). Regarding possible manipulation of the human brain’s activity through the use of computerized neuroprosthetic devices, see Viirre et al., “Promises and perils of cognitive performance tools: A dialogue” (2008), and Heinrichs, “The promises and perils of non-invasive brain stimulation” (2012).
[5] See, e.g., Communication in the Age of Virtual Reality, edited by Biocca & Levy (1995); Cybersociety 2.0: Revisiting Computer-Mediated Communication and Community, edited by Jones (1998); and Lyon, “Beyond Cyberspace: Digital Dreams and Social Bodies” (2001).
Such technological and social changes will be so transformative in their effects that they can be understood as creating a world best described as posthuman.[10] Within such a post-anthropocentric and post-dualistic environment,[11] it will no longer be natural biological human beings alone who seek out and create meaning through their exercise of imagination, reason, volition, and conscience; instead the world will likely include a bewildering array of sources of intelligent agency that create meaning through their networks and relations.[12] The implications for organizational management of this dawning ‘Posthuman Age’ are expected to be vast, and yet they have not yet been comprehensively explored from a theoretical perspective.

[6] See, e.g., Ummat et al., “Bionanorobotics: A Field Inspired by Nature” (2005); Andrianantoandro et al., “Synthetic biology: new engineering rules for an emerging discipline” (2006); Cheng & Lu, “Synthetic biology: an emerging engineering discipline” (2012); Lamm & Unger, Biological Computation (2011); Church et al., “Next-generation digital information storage in DNA” (2012); and Berner, Management in 20XX: What Will Be Important in the Future – A Holistic View (2004), pp. 15, 18, 31, 61-62.
[7] For a discussion of evolutionary robotics and evolvable robotic hardware, see Friedenberg, Artificial Psychology: The Quest for What It Means to Be Human (2008), pp. 206-10.
[8] Regarding factors that make it difficult to analyze or predict the behavior of artificially intelligent systems – especially of distributed artificial intelligences (DAIs) displaying emergent behavior – see Friedenberg (2008), pp. 31-32. For a discussion of the behavior of physical artificial neural networks, see, e.g., Snider, “Cortical Computing with Memristive Nanodevices” (2008); Versace & Chandler, “The Brain of a New Machine” (2010); and Advances in Neuromorphic Memristor Science and Applications, edited by Kozma et al. (2012).
[9] For robots that interact socially with human beings, see, e.g., Breazeal, “Toward sociable robots” (2003); Kanda & Ishiguro, Human-Robot Interaction in Social Robotics (2013); Social Robots and the Future of Social Relations, edited by Seibt et al. (2014); Social Robots from a Human Perspective, edited by Vincent et al. (2015); and Social Robots: Boundaries, Potential, Challenges, edited by Marco Nørskov (2016).
[10] The processes of posthumanization that expand the boundaries of society to include entities other than natural biological human beings as traditionally understood include the age-old forces of nontechnological posthumanization (as reflected in works of critical and cultural posthumanism and fantasy literature) and the newly emerging and intensifying forces of technological posthumanization, which is the focus of this text and is explored in works of biopolitical posthumanism, philosophical posthumanism, and science fiction. Regarding nontechnological posthumanization, see, e.g., Graham, Representations of the Post/Human: Monsters, Aliens and Others in Popular Culture (2002); Badmington, “Cultural Studies and the Posthumanities” (2006); and Herbrechter, Posthumanism: A Critical Analysis (2013). Regarding technological posthumanization, see, e.g., Fukuyama, Our Posthuman Future: Consequences of the Biotechnology Revolution (2002); Bostrom, “Why I Want to Be a Posthuman When I Grow Up” (2008); and other texts in Medical Enhancement and Posthumanity, edited by Gordijn & Chadwick (2008). For an overview of the forms of posthumanism that take these phenomena as their objects of study and practice, see Ferrando, “Posthumanism, Transhumanism, Antihumanism, Metahumanism, and New Materialisms: Differences and Relations” (2013), and our classification scheme in Part One of this text, “A Typology of Posthumanism: A Framework for Differentiating Analytic, Synthetic, Theoretical, and Practical Posthumanisms.”
[11] See Ferrando (2013).
Fig. 1: The Posthuman Management Matrix delineates four types of entities, each of which may be of greater or lesser relevance for the practice of organizational management at a particular point in human history.
[12] See Ferrando (2013).

In an effort to advance such study, in this text we develop the Posthuman Management Matrix, a two-dimensional model designed to aid management scholars and practitioners in analyzing and anticipating the impacts of posthumanizing technological and social change on organizations. We begin by showing that the agents that are relevant to organizational management can be divided into two varieties (human and artificial agents) and that the traits possessed by a particular agent fall into one of two kinds (which we refer to as ‘anthropic’ and ‘computronic’ characteristics[13]). The Matrix thus delineates four general types of possible entities that can potentially serve as workers or consumers for businesses and other organizations. These types of entities are: human agents possessing anthropic characteristics (whom we can refer to simply as ‘natural human beings’); artificial agents possessing computronic characteristics (or in other words, conventional ‘computers’); human agents possessing computronic characteristics (whom we can refer to as ‘cyborgs’); and artificial agents possessing anthropic characteristics (which, for lack of a better term, can be referred to as ‘bioroids’[14]). An overview of the four quadrants of the Posthuman Management Matrix and the types of entities that they represent is contained in Figure 1.
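As an illustrative sketch, the two dimensions and four quadrant labels just described can be captured in a few lines of Python; the enum, dictionary, and function names here are hypothetical conveniences of our own, not part of the Matrix’s formal apparatus:

```python
from enum import Enum

class AgentKind(Enum):
    """The Matrix's horizontal dimension: the kind of agent."""
    HUMAN = "human agent"
    ARTIFICIAL = "artificial agent"

class Characteristics(Enum):
    """The Matrix's vertical dimension: an agent's characteristics."""
    ANTHROPIC = "anthropic"
    COMPUTRONIC = "computronic"

# The four quadrants delineated by the Posthuman Management Matrix.
QUADRANTS = {
    (AgentKind.HUMAN, Characteristics.ANTHROPIC): "natural human being",
    (AgentKind.ARTIFICIAL, Characteristics.COMPUTRONIC): "computer",
    (AgentKind.HUMAN, Characteristics.COMPUTRONIC): "cyborg",
    (AgentKind.ARTIFICIAL, Characteristics.ANTHROPIC): "bioroid",
}

def classify(kind: AgentKind, traits: Characteristics) -> str:
    """Return the Matrix's label for an agent of the given kind and traits."""
    return QUADRANTS[(kind, traits)]
```

The pairing makes the model’s logic explicit: the two historically familiar quadrants (natural human beings and computers) and the two emerging ones (cyborgs and bioroids) are exhaustive combinations of the same two dimensions.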
The Matrix is then utilized to analyze management theory and practice as they have existed prior to this emerging age of radical technological posthumanization. Beginning from the dawn of human history, the only type of entity relevant to management theory and practice was long that of human agents who possess anthropic characteristics – or in other words, natural human beings who have not been modified through the use of technologies such as neuroprosthetic augmentation or genetic engineering. Only with the arrival of electronic information-processing systems and simple industrial robots in the 20th Century did a second type of entity become broadly relevant for organizational management: that of the artificial agent that possesses computronic characteristics, or the ‘computer.’[15] Integrating such computerized systems into an organization of human workers is not an easy task, and management disciplines such as enterprise architecture, IT management, and information security have emerged that provide conceptual frameworks and practical tools for successfully coordinating the actions of human and artificial agents to create effective organizations.[16]

[13] In this text we use the portmanteau ‘computronic’ to refer to physical structures, behaviors, or other phenomena or characteristics which in recent decades have commonly been associated with computers and electronic devices. This builds on earlier uses of the word found, e.g., in Turner, “The right to privacy in a computronic age” (1970), and Rankin, “Business Secrets Across International Borders: One Aspect of the Transborder Data Flow Debate” (1985).
[14] For use of the term ‘bioroid’ in an engineering context, see Novaković et al., “Artificial Intelligence and Biorobotics: Is an Artificial Human Being our Destiny?” (2009). Regarding the use of the term in speculative fiction, see, e.g., Pulver, GURPS Robots (1995), pp. 74-81, where ‘bioroid’ is a portmanteau derived explicitly from ‘biological android.’
[15] For early examples of workplace robotics explored from the perspective of management theory and practice, see, e.g., Thompson, “The Man-Robot Interface in Automated Assembly” (1976), and Goodman & Argote, “New Technology and Organizational Effectiveness” (1984).
The largest portion of this text is dedicated to employing the Matrix as a means of investigating the remaining two types of entities – ‘cyborgs’ and ‘bioroids’ – that have heretofore received relatively little serious attention within the field of management but which are set to become ever more prevalent as workers, managers, consumers, and other organizational stakeholders, thanks to the accelerating and intensifying processes of technological posthumanization. We suggest that it will not be possible to adequately understand and manage the many complex operational, legal, and ethical issues that arise from adopting such posthuman agents as employees or customers simply by relying on existing fields such as HR management, IT management, or enterprise architecture. The radically expanded universe of posthuman agents that will participate in the life of organizations will require the development of new spheres of theory and practice that can address the unique forms, behaviors, strengths, and weaknesses of such agents, along with the ways in which they will combine to create rich and complex cybernetic networks and digital-physical ecosystems. Our exploration of these questions concludes by contemplating the sorts of transdisciplinary management approaches that might be able to successfully account for such organizational systems in which natural human beings, genetically engineered persons, individuals possessing extensive neuroprosthetic augmentation, human beings who spend all of their time dwelling in virtual worlds, social robots, artificially intelligent software, nanorobot swarms, and sentient or sapient networks work together in physical and virtual environments to achieve organizational goals.[17]

Through this formulation, application, and discussion of the Posthuman Management Matrix, we hope to highlight the challenges that await management scholars and practitioners in an increasingly posthumanized world and to suggest one possible conceptual framework that can aid us in making sense of and responding to these challenges.

[16] For a review of enterprise architecture frameworks, see Magoulas et al., “Alignment in Enterprise Architecture: A Comparative Analysis of Four Architectural Approaches” (2012), and Rohloff, “Framework and Reference for Architecture Design” (2008); for a practical overview of organizational design, see Burton et al., Organizational Design: A Step-by-Step Approach (2015); for an overview of information security, see Rao & Nayak, The InfoSec Handbook (2014).
[17] See Gladden, The Handbook of Information Security for Advanced Neuroprosthetics (2015), pp. 95-96.
II. Formulating the Posthuman Management Matrix
We would suggest that it is useful to analyze the impact of posthumanizing social and technological change on organizational management through a two-dimensional conceptual framework that creates a coherent tool for identifying, understanding, and anticipating organizational transformations that will occur as a result of the convergences described in this text. We can refer to this proposed framework as the ‘Posthuman Management Matrix.’ Our hope is that such a model can serve both as a theoretical framework for management scholars and as a practical tool for management practitioners. The Posthuman Management Matrix comprises two dimensions: the horizontal dimension is that of an ‘agent’ and the vertical dimension is that of an agent’s ‘characteristics.’ We can consider each of these dimensions in turn.
A. The Matrix’s Horizontal Dimension: The Kind of Agent
There are many types of entities and phenomena that must be managed by organizations; however, many of them do not possess or manifest their own agency. Such non-agents include financial assets, land, raw materials, intellectual property, contracts, policies and procedures, and other elements of organizational life that are not capable of gathering data from their environment, processing information, and selecting a course of action.[18]

On the other hand, there are many kinds of agents[19] that may actively participate in an organization’s activities; these include typical adult human beings, some kinds of domesticated animals (which, for example, can be employed in particular roles within the fields of agriculture, law enforcement, and entertainment), many types of autonomous and semiautonomous robots, and artificially intelligent software programs that run on particular computing platforms. Note that in order to qualify as an agent, an entity does not need to possess the same kind of sapience as a typical adult human being; relatively simple automated systems (such as an assembly-line robot or the software managing an automated customer-service telephone line) can be described as agents, even if they do not possess full human-like artificial general intelligence. Conversely, not all human beings can be considered agents from the managerial perspective, even if they are considered to be legal persons and moral patients; for example, an adult human being who is in a coma and whose mind is not able to receive sensory input, process information, and select and act upon particular courses of action would not be considered an ‘agent’ in the organizational sense employed here.

[18] Within the context of enterprise architecture, for example, both agents and non-agents can be understood generically as ‘entities’ that play particular ‘roles’ in various ‘activities’ within an organization; see Caetano et al., “A Role-Based Enterprise Architecture Framework” (2009).
[19] For an overview of biological, robotic, and software-based agents and their key characteristics of autonomy, social ability, reactivity, and proactivity, see Tweedale & Jain, “Agent Oriented Programming” (2011).
Much ongoing research and debate is taking place regarding questions of whether and to what extent collective entities can be considered agents. It is a matter of contention whether a social organization such as a country or a swarm of insects can possess its own ‘agency’ distinct from the agency of all the individuals that constitute it.[20] In some cases, the law recognizes certain types of social entities (e.g., states or corporations) as possessing a sort of agency independent of that of their human constituents, although different conclusions may be formulated when viewing such entities from an ontological or moral rather than a legal perspective. Similarly, some automated artificial agents have been designed in such a way that they are in fact multi-agent systems composed of a number of smaller subsystems and components that are themselves agents. In such cases, the agency possessed by a multi-agent system as a whole is typically of a different sort from that possessed by its individual components. More complex is the case of large computer-facilitated networks (e.g., the Internet) that can, in a certain sense, be said to select and act upon particular courses of action and whose ‘decisions’ are shaped by the activities of individual human and artificial agents that have access to the network and who participate in its sensorimotor and information-processing actions.[21]

Traditionally, facilities such as office buildings or warehouses would not in themselves have qualified as ‘agents,’ even though they were home to the activities of large numbers of agents and contained an extensive technological infrastructure of mechanical, electrical, and other components that were regularly manipulated by those agents as part of their work. However, the rise of the Internet of Things and smart buildings means that in some cases an office building or production facility that includes sufficient sensory and motor components controlled by a computerized system can potentially be understood as a single coherent ‘agent.’ A similar phenomenon is now occurring with vehicles, which may be considered agents if they possess self-driving capabilities or other forms of AI.[22]

[20] Regarding questions about the nature and degree of agency and decision-making responsibility that can be possessed by robotic swarms or networks, see, e.g., Coeckelbergh, “From Killer Machines to Doctrines and Swarms, or Why Ethics of Military Robotics Is Not (Necessarily) About Robots” (2011), pp. 274-75, and Gladden, “The Diffuse Intelligent Other: An Ontology of Nonlocalizable Robots as Moral and Legal Actors” (2016).
[21] Regarding collectively conscious networks and a “post-internet sentient network,” see Callaghan, “Micro-Futures” (2014). Regarding a future Internet that is ‘self-aware’ in a technical and technological sense, even if it is not subjectively conscious, see Galis et al., “Management Architecture and Systems for Future Internet Networks” (2009), pp. 112-13. A sentient Internet is also discussed in Porterfield, “Be Aware of Your Inner Zombie” (2010), p. 19. For a future Internet that is self-aware as a sort of potentially living entity, see Hazen, “What is life?” (2006). Regarding the growing prevalence of robotic systems that comprise networks and swarms – rather than autonomous unitary robots – and the distributed or unclear nature of decision-making and responsibility in such systems, see Coeckelbergh (2011), pp. 272-75, and Gladden, “The Diffuse Intelligent Other” (2016).
For purposes of the Posthuman Management Matrix, we can divide the broad spectrum of agents that are relevant to contemporary organizational management into two main categories: human beings (described below as ‘human agents’) and robots or other artificially intelligent computing systems (described below as ‘artificial agents’).[23]
1. HUMAN AGENTS
Human agents are intelligent and sapient actors whose agency is grounded in and exercised through the actions of a biological human brain. Throughout history, such human agents have been the primary (and often only) agents constituting human organizations. Human beings possess a distinct set of biological, psychological, social, and cultural properties that have been extensively studied by disciplines including biology, psychology, anthropology, sociology, economics, history, philosophy, theology, political science, and organizational management.
2. ARTIFICIAL AGENTS
Artificial agents represent a relatively new kind of intelligent actor that has emerged during recent decades and which has the potential to carry out particular tasks or roles within a human organization. Although the universe of artificial agents comprises a diverse array of entities with a broad variety of forms and functions, artificial agents are similar in that: 1) they all possess some means of receiving data from their environment, a means of processing information, and a means of acting on their environment; and 2) the physical substrate within which their agency subsists is not a natural biological human brain.

An artificial agent often takes the form of a piece of software being executed by some physical computational substrate such as a desktop computer, mobile device, server, robot, or network of distributed devices.[24] However, other examples exist that do not involve the execution of a conventional software program; these include artificial neural networks that are not run as a software program on a conventional CPU-based computer but which comprise a network of physical artificial neurons.[25]

[22] Regarding the ethical implications of creating autonomous driverless vehicles that can exercise their own agency, see Goodall, “Ethical decision making during automated vehicle crashes” (2014).
[23] The simplified schema presented by the Posthuman Management Matrix thus omits, for example, the explicit consideration of domesticated animals as potential workplace agents.
[24] Each particular instantiation of such a sensorimotor-cognitive system can be understood as a unique artificial agent; thus technically, the same piece of AI software run on two different computers (or even on the same computer on two different occasions) can be understood as two different artificial agents. (See Wiener, Cybernetics: Or Control and Communication in the Animal and the Machine (1961), loc. 2402ff., for the idea that a human brain with all of its short- and long-term memories is “not the complete analogue of the computing machine but rather the analogue of a single run on such a machine” – something which, by definition, cannot be duplicated in another substrate.) However, the term ‘artificial agent’ is also used in a looser sense to refer to a hardware-software platform comprising a particular piece of hardware and the AI software that it executes rather than to each separate execution of that software.
[25] See, e.g., Friedenberg (2008), pp. 17-36, for a discussion of different physical models that do not necessarily require a conventional Von Neumann computer architecture.

B. The Matrix’s Vertical Dimension: An Agent’s Characteristics

From the perspective of organizational management, there are two broad sets of characteristics that a contemporary agent might display: ‘anthropic characteristics’ are those that are traditionally possessed by human beings, and ‘computronic characteristics’ are those traditionally possessed by artificial agents such as robots or artificially intelligent software. We can consider these two suites of characteristics in greater detail.

1. ANTHROPIC CHARACTERISTICS

Anthropic characteristics constitute that array of traits which throughout history has been possessed by and associated with human beings. These characteristics are reflected in: 1) an entity’s physical form; 2) its capacity for and use of intelligence; and 3) its social interaction with other intelligent agents. Below we use these three perspectives to identify and describe some of the key anthropic characteristics.

a. Physical Form

The physical form of an agent possessing anthropic characteristics demonstrates a number of notable traits. Such an agent is:

Composed of biological components. The body of a human being is naturally composed of biological material and not mechanical or electronic components. The qualities of such biological material place limits on the kinds of work that human employees can perform. For example, it is impossible for human beings to work in areas of extreme heat, cold, or radiation without extensive protection, nor is it possible for a human employee to work for hundreds of consecutive hours without taking breaks for sleep or meals or to use the restroom.
Alive. In order to function as an agent within an organization, a human
being (and the biological subsystems that constitute its body) must be alive.
As a living organism, a human being possesses a metabolism that requires a
continual supply of resources (e.g., oxygen, water, and food) from the external
environment as well as the ability to emit waste products into the environ-
ment in order for the individual to survive.
26
Non-engineered. The basic physical form of a particular human being is
determined largely by genotypic factors that are a result of randomized inher-
itance of genetic material from the individual’s biological parents; the individ-
ual’s particular physical characteristics are not intentionally selected or fabri-
cated by a genetic engineer.
27
Non-upgradeable. There are many congenital medical conditions that
can be treated through conventional surgical procedures, medication, the use
of traditional prosthetics, or other therapies. The application of such technol-
ogies could be understood as a form of ‘augmentation’ or ‘enhancement’ of
one’s body as it was naturally formed; however, such technologies are more
commonly understood as ‘restorative’ approaches, insofar as they do not grant
an individual physical elements or capacities that surpass those possessed by
a typical human being.
28
Historically, human beings have not been subject to
the sort of radical physical ‘upgradeability’ that might involve, for example,
the implantation of additional memory capacity into the brain, an alteration
of the rate of electrochemical communication between neurons to increase the
brain’s ‘processing speed,’ the addition of new sensory capacities (e.g., infra-
red vision), or the addition of new or different limbs or actuators (e.g., wheels
instead of legs).[29] This differs from the case of contemporary computers, which
often can easily be upgraded through the addition or replacement of physical
components.
Confined to a limited lifespan. Although the lifespan of a particular human being can be shortened or extended to some degree as a result of environmental, behavioral, or other factors, the human organism is generally understood to possess a finite biological lifespan that cannot be extended indefinitely through natural biological means.[30] A human being who has exceeded his or her maximum lifespan is no longer alive (i.e., he or she will have expired) and cannot be repaired and revived by technological means to become available once again for future organizational use.

[26] In considering a definition for artificial life, Friedenberg (2008), pp. 201-03, draws on the criteria for biological life presented in Curtis, Biology (1983): namely, a living being manifests organization, metabolism, growth, homeostasis, adaptation, response to stimuli, and reproduction.
[27] Although, for example, factors such as diet, exercise and training, environmental conditions, and medicines and medical procedures can extensively modify the form of a human body, the extent to which an existing biological human body can be restructured before ceasing to function is nonetheless relatively limited.
[28] See Gasson (2012).
[29] See Gladden, “Cybershells, Shapeshifting, and Neuroprosthetics: Video Games as Tools for Posthuman ‘Body Schema (Re)Engineering’” (2015).
Manifesting a developmental cycle. The physical structure and capaci-
ties of a human being do not remain unchanged from the moment of an indi-
vidual’s conception to the moment of his or her death; instead, a human be-
ing’s physical form and abilities undergo continuous change as the individual
develops through a cycle of infancy, adolescence, adulthood, and senescence.[31]
From the perspective of organizational management, human beings are only
capable of serving as employees, partners, or consumers during particular
phases of this developmental cycle, and the unique strengths and weaknesses
displayed by human workers vary as they move through the developmental
cycle.
Possessing a unitary local body. A particular human being occupies or
comprises a particular physical biological body. Because this body is unitary
– consisting of a single spatially compact unit – a human being is able to in-
habit only one space at a given time; a human being cannot simultaneously be
physically present in multiple cities, for example.[32]
Possessing a permanent substrate. Although to some limited extent it is
possible to modify or replace physical components of a human body, it is not
possible for a human being to exchange his or her entire body for another.[33]
The body with which a human being was born will – notwithstanding the
natural changes that occur as part of its lifelong developmental cycle or any
minor intentional modifications – serve as a single permanent substrate
within which all of the individual’s information processing and cognition will
occur and in which all of the individual’s sensory and motor activity will take
place until the end of his or her life.
[30] For a discussion and comparison of biologically and nonbiologically based efforts at human life extension, see Koene, “Embracing Competitive Balance: The Case for Substrate-Independent Minds and Whole Brain Emulation” (2012).
[31] See Thornton, Understanding Human Development: Biological, Social and Psychological Processes from Conception to Adult Life (2008), and the Handbook of Psychology, Volume 6: Developmental Psychology, edited by Lerner et al. (2003).
[32] For a discussion of different types of bodies and their relation to an entity’s degree of locality, see Gladden, “The Diffuse Intelligent Other” (2016).
[33] For complications relating to proposed body-replacement techniques such as mind uploading, see Proudfoot, “Software Immortals: Science or Faith?” (2012); for particular problems that would result from the attempt to adopt a nonhuman body, see Gladden, “Cybershells, Shapeshifting, and Neuroprosthetics” (2015).
Unique and identifiable. A human being’s body creates (or at least, plays
a necessary role in creating) a single identity for the individual that persists
over time, throughout the person’s life. The fact that each human body is
unique and is identifiable to other human beings (e.g., such a body is not in-
visible, microscopic, or ‘flickering’ in and out of existence from moment to
moment) means that it is possible to associate human actions with a particular
human being who performed them.[34]
b. Intelligence
The information-processing mechanisms and behaviors of an agent pos-
sessing anthropic characteristics demonstrate a number of significant traits.
Such an agent is:
Sapient and self-aware. A typical human adult possesses a subjective
conscious experience that is not simply sensations of physical reality but a
conceptual ‘awareness of’ and ‘awareness that.’ These characteristics are not
found, for example, in infants or in adult human beings suffering from certain
medical conditions. In a sense, a typical adult human being can be said to pos-
sess sapient self-awareness as a capacity even when the individual is uncon-
scious (e.g., during sleep), although in that moment the capacity is latent and
is not being actively utilized or experienced.[35]
Autonomous. Broadly speaking, adult human beings are considered to
possess a high degree of autonomy.[36]
Through the regular action of its mind
and body, a human being is able to secure energy sources and information
from its external environment, set goals, make decisions, perform actions, and
even (to a limited extent) repair damage that might occur to itself during the
course of its activities, all without direct external guidance or control by other
human agents. Human beings who, for example, are still infants, who are suffering from physical or cognitive impairments (such as being in a coma), or who are
operating in a hostile or unfamiliar environment may not be able to function
with the same degree of autonomy.
[34] For an overview of philosophical questions relating to personal identity, see Olson, “Personal Identity” (2015).
[35] For a discussion of such issues, see, e.g., Siewert, “Consciousness and Intentionality” (2011); Fabbro et al., “Evolutionary aspects of self- and world consciousness in vertebrates” (2015); and Boly et al., “Consciousness in humans and non-human animals: recent advances and future directions” (2013).
[36] For a definition of autonomy applicable to agents generally, see Bekey, Autonomous Robots: From Biological Inspiration to Implementation and Control (2005), p. 1. Regarding ways of classifying different levels of autonomy, see Gladden, “Managerial Robotics: A Model of Sociality and Autonomy for Robots Managing Human Beings and Machines” (2014).
Metavolitional. Volitionality relates to an entity’s ability to self-reflexively
shape the intentions that guide its actions.[37]
An entity is nonvolitional when
it possesses no internal goals or ‘desires’ for achieving particular outcomes
nor any expectations or ‘beliefs’ about how performing certain actions would
lead to particular outcomes. An entity is volitional if it combines goals with
expectations: in other words, it can possess an intention,[38] which is a mental
state that comprises both a desire and a belief about how some act that the
entity is about to perform can contribute to fulfilling that desire.[39] Meanwhile,
typical adult human beings can be described as metavolitional: they possess
what scholars have referred to as a ‘second-order volition,’ or an intention
about an intention.[40]
In human beings, this metavolitionality manifests itself
in the form of conscience: as a result of possessing a conscience, human agents
are able to determine that they do not wish to possess some of the intentions
that they are currently experiencing, and they can resolve to change those
intentions.
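The distinction drawn above between a first-order intention and a second-order volition can be sketched in code. The following Python fragment is purely illustrative; its names, values, and the conscience-like rule it applies are hypothetical and are not drawn from the sources cited here.

```python
# A first-order intention pairs a desire with a belief about how an
# action can fulfill that desire (as defined in the text above).
first_order_intention = {
    "desire": "eat_dessert",
    "belief": "ordering_cake_leads_to_eating_dessert",
}

def second_order_volition(intention, disapproved_desires):
    """A metavolitional agent can reflect on an intention it currently
    holds and resolve to change it; here a conscience-like check
    replaces a disapproved intention with a revised one."""
    if intention["desire"] in disapproved_desires:
        return {"desire": "keep_to_diet",
                "belief": "declining_cake_leads_to_keeping_diet"}
    return intention

revised = second_order_volition(first_order_intention, {"eat_dessert"})
print(revised["desire"])  # keep_to_diet
```

The point of the sketch is only that the second-order check takes an intention itself as its object, rather than taking some state of the external world.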
Educated. The cognitive processes and knowledge of a human being are
shaped through an initial process of concentrated learning and formal and in-
formal education that lasts for several years and through an ongoing process
of learning that lasts throughout the individual’s lifetime.[41]
Human beings can
learn empirically through the firsthand experience of interacting with their
environment or by being taught factual information or theoretical knowledge.
A human being cannot instantaneously ‘download’ or ‘import’ a large body of
information into his or her memory in the way that a data file can be copied
to a computer’s hard drive.
Processing information through a neural network. Some information processing takes place in other parts of the body (e.g., the transduction of proximal stimuli into electrochemical signals by neurons in the sensory organs); however, the majority of a human being’s information processing is performed by the neural network comprising interneurons in the individual’s brain.[42] The brain constitutes an immensely large and intricate neural network, and despite ongoing advances in the field of neuroscience, profound mysteries remain regarding the structure and behavior of this neural network’s components and of the network as a whole.[43] The mechanisms by which this neural network processes the data provided by sensory input and stored memories to generate motor output and new memories are highly nonlinear and complex; they are not directly comparable to the process of a CPU-based computer running an executable software program.

[37] For a discussion of the volitionality of agents, see Calverley, “Imagining a non-biological machine as a legal person” (2008), pp. 529-535, and Gladden, “The Diffuse Intelligent Other” (2016).
[38] The term ‘intentionality’ is often employed in a philosophical sense to describe an entity’s ability to possess mental states that are directed toward (or ‘about’) some object; that is a broader phenomenon than the possession of a particular ‘intention’ as defined here.
[39] Calverley (2008), p. 529.
[40] Calverley (2008), pp. 533-35.
[41] See Thornton (2008), and Handbook of Psychology, Volume 6 (2003).
[42] For example, see Gladden, The Handbook of Information Security for Advanced Neuroprosthetics (2015), pp. 148-49.
Emotional. The possession and manifestation of emotions is not an extra-
neous supplement (or obstacle) to the rational decision-making of human be-
ings but is instead an integral component of it. Some researchers suggest that
the possession of emotions is necessary in order for an embodied entity to
demonstrate general intelligence at a human-like level.[44]
Cognitively biased. Human beings are subject to a common set of cogni-
tive biases that distort individuals’ perceptions of reality and cause them to
arrive at decisions that are objectively illogical and suboptimal.[45]
While in ear-
lier eras such biases may have created an evolutionary advantage that aided
the survival of those beings that possessed them (e.g., by providing them with
heuristics that allowed them to quickly identify and avoid potential sources of
danger), these biases cause contemporary human workers to err when evalu-
ating factual claims or attempting to anticipate future events or manage risk.
To some extent, such biases can be counteracted through conscious aware-
ness, training, and effort.
Possessing a flawed memory. The human mind does not store a perfect
audiovisual record of all the sensory input, thoughts, and imaginings that it
experiences during a human being’s lifetime. The brain’s capacities for both
the retention and recall of information are limited. Not only are memories
stored in a manner which from the beginning is compressed, impressionistic,
and imperfect, but memories also degrade over time.[46]
Historically, the only
way to transfer memories stored within one human mind to another human
mind has been for the memories to be described and expressed through some
social mechanism such as oral speech or written text.
Demonstrating unpredictable behavior. All human beings demonstrate basic similarities in their behavior, and individual human beings possess unique personalities, habits, and psychological and medical conditions that allow their reactions to particular stimuli or their future behavior to be predicted with some degree of likelihood; however, it is not possible to predict with full precision, accuracy, and certainty the future actions of a particular human being.

[43] For example, significant outstanding questions remain about the potentially holonomic nature of memory storage within the brain and the role of inter- and intraneuronal structures in memory creation and storage; see, e.g., Longuet-Higgins, “Holographic Model of Temporal Recall” (1968); Pribram, “Prolegomenon for a Holonomic Brain Theory” (1990); and Pribram & Meade, “Conscious Awareness: Processing in the Synaptodendritic Web – The Correlation of Neuron Density with Brain Size” (1999).
[44] See Friedenberg (2008), pp. 179-200.
[45] For an overview of human cognitive biases in relation to organizational management, see Kinicki & Williams, Management: A Practical Introduction (2010), pp. 217-19.
[46] See Dudai, “The Neurobiology of Consolidations, Or, How Stable Is the Engram?” (2004).
Not capable of being hacked electronically. Because human beings
possess biological rather than electronic components and their minds conduct
information processing through the use of an internal physical neural network
rather than a conventional executable software program stored in binary dig-
ital form, it is not possible for external adversaries or agents to hack into a
human being’s body and information-processing system in order to control
sensory, motor, or cognitive activities or to access, steal, or manipulate the
individual’s thoughts or memories using the same electronic hacking tech-
niques that are applied to the hardware or software of electronic computers
and computer-based systems.[47]
c. Social Interaction
An agent possessing anthropic characteristics demonstrates a number of
noteworthy traits relating to social interaction. Such an agent is:
Social. Human beings display social behaviors, engage in isolated and
short-term social interactions, and participate in long-term social relations
that evolve over time and are shaped by society’s expectations for the social
roles to be filled by a particular individual.[48]
Although the social content and
nature of complex communicative human actions such as speaking and writing are obvious, even such basic activities as standing, walking, and
breathing have social aspects, insofar as they can convey intentions, emotions,
and attitudes toward other human beings.
Cultural. Human beings create and exist within unique cultures that in-
clude particular forms of art, literature, music, architecture, history, sports and
recreation, technology, ethics, philosophy, and theology. Such cultures also
develop and enforce norms regarding the ways in which organizations such
as businesses should or should not operate.[49]
[47] The human mind is subject to other kinds of ‘hacking’ such as social engineering; see Rao & Nayak (2014).
[48] Regarding the distinction between social behaviors, interactions, and relations, see Vinciarelli et al., “Bridging the Gap between Social Animal and Unsocial Machine: A Survey of Social Signal Processing” (2012), and Gladden, “Managerial Robotics” (2014).
[49] Regarding the critical role that organizational culture plays, e.g., in the management of enterprise architecture, see Aier, “The Role of Organizational Culture for Grounding, Management, Guidance and Effectiveness of Enterprise Architecture Principles” (2014), and Hoogervorst, “Enterprise Architecture: Enabling Integration, Agility and Change” (2004).
Spiritual. Human beings broadly manifest a search for and recognition of
transcendent reality and ultimate purpose of a form that is described by orga-
nized religions and other spiritual and philosophical systems as well as nur-
tured by the idiosyncratic beliefs and sentiments of individual human beings.
Recently, researchers have sought to identify biological mechanisms that enable or facilitate the development and expression of such spirituality.[50]
Political. In order to regulate their shared social existence and create con-
ditions that allow for productivity, prosperity, peace, and the common good,
human beings have developed political systems for collective defense, deci-
sion-making, and communal action. Political activity typically involves a kind
and degree of reasoning, debate, strategic thinking, risk assessment, prioriti-
zation of values, and long-term planning that is not found, for example, within
the societies of nonhuman animals.[51]
An economic actor. In contemporary societies, an individual human be-
ing is typically not able to personally produce all of the goods and services
needed for his or her survival and satisfaction, and he or she does not have
the desire or ability to personally consume all of the goods or services that he
or she produces. In order to transform the goods and services that a human
being produces into the goods and services that he or she desires to have, human beings engage in economic exchange with one another. Within contemporary societies, businesses and other organizations play critical roles in facilitating such economic interaction.[52]
A legal person. An adult human being is typically recognized by the law
as being a legal person who bears responsibility for his or her decisions and
actions. In some cases, relevant distinctions exist between legal persons, moral
subjects, and moral patients. For example, an adult human being who is con-
scious and not suffering from psychological or biological impairments would
typically be considered both a legal person who is legally responsible for his
or her actions as well as a moral subject who bears moral responsibility for
those actions. An infant or an adult human being who is in a coma might be
considered a legal person who possesses certain legal rights, even though a
legal guardian may be appointed to make decisions on the person’s behalf;
such a person is not (at the moment) a moral agent who undertakes actions for which he or she bears moral responsibility but is still a ‘moral patient’ whom other human beings have an obligation to care for and not to actively harm.[53]

[50] For example, see Emmons, “Is spirituality an intelligence? Motivation, cognition, and the psychology of ultimate concern” (2000).
[51] Thus Aristotle’s assertion that “man is by nature a political animal” (Aristotle, Politics, Book 1, Section 1253a). Regarding different perspectives on the organization of animal societies and the possible evolutionary origins of politics in human societies, see, e.g., Man Is by Nature a Political Animal: Evolution, Biology, and Politics, edited by Hatemi & McDermott (2011); Alford & Hibbing, “The origin of politics: An evolutionary theory of political behavior” (2004); Clark, The Political Animal: Biology, Ethics and Politics (1999); and Primate Politics, edited by Schubert & Masters (1991).
[52] For example, see Samuelson & Marks, Managerial Economics (2012), Chapter 11.
2. COMPUTRONIC CHARACTERISTICS
Computronic characteristics constitute the collection of traits that have
traditionally been possessed by the kinds of computers utilized by organiza-
tions, including mainframes, servers, desktop computers, laptop computers,
and mobile devices, as well as more specialized devices such as supercomput-
ers, satellites, assembly-line robots, automated guided vehicles, and other
computerized systems based on a conventional Von Neumann architecture.
These characteristics are reflected in: 1) an entity’s physical form; 2) its capac-
ity for and use of intelligence; and 3) its social interaction with other intelli-
gent agents. Below we use these three perspectives to identify and describe
some of the key computronic characteristics. It may be noted that in most
cases they are very different from – and frequently the opposite of – the an-
thropic characteristics traditionally associated with human beings.
a. Physical Form
The physical form of an agent possessing computronic characteristics
demonstrates a number of notable traits. Such an agent is:
Composed of electronic components. A conventional computer is typ-
ically composed of mass-produced electronic components that are durable and
readily repairable and whose behavior can easily be analyzed and predicted.[54]
Such components are often able to operate in conditions of extreme heat, cold,
pressure, or radiation in which biological matter would not be able to survive
and function. Such components can be built to a large or microscopic scale,
depending on the intended purpose of a particular computer. The ability to
manufacture electronic components to precise specifications with little varia-
tion means that millions of copies of a single artificial agent can be produced
that are functionally identical.
[53] Regarding distinctions between legal persons, moral subjects, and moral patients – especially in the context of comparing human and artificial agents – see, e.g., Wallach & Allen, Moral Machines: Teaching Robots Right from Wrong (2008); Gunkel, The Machine Question: Critical Perspectives on AI, Robots, and Ethics (2012); Sandberg, “Ethics of brain emulations” (2014); and Rowlands, Can Animals Be Moral? (2012).
[54] For an in-depth review of the historical use of electronic components in computers as well as an overview of emerging possibilities for (non-electronic) biological, optical, and quantum computing, see Null & Lobur, The Essentials of Computer Organization and Architecture (2006). Regarding the degree to which the failure of electronic components can be predicted, see Băjenescu & Bâzu, Reliability of Electronic Components: A Practical Guide to Electronic Systems Manufacturing (1999).
Not alive. A conventional computer is not alive: it is not created through
processes of biological reproduction, and its form and basic functionality are
not shaped by a DNA- or RNA-based genotype; nor does the computer itself
grow and reproduce.[55]
A computer must typically receive energy from the ex-
ternal environment in the form of an electrical power supply that has been
specifically prepared by its human operators and which meets exact specifications;[56]
the computer does not possess a metabolism that allows it to assimilate
raw materials that it obtains from the environment and convert them into en-
ergy and structural components, repair damage and grow, and emit waste
products into the environment (apart from byproducts such as heat – which
is a significant concern in microprocessor and computer design – and stray
electromagnetic radiation such as radio waves).[57]
Intentionally designed. Historically, the structure and basic capacities of
a computer are not the result of the inheritance of randomized genetic code
from biological parents or from other processes of biological reproduction. In-
stead, all elements and aspects of a traditional computer’s physical form and
basic functionality are intentionally planned and constructed by human sci-
entists, engineers, manufacturers, and programmers in order to enable the
computer to successfully perform particular tasks.[58]
Upgradeable and expandable. The physical structure and capacities of
computers are easily expandable through the addition of internal components
or external peripheral devices. Such upgrades allow a computer to receive, for
example, new sensory mechanisms, new forms of actuators for manipulating
the external environment, an increase in processing speed, an increase in ran-
dom-access memory, or an increase in the size of a computer’s available space
for the nonvolatile long-term storage of data.[59]
Not limited to a maximum lifespan. A typical computer does not possess
a maximum lifespan beyond which it cannot be made to operate. As a practical
matter, individual computers may eventually become obsolete because their
functional capacities are inadequate to perform tasks that the computer’s
owner or operator needs it to perform or because cheaper, faster, and more
powerful types of computers have become available to carry out those tasks. Similarly, the failure of an individual component within a computer may render it temporarily nonfunctional. However, the ability to repair, replace, upgrade, or expand a computer’s physical components means that a computer’s operability can generally be maintained indefinitely, if its owner or operator wishes to do so.[60]

[55] Curtis (1983) cited seven requisites for a biological entity to be considered alive (organization, metabolism, growth, homeostasis, adaptation, response to stimuli, and reproduction), which Friedenberg (2008), pp. 201-03, also considers to be relevant when attempting to determine whether an artificial entity is alive.
[56] Exceptions would include, e.g., solar-powered computing devices.
[57] Such emissions by computers also create information security concerns; see, e.g., Gladden, The Handbook of Information Security for Advanced Neuroprosthetics (2015), p. 116.
[58] See, e.g., Dumas, Computer Architecture: Fundamentals and Principles of Computer Design (2006).
[59] See, e.g., Mueller, Upgrading and Repairing PCs, 20th Edition (2012).
Possessing a stable and restorable form. A computer’s physical form is
highly stable: although a computer’s components can be physically upgraded
or altered by the device’s owner or operator, a computer does not physically
upgrade or alter itself without its owner or operator’s knowledge or permission.[61]
A computer does not undergo the sort of developmental cycle of con-
ception, growth, maturity, and senescence demonstrated by biological organ-
isms. In general, the physical alterations made to a computer are reversible: a
chip that has been installed to increase the computer’s RAM can be removed;
a peripheral device that has been added can be disconnected. This allows a
computer to be restored to a previous physical and functional state.
Potentially multilocal. It is possible for a computer to – like a human being
– possess a body that comprises a single unitary, spatially compact physical
unit: computerized devices such as a typical desktop computer, smartphone,
assembly-line robot, or server may possess a physical form that is clearly dis-
tinct from the device’s surrounding environment and which is located in only
a single place at any given time. However, other computers can – unlike a
human being – possess a body comprising disjoint, spatially dispersed ele-
ments that exist physically in multiple locations at the same time. The creation
of such computerized entities comprising many spatially disjoint and dis-
persed ‘bodies’ has been especially facilitated in recent decades by the devel-
opment of the diverse networking technologies that undergird the Internet
and, now, the nascent Internet of Things.[62] The destruction, disabling, or disconnection of one of these bodies that contributes to the form of such an entity
may not cause the destruction of or a significant degradation of functionality
for the computerized entity as a whole.
[60] For an overview of issues relating to computer reliability, availability, and lifespan, see Siewiorek & Swarz, Reliable Computer Systems: Design and Evaluation (1992), and Băjenescu & Bâzu (1999).
[61] An exception would be the case of computer worms or viruses that can cause a computer to disable or damage some of its internal components or peripheral devices without the owner or operator’s knowledge. See, for example, Kerr et al., “The Stuxnet Computer Worm: Harbinger of an Emerging Warfare Capability” (2010).
[62] Regarding the Internet of Things, see Evans, “The Internet of Everything: How More Relevant and Valuable Connections Will Change the World” (2012). For one aspect of the increasingly networked nature of robotics and AI, see Coeckelbergh (2011). Regarding multilocal computers, see Gladden, “The Diffuse Intelligent Other” (2016).
Possessing an exchangeable substrate. Because they are stored in an
electronic digital form that can easily be read and written, the data that con-
stitute a particular computer’s operating system, applications, configuration
settings, activity logs, and other information that has been received, gener-
ated, or stored by the device can easily be copied to different storage compo-
nents or to a different computer altogether. This means that the computational
substrate or ‘body’ of a given computerized system can be replaced with a new
body without causing any functional changes in the system’s memory or be-
havior. In the case of computerized systems that are typically accessed re-
motely (e.g., a cloud-based storage device accessed through the Internet), a
system’s hardware could potentially be replaced by copying the device’s data
to a new device without remote users or operators ever realizing that the system’s physical computational substrate had been swapped.[63]
Possessing an unclear basis for identity. It is unclear wherein the unique
identity of a conventional computer or computerized entity subsists, or even
if such an identity exists.[64]
A computer’s identity does not appear to be tied to
any critical physical component, as such components can be replaced or al-
tered without destroying the computer. Similarly, a computer’s identity does
not appear to be tied to a particular set of digital data that comprises the com-
puter’s operating system, applications, and user data, as that data can be cop-
ied with perfect fidelity to other devices, creating computers that are func-
tionally clones of one another.
b. Intelligence
The information-processing mechanisms and behaviors of an agent pos-
sessing computronic characteristics demonstrate a number of significant
traits. Such an agent is:
Non-sapient. A conventional computer does not possess sapient self-awareness or a subjective conscious experience of reality.[65]
Semiautonomous or nonautonomous. For computerized devices such as robots, autonomy can be understood as the state of being “capable of operating in the real-world environment without any form of external control for extended periods of time.”[66] Such autonomy does not simply involve the ability to perform cognitive tasks like setting goals and making decisions; it also requires an entity to successfully perform physical activities such as securing energy sources and carrying out self-repair without human intervention. Applying this definition, we can say that current computerized devices are typically either nonautonomous (e.g., telepresence robots that are fully controlled by their human operators) or semiautonomous (e.g., robots that require ‘continuous assistance’ or ‘shared control’ in order to fulfill their intended purpose).[67] Although some contemporary computerized systems can be understood as ‘autonomous’ with regard to fulfilling their intended purpose – in that they can receive sensory input, process information, make decisions, and perform actions without direct human control – they are not autonomous in the full sense of the word, insofar as they are generally not capable of, for example, securing energy sources within the environment or repairing physical damage to themselves.[68]

[63] The ability to replace or reconfigure remote networked hardware without impacting web-based end users is widely exploited to offer cloud-based services employing the model of infrastructure as a service (IaaS), platform as a service (PaaS), or software as a service (SaaS); for more details, see the Handbook of Cloud Computing, edited by Furht & Escalante (2010).
[64] For a discussion of philosophical issues relating to personal identity, see Olson (2015); see also Friedenberg (2008), p. 250.
[65] Regarding different perspectives on the characteristics that a computer or other artificial system would need to have in order for it to possess sapient self-awareness and a subjective conscious experience of reality, see Friedenberg (2008), pp. 163-78.

178 • Posthuman Management
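The three autonomy levels distinguished above can be sketched as a small classifier. This is a hypothetical illustration, not an implementation from the text: the two capability flags and the decision rule are our own assumptions, reflecting the point that full autonomy demands physical self-maintenance (securing energy, self-repair), not merely independent decision-making.

```python
# Hypothetical classifier for the autonomy levels distinguished above.
from enum import Enum

class Autonomy(Enum):
    NONAUTONOMOUS = "fully controlled by human operators"
    SEMIAUTONOMOUS = "requires continuous assistance or shared control"
    AUTONOMOUS = "operates without external control for extended periods"

def classify_autonomy(decides_alone: bool, sustains_itself: bool) -> Autonomy:
    # Full autonomy requires self-maintenance, not just independent decisions.
    if decides_alone and sustains_itself:
        return Autonomy.AUTONOMOUS
    if decides_alone:
        return Autonomy.SEMIAUTONOMOUS
    return Autonomy.NONAUTONOMOUS

# A telepresence robot: both decisions and upkeep depend on humans.
print(classify_autonomy(False, False).name)  # → NONAUTONOMOUS
```

A robot that decides and acts on its own but cannot secure its own energy or repair itself would classify as SEMIAUTONOMOUS under this rule, matching the argument above.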
Volitional. Many conventional computerized devices are nonvolitional,
meaning that they possess no internal goals or ‘desires’ for achieving partic-
ular outcomes nor any expectations or ‘beliefs’ about how performing certain
actions would lead to such outcomes. However, many contemporary comput-
erized devices – including a wide variety of robots used in commercial con-
texts – are volitional. As noted earlier, an entity is volitional if it combines
goals with expectations; in other words, it can possess an intention, which is
a mental state that comprises both a desire and a belief about how some act
that the agent is about to perform can contribute to fulfilling that desire.[69] For
example, a therapeutic social robot might possess the goal of evoking a posi-
tive emotional response in its human user, and its programming and stored
information tells it that by following particular strategies for social interaction
it is likely to evoke such a response.[70]
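The definition of volition given above – an intention combining a desire with a belief about how an action fulfills that desire – can be made concrete in a few lines. The following sketch is purely illustrative: the class name, the strategy table, and the therapeutic-robot example values are our own assumptions, not drawn from any actual social-robot platform.

```python
# A minimal sketch of a 'volitional' agent: a goal (desire) paired with
# expectations (beliefs) about which actions lead to which outcomes.
from dataclasses import dataclass, field

@dataclass
class VolitionalAgent:
    goal: str  # the outcome the agent 'desires'
    expectations: dict = field(default_factory=dict)  # belief: action -> expected outcome

    def choose_action(self):
        """Return an action believed to contribute to fulfilling the goal."""
        for action, outcome in self.expectations.items():
            if outcome == self.goal:
                return action
        return None  # no belief links any available action to the goal

robot = VolitionalAgent(
    goal="positive emotional response",
    expectations={
        "make eye contact": "user attention",
        "play soothing melody": "positive emotional response",
    },
)
print(robot.choose_action())  # → play soothing melody
```

An agent with no expectation linking any action to its goal returns no action at all: it has a desire but, lacking the corresponding belief, it cannot form an intention in the sense defined above.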
Programmed. A conventional computer does not ‘learn’ through experi-
ence; it does not undergo a long-term formative process of education in order
to acquire new knowledge or information. Instead, a computer has software
programs and data files copied onto its storage media, thereby instantaneously
gaining new capacities and the possession of new information.[71] Alternatively,
a computer may be directly programmed or configured by a human operator.
[66] Bekey (2005), p. 1.
[67] See Murphy, Introduction to AI Robotics (2000).
[68] Gladden, “The Diffuse Intelligent Other” (2016).
[69] Calverley (2008), p. 529.
[70] Gladden, “The Diffuse Intelligent Other” (2016).
[71] For a discussion of the ways in which the electronic components of traditional computers carry out the work of and are controlled by executable programs – as well as an overview of the ways in which alternative architectures such as that of the neural network can allow computers to learn through experience – see Null & Lobur (2006). A more detailed presentation of the ways in which neural networks can be structured and learn is found in Haykin, Neural Networks and Learning Machines (2009). For a review of forms of computer behavior whose activity can be hard to predict (e.g., the actions of some forms of evolutionary algorithms or neural networks) as well as other forms of biological or biologically inspired computing, see Lamm & Unger (2011).

Chapter Three: The Posthuman Management Matrix • 179

Processing information by means of a CPU. A conventional contemporary computer (e.g., a desktop computer or smartphone) is based on a Von Neumann architecture comprising memory, I/O devices, and one or more central processing units connected by a communication bus.[72] Although each architecture can be made to replicate the functioning of the other, the linear method by which such a CPU-based system processes information is fundamentally different from the parallel processing method utilized by a physical neural network such as that constituted by the human brain.[73]

Lacking emotion. A traditional computer does not possess emotions that are grounded in the current state of the computer’s body, are consciously experienced by the computer, and influence the contents of its decisions and behavior.[74] Although a piece of software may run more slowly or have some features disabled when executed on particular computers, the nature of the software’s decision-making is not influenced by factors of mood, emotion, or personality that are determined by a computer’s hardware. A software program will typically either run or not run on a given computer; if it runs at all, it will run in a manner that is determined by the internal logic and instructions contained within the software code and not swayed or distorted by that computer’s particular physical state.

Free from cognitive biases. A conventional computer is not inherently subject to human-like cognitive biases, as its decisions and actions are determined by the logic and instructions contained within its operating system and application code and not by the use of evolved heuristic mechanisms that are a core element of human psychology.[75]

[72] See Friedenberg (2008), pp. 27-29.
[73] See Friedenberg (2008), pp. 30-32.
[74] For the distinction between the relatively straightforward phenomenon of computers possessing ‘emotion’ simply as a function versus the more doubtful possibility that computers could undergo ‘emotion’ as a conscious experience, see Friedenberg (2008), pp. 191-200.
[75] It is possible, however, for a computer to indirectly demonstrate human-like cognitive biases if the human programmers who designed a computer’s software were not attentive to such considerations and inadvertently programmed the software to behave in a manner that manifests such biases. For a discussion of such issues, see, e.g., Friedman & Nissenbaum, “Bias in Computer Systems” (1997).
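The contrast drawn above between linear CPU-based processing and the parallel processing of a neural network can be illustrated with a toy example. The weights, inputs, and loop structure below are our own illustrative assumptions: the same artificial ‘neuron layer’ is computed serially, one multiply-accumulate at a time, exactly as a Von Neumann machine must step through it, whereas a physical neural network would evaluate every neuron simultaneously.

```python
# Illustrative contrast: one 'neuron layer' computed serially, as a CPU would.
inputs = [2, -1, 3]
weights = [            # one row of weights per neuron
    [1, 2, -1],
    [1, 1, 1],
]

def neuron_output(w_row, x):
    """Weighted sum computed serially, one multiply-accumulate per step."""
    total = 0
    for w, xi in zip(w_row, x):
        total += w * xi
    return total

# Serial, Von Neumann-style evaluation: neurons computed one after another.
# In a true parallel substrate each neuron would fire at the same instant;
# the result is identical, only the execution model differs.
layer_serial = [neuron_output(row, inputs) for row in weights]
print(layer_serial)  # → [-3, 4]
```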
Possessing nonvolatile digital memory. Many conventional computers
are able to store data in a stable electronic digital form that is practically loss-
less, does not degrade rapidly over time, can be copied to other devices or
media and backed up with full fidelity, and does not require a continuous
power supply in order to preserve the data.[76]
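The full-fidelity copying described here can be demonstrated directly: a byte-for-byte duplicate of digital data is verifiably identical to the original. The following sketch (our own illustration, using only the Python standard library) confirms this with a cryptographic hash comparison.

```python
# Digital data can be duplicated with perfect fidelity: the duplicate is
# bit-for-bit indistinguishable from the original.
import hashlib

original = bytes(range(256)) * 1000   # arbitrary 'stored' data
duplicate = bytes(original)           # duplication to another device or medium

assert duplicate == original          # every bit preserved
print(hashlib.sha256(duplicate).hexdigest()
      == hashlib.sha256(original).hexdigest())
# → True
```

This property underlies both routine backups and the 'functional cloning' of whole systems mentioned at the start of this section.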
Demonstrating predictable and analyzable behavior. Computerized
devices can be affected by a wide range of component failures and bugs re-
sulting from hardware or software defects or incompatibilities. However, be-
cause a typical computer is controlled by discrete linear executable code that
can be easily accessed – and because there exist diagnostic software, software
debugging techniques, established troubleshooting practices, and methods for
simulating a computer’s real-world behaviors in development and testing en-
vironments – it is generally easier to analyze and reliably predict the behavior
of a computer than that of, for example, a human being.[77]
Capable of being hacked electronically. Computerized systems are
vulnerable to a wide variety of electronic hacking techniques and other attacks
that can compromise the confidentiality, integrity, and availability of infor-
mation that is received, generated, stored, or transmitted by a system or can
result in unauthorized parties gaining complete control over the system.[78]
c. Social Interaction
An agent possessing computronic characteristics demonstrates a number
of noteworthy traits relating to social interaction. Such an agent is:
Nonsocial or semisocial. Conventional computers may display social be-
haviors and engage in short-term, isolated social interactions with human be-
ings or other computers, but they do not participate in long-term social rela-
tions that deepen and evolve over time as a result of their experience of such
engagement and which are shaped by society’s expectations for social roles to
be filled by the participants in such relations.[79]
[76] Regarding the creation, storage, and transfer of digital data files by computers and other elec-
tronic devices, see, e.g., Austerberry, Digital Asset Management (2013), and Coughlin, Digital
Storage in Consumer Electronics: The Essential Guide (2008).
[77] Even the behavior of sophisticated ‘artificially intelligent’ computerized systems can be easy
to predict and debug, if it is controlled by a conventional executable program rather than, e.g.,
the actions of a physical artificial neural network. For a discussion of different models for gen-
erating artificial intelligence through hardware and software platforms, see Friedenberg (2008),
pp. 27-36.
[78] For an overview of such possibilities (as well as related preventative practices and responses),
see Rao & Nayak (2014).
[79] Although there already exist telepresence robots (e.g., Ishiguro’s Geminoids) that manifest highly sophisticated, human-like levels of sociality, such sociality is technically possessed not by the robot itself but by the hybrid human-robotic system that it forms with its human operator. Regarding such issues, see Vinciarelli et al. (2012) and Gladden, “Managerial Robotics” (2014).

Lacking culture. Although a large number of computers can be linked to form networks that may constitute a form of computerized society, such aggregations of conventional computers do not create their own cultures.[80]

Lacking spirituality. Conventional computers do not search for a connection with some transcendental truth or reality in order to provide meaning or purpose to their existence; they do not engage in contemplation, meditation, or prayer.[81]

Apolitical. Conventional computers do not directly participate as members of human or artificial political systems. Some computerized systems (e.g., some swarm robots as components in multi-agent systems) participate in social interactions, and even social relations and group governance structures, but they do not generally create political systems of the sort common among human populations.[82]

An economic participant. Conventional computers typically do not function independently within the real-world human economy as autonomous economic actors, although they participate in the economy in many other ways. Computers do not own or exchange their own financial or other assets, nor do they purchase goods or services for their own consumption, although computers may serve as agents that initiate and execute transactions on behalf of human beings or organizations.[83]
[80] Regarding prerequisites for artificial entities or systems to produce their own culture (or col-
laborate with human beings in the production of a shared human-artificial culture), see, e.g.,
Payr & Trappl, “Agents across Cultures” (2003).
[81] Regarding elements that would need to be present in order for a computerized device to de-
velop its own spirituality (rather than to simply have some spiritual value attributed to it by
human beings), see, e.g., Geraci, “Spiritual robots: Religion and our scientific view of the natural
world” (2006); Nahin, “Religious Robots” (2014); Section 6.2.3.2 on “Religion for Robots” in Yam-
polskiy, Artificial Superintelligence: A Futuristic Approach (2015); and Kurzweil, The Age of Spir-
itual Machines: When Computers Exceed Human Intelligence (2000).
[82] Regarding ways in which advanced multi-agent systems (such as those found in swarm robot-
ics) might potentially implement patterns of social interaction and organization that resemble or
are explicitly based on human political behaviors and structures, see, e.g., McBurney & Parsons,
“Engineering democracy in open agent systems” (2003); Ferber et al., “From agents to organiza-
tions: an organizational view of multi-agent systems” (2004); and Sorbello et al., “Metaphor of
Politics: A Mechanism of Coalition Formation” (2004).
[83] For example, regarding the increasing sophistication of automated trading systems that are
capable of teaching themselves and improving their investment strategies over time, without
direct instruction from human beings, and the growing use of ‘robo-advisors’ to manage finan-
cial assets on behalf of human owners, see Scopino, “Do Automated Trading Systems Dream of
Manipulating the Price of Futures Contracts? Policing Markets for Improper Trading Practices
by Algorithmic Robots” (2015), and Sharf, “Can Robo-Advisors Survive A Bear Market?” (2015).
Property, not a legal person. A conventional computer is a piece of prop-
erty that is typically owned by a specific human being or organization; a com-
puter is not itself a legal person that possesses a recognized set of rights and
responsibilities.[84]
III. Using the Matrix to Analyze the Traditional Practice of
Organizational Management
Our two-dimensional Posthuman Management Matrix contains quadrants
that describe four types of entities that could potentially be participants in or
objects of the activities of organizations such as businesses and which – if they
exist – would need to be accounted for by management theory and practice.
As illustrated in Figure 1, these four potential types of entities are:
Human agents possessing anthropic characteristics, which we can
refer to as ‘natural human beings,’ insofar as they have not been sig-
nificantly enhanced or modified through the use of technologies such
as neuroprosthetics or genetic engineering.
Artificial agents possessing computronic characteristics, which
we can refer to simply as ‘computers.’ Such entities include conven-
tional desktop and laptop computers, mainframes, web servers, and
smartphones and other mobile devices whose software allows them
to exercise a limited degree of agency.
Human agents possessing computronic characteristics, which we
can refer to as ‘cyborgs.’ In the sense in which the term is employed
in this text, a cyborg is a human being whose body includes some ‘ar-
tificial components,’[85] however these components do not necessarily
need to be electromechanical in nature (as in the case of contemporary
neuroprosthetic devices); the artificial elements could be structures or
systems composed of biological material that are not typically found
in natural human beings and which are the result of genetic engineer-
ing.
Artificial agents possessing human characteristics, which we can refer to as ‘bioroids.’ Terms such as ‘android’ or ‘humanoid robot’ could potentially be employed to describe such entities; however, these terms are often used to imply that a robot has a human-like physical form, without necessarily possessing human-like psychology, cognitive capacities, or biological components. Similarly, the term ‘biorobot’ could be employed, but it is often used to refer to robots that mimic animals like insects or fish, whose physical form and cognitive capacities have little in common with those of human beings. We choose to employ the term ‘bioroid’ (whose origins lie primarily in the field of science fiction rather than engineering)[86] insofar as it evokes the image of an artificially engineered agent that possesses human-like cognitive capacities and psychology, biological or biologically inspired components, and a physical form that allows it to engage in human-like social behaviors and interactions but which is not necessarily humanoid.

[84] Stahl suggests that a kind of limited ‘quasi-responsibility’ can be attributed to conventional computers and computerized systems. In this model, it is a computer’s human designers, programmers, or operators who are typically responsible for the computer’s actions; declaring a particular computer to be ‘quasi-responsible’ for some action that it has performed serves as a sort of moral and legal placeholder, until the computer’s human designers, programmers, and operators can be identified and ultimate responsibility for the computer’s actions assigned to the appropriate human parties. See Stahl, “Responsible Computers? A Case for Ascribing Quasi-Responsibility to Computers Independent of Personhood or Agency” (2006).
[85] See Novaković et al. (2009).
[86] For uses of the term ‘bioroid’ in science fiction literature and roleplaying games, see, e.g., Pulver (1995), pp. 74-81, where ‘bioroid’ is used explicitly as a portmanteau derived from ‘biological android’; Surbrook, Kazei-5 (1998), pp. 64, 113; Pulver, Transhuman Space (2002), p. 12, where ‘bioroid’ refers to “living beings functionally similar to humans, but assembled using tissue engineering and ‘biogenesis’ nanotechnology, and educated using accelerated learning techniques”; Appleseed, directed by Aramaki (2010); Martinez, “Bodies of future memories: the Japanese body in science fiction anime” (2015); Litzsinger, Android: Netrunner (2012); and Duncan, “Mandatory Upgrades: The Evolving Mechanics and Theme of Android: Netrunner” (2014). For a reference to the fictional use of the term ‘bioroid’ in an engineering context, see Novaković et al. (2009).

Prior to the development of computers as a practical organizational technology in the 20th Century, it was historically only the lower left quadrant of the Posthuman Management Matrix that was of relevance to organizational managers. Indeed, not only were natural human beings as a practical matter the only available employees and customers, but they were also generally considered to be the only potential employees and customers with which the scholarly discipline of management would ever need to concern itself. The possibility that organizations might someday employ and serve entities that were not human agents possessing anthropic characteristics was not studied as a theoretical possibility; the theory and practice of management were concerned only with understanding and managing the activities of natural human beings. Within that context, fields such as economics, organizational psychology, and human resource management played key roles.

Eventually, with the development of increasingly sophisticated computers over the course of the 20th Century and up through the present day, management scholars and practitioners began to realize the need to expand the theoretical and practical scope of management to include new subdisciplines that could guide the creation, implementation, and management of artificial agents such as manufacturing robots or server farms controlled by load-balancing software.[87]
Because such artificial agents possessed structures, behaviors, and
organizational roles that were quite different from those of human agents, ex-
isting disciplines such as psychology and HR management did not provide
adequate or relevant tools for the oversight of such systems; instead, new
fields such as computer science, electronics engineering, robotics, and IT man-
agement began to aid organizational managers in designing, implementing,
and maintaining such systems that comprise artificial agents possessing com-
putronic characteristics. As a result of such developments, a second quadrant
of the Posthuman Management Matrix became not only relevant but critical
for the successful management of contemporary organizations.
Despite this experience in which a previously disregarded quadrant of the
Posthuman Management Matrix quickly assumed major theoretical and prac-
tical importance for organizations, the remaining two quadrants of the Matrix
have remained largely neglected within the field of organizational manage-
ment – as though there existed an implicit presumption that these areas define
sets that would continue to remain empty or that these quadrants would only
become relevant for organizational management at a date so far in the future
that it would be a misallocation of time and resources for management schol-
ars and practitioners to concern themselves with such possibilities now.
Figure 2 thus depicts the field of management as it largely exists today: a
field in which centuries-old management traditions relating to natural human
beings have recently been supplemented by new theory and practice that ad-
dress the rise of conventional computers – but in which the possibility and
organizational significance of cyborgs and bioroids remain, from a manage-
ment perspective, largely unexplored.[88]
We can now consider in more detail these four types of entities described
by the Posthuman Management Matrix as they have been understood by the
field of organizational management from its historical origins up to the pre-
sent day.
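The two axes of the Matrix make its four quadrants naturally representable as a small lookup structure. The following Python sketch is our own illustration (the dict-based representation and function names are assumptions, not from the text); the four labels follow the enumeration above.

```python
# The Posthuman Management Matrix as a lookup table: agent type (human vs.
# artificial) crossed with agent characteristics (anthropic vs. computronic).
MATRIX = {
    ("human", "anthropic"): "natural human being",
    ("artificial", "computronic"): "computer",
    ("human", "computronic"): "cyborg",
    ("artificial", "anthropic"): "bioroid",
}

def classify(agent_type: str, characteristics: str) -> str:
    """Return the Matrix quadrant for a given agent type and characteristics."""
    return MATRIX[(agent_type, characteristics)]

print(classify("human", "computronic"))  # → cyborg
```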
[87] The development of such disciplines and practices was spurred in part by the experience of
organizations that made large investments in IT systems in the 1980s, only to discover that
simply purchasing exotic new IT equipment would not, in itself, generate desired gains in
productivity unless such equipment were thoughtfully aligned with and integrated into an or-
ganization’s larger business plan, strategies, and processes. See Magoulas et al. (2012), p. 89, and
Hoogervorst (2004), p. 16.
[88] For some time, the design, implementation, and implications of human agents possessing com-
putronic characteristics and artificial agents possessing anthropic characteristics have been the
subject of intense research and contemplation across a broad range of fields, from computer
science and robotics to philosophy of mind and philosophy of technology, ethics, and science
fiction; here we are only noting that – notwithstanding the work of a small number of future-
oriented management scholars – the field of management has not yet taken up such topics as
subjects worthy of (or even demanding) serious consideration.
Fig. 2: The Posthuman Management Matrix displaying the two types of entities that have been
relevant in recent decades for the theory and practice of organizational management, along
with two types of entities that historically have not been considered relevant.
A. Human Agents with Anthropic Characteristics (‘Natural
Human Beings’)
The actions of natural human beings – and the knowledge of how to an-
ticipate and guide their activities – have formed the critical foundation upon
which all human organizations have historically been built. Even before the
dawn of artificial intelligence and the creation of the first artificial agents,
nonhuman agents such as domesticated farm animals have played a support-
ing role in the activities of some human organizations. However, the over-
whelming majority of roles within such organizations – including all of those
leadership and management roles requiring strategic thinking and long-term
planning, ethical and legal sensitivity, negotiation skills, risk management ap-
proaches, and the use of oral and written communication – have historically
been filled by human beings, who have always been (and been understood as)
human agents who possess anthropic characteristics. Human organizations
such as businesses have relied on such human beings as their CEOs and exec-
utives, midlevel managers, frontline employees, consultants, partners and sup-
pliers, competitors, and actual or potential customers and clients.
In order to plan, organize, lead, and control[89] the activities of such natural
human beings that are found both within and outside of organizations, a num-
ber of academic disciplines and practices have been developed over the last
century and more that can facilitate and support the management of organi-
zations. Such disciplines include HR management, marketing, and organiza-
tion development, along with other disciplines such as psychology, sociology,
economics, anthropology, cultural studies, and ergonomics that have broader
aims and applications but which can help inform organizational management.
B. Artificial Agents with Computronic Characteristics
(‘Computers’)
Over the last half-century, computers have taken on critical roles within
the lives of many organizations. Such agents comprise assembly-line robots
used for painting or welding, flexible manufacturing systems, automated se-
curity systems, and a broad range of software that possesses some degree of
artificial intelligence and runs as part of an operating system or application
on servers, desktop computers, mobile devices, and other computerized equipment. Such artificial agents may schedule tasks and optimize the use of physical and electronic resources;[90] transport materials within production facilities;[91] assemble components to produce finished products;[92] interact directly with customers on automated customer-service phone lines, through online chat interfaces, and at physical kiosks to initiate and perform transactions and offer information and support;[93] monitor systems and facilities to detect physical or electronic intrusion attempts;[94] initiate and execute financial transactions within online markets;[95] and carry out data mining in order to evaluate an applicant’s credit risk, identify suspected fraud, and decide what personalized offers and advertisements to display to a website’s visitors.[96] In order to manage the activities of artificial agents possessing computronic characteristics, one can draw on insights from a number of disciplines and practices that have been developed over the last few decades, including computer science, electronics engineering, robotics, and IT management.

While human beings still play key roles as leaders, strategists, and managers within organizations, in many cases they are no longer capable of carrying out their work without the engagement and support of the artificial agents that permeate an organization’s structures, processes, and systems in so many ways.[97]

[89] Planning, organizing, leading, and controlling are recognized as the four key functions that must be performed by managers. See Daft, Management (2011).
[90] For an overview of methods that can be employed for such purposes, see Pinedo, Scheduling: Theory, Algorithms, and Systems (2012). For more specific discussions of the use of artificial agents (and especially multi-agent systems) for such ends, see, e.g., Ponsteen & Kusters, “Classification of Human and Automated Resource Allocation Approaches in Multi-Project Management” (2015); Merdan et al., “Workflow scheduling using multi-agent systems in a dynamically changing environment” (2013); and Xu et al., “A Distributed Multi-Agent Framework for Shared Resources Scheduling” (2012).
[91] See, e.g., Ullrich, Automated Guided Vehicle Systems: A Primer with Practical Applications (2015), and The Future of Automated Freight Transport: Concepts, Design and Implementation, edited by Priemus & Nijkamp (2005).
[92] See, e.g., Agent-Based Manufacturing: Advances in the Holonic Approach, edited by Deen (2003); Intelligent Production Machines and Systems, edited by Pham et al. (2006); and Industrial Applications of Holonic and Multi-Agent Systems, edited by Mařík et al. (2015).
[93] See, e.g., Ford, Rise of the Robots: Technology and the Threat of a Jobless Future (2015), and McIndoe, “Health Kiosk Technologies” (2010).
[94] Regarding the automation of intrusion detection and prevention systems, see Rao & Nayak (2014), pp. 226, 235, 238.
[95] See Philips, “How the Robots Lost: High-Frequency Trading’s Rise and Fall” (2012); Scopino (2015); and Sharf (2015).
[96] Giudici, Applied Data Mining: Statistical Methods for Business and Industry (2003); Provost & Fawcett, Data Science for Business (2013), p. 7; and Warkentin et al., “The Role of Intelligent Agents and Data Mining in Electronic Partnership Management” (2012), p. 13282.
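The first role mentioned above – agents that schedule tasks across shared resources – can be sketched with a simple greedy heuristic. This is purely illustrative and is not drawn from the works cited: the job names, durations, and the rule of assigning each task to the machine that becomes free earliest are all our own assumptions.

```python
# Toy sketch of automated task scheduling: each task is assigned greedily
# to the machine that becomes free earliest.
import heapq

def schedule(tasks, machine_count):
    """Return (task, machine, start_time) tuples for a greedy assignment."""
    heap = [(0, m) for m in range(machine_count)]  # (time machine is free, id)
    heapq.heapify(heap)
    assignment = []
    for name, duration in tasks:
        free_at, machine = heapq.heappop(heap)   # earliest-available machine
        assignment.append((name, machine, free_at))
        heapq.heappush(heap, (free_at + duration, machine))
    return assignment

jobs = [("weld", 4), ("paint", 2), ("inspect", 1), ("pack", 3)]
print(schedule(jobs, 2))
# → [('weld', 0, 0), ('paint', 1, 0), ('inspect', 1, 2), ('pack', 1, 3)]
```

Real systems of the kind cited above use far richer models (priorities, deadlines, negotiation among agents), but the core loop of matching work to available resources has this shape.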
For many organizations, the sudden disabling or loss of such artificial
agents would be devastating, as the organizations have become dependent on
artificial agent technologies to perform critical tasks that cannot be performed
by human beings with the same degree of speed, efficiency, or power.
C. Human Agents with Computronic Characteristics
(‘Cyborgs’)
Historically, all human beings have been human agents that possess an-
thropic characteristics. From the perspective of organizational management,
the set of human agents possessing computronic characteristics has been seen
as empty; such beings are not yet understood to widely exist, and it is pre-
sumed that there is no special need to take them into account as potential
employees, partners, or clients when considering a business’s short-term ob-
jectives and operations. Although emerging posthumanizing technologies are
beginning to create cases of human agents who indeed possess limited com-
putronic characteristics, the number, nature, and scope of such cases of the
‘cyborgization’ of human agents is still relatively small, and from the mana-
gerial perspective most organizations have been able to simply ignore such
cases, as though the category of the cyborg were not yet applicable or relevant
to their organizational mission and objectives.[98] Because human agents pos-
sessing extensive computronic characteristics do not yet exist as a large pop-
ulation of beings who can serve as employees, partners, or customers for or-
ganizations, it is not surprising that organizations do not yet possess special-
ized practices or academic disciplines that they can rely on to aid them in the
management of such entities.
[97] Within the ‘congruence model’ of organizational architecture developed by Nadler and Tushman, structures, processes, and systems constitute the three main elements of an organization that must be considered. See Nadler & Tushman, Competing by Design: The Power of Organizational Architecture (1997), p. 47, and the discussion of these elements within a posthumanized organizational context in Part Two of this volume, on “Organizational Posthumanism.”
[98] Fleischmann argues, for example, that within human society there is an inexorable trend that will eventually result in full cyborg-cyborg interaction in the form of social relations among beings who are human-electronic hybrids – human beings whose biological organism possesses extensive and intimate internal interfaces with neuroprosthetic devices. Current phenomena like the widespread interaction of human beings who are dependent on (and interact through) mobile devices such as smartphones are one step along that trajectory. See Fleischmann, “Sociotechnical Interaction and Cyborg–Cyborg Interaction: Transforming the Scale and Convergence of HCI” (2009).

D. Artificial Agents with Anthropic Characteristics (‘Bioroids’)
The artificial agents that have been broadly deployed and which are relevant for organizational management are generally artificial agents possessing computronic characteristics. While scientists and engineers are making great strides toward developing artificial agents that possess anthropic characteristics, at present such systems are experimental and exist largely in laboratory settings.[99]
As a practical matter, within most organizations the category of
bioroids is still treated as though it were an empty set; organizations have
generally not seen the need to consider such entities when planning their ob-
jectives and operations. As with the cyborgs described above, because bioroids
have historically not existed as potential employees, partners, or customers
for organizations, it is unsurprising that organizations do not yet have spe-
cialized disciplines that they can rely on to aid them in managing such entities.
IV. Using the Matrix to Predict and Shape the Future
Practice of Organizational Management
In the sections above, we have considered the situation that has existed up
to now – with organizations’ sole agents being natural human beings and
computers. We can now explore the ways in which the situation is rapidly
changing due to the emergence of new posthumanizing technologies.
A. The Converging Characteristics of Human and Artificial
Agents in the Posthuman Age
Below we review once more the set of variables that define an agent’s char-
acteristics and, for each of the characteristics, discuss ways in which the ad-
vent of various posthumanizing technologies will result in a growing variety
of cyborgs and bioroids. Studies of these two types of entities are emerging as new fields; ongoing innovation in these areas will expand the kinds of workers, partners, and consumers available to organizations and is expected to make such entities crucial loci for management theory and practice in the coming years. We can consider in turn the physical form, intelligence,
and social interaction that will be demonstrated by such new types of human
and artificial agents.
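As an aid to organizing this review, the characteristics discussed below can be imagined as fields in a record that an organization might maintain for each of its agents, whether human or artificial. The following sketch is purely illustrative: the field names paraphrase the subsection headings that follow, and the example values and Python representation are our own assumptions rather than anything specified by the Matrix itself.

```python
from dataclasses import dataclass

@dataclass
class AgentProfile:
    """Illustrative record of the characteristics reviewed below,
    applicable to any agent (natural human, cyborg, bioroid, or artificial)."""
    # Physical form
    components: str            # e.g., "biological", "electronic", "hybrid"
    alive: bool                # animate in a biological sense
    designed: bool             # substrate intentionally engineered?
    upgradeable: bool          # components expandable after maturity?
    lifespan: str              # e.g., "limited", "extendable", "indefinite"
    multilocal: bool           # present in multiple environments at once?
    substrate_permanent: bool  # agency tied to one physical substrate?
    # Intelligence
    sapient: bool              # conscious self-awareness (or treated as such)
    autonomy: str              # e.g., "task-level", "general"
    volitionality: str         # "nonvolitional", "volitional", "metavolitional"
    trainable: bool            # must learn, rather than simply be programmed

# Two hypothetical entries:
natural_human = AgentProfile("biological", True, False, False, "limited",
                             False, True, True, "general", "metavolitional", True)
bioroid = AgentProfile("hybrid", True, True, True, "extendable",
                       False, True, True, "general", "metavolitional", True)
```

Such a record makes it easy to see where the profiles of posthumanized human and artificial agents converge: in this toy example, the hypothetical bioroid differs from the natural human only in the fields relating to physical form.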
1. PHYSICAL FORM
The range of physical forms available to human and artificial agents is ex-
pected to evolve and expand significantly. Such changes will be visible in the
manner in which a number of key characteristics are expressed (or not ex-
pressed); these characteristics are described below.
[99] See Friedenberg (2008) for an in-depth review of efforts to develop robots and other artificial
beings that possess human-like perception, learning, memory, thought, language use, intelli-
gence, creativity, motivation, emotions, decision-making capacities and free will, consciousness,
biological structures and processes, and social behaviors.
a. Components
It is anticipated that the bodies of human agents will increasingly include
electronic components in the form of artificial organs, artificial limbs and exoskeletons, artificial sense organs, memory implants, and other kinds of neuroprosthetic devices;[100] the major obstacle to the expansion of such technology
may be the fact that the natural biological brain (or at least, significant por-
tions of the brain) of a human being will need to remain intact and functional
in order for an agent to be considered ‘human.’
Conversely, expected developments in genetic engineering technologies,
soft robotics, and artificial life will increasingly allow the bodies of artificial
agents to include components formed from biological material.[101] In cases that
involve extensive engineering and modification of the genome (and especially
in ‘second-generation’ entities that are the result of natural reproductive pro-
cesses between biological parents rather than cloning or other direct engineer-
ing), it may be difficult conceptually and practically to specify whether an en-
tity is an ‘artificial agent’ composed entirely of biological components or a
‘human agent’ whose biological substrate has been intentionally designed.
The legal, ethical, ontological, and even theological questions involved with
such potential practices are serious and wide-ranging.
b. Animation
Currently, only those human beings that are alive are capable of serving as
employees or customers of an organization. Techniques such as ‘mind upload-
ing’ and the development of artificial neurons that can replace or replicate the
actions of neurons in the brain of a living human being may someday allow
human agents that are no longer ‘alive’ in a biological sense to have their
unique memories, knowledge, cognitive patterns, and social relations utilized
by agents that function as employees, partners, or customers for organiza-
tions. The extent to which such nonbiological human agents can be identified
with the biological human beings from whom they are derived depends on
issues that are philosophically controversial and complex.[102]
[100] See Gasson, “ICT implants” (2008); Gasson et al., “Human ICT Implants: From Invasive to
Pervasive” (2012); McGee (2008); Merkel et al., “Central Neural Prostheses” (2007); Gladden, The
Handbook of Information Security for Advanced Neuroprosthetics (2015), pp. 32-33; and Gladden,
“Cybershells, Shapeshifting, and Neuroprosthetics” (2015).
[101] See Berner (2004), pp. 15, 18, 31, 61-62. For a discussion of the possibilities of using DNA as a
mechanism for the storage or processing of data, see Church et al. (2012) and Friedenberg (2008),
p. 244.
[102] See Koene (2012); Proudfoot (2012); Pearce, “The Biointelligence Explosion” (2012); Hanson,
“If uploads come first: The crack of a future dawn” (1994); Moravec, Mind Children: The Future
of Robot and Human Intelligence (1990); Ferrando (2013), p. 27; and Gladden, The Handbook of
Information Security for Advanced Neuroprosthetics (2015), pp. 98-100, for a discussion of such
issues from various perspectives.
Meanwhile, the development of biological components for use in robots
and other artificial agents and ongoing advances in the development of non-
biological artificial life (e.g., autonomous evolvable computer worms or vi-
ruses that satisfy standard scientific definitions of life-forms) can result in ar-
tificial agents that are considered to be alive, insofar as they constitute a viable
system that demonstrates a physical metabolism, the ability to maintain home-
ostasis, reproduction, reaction and adaptation to the environment, and other
key characteristics.[103]
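The life-criteria just enumerated can be read as a simple checklist. A minimal sketch in Python follows; the criterion names are taken from the text, while the checklist mechanism and the example agents are our own illustration:

```python
# Criteria under which an artificial agent might be counted as 'alive',
# following the characteristics named above: metabolism, homeostasis,
# reproduction, and reaction and adaptation to the environment.
LIFE_CRITERIA = ("metabolism", "homeostasis", "reproduction",
                 "reaction", "adaptation")

def is_viable_life_form(traits):
    """Return True only if the agent demonstrates every listed criterion."""
    return all(traits.get(criterion, False) for criterion in LIFE_CRITERIA)

# A hypothetical evolvable computer worm satisfying all criteria:
evolvable_worm = {c: True for c in LIFE_CRITERIA}
# A conventional industrial robot that merely reacts to its environment:
industrial_robot = {"reaction": True}
```

Here `is_viable_life_form(evolvable_worm)` returns `True` while `is_viable_life_form(industrial_robot)` returns `False`; a real assessment would of course require operationalizing each criterion rather than treating it as a given Boolean.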
c. Design
The growing possibilities for genetic engineering, gene therapy, and the
augmentation of human agents through the implantation of neuroprosthetic
devices or other synthetic components mean that the body possessed by a
human agent will no longer necessarily be a natural substrate that is produced
through the randomized inheritance of genetic material from biological parents and that is free from intentional design by institutions or individual human engineers.[104] Besides the major moral and legal questions raised by such
possibilities, there are also operational issues that would confront organiza-
tions whose pool of potential employees or customers includes human agents
who have been designed in such ways; for example, forms of genetic engi-
neering that create synthetic characteristics shared broadly across a popula-
tion and which reduce genotypic diversity may render the population more
vulnerable to biological or electronic hacking attempts (and may make such
attempts more profitable and attractive for would-be adversaries), although
such standardization may also make it easier for effective anti-hacking security mechanisms to be developed and deployed across the population.[105]
At the same time, artificial agents may no longer be products of explicit
design and engineering by human manufacturers. Some artificial life-forms
that exist within the digital-physical ecosystem primarily as physical robots
possessing some degree of AI or as digital life-forms that temporarily occupy
[103] See the discussion of essential elements of artificial life in Friedenberg (2008), pp. 201-03,
which is based on the criteria for biological life presented by Curtis (1983). See also Gladden,
“The Artificial Life-Form as Entrepreneur: Synthetic Organism-Enterprises and the Reconceptu-
alization of Business” (2014).
[104] For different perspectives on such possibilities, see, e.g., De Melo-Martín (2015); Regalado,
“Engineering the perfect baby” (2015); Lilley, Transhumanism and Society: The Social Debate over
Human Enhancement (2013); Nouvel (2015); Section B (“Enhancement”) in The Future of Bioethics:
International Dialogues, edited by Akira Akabayashi (2014); Mehlman, Transhumanist Dreams
and Dystopian Nightmares: The Promise and Peril of Genetic Engineering (2012); and Bostrom
(2012).
[105] For the relationship between the heterogeneity of information systems and their information
security, see Gladden, The Handbook of Information Security for Advanced Neuroprosthetics (2015),
p. 296, and NIST SP 800-53 (2013), p. F-204.
physical substrates may manifest structures and behaviors that are the result
of randomized evolutionary processes that lie beyond the control of human
designers or which are the result of intentional design efforts conducted by
other artificial agents whose nature is such that they are inscrutable to human
understanding – in which case, from the human perspective, the engineered
agents would essentially lack a comprehensible design.[106] In other cases, hu-
man designers may have intentionally engineered an artificial agent’s basic
structures (such as a physical neural network), but the exact nature of the be-
haviors and other traits eventually developed and demonstrated by those
structures may lie beyond the reach of human engineering.[107]
d. Upgradeability
The growing use of technologies for somatic cell gene therapy and neuro-
prosthetic augmentation may increasingly allow the physical components and
cognitive capacities of human agents to be upgraded and expanded even after
the agents have reached a stage of physical and cognitive maturity.[108]
Conversely, it may be difficult or impossible to upgrade, expand, or replace
the physical components of artificial agents that are composed of biological
material in the way that components of an electronic computer can be up-
graded. In the case of especially complex or fragile artificial agents, efforts to
upgrade or otherwise modify an agent’s physical components after its creation
may result in the impairment or death of such biological material or of the
agent as a whole. Similarly, after an artificial agent that possesses a holonomic
physical neural network has been created and achieved intellectual maturity
through experience and learning, it may not be possible to intervene directly
in the neural network’s physical structure or processes to upgrade its capacities or edit its contents without irreparably harming the agent.[109]
e. Lifespan
A human agent whose bodily components can be easily replaced with bio-
logical or electronic substitutes after deteriorating or becoming damaged or
whose components can be (re)engineered to prevent them from undergoing
[106] Regarding evolutionary robotics and evolvable robot hardware, see Friedenberg (2008), pp.
206-10.
[107] Regarding the relationship of artificial life and evolutionary robotics, see Friedenberg (2008),
pp. 201-16.
[108] See, e.g., Panno, Gene Therapy: Treating Disease by Repairing Genes (2005); Gene Therapy of
the Central Nervous System: From Bench to Bedside, edited by Kaplitt & During (2006); and
Bostrom (2012).
[109] Regarding the potentially holonomic nature of memory storage within the brain, see, e.g.,
Longuet-Higgins (1968); Pribram (1990); Pribram & Meade (1999); and Gladden, The Handbook
of Information Security for Advanced Neuroprosthetics (2015), pp. 200-01.
damage or deterioration in the first place could potentially experience an ex-
tended or even indefinite lifespan, although such engineering might result in
side-effects that are detrimental to the agent and which would render such
lifespan extension undesirable as a practical matter.[110] As in other cases, the
moral and legal questions involved with such activities are serious.
At the same time, artificial agents whose bodies include or comprise bio-
logical components or whose cognitive processes follow an irreversible devel-
opmental cycle (e.g., in which the neural network of an agent’s ‘brain’ pos-
sesses a maximum amount of information that it can accumulate over the
course of the agent’s lifespan) might possess a limited and predetermined
lifespan that cannot be extended after the agent’s creation.[111]
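Wiener’s conjecture about such irreversible capacity exhaustion (discussed in note 111) can be illustrated with a toy simulation, assuming a fixed pool of neurons that store each memory by raising the firing thresholds of a recruited subset; all parameters here are arbitrary illustrative choices, not values drawn from any actual neural model:

```python
import random

class ToyNeuralSubstrate:
    """Toy model: a fixed set of neurons stores each new memory by raising
    the firing thresholds of a randomly recruited subset. Once a neuron's
    threshold reaches a ceiling it can no longer be recruited, so storage
    capacity is finite and the process is irreversible."""

    def __init__(self, n_neurons=100, ceiling=5, neurons_per_memory=10, seed=1):
        self.rng = random.Random(seed)
        self.thresholds = [0] * n_neurons
        self.ceiling = ceiling
        self.k = neurons_per_memory
        self.memories_stored = 0

    def recruitable(self):
        return [i for i, t in enumerate(self.thresholds) if t < self.ceiling]

    def store_memory(self):
        candidates = self.recruitable()
        if len(candidates) < self.k:
            return False  # 'senescence': too few recruitable neurons remain
        for i in self.rng.sample(candidates, self.k):
            self.thresholds[i] += 1
        self.memories_stored += 1
        return True

substrate = ToyNeuralSubstrate()
while substrate.store_memory():
    pass
# At most (100 neurons x 5 threshold steps) / 10 neurons per memory
# = 50 memories can ever be stored, after which storage fails permanently.
```

The point of the sketch is only that a substrate which stores information through irreversible threshold changes has a predetermined lifespan for memory formation that no later intervention can extend.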
f. Operational Cycle
Genetic engineering could potentially speed the natural biological pro-
cesses that contribute to physical growth and cognitive development or slow
or block processes of physical and cognitive decline. Scholars also envision
the possibility of neuroprosthetic technologies being used to allow human be-
ings to instantly acquire new knowledge or skills through the implantation of
memory chips or the downloading of files into one’s brain; if feasible, this
could allow human cognitive capacities to be instantaneously upgraded in a
manner similar to that of installing new software on a computer, thereby bypassing typical human processes of cognitive development and learning.[112]
At the same time, the integration into artificial agents of biological compo-
nents and physical neural networks whose structure and behavior render
them difficult to control externally after their deployment means that it may
become impossible to simply ‘reset’ artificial agents and restore them to an
earlier physical and informational state.[113]
[110] Regarding issues with technologically facilitated life extension or the replacement of a human
being’s original biological body, see Proudfoot (2012); Pearce (2012); Hanson (1994); and Gladden,
“‘Upgrading’ the Human Entity: Cyberization as a Path to Posthuman Utopia or Digital Annihi-
lation?” (2015).
[111] As early as the 1940s, Wiener speculated that a physical neural network that is incapable of
adding new neurons or creating new synapses but which instead stores memories through in-
creases to the input threshold that triggers the firing of existing neurons may display an irre-
versible process of creating memories through which its finite available storage capacity is grad-
ually exhausted, after which point a sort of senescence occurs that degrades the neural network’s
functioning and disrupts the formation of new memories. See Wiener (1961), loc. 2467ff.
[112] See, e.g., McGee (2008).
[113] Regarding the difficulty of detecting and understanding the current state of an artificially
intelligent system (let alone restoring it to a previous state), especially that of a distributed arti-
ficial intelligence (DAI) displaying emergent behavior, see Friedenberg (2008), pp. 31-32.
g. Locality
The use of neuroprosthetic devices and virtual reality technologies may
effectively allow a human agent to occupy different and multiple bodies that
are either physical or virtual and are potentially of a radically nonhuman nature.[114] In this way, a human agent could be extremely multilocal by being
present in many different environments simultaneously.[115]
At the same time, an artificial agent whose cognitive processes are tied to
a single body comprising biological components or a single physical artificial
neural network that possesses limited sensorimotor and I/O mechanisms may
be confined to exercising its agency within the location in which that cognitive substrate is located.[116]
h. Permanence of Substrate
Historically, a particular human agent has been tied to a particular physical
substrate or body; the dissolution of that body entails the end of that human
being’s ability to act as an agent within the environment. Ontologically and
ethically controversial practices such as the development of artificial neurons
to replace the natural biological neurons of a human brain and mind uploading
may allow a single human agent’s agency to exist and act beyond the physical
confines of the agent’s original biological physical substrate – but only under
certain definitions of ‘agent’ and ‘agency’ that remain strongly contested.[117]
Similarly, the use of genetic engineering or neuroprosthetically mediated cy-
bernetic networks to create hive minds or other forms of collective agency
involving human agents might allow such multi-agent systems or ‘super-agents’ to survive and function despite a continual addition and loss of biological substrates, which means that the entity’s substrate at one moment in
time shares no components in common with its substrate at a later point in
time.
[114] Gladden, “Cybershells, Shapeshifting, and Neuroprosthetics” (2015).
[115] See Gladden, “The Diffuse Intelligent Other” (2016) for a discussion of multilocality.
[116] Regarding different fundamental architectures for the design of artificially intelligent systems
– from a CPU-based Von Neumann architecture and software-based artificial neural network to
models utilizing grid computing and distributed AI – see Friedenberg (2008), pp. 27-32. Regarding
the extent to which a human-like AI may necessarily be tied to a single body that interacts with
a particular environment, see Friedenberg (2008), pp. 32-33, and the literature on embodied em-
bedded cognition – e.g., Wilson, “Six views of embodied cognition” (2002); Anderson, “Embodied
cognition: A field guide” (2003); Sloman, “Some Requirements for Human-like Robots: Why the
recent over-emphasis on embodiment has held up progress” (2009); and Garg, “Embodied Cog-
nition, Human Computer Interaction, and Application Areas” (2012).
[117] Regarding such issues, see Koene (2012); Proudfoot (2012); Pearce (2012); Hanson (1994); Mo-
ravec (1990); and Gladden, The Handbook of Information Security for Advanced Neuroprosthetics
(2015), pp. 99-100.
Just as certain posthumanizing technologies might – according to their
proponents – free human agency from its historic link to a particular biologi-
cal body, other technologies might increasingly bind artificial agency to a par-
ticular permanent physical substrate. For example, an artificial agent whose
cognitive processes are executed by biological components or a physical arti-
ficial neural network and whose memories and knowledge are stored within
such components may not be capable of exchanging its body or migrating to
a new substrate without losing its agency.[118]
i. Identity
If a human agent’s agency is no longer irrevocably tied to a particular bio-
logical body, it may become difficult or impossible to attribute actions to a
specific human agent or even to identify which human agent is occupying and
utilizing a particular physical body in a given moment – since a single elec-
tronic sensor or actuator could simultaneously belong to the bodies of multi-
ple human agents. The ability of neuroprosthetically mediated cybernetic net-
works to create hive minds and other forms of collective consciousness among
human and artificial agents may also make it difficult to identify which human
agent, if any, is present in a particular physical or virtual environment and is
carrying out the behaviors observed there.[119]
Conversely, if an artificial agent is tied to a particular physical body (e.g.,
because the agent’s cognitive processes cannot be extracted or separated from
the biological components or physical artificial neural network that execute
them), this may provide it with a uniqueness and identity similar to that historically enjoyed by individual human beings.[120] On the other hand, an artifi-
cial agent that possesses a spatially dispersed or nonlocalizable body may pos-
sess even less of a clear identity than is possessed today by conventional hard-
ware-software computing platforms.
2. INTELLIGENCE
The range of information-processing mechanisms and behaviors available
to human and artificial agents is expected to evolve significantly as a result of
[118] It is not yet clear, for example, whether an artificial intelligence possessing human-like levels
of intelligence could potentially exist in the form of a computer worm or virus that can move or
copy itself from computer to computer, or whether the nature of human-like intelligence renders
such a scenario theoretically impossible. Regarding the significance of a body for artificial intel-
ligence, see, e.g., Friedenberg (2008), pp. 32-33, 179-234.
[119] Regarding such issues, see Gladden, “Utopias and Dystopias as Cybernetic Information Sys-
tems: Envisioning the Posthuman Neuropolity” (2015), and Gladden, “‘Upgrading’ the Human
Entity” (2015).
[120] For an overview of issues of personal identity from a philosophical perspective, see Olson
(2015). For an exploration of questions of physicality and identity in robots, see Friedenberg
(2008), pp. 179-234.
posthumanizing technological and social change. Such changes will be ex-
pressed through the possession (or lack) of a number of key characteristics,
which are described below.
a. Sapience
By interfering with or altering the biological mechanisms that support con-
sciousness and self-awareness within the brain, neuroprosthetic devices could
deprive particular human agents of sapience, even if those agents outwardly
appear to remain fully functional as human beings; for example, a human
agent might retain its ability to engage in social interactions with longtime
friends – not because the agent’s mind is conscious and aware of such interactions, but because a sufficiently sophisticated artificially intelligent neuroprosthetic device is orchestrating the agent’s sensorimotor activity.[121] Genetic
engineering could also potentially be employed in an attempt to create human
agents that lack sapience (and could be subject to claims by their producers
that they should be considered property rather than legal persons and moral
agents) or human agents whose transhuman sapience is of such an unusual
and ‘advanced’ sort that it is unfathomable – and perhaps even undetectable
– to natural human beings.[122]
Much research from a philosophical and engineering perspective has been
dedicated to considering whether sufficiently sophisticated artificial agents
might be capable of achieving sapience and possessing self-awareness and a
subjective conscious experience of reality. Controversy surrounds not only
the theoretical questions of whether artificial agents can potentially possess
sapience (and, if so, what types of artificial agents) but also the practical ques-
tion of how outside observers might determine whether a particular artificial
agent possesses conscious self-awareness or simply simulates the possession
of such self-awareness.[123] Regardless of how these questions are answered by
philosophers, theologians, scientists, engineers, and legislators, emerging pop-
ular conceptions of artificial agents and their potential for sapience may re-
quire organizations to treat certain kinds of artificial agents as though they
[121] See Gladden, “‘Upgrading’ the Human Entity” (2015).
[122] See Abrams, “Pragmatism, Artificial Intelligence, and Posthuman Bioethics: Shusterman,
Rorty, Foucault” (2004); McGee (2008), pp. 214-16; Warwick, “The cyborg revolution” (2014), p.
271; Rubin, “What Is the Good of Transhumanism?” (2008); and Gladden, The Handbook of Infor-
mation Security for Advanced Neuroprosthetics (2015), pp. 166-67.
[123] On the possibility that efforts to ascertain the levels of intelligence or consciousness of arti-
ficial entities might be distorted by human beings’ anthropomorphizing biases, see Yampolskiy
& Fox, “Artificial General Intelligence and the Human Mental Model” (2012), pp. 130-31. On the
distinction between intelligence, consciousness, and personhood in such a context, see, e.g.,
Proudfoot (2012), pp. 375-76. For a broader discussion of such issues, see, e.g., The Turing Test:
The Elusive Standard of Artificial Intelligence, edited by Moor (2003).
possessed a degree of sapience comparable, if not identical, to that possessed
by human beings.
b. Autonomy
Some kinds of neuroprosthetic devices or genetic modification may
weaken the desires or strategic planning capacities of human agents or subject
them to the control of external agents, thereby reducing their autonomy. New
kinds of social network topologies that link the minds of human agents to
create hive minds or other forms of merged consciousness can also reduce the
autonomy of the individual members of such networks.[124] Neuroprosthetic
augmentation, genetic modification, and other uses of posthumanizing technology that renders human agents dependent on corporations or other organizations for ongoing hardware or software upgrades or medical support similarly reduce the autonomy of those agents.[125] On the other hand, technologies
that allow human agents to survive and operate in hostile environments or to
reduce or repair physical damage to their bodies would enhance such agents’
autonomy.
The development of synthetic systems that possess human-like levels of
artificial general intelligence would result in the appearance of artificial
agents that not only function autonomously in carrying out the specific tasks they are expected to perform but that also function autonomously at a more general level in deciding their own aims, aspirations, and strategies.[126] The development of robots that can obtain energy from their en-
vironment, for example, by consuming the same kinds of foods that are edible
for human beings[127] or which possess biological components that can heal
wounds that they have suffered will also result in artificial agents with in-
creased autonomy.
c. Volitionality
Researchers have already observed ways in which certain kinds of neuro-
prosthetic devices and medications can affect their human host’s capacity to
possess desires, knowledge, and belief;[128] insofar as technologies disrupt or
[124] See Gladden, “Utopias and Dystopias as Cybernetic Information Systems” (2015).
[125] See Gladden, “Neural Implants as Gateways to Digital-Physical Ecosystems and Posthuman
Socioeconomic Interaction” (2016).
[126] See, e.g., Yampolskiy & Fox (2012).
[127] See, e.g., the discussion of artificial digestive systems in Friedenberg (2008), pp. 214-15.
[128] Regarding the possibility of developing neuroprosthetics that affect emotions and perceptions
of personal identity and authenticity, see Soussou & Berger, “Cognitive and Emotional Neuro-
prostheses” (2008); Hatfield et al., “Brain Processes and Neurofeedback for Performance En-
hancement of Precision Motor Behavior” (2009); Kraemer, “Me, Myself and My Brain Implant:
Deep Brain Stimulation Raises Questions of Personal Authenticity and Alienation” (2011); Van
control such abilities, they may impair their human host’s exercise of his or
her conscience, which depends on the possession of these capacities. This may
result in the existence of human agents that are no longer fully metavolitional
but instead merely volitional or nonvolitional.[129] The use of neuroprosthetics,
virtual reality, and other technologies to create hive minds and other forms of
collective consciousness among human agents may also impair the volitional-
ity of human agents participating in such systems and reduce them to a state
that is less than metavolitional; each agent may no longer possess its own
individual conscience but instead help to form (and be guided by) the con-
science of the multi-agent system as a whole.
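The levels of volitionality invoked here (see note 129) can be encoded as a simple ordered scale. The following sketch is our own simplification, in which the possession of second-order desires (desires about one's own desires) is taken as the marker of metavolitionality:

```python
from enum import IntEnum

class Volitionality(IntEnum):
    """Ordered levels of volitionality (a simplification for illustration)."""
    NONVOLITIONAL = 0   # no desires of its own
    VOLITIONAL = 1      # first-order desires that drive its behavior
    METAVOLITIONAL = 2  # second-order desires about its own desires

def classify(first_order_desires, second_order_desires):
    """Classify an agent by the highest order of desire it possesses."""
    if second_order_desires:
        return Volitionality.METAVOLITIONAL
    if first_order_desires:
        return Volitionality.VOLITIONAL
    return Volitionality.NONVOLITIONAL
```

On this scale, a technology that strips an agent of its second-order desires, or absorbs them into a collective conscience, demotes the agent from METAVOLITIONAL to merely VOLITIONAL.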
Meanwhile, advances toward the development of human-like artificial gen-
eral intelligence point at the eventual creation of artificial agents that possess
a capacity for knowledge, belief, personal desires, and self-reflexive thought –
in short, the components necessary for an entity to be metavolitional and to
possess a conscience.[130] The existence of conscience within artificial agents
would have significant ramifications for the ways in which such agents could
possibly be employed by organizations. Organizations that have metavoli-
tional artificial agents as employees or customers could motivate them to act
in certain ways by appealing to their conscience – to their sense of morality,
justice, mercy, and the common good. At the same time, metavolitional artifi-
cial agents serving as employees within organizations could not be expected
to automatically carry out instructions that have been given to them without
first weighing them against the demands of their conscience. In the case of
metavolitional artificial agents serving in roles that have a critical impact on
human safety (e.g., robots serving as soldiers, police officers, surgeons, or the
pilots of passenger vehicles) this could have positive or negative consequences.[131] For example, a robotic police officer who had been given an illegal
and immoral command by its corrupt human supervisor to conceal evidence
might decide to ignore that command as a result of its conscience; on the other
hand, a robotic soldier could be manipulated by skilled ‘conscience hackers’
belonging to an opposing army who present the robot with fabricated evi-
(note 128, continued) den Berg, “Pieces of Me: On Identity and Information and Communications Technology Im-
plants” (2012); McGee (2008), p. 217; and Gladden, The Handbook of Information Security for Ad-
vanced Neuroprosthetics (2015), pp. 26-27.
[129] For a discussion of different levels of volitionality, see Gladden, “The Diffuse Intelligent
Other” (2016).
[130] See Calverley (2008) and Gladden, “The Diffuse Intelligent Other” (2016), for an explanation
of the relationship of various cognitive capacities to the possession of second-order volitions (or
metavolitions) on the part of artificially intelligent entities.
[131] Regarding the moral and practical implications of the possession of a conscience by artificial
agents such as robots, see Wallach & Allen (2008).
dence of atrocities that appeal to known weaknesses or bugs within the ro-
bot’s metavolitional mechanisms and which persuade the robot to desert its
post and join that opposing army.
d. Knowledge Acquisition
The use of genetic engineering to alter the basic cognitive structures and
processes of human agents and, especially, the use of neuroprosthetic devices
to monitor, control, or bypass the natural cognitive activity of a human agent
may result in agents that do not need to be trained or educated but which can
simply be ‘programmed’ to perform certain tasks or even remotely controlled
by external systems to guide them in the performance of those tasks.[132]
At the same time, there will be growing numbers and kinds of artificial
agents that cannot simply be ‘programmed’ to carry out particular tasks in the
manner of earlier conventional computers but which must be trained, edu-
cated, and allowed to learn through trial and error and firsthand interaction
with and exploration of their world.[133]
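The contrast between ‘programming’ and training can be made concrete with a minimal learning loop. The sketch below is not drawn from the text; it is a generic illustration in which an agent, rather than being given the correct behavior directly, must discover it through repeated trial-and-error interaction with its environment:

```python
import random

def train_by_trial_and_error(reward_fn, actions, episodes=500, seed=0):
    """Learn which action yields the highest reward purely from experience,
    using an epsilon-greedy strategy with incrementally averaged estimates."""
    rng = random.Random(seed)
    value = {a: 0.0 for a in actions}
    count = {a: 0 for a in actions}
    for _ in range(episodes):
        if rng.random() < 0.1:
            action = rng.choice(actions)          # explore
        else:
            action = max(actions, key=value.get)  # exploit current estimate
        reward = reward_fn(action)
        count[action] += 1
        value[action] += (reward - value[action]) / count[action]
    return max(actions, key=value.get)

# Hypothetical grasping task: the robot must learn how firmly to grip.
rewards = {"grip-loosely": 0.2, "grip-firmly": 0.9, "crush": 0.0}
best = train_by_trial_and_error(lambda a: rewards[a], list(rewards))
```

After training, `best` is the action the agent has learned to prefer; nothing in the agent's initial state encoded that answer, which is precisely why such agents must be educated rather than configured.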
e. Information-processing Locus
Increasingly the information processing performed by and within a human
agent may occur not within the physical neural network that comprises nat-
ural biological neurons in the agent’s brain but in other electronic or biological
substrates, including neuroprosthetic devices and implantable computers that
utilize traditional CPU-based technologies.[134]
Meanwhile, artificial agents’ information processing may increasingly be
performed within electronic or biological physical neural networks that do not
rely on conventional CPU-based computing architectures, which do not pos-
sess a traditional operating system or the ability to run standard executable
software programs, and which may be immune to many traditional electronic
hacking techniques.[135]
[132] Regarding the ‘programming’ of human beings through the intentional, targeted modification
of their memories and knowledge, see, e.g., McGee (2008); Pearce (2012); and Spohrer, “NBICS
(Nano-Bio-Info-Cogno-Socio) Convergence to Improve Human Performance: Opportunities and
Challenges” (2002). Regarding the remote control of human bodies by external systems, see Glad-
den, “Neural Implants as Gateways” (2016), and Gladden, The Handbook of Information Security
for Advanced Neuroprosthetics (2015).
[133] See, e.g., Friedenberg (2008), pp. 55-72, 147-200; Haykin (2009); and Lamm & Unger (2011).
[134] See, e.g., Warwick & Gasson, “Implantable Computing” (2008), and the discussion of cognitive
neuroprosthetics in Gladden, The Handbook of Information Security for Advanced Neuroprosthet-
ics (2015), pp. 26-27.
[135] See, e.g., Friedenberg (2008), pp. 17-146.
f. Emotionality
The use of advanced neuroprosthetic devices that can heighten, suppress,
or otherwise modify the emotions of human beings may result in populations
of human agents whose programmatically controlled emotional behavior – or
lack of emotional behavior – more closely resembles the functioning of computers than that of natural human beings.[136]
Meanwhile, the creation of autonomous robots with increasingly sophisti-
cated and human-like social capacities and emotional characteristics – per-
haps generated by the internal action of a complex physical neural network –
may yield new types of artificial agents that cannot simply be programmed or
configured to perform certain actions by their human operators but which
must instead be motivated and persuaded to perform such actions through an
application of psychological principles, negotiation techniques, and other
practices typically employed with human beings.[137]
g. Cognitive Biases
Genetic engineering could potentially be used to create new designer types
of cognitively engineered human beings whose brains do not develop cogni-
tive biases. Alternatively, a neuroprosthetic device could be used to monitor
the cognitive processes of a human mind and to alert the mind whenever the
device detects that the individual is about to undertake a decision or action
that is flawed or misguided because the mind’s cognitive processes have been
influenced by a cognitive bias; beyond directly intervening to prevent the ef-
fects of cognitive biases in this manner, such a device could potentially also
train the mind over time to recognize and avoid cognitive biases on its own.[138]
Artificial agents that are patterned after human models of cognition and
which display human-like levels of intelligence, emotion, sociality, and other
traits may be subject to many of the same cognitive biases as human beings;[139]
highly sophisticated artificial agents (e.g., superintelligences) might also suffer
from their own idiosyncratic forms of cognitive biases that may be hard for
their designers to recognize or anticipate.[140]
[136] For the possibility of developing emotional neuroprosthetics, see Soussou & Berger (2008); Hatfield et al. (2009); Kraemer (2011); and McGee (2008), p. 217.

[137] See Friedenberg (2008), pp. 179-200.

[138] See Gladden, “Neural Implants as Gateways” (2016).

[139] Regarding the potential for emotionally driven biases in artificial intelligences, see Friedenberg (2008), pp. 180-85, 197-98.

[140] For cognitive biases, mental illnesses, and other potentially problematic psychological conditions that may be manifested by advanced AIs, see, e.g., Chapter 4, “Wireheading, Addiction, and Mental Illness in Machines,” in Yampolskiy, Artificial Superintelligence: A Futuristic Approach (2015).
Chapter Three: The Posthuman Management Matrix • 201
h. Memory
Genetic engineering could potentially be used to enhance or otherwise al-
ter the natural neural mechanisms for the encoding, storage, and retrieval of
memories within the brain of a human agent. The use of neuroprosthetic de-
vices to control, supplement, or replace the brain’s natural memory mecha-
nisms could result in human agents that possess memory that is effectively
lossless, does not degrade over time, and can be easily copied to or from external systems.[141]
At the same time, the use of biological components or physical artificial
neural networks as a substrate for the cognitive processes of artificial agents
could result in agents whose memories are stored in a highly compressed form
that degrades unreliably over time and which makes individual memories difficult to recall, even when they are retained within the memory system.[142]
i. Predictability
Human agents whose actions are influenced or controlled by neuropros-
thetic devices or whose range of possible behaviors has been constrained
through genetic engineering may produce behavior that is more predictable
and is easily ‘debugged’ in a straightforward and precise manner that has traditionally been possible only when dealing with computers.[143]
Meanwhile, artificial agents that possess human-like cognitive capacities –
including emotion and sociality – may generate behavior that is difficult to
reliably predict, analyze, or control, especially if the agents’ cognitive processes take place within a physical neural network whose activities and current state cannot easily be determined by outside observers.[144]
j. Vulnerability to Hacking
Human agents that possess electronic neuroprosthetic devices would be
vulnerable to electronic hacking attempts similar to those employed against
[141] Regarding genetic and neuroprosthetic technologies for memory alteration in biological organisms, see Han et al., “Selective Erasure of a Fear Memory” (2009); Josselyn, “Continuing the Search for the Engram: Examining the Mechanism of Fear Memories” (2010); and Ramirez et al., “Creating a False Memory in the Hippocampus” (2013). Regarding the use of neuroprosthetic systems to store memories as effectively lossless digital exograms, see Gladden, “Neural Implants as Gateways” (2016), and Gladden, The Handbook of Information Security for Advanced Neuroprosthetics (2015), pp. 156-57.

[142] Regarding memory mechanisms for artificial agents, including those involving neural networks, see Friedenberg (2008), pp. 55-72.

[143] Regarding the testing and debugging of neuroprosthetic devices (especially in relation to information security), see Gladden, The Handbook of Information Security for Advanced Neuroprosthetics (2015), pp. 176-77, 181-84, 213-14, 248-19, 242-43, 262.

[144] For an overview of issues relating to the social behavior of artificial agents, see Friedenberg (2008), pp. 217-34.
conventional computers. Moreover, advanced technologies for genetic engi-
neering and the production of customized biopharmaceuticals and biologics
may allow the biohacking even of human agents that do not possess electronic
neuroprosthetic components.[145]
At the same time, artificial agents that include or wholly comprise biolog-
ical components rather than electronic components might thereby reduce or
eliminate their vulnerability to traditional methods of electronic hacking.
However, such artificial agents may be vulnerable to biohacking approaches
that are based on genetic engineering or biopharmaceutical technologies as
well as to psychologically based social engineering attacks.[146]
3. SOCIAL INTERACTION
The forms of social engagement and belonging available to human and ar-
tificial agents are expected to be transformed by the advent of posthumanizing
technologies. Such change will be manifested through the possession (or ab-
sence) of a number of key characteristics, which are described below.
a. Sociality
Neuroprosthetic devices or genetic modifications that affect long-term
memory processes could make it difficult or impossible for human agents to
engage in friendships and other long-term social relationships with other intelligent agents. Such human agents would no longer be fully social but instead semisocial or even nonsocial.[147] Ongoing immersion in virtual worlds or
neuroprosthetically enabled cybernetic networks with other human minds or
other kinds of intelligent agents could potentially also lead to the atrophying
or enhancement of human agents’ social capacities.
At the same time, an increasing number of artificial agents may possess
fully human-like sociality, including the ability to participate in long-term so-
cial relations that deepen and evolve over time as a result of the agents’ expe-
rience of such engagement and which are shaped by society’s expectations for
the social roles to be filled by the relations’ participants. This would poten-
tially allow artificial agents to serve as charismatic leaders of human beings
[145] Regarding the possibility of hybrid biological-electronic computer viruses and other attacks, see Gladden, The Handbook of Information Security for Advanced Neuroprosthetics (2015), p. 53.

[146] For a discussion of social engineering attacks, see Rao & Nayak (2014), pp. 307-23, and Sasse et al., “Transforming the ‘weakest link’—a human/computer interaction approach to usable and effective security” (2001).
[147] For ways of describing and classifying degrees of sociality of artificial entities, see Vinciarelli et al. (2012) and Gladden, “Managerial Robotics” (2014).
who guide and manage the activities of their followers not through threats or
intimidation but by inspiring or seducing them.[148]
b. Culture
Human agents whose thoughts, dreams, and aspirations have been attenu-
ated or even eliminated or whose physical sensorimotor systems are con-
trolled through the use of genetic engineering, neuroprosthetic devices, or
other advanced technologies may no longer possess a desire or ability to per-
ceive or generate cultural artifacts. If a single centralized system (e.g., a server
providing a shared virtual reality experience to large numbers of individuals)
maintains and controls all of the sensorimotor channels through which human
agents are able to create and experience culture, then that automated system
may generate all of the aspects of culture within that virtual world, without
the human agents who dwell in that world being able to contribute meaningfully to the process.[149]
Artificial agents already play important roles in supporting the creation,
maintenance, and dissemination of human culture(s), and some artificial
agents are already capable of acting autonomously to generate works of art,
poetry, music, content for computer games, webpages, Internet memes, and
other kinds of cultural artifacts.[150] It is expected that in the future, artificial
agents will not only play a role in contributing to predominantly human cul-
tures or act in symbiosis with human agents to create hybrid human-artificial
cultures that are truly shared; they will also create among themselves entirely
new synthetic cultures whose art, music, architecture, literature, philosophy,
and way of life could never have been developed by human beings (and per-
haps cannot even be observed or comprehended by human beings), due to the
physical and cognitive differences between human agents and the artificial
agents that create such cultures.[151]
[148] See Gladden, “The Social Robot as ‘Charismatic Leader’: A Phenomenology of Human Submission to Nonhuman Power” (2014). For an exploration of the potential social behavior of advanced artificial agents, see Friedenberg (2008), pp. 217-34.

[149] Regarding the possibilities of a centralized computerized system shaping culture by mediating and influencing or controlling the communications among neuroprosthetically enabled human minds, see Gladden, “Utopias and Dystopias as Cybernetic Information Systems” (2015), and Gladden, “From Stand Alone Complexes to Memetic Warfare: Cultural Cybernetics and the Engineering of Posthuman Popular Culture” (2016).

[150] See Friedenberg (2008), pp. 127-46, and Gladden, “From Stand Alone Complexes to Memetic Warfare” (2016).

[151] See Payr & Trappl (2003); regarding the creation of hybrid human-artificial cultures in an organizational setting, see Gladden, “Leveraging the Cross-Cultural Capacities of Artificial Agents as Leaders of Human Virtual Teams” (2014). For a philosophical analysis of digital-physical ecosystems in which human and artificial agents may interact symbiotically to generate shared cognitive and cultural artifacts (and in which such artifacts may even exist as actors that can propagate themselves), see, e.g., Kowalewska, “Symbionts and Parasites – Digital Ecosystems” (2016).
c. Spirituality
Researchers have raised concerns that the use of neuroprosthetic devices
to replace or dramatically alter the structures and activities of the body and
mind of human agents may result in the loss of those fundamental character-
istics that make such agents human. While this can be analyzed from purely
biological and psychological perspectives,[152] it may alternatively be understood from philosophical and theological perspectives as a dissolution of the ‘soul’ or ‘essence’ of such human agents.[153] The use of genetic engineering in
transhumanist efforts to design beings that possess superior (and even trans-
cendent) intelligence and morality raises similarly significant questions about
the nature of humanity and future human beings.
At the same time, artificial agents that possess sufficiently sophisticated
and human-like cognitive capacities may be subject to instinctive desires to
seek out and experience some transcendent truth and reality and may engage
in behaviors such as meditation, contemplation, and even prayer.[154]
d. Political Engagement
Human agents that have been neuroprosthetically augmented may form
social and technological networks that demonstrate new kinds of network to-
pologies and may engage in new forms of cybernetic relations with similarly
augmented human agents and with artificial entities; such human agents may
dwell (virtually, if not physically) in societies in which traditional human political systems and structures are not meaningful or relevant.[155] Such human
agents may find themselves disconnected from political life and institutions of
the ‘real’ world and instead immerse themselves in new kinds of structures
that might resemble traditional computer networks more than political sys-
tems.
At the same time, artificial agents that possess intelligence and sociality
that are human-like (or which surpass the capacities of human beings) may
create political systems and structures to govern their relations with one another or may seek to participate in human political systems.[156]
[152] For a discussion of, e.g., the psychological impact of neuroprosthetic devices upon a user’s perceptions of authenticity and identity, see Kraemer (2011) and Van den Berg (2012).

[153] E.g., see Gladden, “‘Upgrading’ the Human Entity” (2015).

[154] For a discussion of such possibilities, see Kurzweil (2000).

[155] Regarding the possible fragmentation of human societies as a result of posthuman neuroprosthetics, see Gladden, “Utopias and Dystopias as Cybernetic Information Systems” (2015); McGee (2008), pp. 214-16; Warwick (2014), p. 271; Rubin (2008); Koops & Leenes, “Cheating with Implants: Implications of the Hidden Information Advantage of Bionic Ears and Eyes” (2012), p. 127; and Gladden, The Handbook of Information Security for Advanced Neuroprosthetics (2015), pp. 166-67.
[156] For the possibility of social robots exercising referent power or charismatic authority within human social or political institutions, see Gladden, “The Social Robot as ‘Charismatic Leader’” (2014).
e. Economic Engagement
The adoption of posthumanizing technologies may weaken the ability of
human beings to serve as autonomous economic actors. Depending on the
precise terms under which such components were acquired, a human agent
whose body has been subject to extensive neuroprosthetic augmentation and
is largely composed of electronic components may not even ‘own’ its own
body or the products generated by that body, including intellectual property
such as thoughts and memories. Such a human agent may for practical pur-
poses be wholly dependent on and economically subjugated to the corpora-
tion(s), government agencies, or other institutions that provide maintenance
services for its synthetic components and legally or practically barred from
purchasing goods or services from competing enterprises.[157] The use of neuroprosthetic devices or other technologies that directly affect a human agent’s
cognitive processes may also impair that agent’s ability to make free choices
as an autonomous economic actor.
Conversely, artificial agents may gain new abilities to function as inde-
pendent economic actors. Some forms of artificial life may be able to function
as autonomous organism-enterprises that acquire resources from within the
digital-physical ecosystem shared with human beings, process the resources
to generate goods and services, and then exchange those goods and services
with human beings or other artificial agents to generate revenue, including
profit that the artificial life-form can use for purposes of growth, reproduction,
or risk management.[158] Such artificial life-forms could compete directly with
human enterprises within the real-world economy or offer new kinds of goods
and services that human agents are incapable of offering.
f. Legal Status
Human agents that have been intentionally engineered by other human
beings or organizations (e.g., biological clones or custom-designed human be-
ings) may be subject to claims that they are not full-fledged legal persons but
rather wards or even property of those who have created them – especially if
the agents have been engineered to possess characteristics that clearly distinguish them from ‘normal’ human beings.[159]
[157] See Gladden, “Neural Implants as Gateways” (2016), and Gladden, The Handbook of Information Security for Advanced Neuroprosthetics (2015).
[158] For an approach to modelling entrepreneurship on the part of artificial agents, see Ihrig, “Simulating Entrepreneurial Opportunity Recognition Processes: An Agent-Based and Knowledge-Driven Approach” (2012). For an innovative exploration of the possibility of creating fully autonomous systems for entrepreneurship, see Rijntjes, “On the Viability of Automated Entrepreneurship” (2016). See also Gladden, “The Artificial Life-Form as Entrepreneur” (2014).
[159] See, e.g., Cesaroni, “Designer Human Embryos as a Challenge for Patent Law and Regulation” (2012); Pereira, “Intellectual Property and Medical Biotechnologies” (2013); Bera, “Synthetic Biology and Intellectual Property Rights” (2015); Camenzind, “On Clone as Genetic Copy: Critique of a Metaphor” (2015); Section B (“Enhancement”) and Section D (“Synthetic Biology and Chimera”) in The Future of Bioethics: International Dialogues, edited by Akabayashi (2014); and Singh, Biotechnology and Intellectual Property Rights: Legal and Social Implications (2014). For perspectives on the ways in which such issues have been explored within fiction, see, e.g., Pérez, “Sympathy for the Clone: (Post) Human Identities Enhanced by the ‘Evil Science’ Construct and its Commodifying Practices in Contemporary Clone Fiction” (2014).
Conversely, sufficiently sophisticated artificial agents that possess human-
like cognitive capacities or biological components may not be considered in-
animate objects or property from a legal perspective but either moral patients
possessing rights that must be protected or even moral subjects that can be
held legally responsible for their own actions.[160]
B. The Four Types of Beings Relevant for Technologically
Posthumanized Organizations
The only two quadrants of the Posthuman Management Matrix that have
historically been considered relevant objects for management scholarship and
practice are those of natural human beings and, more recently, computers.
However, the advent of new posthumanizing technologies will create a vari-
ety of entities that fall within the remaining two quadrants and which can
serve as potential employees, partners, and customers for businesses and other
organizations. This will require the field of management to directly address
those two quadrants – to create theoretical frameworks for understanding the
activities and organizational potential of such entities and to develop new
practices for managing them. Figure 3 reflects the fact that during the dawning
Posthuman Age, all four quadrants of the Matrix will at last be relevant for
management.
We can now consider in more detail the roles that all four types of entities may play within future posthumanized organizations, along with the academic disciplines and practical bodies of knowledge that can contribute to
their effective management.
1. NATURAL HUMAN BEINGS
At least during the early stages of the emerging Posthuman Age, human
agents with anthropic characteristics will remain the key leaders and decision-
makers within businesses and other organizations. This will not necessarily
be due to the fact that such natural human beings are more capable than arti-
ficial agents or technologically modified human beings when it comes to per-
forming the actions involved with managing others; it will instead likely be
due to legal, political, and cultural considerations. For example, even after suf-
ficiently sophisticated social robots have been developed that are capable of
[160] Regarding such questions see, e.g., Wallach & Allen (2008) and Calverley (2008).
serving effectively as CEOs of businesses, it may take many years before the
ethical and political questions surrounding such practices have been resolved
to the point that human legislators and regulators allow the human businesses
and other institutions that are subject to their oversight to legally employ such
artificial agents as CEOs.[161]
It appears likely that human agents that possess at least limited compu-
tronic characteristics will achieve positions of formal leadership within organ-
izations before artificial agents accomplish that feat. This can be anticipated
due to the fact that current law and cultural tradition already allow human
beings to fill such roles: while existing laws would generally need to be ex-
plicitly changed in order to allow artificial agents to serve, for example, as
CEOs of publicly traded corporations, those same laws would need to be
explicitly changed in order to bar human agents who possess computronic
characteristics from filling such roles. Indeed, declining to offer a human being
a position as an executive within a business because he or she possesses a
pacemaker, defibrillator, cochlear implant, robotic artificial limb, or other de-
vice that endows him or her with limited computronic characteristics would,
in many cases, be considered a form of unlawful employment discrimination,
and even simply attempting to ascertain whether a potential employee possesses such traits could in itself be illicit.[162]
Although human agents who possess extensive computronic characteris-
tics and artificial agents are expected to gradually fill a broader range of posi-
tions within organizations, there will likely remain a number of professions
or specific jobs which – at least in the early stages of the Posthuman Age –
can only be filled by natural, unmodified human agents.[163] For example, some
positions within the military, police forces, or intelligence services may ini-
tially be restricted to natural human beings, in order to avoid the possibility
of external adversaries hacking the minds or bodies of such agents and gaining
control of them and the information that they possess. Roles as judges, arbi-
trators, and regulators might be restricted to natural human beings on ethical
grounds, to ensure that such officials’ decisions are being made on the basis
of human wisdom, understanding, and conscience (including the known biases
of the human mind), rather than executed by software programs that might
possess unknown bugs or biases or be surreptitiously manipulated. Some roles
– such as those of priest, therapist, poet, or existentialist philosopher – might
as a practical matter be restricted to natural human beings, because the work
performed by persons in such positions is considered to derive unique value
from the fact that it is performed by a human being rather than a machine.
[161] The question arises of whether such artificial agents will voluntarily allow themselves to be subject to human laws or will instead seek to formulate their own.
[162] See Gladden, The Handbook of Information Security for Advanced Neuroprosthetics (2015), pp. 93-94.
[163] See Gladden, “Neural Implants as Gateways” (2016).
Fig. 3: The Posthuman Management Matrix displaying the two types of entities (in the lower
left and upper right quadrants) that have long been relevant for the theory and practice of
organizational management, joined by two types of entities (in the upper left and lower right
quadrants) that are becoming newly relevant in the dawning Posthuman Age.
The adoption of posthumanizing technologies across the world will likely
be highly uneven, as differences in economic resources and systems, political
systems, and philosophical, religious, and cultural traditions combine in
unique ways in different parts of the world to either spur or restrain the adop-
tion of such technologies. The role of natural human beings as workers and
consumers may maintain greater importance in some regions and industries
than in others. Wherever such beings fill roles as workers or consumers, the
traditional disciplines of psychology, sociology, economics, anthropology, cul-
tural studies, marketing, organization development, HR management, and er-
gonomics will continue to be relevant for theorists and practitioners of organ-
izational management.
2. COMPUTERS
It is expected that artificial agents with computronic characteristics will
continue to play a fundamental – and ever-growing – role as backbone ele-
ments within the increasingly ubiquitous networked systems that constitute
the digital-physical infrastructure within which human beings will dwell. Ar-
tificial systems that can be quickly and reliably programmed to perform cer-
tain tasks without any worry that a system might become bored or annoyed
or object to its assigned tasks on moral grounds will remain highly useful and
desirable.[164]
Although the theory and practice used to design, implement, and manage such systems will likely continue to evolve rapidly, even during the near-future Posthuman Age such disciplines will likely be recognizable as heirs of our contemporary fields of computer science, electronics engineering, robotics, and IT management.

[164] One can consider, for example, the case of autonomous military robots. Serious efforts have been undertaken to create morally aware autonomous military robots that can be programmed with a knowledge of and obedience to relevant national and international legal obligations governing the conduct of war, as well as a knowledge of relevant ethical principles and even a ‘conscience’ that allows a robot to assimilate all available information, evaluate the propriety of various courses of action, and select an optimal ethically and legally permissible course of action. However, scholars have noted the possibility for cynical manipulation of such technologies – e.g., perhaps the creation of robots who possess a ‘conscience’ that is sufficiently developed to reassure the public about the ethicality of such devices while not being restrictive or powerful enough to actually block the robot from performing any activities desired by its human overseers. See Sharkey, “Killing Made Easy: From Joysticks to Politics” (2012), pp. 121-22. On the other hand, if a robot’s conscience is such that the robot becomes a conscientious objector and refuses to participate in any military actions at all, then the robot becomes operationally useless from the perspective of its intended purpose.
3. CYBORGS
As described in earlier sections, the increasing use of neuroprosthetic en-
hancement, genetic engineering, and other posthumanizing technologies is
expected to result in a growing number of human agents that no longer pos-
sess the full suite of traditional anthropic characteristics but instead reflect
some degree of computronic characteristics. Such agents might include human
employees or customers whose artificial sense organs or limbs mediate their
experience of their physical environment;[165]
human beings who never physically leave their bedroom but instead engage with the world through long-term immersion in virtual worlds and digital ecosystems;[166]
groups of human
beings whose minds are neuroprosthetically linked to create a hive mind with
[165] For discussions of particular types of neuroprosthetic mediation of sensory experience of one’s environment, see, e.g., Ochsner et al., “Human, non-human, and beyond: cochlear implants in socio-technological environments” (2015), and Stiles & Shimojo, “Sensory substitution: A new perceptual experience” (2016). On ways in which the absence of mediation transforms teleoperation into telepresence in the case of noninvasive brain-computer interfaces, see Salvini et al., “From robotic tele-operation to tele-presence through natural interfaces” (2006).

[166] Regarding the implications of long-term immersion in virtual reality environments, see, e.g., Bainbridge, The Virtual Future (2011); Heim, The Metaphysics of Virtual Reality (1993); Geraci, Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality (2010); and Koltko-Rivera, “The potential societal impact of virtual reality” (2005). Regarding psychological, social, and political questions relating to repetitive long-term inhabitation of virtual worlds through a digital avatar, see, e.g., Castronova, “Theory of the Avatar” (2003). On the risks of potentially ‘toxic immersion’ in a virtual world, see Castronova, Synthetic Worlds: The Business and Culture of Online Games (2005). On implantable systems for augmented or virtual reality, see Sandor et al., “Breaking the Barriers to True Augmented Reality” (2015), pp. 5-6. For a conceptual analysis of the interconnection between physical and virtual reality and different ways in which beings and objects can move between these worlds, see Kedzior, “How Digital Worlds Become Material: An Ethnographic and Netnographic Investigation in Second Life” (2014).
a collective consciousness;[167] human beings who are temporarily or permanently joined in symbiotic relationships with robotic exoskeletons,[168] companions,[169] or supervisors;[170] or genetically augmented human beings whose physical structures and cognitive capacities have been intentionally engineered to make them especially well-suited (or poorly suited) to perform particular roles within society.[171]
Because such technological modification may dramatically affect human
agents’ physical and cognitive traits, their behavior can no longer be under-
stood, predicted, or managed simply by relying on historical disciplines such
as psychology, sociology, or HR management. Established and evolving fields
[167] Regarding the possibility of hive minds, see, e.g., McIntosh, “The Transhuman Security Dilemma” (2010), and Gladden, “Utopias and Dystopias as Cybernetic Information Systems” (2015). For more detailed taxonomies and classification systems for different kinds of potential hive minds, see Chapter 2, “Hive Mind,” in Kelly, Out of control: the new biology of machines, social systems and the economic world (1994); Kelly, “A Taxonomy of Minds” (2007); Kelly, “The Landscape of Possible Intelligences” (2008); Yonck, “Toward a standard metric of machine intelligence” (2012); and Yampolskiy, “The Universe of Minds” (2014). For the idea of systems whose behavior resembles that of a hive mind but without a centralized controller, see Roden, Posthuman Life: Philosophy at the Edge of the Human (2014), p. 39. For critical perspectives on the idea of hive minds, see, e.g., Bendle, “Teleportation, cyborgs and the posthuman ideology” (2002), and Heylighen, “The Global Brain as a New Utopia” (2002). Regarding the need for society to debate the appropriateness of neuroprosthetic technologies that facilitate hive minds, see Maguire & McGee, “Implantable brain chips? Time for debate” (1999).

[168] For examples of such systems currently under development, see Wearable Robots: Biomechatronic Exoskeletons, edited by Pons (2008); Guizzo & Goldstein, “The rise of the body bots [robotic exoskeletons]” (2005); and Contreras-Vidal & Grossman, “NeuroRex: A clinical neural interface roadmap for EEG-based brain machine interfaces to a lower body robotic exoskeleton” (2013). For a discussion of the extent to which the form of an exoskeleton can differ from that of the human body before it becomes impossible for its human operator to interface with the exoskeleton, see Gladden, “Cybershells, Shapeshifting, and Neuroprosthetics” (2015).

[169] See Dautenhahn, “Robots we like to live with?! - A Developmental Perspective on a Personalized, Life-long Robot Companion” (2004); Van Oost and Reed, “Towards a Sociological Understanding of Robots as Companions” (2011); Shaw-Garlock, “Loving machines: Theorizing human and sociable-technology interaction” (2011); Whitby, “Do You Want a Robot Lover? The Ethics of Caring Technologies” (2012); and Social Robots and the Future of Social Relations, edited by Seibt et al. (2014).

[170] See, e.g., Samani & Cheok, “From human-robot relationship to robot-based leadership” (2011); Samani et al., “Towards robotics leadership: An analysis of leadership characteristics and the roles robots will inherit in future human society” (2012); Gladden, “Leveraging the Cross-Cultural Capacities of Artificial Agents” (2014); and Gladden, “The Social Robot as ‘Charismatic Leader’” (2014).

[171] Regarding such possibilities, see Converging Technologies for Improving Human Performance: Nanotechnology, Biotechnology, Information Technology and Cognitive Science, edited by Bainbridge (2003); Canton (2004), pp. 186-98; and Khushf, “The use of emergent technologies for enhancing human performance: Are we prepared to address the ethical and policy issues” (2005).
212 • Posthuman Management
such as genetic engineering, neural engineering, neurocybernetics, and biocy-
bernetics will offer resources for management theorists and practitioners who
must account for the existence and activity of such agents. However, it is
likely that entirely new disciplines will arise – and will need to arise – in order
to fill the conceptual and practical gaps that exist between those structures
and dynamics that will be manifested by cyborgs and those that are addressed
by existing disciplines. In particular, new disciplines may study and manage
computronic human agents using many of the same techniques that have pre-
viously been employed with artificial agents. Such hypothetical new fields
might include disciplines such as:
Psychological engineering, which would apply practices from fields like electronics engineering to the design of a human psyche.[172] It might involve the use of genetic engineering and gene therapy, neuroprosthetic devices, immersive virtual reality, and other technologies to create and maintain human beings who possess particular (and potentially non-natural) cognitive structures, processes, and behaviors.
Cyborg psychology and cyberpsychology, which would apply the knowledge and methods of traditional psychology to understand the cognitive structures and processes of human beings whose psychology is atypical as a result of neuroprosthetic augmentation, long-term immersion in virtual reality environments, or other factors.[173] Subdisciplines might include cyberpathology,[174] for example.
172
For earlier uses of the term ‘psychological engineering’ in different contexts, see, e.g., Doyle,
“Big problems for artificial intelligence” (1988), p. 22, which employs the term in the context of
artificial intelligence, with psychological engineering’s goal being “parallel to the aim of any
engineering field, namely to find economical designs for implementing or mechanizing agents
with specified capacities or behaviors,” and Yagi, “Engineering psychophysiology in Japan”
(2000), p. 361, which defines psychological engineering to be “engineering relating to human
psychological activities” and include themes such as “the development of new systems between
the human mind and machines” that yield not only convenience but comfort, “the development
of the technology to measure psychological effects in industrial settings,” and “the development
of new types of human-machine systems incorporating concepts and procedures utilizing virtual
reality.”
173
For other uses of the term ‘cyborg psychology,’ see, e.g., Plowright, “Neurocomputing: some
possible implications for human-machine interfaces” (1996). For earlier use of the term ‘cy-
berpsychology’ in various contexts, see, e.g., Cyberpsychology, edited by Gordo-López & Parker
(1999); Riva & Galimberti, Towards CyberPsychology: Mind, Cognition, and Society in the Internet
Age (2001); Cyberpsychology: Journal of Psychosocial Research, founded in 2007; and Norman,
Cyberpsychology: An Introduction to Human-Computer Interaction (2008).
174
See, e.g., Chapter 4, “Wireheading, Addiction, and Mental Illness in Machines,” in Yampolskiy,
Artificial Superintelligence: A Futuristic Approach (2015).
Chapter Three: The Posthuman Management Matrix • 213
Human technology management (or ‘anthropotech management’[175]), which would apply the knowledge and practices of traditional IT management to the management of organizational resources (e.g., human employees) whose neuroprosthetic or genetic augmentation or intimate cybernetic integration with computerized systems at a structural or behavioral level allows them to be managed in ways similar to those utilized with traditional IT assets.
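The kind of ‘human technology management’ sketched above can be caricatured in a few lines of Python. This is purely an illustrative toy under invented names (the classes, fields, and the audit routine are this sketch’s assumptions, not proposals from the text): a worker’s neuroprosthetic device is exposed through the same interface as a conventional IT asset, so one audit routine covers both.

```python
from dataclasses import dataclass
from typing import Protocol

class ManagedAsset(Protocol):
    """IT-management view: anything with an ID and a firmware version."""
    asset_id: str
    firmware_version: str
    def needs_update(self, latest: str) -> bool: ...

@dataclass
class OfficeLaptop:
    asset_id: str
    firmware_version: str
    def needs_update(self, latest: str) -> bool:
        return self.firmware_version != latest

@dataclass
class NeuralImplant:
    """A worker's neuroprosthetic device, viewed purely as an IT asset."""
    asset_id: str
    firmware_version: str
    host_employee: str  # the augmented human worker hosting the device
    def needs_update(self, latest: str) -> bool:
        return self.firmware_version != latest

def audit(assets: list[ManagedAsset], latest: str) -> list[str]:
    """One audit routine covers conventional and 'anthropotech' assets alike."""
    return [a.asset_id for a in assets if a.needs_update(latest)]

fleet = [
    OfficeLaptop("LT-001", "2.0"),
    NeuralImplant("NI-104", "1.7", host_employee="E-2291"),
]
print(audit(fleet, latest="2.0"))  # -> ['NI-104']
```

The point of the sketch is only structural: once an augmented worker’s device satisfies the same managed-asset interface as a laptop, existing IT-management tooling applies to it unchanged.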
4. BIOROIDS
As described in earlier sections, organizations will increasingly need to deal with the existence of artificial agents that possess anthropic characteristics as both potential workers and consumers of the goods and services that organizations produce. Such bioroids might include social robots that resemble human beings in their physical form and cognitive capacities,[176] artificial general intelligences[177] that process information using complex physical neural networks rather than CPU-based platforms,[178] robots possessing biological components,[179] and robots that exist in permanent symbiosis with human agents to whom they serve as bodies, colleagues, or guides.[180]
175
For the use of such terminology, see, e.g., the Anthropotech project of the University of the West of England and University of Bristol that has studied the philosophical and ethical implications of “Anthropotech: the technological alteration of the body for the purpose of augmenting existing capacities, introducing new ones, or aesthetically improving the body” and which has drawn its inspiration explicitly from Jérôme Goffette’s Naissance de l’anthropotechnie: De la médecine au modelage de l’humain (2006). See “Anthropotech” (2013).
176
For an overview of different perspectives on social robots that behaviorally resemble and can interact with human beings, see, e.g., Breazeal (2003); Gockley et al., “Designing Robots for Long-Term Social Interaction” (2005); Kanda & Ishiguro (2013); Social Robots and the Future of Social Relations, edited by Seibt et al. (2014); Social Robots from a Human Perspective, edited by Vincent et al. (2015); and Social Robots: Boundaries, Potential, Challenges, edited by Marco Nørskov (2016).
177
Regarding challenges inherent in the development of artificial general intelligence and potential paths toward that objective, see, e.g., Artificial General Intelligence, edited by Goertzel & Pennachin (2007); Theoretical Foundations of Artificial General Intelligence, edited by Wang & Goertzel (2012); and Artificial General Intelligence: 8th International Conference, AGI 2015: Berlin, Germany, July 22-25, 2015: Proceedings, edited by Bieger et al. (2015).
178
Regarding AIs that utilize physical neural networks, see, e.g., Snider (2008); Versace & Chandler (2010); and Advances in Neuromorphic Memristor Science and Applications, edited by Kozma et al. (2012). For a discussion of such technologies from the perspective of information security, see Pino & Kott, “Neuromorphic Computing for Cognitive Augmentation in Cyber Defense” (2014), and Lohn et al., “Memristors as Synapses in Artificial Neural Networks: Biomimicry Beyond Weight Change” (2014).
The physical forms and processes, cognitive capacities, and social engage-
ment of such bioroids will likely differ in their underlying structures and dy-
namics from those of human beings, no matter how closely they outwardly
resemble them. Thus traditional human-focused disciplines such as psychol-
ogy, economics, and HR management cannot be applied directly and without
modification to analyze, predict, or manage the behavior of bioroids. On the
other hand, traditional disciplines such as computer science, electronics engi-
neering, and IT management will not in themselves prove adequate for shap-
ing the behavior of such unique anthropic artificial agents.
Emerging fields such as synthetic biology and social robotics provide a
starting point for the development and management of bioroids. As research-
ers attempt to create new theoretical and practical frameworks for managing
such agents, we might expect to witness the development of new fields that
study and manage artificial agents utilizing approaches that have traditionally
been applied to human agents; these new fields might include disciplines like:
Artificial psychology, which is already being formulated as a discipline[181] and which applies the extensive knowledge and techniques developed through the academic study of human psychology to understanding, designing, and controlling the psychology of synthetic beings such as artificial general intelligences or social robots.
179
See, e.g., Ummat et al. (2005); Andrianantoandro et al. (2006); Lamm & Unger (2011); Cheng & Lu (2012); and Kawano et al., “Finding and defining the natural automata acting in living plants: Toward the synthetic biology for robotics and informatics in vivo” (2012).
180
Regarding robots that exist in symbiotic relationships with human beings as their physical bodies (i.e., constituting a cyborg), see, e.g., Tomas, “Feedback and Cybernetics: Reimaging the Body in the Age of the Cyborg” (1995); Clark, Natural-born cyborgs: Minds, Technologies, and the Future of Human Intelligence (2004); and Anderson, “Augmentation, symbiosis, transcendence: technology and the future(s) of human identity” (2003). For discussions of robots serving as colleagues to human workers, see, e.g., Ablett et al., “A Robotic Colleague for Facilitating Collaborative Software Development” (2006); Vänni and Korpela, “Role of Social Robotics in Supporting Employees and Advancing Productivity” (2015); and Gladden, “Leveraging the Cross-Cultural Capacities of Artificial Agents” (2014). For a notable early allusion to the possibility of robotic colleagues, see Thompson (1976). For robotic systems that serve as ‘guides’ to human beings in a very practical and functional sense, see, e.g., Chella et al., “A BCI teleoperated museum robotic guide” (2009), and Vogiatzis et al., “A conversant robotic guide to art collections” (2008). For robots that serve as charismatic leaders (and perhaps even spiritual guides) for human beings, see Gladden, “The Social Robot as ‘Charismatic Leader’” (2014).
181
Friedenberg has introduced the concept of ‘artificial psychology’ as a new branch of psychology that addresses the cognitive behavior of synthetic agents; see Friedenberg (2008). ‘Artificial psychology’ is not simply a form of computer programming or IT management. It is psychology: just as complex and mysterious a discipline as when directed to the cognitive structures and processes of human beings, except that in this case it is directed to the cognitive structures and processes of robots or AIs.
Artificial marketing, which would address the design, production,
sale, and distribution of goods and services targeted at consumers who
are not human beings but artificial entities.
AI resource management, which would deal with the management
of artificial entities within an organizational context not as though
they were conventional IT assets like desktop computers but as hu-
man-like employees, drawing on the knowledge and practices devel-
oped in the field of human resource management.
Artificial organization development, which would seek to bring about long-term systemic improvements in the performance of organizations whose members are synthetic entities – not by directly reprogramming them or updating their software but through the use of intervention techniques such as coaching and mentoring, surveys, teambuilding exercises, changes to workplace culture, and the design of strategic plans and incentive structures. This would adapt the explicitly ‘humanistic’ approaches of the existing field of organization development to serve new constituencies of nonhuman agents.[182]
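The contrast between managing an AI as an IT asset and managing it as a quasi-employee – the core of the ‘AI resource management’ and ‘artificial organization development’ proposals above – can be caricatured in code. The sketch below is a toy under invented names and metrics (the class, the ‘engagement’ score, and the coaching method are this illustration’s assumptions); its only point is that the intervention is feedback, not a software patch.

```python
from dataclasses import dataclass, field

@dataclass
class SyntheticEmployee:
    """An AI agent modeled with an HR-style record rather than as an IT asset."""
    name: str
    role: str
    engagement: float = 0.5              # morale-like metric in [0, 1]
    review_notes: list[str] = field(default_factory=list)

    def coach(self, feedback: str, uplift: float = 0.1) -> None:
        """An OD-style intervention: mentoring and feedback, not reprogramming."""
        self.review_notes.append(feedback)
        self.engagement = min(1.0, self.engagement + uplift)

team = [SyntheticEmployee("AGI-7", "logistics planner"),
        SyntheticEmployee("Soc-3", "customer liaison", engagement=0.4)]
for member in team:
    member.coach("Quarterly review: strong collaboration with human colleagues.")
print([round(m.engagement, 2) for m in team])  # -> [0.6, 0.5]
```

Nothing in the agent’s ‘software’ is touched; its state changes only through the same kinds of interventions that organization development applies to human staff.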
C. Exploring the ‘Fifth Quadrant’: Hybrid Agents within Hybrid
Systems
While it is true that management theory and practice must be capable of
separately addressing each of the four types of entities described above, within
real-world organizations it will in practice be difficult to extricate one kind of
entity from its relationships with those of other kinds – just as it is already
difficult to consider the performance of human workers apart from the per-
formance of the computerized technologies that they use in carrying out their
tasks.
In practice, the four types of entities described above will frequently work
intimately with one another, either as elements in hybrid systems that have
been intentionally designed or as members of systems whose participants can
voluntarily join and leave and which can include any types of agents. For example, a company might maintain a persistent virtual world in which all of its human and artificial personnel come together to work rather than meeting in a physical workplace, or a firm might operate an online marketplace in which human and artificial agents of all types are welcomed to purchase or consume the company’s products and services – without the firm necessarily knowing or caring whether a particular consumer is a human or artificial agent. In such cases, the focus of an organization’s management efforts is not on specific agents that participate in or constitute a system but on the management of the system as a whole.
182
Regarding the goals and practices of organization development, see, e.g., Anderson, Organization Development: The Process of Leading Organizational Change (2015), and Bradford & Burke, Reinventing Organization Development: New Approaches to Change in Organizations (2005). For the humanistic foundations of organization development, see, e.g., Bradford & Burke (2005); “The International Organization Development Code of Ethics” of The OD Institute; the OD Network’s “Organization and Human Systems Development Credo”; IAGP’s “Ethical Guidelines and Professional Standards for Organization Development and Group Process Consultants”; and the OD Network’s “Principles of OD Practice.”
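In software terms, the agent-agnostic marketplace just described amounts to polymorphism over agent type. A minimal Python sketch (all names here are invented for illustration) might look like this:

```python
from dataclasses import dataclass

@dataclass
class HumanCustomer:
    name: str
    def place_order(self, product: str) -> str:
        return f"{product} ordered for {self.name}"

@dataclass
class ArtificialCustomer:
    agent_id: str
    def place_order(self, product: str) -> str:
        return f"{product} ordered for {self.agent_id}"

def checkout(customers, product: str) -> list[str]:
    # The system is managed as a whole: any object exposing place_order()
    # is served, and the firm never inspects which kind of agent it is.
    return [c.place_order(product) for c in customers]

orders = checkout([HumanCustomer("Ana"), ArtificialCustomer("AI-42")], "widget")
print(orders)  # -> ['widget ordered for Ana', 'widget ordered for AI-42']
```

The design choice mirrors the managerial one: the checkout routine governs the transaction system itself, while the human-or-artificial status of each participant is deliberately invisible to it.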
Fig. 4: The ‘fifth quadrant’ of the Posthuman Management Matrix, which spans and ties to-
gether all four types of entities that will be of relevance for organizational management in the
Posthuman Age.
Systems that incorporate or comprise multiple types of agents might in-
clude digital-physical ecosystems; persistent immersive virtual worlds that are
home to both human and artificial inhabitants; and hybrid human-robotic hive
minds, workplace teams, and multi-agent systems. Moreover, after having evolved into the Internet of Things and eventually comprising all objects as the ‘Internet of Everything,’[183] the Internet as a whole might come to encompass all subjects – all sapient minds and persons – thanks to the wearable and
implantable computers and neuroprosthetic devices that will increasingly
serve as gateways, vehicles, and virtualizing bodies that provide their human
hosts and users with a permanent link to and presence in the world’s digital-
physical ecosystems. In this way, we can expect the growth of a lush, dense,
complex, unruly, all-embracing digital-physical cyber-jungle that is not
simply the Internet of Everything but the Internet of Everyone, the Internet of
Life, the Internet of Being. Together these kinds of systems can be seen as
occupying a ‘fifth quadrant’ that lies at the heart of the Posthuman Manage-
ment Matrix and which reaches into and joins all of the other four quadrants,
as reflected in Figure 4.
The kinds of rich and sophisticated human-artificial systems that exist
within the fifth quadrant cannot be effectively managed simply by drawing
insights from an array of disciplines that focus exclusively on either human
agents or artificial agents. Instead, disciplines will be required whose theory
and practice holistically embrace both the forms and behaviors of human and
artificial agents as well as anthropic and computronic characteristics and
which occupy themselves with systems in which the four possible types of
entities are closely integrated or even inextricably merged.
Already, existing disciplines such as cybernetics and systems theory at-
tempt to provide a universal conceptual framework that can account for the
structures and dynamics of all kinds of viable systems, whether they be hu-
man, artificial, hybrid, or of some previously unknown form. The fields of hu-
man-computer interaction, human-robot interaction, and information secu-
rity focus on the characteristics of such hybrid systems in a more specialized
way. Some management disciplines such as organizational architecture and
enterprise architecture have the potential – if thoughtfully and creatively
elaborated – to provide conceptual and practical frameworks for the develop-
ment and maintenance of such hybrid human-artificial systems, although ef-
forts to develop those disciplines in the direction of posthumanized human-
artificial systems have not yet been robustly pursued.[184]
183
See, e.g., Evans (2012).
184
For examples of some initial efforts, see, e.g., Gladden, “Leveraging the Cross-Cultural Ca-
pacities of Artificial Agents” (2014) and sources cited therein. Organizational theory may also be
able to draw on contemporary work in the field of philosophy; for example, see Kowalewska
(2016) for an analysis of technologically facilitated digital-physical ecosystems that draws on
Actor-Network Theory (ANT) to explore the manner in which nonhuman and human actors
within such ecosystems may create “hierarchies, symbioses, chains and balances” (p. 74) that do
not simply resemble the structures and relations of biological ecosystems in a metaphorical sense
but truly instantiate the forms and dynamics of such ecologies within a hybrid biological-syn-
thetic system full of diverse types of actors.
ARTIFICIAL, XENO- AND META- STUDIES
As hybrid human-robotic organizations and environments become more
common, we can expect to see the development of new disciplines that at-
tempt to understand the unique physical structures, behaviors, advantages
and capacities, and weaknesses and vulnerabilities displayed by such systems.
Just as ‘artificial psychology’ focuses on the cognitive activity of beings that
are human-like in their behavior but synthetic in their construction – and
‘xenopsychology’ or ‘exopsychology’ studies the cognitive activity of agents
that are radically nonhuman (e.g., hypothetical extraterrestrial intelligences)
and whose behavior is not intended or expected to replicate that of human
beings,[185]
so the prefix ‘meta-’ or words such as ‘post-anthropocentric,’ ‘agent-
independent,’ or ‘cybernetic’ might be employed to refer to efforts at develop-
ing universal conceptual frameworks that are sufficiently abstract to be able
to account for the structures and dynamics found in the activities of human
agents, artificial agents resembling human beings, radically nonhuman syn-
thetic agents, and any other kinds of agents. For example, attempts to identify
the essential structures and processes that must be present in any type of
agent in order for it to be considered ‘cognitive’ – and to explore the full spec-
trum of ways in which those structures and processes can manifest themselves
across different types of agents – could be understood alternatively as ‘meta-
psychology,’ ‘post-anthropocentric psychology,’ ‘agent-independent psychol-
ogy,’ or ‘psychological cybernetics.’ Similarly, a term like ‘metaeconomics’
might be used to refer to generalized conceptual frameworks that can account
equally well for the economic activity of all kinds of entities, both human and
artificial.[186]
185
For a history of such use of ‘xeno-’ in both literary and scholarly contexts, see the “Preface
and Acknowledgements for the First Edition” in Freitas, Xenology: An Introduction to the Scientific
Study of Extraterrestrial Life, Intelligence, and Civilization (1979), where “[…] xenology may be
defined as the scientific study of all aspects of extraterrestrial life, intelligence, and civilization.
Similarly, xenobiology refers to the study of the biology of extraterrestrial lifeforms not native
to Earth, xenopsychology refers to the higher mental processes of such lifeforms if they are
intelligent, xenotechnology refers to the technologies they might possess, and so forth.” For the
use of ‘exopsychology’ in connection with potential extraterrestrial intelligences, see Harrison
& Elms, “Psychology and the search for extraterrestrial intelligence” (1990), p. 207, where “The
proposed field of exopsychology would involve the forecast, study, and interpretation of the
cognitive, affective, and behavioral aspects of extraterrestrial organisms. Exopsychological re-
search would encompass search, contact, and post-contact activities, and would include study
and work with humans as well as with any extraterrestrials that might be encountered.”
186
We note that some of the terms suggested above have already been utilized by other scholars in different contexts. For example, the understanding of ‘metapsychology’ formulated here is different from the specialized sense in which Freud used that term; our envisioned use of the prefix ‘meta-’ is more closely related to the contemporary philosophical use of the term to refer to an abstracted or second-order phenomenon. Some scholars have used the prefix ‘meta-’ in ways that are closely aligned with our proposed use. For example, building on earlier questions posed by Kant, legal scholar Andrew Haley attempted to identify fundamental principles of law and ethics that are not specific to human biology, psychology, sociality, and culture but which would be relevant to and binding on all intelligent beings, regardless of their physical form or cognitive dynamics; such universal legal and ethical principles could govern humanity’s potential encounter with an extraterrestrial intelligence. Haley proposed ‘The Great Rule of Metalaw,’ which demands that all intelligent beings should “Do unto others as they would have you do unto them”; see Michaud, Contact with Alien Civilizations: Our Hopes and Fears about Encountering Extraterrestrials (2007), p. 374.
V. Conclusion
A transformative convergence is underway within contemporary organizations, as human workers integrate computers ever more closely into their minds and bodies and computers themselves become ever more ‘human.’ Such developments create both opportunities and threats that must be carefully evaluated from ethical, legal, and managerial perspectives. In order to aid with such endeavors, in this text we have formulated the Posthuman Management Matrix, a model in which an organization’s employees, consumers, and other stakeholders are divided into two different kinds of agents (human and artificial) who may possess either of two sets of characteristics (anthropic or computronic), thus defining four types of entities. Until now, the only types that have been of relevance for management theory and practice were those of human agents who possess anthropic characteristics (i.e., ordinary human beings) and artificial agents that possess computronic characteristics (as exemplified by assembly-line robots or artificially intelligent software running on desktop computers).
Management theory and practice have traditionally not addressed the remaining two types of agents that are theoretically possible, largely because such agents did not exist to serve as employees or consumers for organizations. However, we have argued that ongoing advances in neuroprosthetics, genetic engineering, virtual reality, robotics, and artificial intelligence are now giving rise to new kinds of human agents that demonstrate computronic characteristics and artificial agents that possess anthropic characteristics. If organizations are to successfully resolve the complex issues that appear when such posthumanized agents are adopted as workers or customers, new spheres of management theory and practice will need to be pioneered. A starting point may be found in existing fields such as cybernetics, systems theory, organizational design, and enterprise architecture that already offer tools for integrating human and artificial agents into the multi-agent system that constitutes an organization. Such fields will likely be complemented through the development of new disciplines such as psychological engineering, cyborg psychology, human technology management, artificial organization development, AI resource management, metapsychology, and metaeconomics that are specifically intended to confront the issues that will accompany the arrival of new kinds of posthumanized agents as organizational stakeholders. Although we cannot yet know the exact paths that such developments will take, our hope is that the framework presented in this text can prove useful in highlighting the new areas that wait to be explored and in informing the work of those management scholars and practitioners who choose to embrace that challenge.
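As a closing illustration, the Matrix’s fourfold typology summarized above lends itself to being restated as a simple lookup table. The sketch below (with quadrant labels paraphrased from the text, and the enum and function names invented for illustration) merely encodes the classification; it makes no claim about how such entities should be managed.

```python
from enum import Enum

class AgentType(Enum):
    HUMAN = "human"
    ARTIFICIAL = "artificial"

class Characteristics(Enum):
    ANTHROPIC = "anthropic"
    COMPUTRONIC = "computronic"

def matrix_quadrant(agent: AgentType, traits: Characteristics) -> str:
    """Classify an entity within the Posthuman Management Matrix's four quadrants."""
    quadrants = {
        (AgentType.HUMAN, Characteristics.ANTHROPIC): "ordinary human being",
        (AgentType.ARTIFICIAL, Characteristics.COMPUTRONIC): "conventional robot or AI",
        (AgentType.HUMAN, Characteristics.COMPUTRONIC): "cyborg / augmented human",
        (AgentType.ARTIFICIAL, Characteristics.ANTHROPIC): "bioroid",
    }
    return quadrants[(agent, traits)]

print(matrix_quadrant(AgentType.ARTIFICIAL, Characteristics.ANTHROPIC))  # -> bioroid
```

The ‘fifth quadrant’ resists this tabular form by design: it denotes hybrid systems spanning all four cells rather than a fifth kind of entity.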
References
Ablett, Ruth, Shelly Park, Ehud Sharlin, Jörg Denzinger, and Frank Maurer. “A Ro-
botic Colleague for Facilitating Collaborative Software Development.” Proceedings
of Computer Supported Cooperative Work (CSCW 2006). ACM, 2006.
Abrams, Jerold J. “Pragmatism, Artificial Intelligence, and Posthuman Bioethics:
Shusterman, Rorty, Foucault.” Human Studies 27, no. 3 (September 1, 2004): 241-
58. doi:10.1023/B:HUMA.0000042130.79208.c6.
Advances in Neuromorphic Memristor Science and Applications, edited by Robert
Kozma, Robinson E. Pino, and Giovanni E. Pazienza. Dordrecht: Springer Sci-
ence+Business Media, 2012.
Agent-Based Manufacturing: Advances in the Holonic Approach, edited by S.M. Deen.
Springer Berlin Heidelberg, 2003.
Aier, Stephan. “The Role of Organizational Culture for Grounding, Management,
Guidance and Effectiveness of Enterprise Architecture Principles.” Information
Systems and E-Business Management 12, no. 1 (2014): 43-70.
Alford, John R., and John R. Hibbing. “The origin of politics: An evolutionary theory
of political behavior.” Perspectives on Politics 2, no. 04 (2004): 707-23.
Anderson, Donald L. Organization Development: The Process of Leading Organizational Change, 3rd ed. SAGE Publications, 2015.
Anderson, Michael L. “Embodied cognition: A field guide.” Artificial intelligence 149,
no. 1 (2003): 91-130.
Anderson, Walter Truett. “Augmentation, symbiosis, transcendence: technology and
the future(s) of human identity.” Futures 35, no. 5 (2003): 535-46.
Andrianantoandro, Ernesto, Subhayu Basu, David K. Karig, and Ron Weiss. “Syn-
thetic biology: new engineering rules for an emerging discipline.” Molecular Sys-
tems Biology 2, no. 1 (2006).
“Anthropotech.” http://www.anthropotech.org.uk/. 2013. Accessed January 29, 2016.
Appleseed [アップルシード / Appurushīdo]. Directed by Shinji Aramaki. 2004. Hou-
ston: Sentai Selects, 2010. Blu-Ray.
Aristotle, Politics, Book 1, Section 1253a. In Aristotle in 23 Volumes, Vol. 21, translated
by H. Rackham. Cambridge, MA: Harvard University Press, 1944.
http://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext%3A1999.01.0058%3Abook%3D1%3Asection%3D1253a. Accessed
March 4, 2016.
Arkin, Ronald C., and J. David Hobbs. “Dimensions of communication and social or-
ganization in multi-agent robotic systems.” In From Animals to Animats 2: Pro-
ceedings of the Second International Conference on Simulation of Adaptive Behavior,
edited by Jean-Arcady Meyer, H. L. Roitblat and Stewart W. Wilson, pp. 486-93.
Cambridge, MA: The MIT Press, 1993.
Artificial General Intelligence, edited by Ben Goertzel and Cassio Pennachin. Springer
Berlin Heidelberg, 2007.
Artificial General Intelligence: 8th International Conference, AGI 2015: Berlin, Germany,
July 22-25, 2015: Proceedings, edited by Jordi Bieger, Ben Goertzel, and Alexey
Potapov. Springer International Publishing, 2015.
Austerberry, David. Digital Asset Management, second edition. Burlington, MA: Focal
Press, 2013.
Badmington, Neil. “Cultural Studies and the Posthumanities.” In New Cultural Stud-
ies: Adventures in Theory, edited by Gary Hall and Claire Birchall, pp. 260-72. Ed-
inburgh: Edinburgh University Press, 2006.
Bainbridge, William Sims. The Virtual Future. London: Springer, 2011.
Băjenescu, Titu-Marius, and Marius I. Bâzu. Reliability of Electronic Components: A
Practical Guide to Electronic Systems Manufacturing. Springer Berlin Heidelberg,
1999.
Barca, Jan Carlo, and Y. Ahmet Sekercioglu. “Swarm robotics reviewed.” Robotica 31,
no. 03 (2013): 345-59.
Bekey, G.A. Autonomous Robots: From Biological Inspiration to Implementation and
Control. Cambridge, MA: MIT Press, 2005.
Bendle, Mervyn F. “Teleportation, cyborgs and the posthuman ideology.” Social Semi-
otics 12, no. 1 (2002): 45-62.
Bera, Rajendra K. “Synthetic Biology and Intellectual Property Rights.” In Biotechnol-
ogy, edited by Deniz Ekinci. Rijeka: InTech, 2015.
Berner, Georg. Management in 20XX: What Will Be Important in the Future – A Holis-
tic View. Erlangen: Publicis Corporate Publishing, 2004.
Boly, Melanie, Anil K. Seth, Melanie Wilke, Paul Ingmundson, Bernard Baars, Steven
Laureys, David B. Edelman, and Naotsugu Tsuchiya. “Consciousness in humans
and non-human animals: recent advances and future directions.” Frontiers in Psy-
chology 4 (2013).
Bostrom, Nick. “Human Genetic Enhancements: A Transhumanist Perspective.” In
Arguing About Bioethics, edited by Stephen Holland, pp. 105-15. New York:
Routledge, 2012.
Bostrom, Nick. “Why I Want to Be a Posthuman When I Grow Up.” In Medical En-
hancement and Posthumanity, edited by Bert Gordijn and Ruth Chadwick, pp.
107-37. The International Library of Ethics, Law and Technology 2. Springer
Netherlands, 2008.
Bradford, David L., and W. Warner Burke. Reinventing Organization Development:
New Approaches to Change in Organizations. John Wiley & Sons, 2005.
Brambilla, Manuele, Eliseo Ferrante, Mauro Birattari, and Marco Dorigo. “Swarm ro-
botics: a review from the swarm engineering perspective.” Swarm Intelligence 7,
no. 1 (2013): 1-41.
Breazeal, Cynthia. “Toward sociable robots.” Robotics and Autonomous Systems 42
(2003): 167-75.
Burton, Richard M., Børge Obel, and Dorthe Døjbak Håkonsson. Organizational De-
sign: A Step-by-Step Approach. Cambridge University Press, 2015.
Caetano, Artur, António Rito Silva, and José Tribolet. “A Role-Based Enterprise Ar-
chitecture Framework.” In Proceedings of the 2009 ACM Symposium on Applied
Computing, pp. 253-58. ACM, 2009.
Callaghan, Vic. “Micro-Futures.” Presentation at Creative-Science 2014, Shanghai,
China, July 1, 2014.
Calverley, D.J. “Imagining a non-biological machine as a legal person.” AI & SOCIETY
22, no. 4 (2008): 523-37.
Camenzind, Samuel. “On Clone as Genetic Copy: Critique of a Metaphor.” NanoEth-
ics 9, no. 1 (2015): 23-37.
Canton, James. “Designing the future: NBIC technologies and human performance
enhancement.” Annals of the New York Academy of Sciences vol. 1013, (2004): 186-
98.
Castronova, Edward. Synthetic Worlds: The Business and Culture of Online Games.
Chicago: The University of Chicago Press, 2005.
Castronova, Edward. “Theory of the Avatar.” CESifo Working Paper No. 863, February 2003. http://www.cesifo.de/pls/guestci/download/CESifo+Working+Papers+2003/CESifo+Working+Papers+February+2003+/cesifo_wp863.pdf. Accessed
January 25, 2016.
Cesaroni, John L. “Designer Human Embryos as a Challenge for Patent Law and Reg-
ulation.” Quinnipiac Law Review 30, no. 4 (2012).
Chella, Antonio, Enrico Pagello, Emanuele Menegatti, Rosario Sorbello, Salvatore
Maria Anzalone, Francesco Cinquegrani, Luca Tonin, F. Piccione, K. Priftis, C.
Blanda, E. Buttita, and E. Tranchina. “A BCI teleoperated museum robotic guide.”
In International Conference on Complex, Intelligent and Software Intensive Systems,
2009 (CISIS'09), pp. 783-88. IEEE, 2009.
Cheng, Allen A., and Timothy K. Lu. “Synthetic biology: an emerging engineering
discipline.” Annual Review of Biomedical Engineering 14 (2012): 155-78.
Church, George M., Yuan Gao, and Sriram Kosuri. “Next-generation digital infor-
mation storage in DNA.” Science 337, no. 6102 (2012): 1628.
Clark, Andy. Natural-born cyborgs: Minds, Technologies, and the Future of Human In-
telligence. Oxford: Oxford University Press, 2004.
Chapter Three: The Posthuman Management Matrix • 223
Clark, Stephen R.L. The Political Animal: Biology, Ethics and Politics. London:
Routledge, 1999.
Coeckelbergh, Mark. “From Killer Machines to Doctrines and Swarms, or Why Ethics
of Military Robotics Is Not (Necessarily) About Robots.” Philosophy & Technology
24, no. 3 (2011): 269-78.
Communication in the Age of Virtual Reality, edited by Frank Biocca and Mark R.
Levy. Hillsdale, NJ: Lawrence Erlbaum Associates, Publishers, 1995.
Contreras-Vidal, Jose L., and Robert G. Grossman. “NeuroRex: A clinical neural inter-
face roadmap for EEG-based brain machine interfaces to a lower body robotic ex-
oskeleton.” In 2013 35th Annual International Conference of the IEEE Engineering
in Medicine and Biology Society (EMBC), pp. 1579-82. IEEE, 2013.
Converging Technologies for Improving Human Performance: Nanotechnology, Biotech-
nology, Information Technology and Cognitive Science, edited by William Sims
Bainbridge. Dordrecht: Springer Science+Business Media, 2003.
Coughlin, Thomas M. Digital Storage in Consumer Electronics: The Essential Guide.
Burlington, MA: Newnes, 2008.
Curtis, H. Biology, 4th edition. New York: Worth, 1983.
Cyberpsychology, edited by Ángel J. Gordo-López and Ian Parker. New York:
Routledge, 1999.
Cyberpsychology: Journal of Psychosocial Research on Cyberspace 1, no. 1 (2007) and
subsequent issues. http://www.cyberpsychology.eu/index.php. Accessed January 29, 2016.
Cybersociety 2.0: Revisiting Computer-Mediated Communication and Community, ed-
ited by Steven G. Jones. Thousand Oaks: Sage Publications, 1998.
Daft, Richard L. Management. Mason, OH: South-Western / Cengage Learning, 2011.
Dautenhahn, Kerstin. “Robots we like to live with?! - A Developmental Perspective
on a Personalized, Life-long Robot Companion.” In Proceedings of the 2004 IEEE
International Workshop on Robot and Human Interactive Communication, pp. 17-
22. IEEE, 2004.
De Melo-Martín, Inmaculada. “Genetically Modified Organisms (GMOs): Human Be-
ings.” In Encyclopedia of Global Bioethics, edited by Henk ten Have. Springer Sci-
ence+Business Media Dordrecht. Version of March 13, 2015. doi: 10.1007/978-3-
319-05544-2_210-1. Accessed January 21, 2016.
Doyle, Jon. “Big problems for artificial intelligence.” AI Magazine 9, no. 1 (1988): 19-
22.
Dudai, Yadin. “The Neurobiology of Consolidations, Or, How Stable Is the Engram?”
Annual Review of Psychology 55 (2004): 51-86.
Dumas II, Joseph D. Computer Architecture: Fundamentals and Principles of Computer
Design. Boca Raton: CRC Press, 2006.
Duncan, Sean C. “Mandatory Upgrades: The Evolving Mechanics and Theme of An-
droid: Netrunner.” Presentation at Well-Played Summit @ DiGRA 2014, Salt Lake
City, August 3-6, 2014.
Emmons, Robert A. “Is spirituality an intelligence? Motivation, cognition, and the
psychology of ultimate concern.” The International Journal for the Psychology of
Religion 10, no. 1 (2000): 3-26.
“Ethical Guidelines and Professional Standards for Organization Development and
Group Process Consultants.” IAGP - International Association for Group Psycho-
therapy and Group Processes. http://www.iagp.com/docs/IAGPOrgEthicalguideli-
nesEnglishv1.0.pdf. Accessed December 20, 2014.
Evans, Dave. “The Internet of Everything: How More Relevant and Valuable Connec-
tions Will Change the World.” Cisco Internet Solutions Business Group: Point of
View, 2012. https://www.cisco.com/web/about/ac79/docs/innov/IoE.pdf. Accessed
December 16, 2015.
Fabbro, Franco, Salvatore M. Aglioti, Massimo Bergamasco, Andrea Clarici, and Jaak
Panksepp. “Evolutionary aspects of self-and world consciousness in verte-
brates.” Frontiers in Human Neuroscience 9 (2015).
Ferber, Jacques, Olivier Gutknecht, and Fabien Michel. “From agents to organiza-
tions: an organizational view of multi-agent systems.” In Agent-Oriented Software
Engineering IV, pp. 214-30. Springer Berlin Heidelberg, 2004.
Ferrando, Francesca. “Posthumanism, Transhumanism, Antihumanism, Metahuman-
ism, and New Materialisms: Differences and Relations.” Existenz: An International
Journal in Philosophy, Religion, Politics, and the Arts 8, no. 2 (Fall 2013): 26-32.
Fleischmann, Kenneth R. “Sociotechnical Interaction and Cyborg–Cyborg Interac-
tion: Transforming the Scale and Convergence of HCI.” The Information Society
25, no. 4 (2009): 227-35. doi:10.1080/01972240903028359.
Ford, Martin. Rise of the Robots: Technology and the Threat of a Jobless Future. New
York: Basic Books, 2015.
Freitas Jr., Robert A. “Preface and Acknowledgements for the First Edition.” In Xen-
ology: An Introduction to the Scientific Study of Extraterrestrial Life, Intelligence,
and Civilization. Sacramento: Xenology Research Institute, 1979. http://www.xen-
ology.info/Xeno/PrefaceFirstEdition.htm, last updated October 22, 2009. Accessed
January 30, 2016.
Friedenberg, Jay. Artificial Psychology: The Quest for What It Means to Be Human.
Philadelphia: Psychology Press, 2008.
Friedman, Batya, and Helen Nissenbaum. “Bias in Computer Systems.” In Human
Values and the Design of Computer Technology, edited by Batya Friedman, pp. 21-
40. CSL Lecture Notes 72. Cambridge: Cambridge University Press, 1997.
Fukuyama, Francis. Our Posthuman Future: Consequences of the Biotechnology Revolu-
tion. New York: Farrar, Straus, and Giroux, 2002.
The Future of Automated Freight Transport: Concepts, Design and Implementation, ed-
ited by Hugo Priemus and Peter Nijkamp. Cheltenham: Edward Elgar Publishing,
2005.
The Future of Bioethics: International Dialogues, edited by Akira Akabayashi. Oxford:
Oxford University Press, 2014.
Galis, Alex, Spyros G. Denazis, Alessandro Bassi, Pierpaolo Giacomin, Andreas Berl,
Andreas Fischer, Hermann de Meer, J. Strassner, S. Davy, D. Macedo, G. Pujolle, J.
R. Loyola, J. Serrat, L. Lefevre, and A. Cheniour. “Management Architecture and
Systems for Future Internet Networks.” In Towards the Future Internet: A Euro-
pean Research Perspective, edited by Georgios Tselentis, John Domingue, Alex Ga-
lis, Anastasius Gavras, David Hausheer, Srdjan Krco, Volkmar Lotz, and Theo-
dore Zahariadis, pp. 112-22. IOS Press, 2009.
Garg, Anant Bhaskar. “Embodied Cognition, Human Computer Interaction, and Ap-
plication Areas.” In Computer Applications for Web, Human Computer Interaction,
Signal and Image Processing, and Pattern Recognition, pp. 369-74. Springer Berlin
Heidelberg, 2012.
Gasson, M.N. “Human ICT Implants: From Restorative Application to Human En-
hancement.” In Human ICT Implants: Technical, Legal and Ethical Considerations,
edited by Mark N. Gasson, Eleni Kosta, and Diana M. Bowman, pp. 11-28. Infor-
mation Technology and Law Series 23. T. M. C. Asser Press, 2012.
Gasson, M.N. “ICT implants.” In The Future of Identity in the Information Society, ed-
ited by S. Fischer-Hübner, P. Duquenoy, A. Zuccato, and L. Martucci, pp. 287-95.
Springer US, 2008.
Gasson, M.N., Kosta, E., and Bowman, D.M. “Human ICT Implants: From Invasive to
Pervasive.” In Human ICT Implants: Technical, Legal and Ethical Considerations,
edited by Mark N. Gasson, Eleni Kosta, and Diana M. Bowman, pp. 1-8. Infor-
mation Technology and Law Series 23. T. M. C. Asser Press, 2012.
Gene Therapy of the Central Nervous System: From Bench to Bedside, edited by Michael
G. Kaplitt and Matthew J. During. Amsterdam: Elsevier, 2006.
Geraci, Robert M. Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence,
and Virtual Reality. New York: Oxford University Press, 2010.
Geraci, Robert M. “Spiritual robots: Religion and our scientific view of the natural
world.” Theology and Science 4, no. 3 (2006). doi: 10.1080/14746700600952993.
Giudici, P. Applied Data Mining: Statistical Methods for Business and Industry. Wiley,
2003.
Gladden, Matthew E. “The Artificial Life-Form as Entrepreneur: Synthetic Organism-
Enterprises and the Reconceptualization of Business.” In Proceedings of the Four-
teenth International Conference on the Synthesis and Simulation of Living Systems,
edited by Hiroki Sayama, John Rieffel, Sebastian Risi, René Doursat and Hod Lip-
son, pp. 417-18. Cambridge, MA: The MIT Press, 2014.
Gladden, Matthew E. “Cybershells, Shapeshifting, and Neuroprosthetics: Video
Games as Tools for Posthuman ‘Body Schema (Re)Engineering’.” Keynote presen-
tation at the Ogólnopolska Konferencja Naukowa Dyskursy Gier Wideo, Facta
Ficta / AGH, Kraków, June 6, 2015.
Gladden, Matthew E. “The Diffuse Intelligent Other: An Ontology of Nonlocalizable
Robots as Moral and Legal Actors.” In Social Robots: Boundaries, Potential, Chal-
lenges, edited by Marco Nørskov, pp. 177-98. Farnham: Ashgate, 2016.
Gladden, Matthew E. “From Stand Alone Complexes to Memetic Warfare: Cultural
Cybernetics and the Engineering of Posthuman Popular Culture.” Presentation at
the 50 Shades of Popular Culture International Conference. Facta Ficta / Uniwer-
sytet Jagielloński, Kraków, February 19, 2016.
Gladden, Matthew E. The Handbook of Information Security for Advanced Neuropros-
thetics. Indianapolis: Synthypnion Academic, 2015.
Gladden, Matthew E. “Leveraging the Cross-Cultural Capacities of Artificial Agents
as Leaders of Human Virtual Teams.” In Proceedings of the 10th European Conference
on Management, Leadership and Governance, edited by Visnja Grozdanić, pp. 428-
35. Reading: Academic Conferences and Publishing International Limited, 2014.
Gladden, Matthew E. “Managerial Robotics: A Model of Sociality and Autonomy for
Robots Managing Human Beings and Machines.” International Journal of Contem-
porary Management 13, no. 3 (2014): 67-76.
Gladden, Matthew E. “Neural Implants as Gateways to Digital-Physical Ecosystems
and Posthuman Socioeconomic Interaction.” In Digital Ecosystems: Society in the
Digital Age, edited by Łukasz Jonak, Natalia Juchniewicz, and Renata Włoch, pp.
85-98. Warsaw: Digital Economy Lab, University of Warsaw, 2016.
Gladden, Matthew E. “The Social Robot as ‘Charismatic Leader’: A Phenomenology
of Human Submission to Nonhuman Power.” In Sociable Robots and the Future of
Social Relations: Proceedings of Robo-Philosophy 2014, edited by Johanna Seibt,
Raul Hakli, and Marco Nørskov, pp. 329-39. Frontiers in Artificial Intelligence and
Applications 273. IOS Press, 2014.
Gladden, Matthew E. “‘Upgrading’ the Human Entity: Cyberization as a Path to
Posthuman Utopia or Digital Annihilation?” Lecture in the Arkana Fantastyki lec-
ture cycle, Centrum Informacji Naukowej i Biblioteka Akademicka (CINiBA), Ka-
towice, May 27, 2015.
Gladden, Matthew E. “Utopias and Dystopias as Cybernetic Information Systems: En-
visioning the Posthuman Neuropolity.” Creatio Fantastica no. 3 (50) (2015).
Gockley, Rachel, Allison Bruce, Jodi Forlizzi, Marek Michalowski, Anne Mundell,
Stephanie Rosenthal, Brennan Sellner, Reid Simmons, Kevin Snipes, Alan C.
Schultz, and Jue Wang. “Designing Robots for Long-Term Social Interaction.” In
2005 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS
2005), pp. 2199-2204. 2005.
Goffette, Jérôme. Naissance de l'anthropotechnie: De la médecine au modelage de l’hu-
main. Paris: Vrin, 2006.
Goodall, Noah. “Ethical decision making during automated vehicle crashes.” Trans-
portation Research Record: Journal of the Transportation Research Board 2424
(2014): 58-65.
Goodman, Paul S., and Linda Argote. “New Technology and Organizational Effec-
tiveness.” Carnegie-Mellon University, April 27, 1984.
Graham, Elaine. Representations of the Post/Human: Monsters, Aliens and Others in
Popular Culture. Manchester: Manchester University Press, 2002.
Gross, Dominik. “Traditional vs. modern neuroenhancement: notes from a medico-
ethical and societal perspective.” In Implanted minds: the neuroethics of intracere-
bral stem cell transplantation and deep brain stimulation, edited by Heiner
Fangerau, Jörg M. Fegert, and Thorsten Trapp, pp. 291-312. Bielefeld: transcript
Verlag, 2011.
Guizzo, Erico, and Harry Goldstein. “The rise of the body bots [robotic exoskele-
tons].” IEEE Spectrum 42, no. 10 (2005): 50-56.
Gunkel, David J. The Machine Question: Critical Perspectives on AI, Robots, and Ethics.
Cambridge, MA: The MIT Press, 2012.
Han, J.-H., S.A. Kushner, A.P. Yiu, H.-W. Hsiang, T. Buch, A. Waisman, B. Bontempi,
R.L. Neve, P.W. Frankland, and S.A. Josselyn. “Selective Erasure of a Fear
Memory.” Science 323, no. 5920 (2009): 1492-96.
Handbook of Cloud Computing, edited by Borko Furht and Armando Escalante. New
York: Springer, 2010. DOI 10.1007/978-1-4419-6524-0.
Handbook of Psychology, Volume 6: Developmental Psychology, edited by Richard M.
Lerner, M. Ann Easterbrooks, and Jayanthi Mistry. Hoboken: John Wiley & Sons,
Inc., 2003.
Hanson, R. “If uploads come first: The crack of a future dawn.” Extropy 6, no. 2
(1994): 10-15.
Harrison, Albert A., and Alan C. Elms. “Psychology and the search for extraterres-
trial intelligence.” Behavioral Science 35, no. 3 (1990): 207-18.
Hatfield, B., A. Haufler, and J. Contreras-Vidal. “Brain Processes and Neurofeedback
for Performance Enhancement of Precision Motor Behavior.” In Foundations of
Augmented Cognition. Neuroergonomics and Operational Neuroscience, edited by
Dylan D. Schmorrow, Ivy V. Estabrooke, and Marc Grootjen, pp. 810-17. Lecture
Notes in Computer Science 5638. Springer Berlin Heidelberg, 2009.
Haykin, Simon. Neural Networks and Learning Machines, third edition. New York:
Pearson Prentice Hall, 2009.
Hazen, Robert. “What is life?” New Scientist 192, no. 2578 (2006): 46-51.
Heim, Michael. The Metaphysics of Virtual Reality. New York: Oxford University
Press, 1993.
Heinrichs, Jan-Hendrik. “The promises and perils of non-invasive brain stimula-
tion.” International Journal of Law and Psychiatry 35, no. 2 (2012): 121-29.
Herbrechter, Stefan. Posthumanism: A Critical Analysis. London: Bloomsbury, 2013.
[Kindle edition.]
Heylighen, Francis. “The Global Brain as a New Utopia.” In Renaissance der Utopie.
Zukunftsfiguren des 21. Jahrhunderts, edited by R. Maresch and F. Rötzer. Frank-
furt: Suhrkamp, 2002.
Hoogervorst, Jan. “Enterprise Architecture: Enabling Integration, Agility and
Change.” International Journal of Cooperative Information Systems 13, no. 03
(2004): 213-33.
Ihrig, M. “Simulating Entrepreneurial Opportunity Recognition Processes: An Agent-
Based and Knowledge-Driven Approach.” In Advances in Intelligent Modelling and
Simulation: Simulation Tools and Applications, edited by A. Byrski, Z. Oplatková,
M. Carvalho, and M. Kisiel-Dorohinicki, pp. 27-54. Berlin: Springer-Verlag, 2012.
Industrial Applications of Holonic and Multi-Agent Systems, edited by Vladimír Mařík,
Arnd Schirrmann, Damien Trentesaux, and Pavel Vrba. Lecture Notes in Artificial
Intelligence 9266. Springer International Publishing AG Switzerland, 2015.
Intelligent Production Machines and Systems, edited by Duc T. Pham, Eldaw E. Eldu-
khri, and Anthony J. Soroka. Amsterdam: Elsevier, 2006.
“The International Organization Development Code of Ethics.” Organizational Devel-
opment English Library, The OD Institute. http://www.theodinstitute.org/od-li-
brary/code_of_ethics.htm. Accessed December 17, 2014.
Josselyn, Sheena A. “Continuing the Search for the Engram: Examining the Mecha-
nism of Fear Memories.” Journal of Psychiatry & Neuroscience: JPN 35, no. 4
(2010): 221-28.
Kanda, Takayuki, and Hiroshi Ishiguro. Human-Robot Interaction in Social Robotics.
Boca Raton: CRC Press, 2013.
Kawano, Tomonori, François Bouteau, and Stefano Mancuso. “Finding and defining
the natural automata acting in living plants: Toward the synthetic biology for ro-
botics and informatics in vivo.” Communicative & Integrative Biology 5, no. 6
(2012): 519-26.
Kedzior, Richard. How Digital Worlds Become Material: An Ethnographic and
Netnographic Investigation in Second Life. Economics and Society: Publications of
the Hanken School of Economics Nr. 281. Helsinki: Hanken School of Economics,
2014.
Kelly, Kevin. “The Landscape of Possible Intelligences.” The Technium, September 10,
2008. http://kk.org/thetechnium/the-landscape-o/. Accessed January 25, 2016.
Kelly, Kevin. Out of Control: The New Biology of Machines, Social Systems and the Eco-
nomic World. Basic Books, 1994.
Kelly, Kevin. “A Taxonomy of Minds.” The Technium, February 15, 2007.
http://kk.org/thetechnium/a-taxonomy-of-m/. Accessed January 25, 2016.
Kerr, Paul K., John Rollins, and Catherine A. Theohary. “The Stuxnet Computer
Worm: Harbinger of an Emerging Warfare Capability.” Congressional Research
Service, 2010.
Khushf, George. “The use of emergent technologies for enhancing human perfor-
mance: Are we prepared to address the ethical and policy issues?” Public Policy
and Practice 4, no. 2 (2005): 1-17.
Kinicki, Angelo, and Brian Williams. Management: A Practical Introduction, 5th edi-
tion. New York: McGraw Hill, 2010.
Koene, Randal A. “Embracing Competitive Balance: The Case for Substrate-Inde-
pendent Minds and Whole Brain Emulation.” In Singularity Hypotheses, edited by
Amnon H. Eden, James H. Moor, Johnny H. Søraker, and Eric Steinhart, pp. 241-
67. The Frontiers Collection. Springer Berlin Heidelberg, 2012.
Koltko-Rivera, Mark E. “The potential societal impact of virtual reality.” Advances in
virtual environments technology: Musings on design, evaluation, and applications 9
(2005).
Koops, B.-J., and R. Leenes. “Cheating with Implants: Implications of the Hidden In-
formation Advantage of Bionic Ears and Eyes.” In Human ICT Implants: Technical,
Legal and Ethical Considerations, edited by Mark N. Gasson, Eleni Kosta, and Di-
ana M. Bowman, pp. 113-34. Information Technology and Law Series 23. T. M. C.
Asser Press, 2012.
Kowalewska, Agata. “Symbionts and Parasites – Digital Ecosystems.” In Digital Eco-
systems: Society in the Digital Age, edited by Łukasz Jonak, Natalia Juchniewicz,
and Renata Włoch, pp. 73-84. Warsaw: Digital Economy Lab, University of War-
saw, 2016.
Kraemer, Felicitas. “Me, Myself and My Brain Implant: Deep Brain Stimulation Raises
Questions of Personal Authenticity and Alienation.” Neuroethics 6, no. 3 (May 12,
2011): 483-97. doi:10.1007/s12152-011-9115-7.
Kurzweil, Ray. The Age of Spiritual Machines: When Computers Exceed Human Intelli-
gence. New York: Penguin Books, 2000.
Lamm, Ehud, and Ron Unger. Biological Computation. Boca Raton: CRC Press, 2011.
Lilley, Stephen. Transhumanism and Society: The Social Debate over Human Enhance-
ment. Springer Science & Business Media, 2013.
Litzsinger, Lukas. Android: Netrunner [card game]. Roseville, MN: Fantasy Flight
Games, 2012.
Lohn, Andrew J., Patrick R. Mickel, James B. Aimone, Erik P. Debenedictis, and Mat-
thew J. Marinella. “Memristors as Synapses in Artificial Neural Networks: Bio-
mimicry Beyond Weight Change.” In Cybersecurity Systems for Human Cognition
Augmentation, edited by Robinson E. Pino, Alexander Kott, and Michael
Shevenell, pp. 135-50. Springer International Publishing, 2014.
Longuet-Higgins, H.C. “Holographic Model of Temporal Recall.” Nature 217, no. 5123
(1968): 104. doi:10.1038/217104a0.
Lyon, David. “Beyond Cyberspace: Digital Dreams and Social Bodies.” In Education
and Society, third edition, edited by Joseph Zajda, pp. 221-38. Albert Park: James
Nicholas Publishers, 2001.
Magoulas, Thanos, Aida Hadzic, Ted Saarikko, and Kalevi Pessi. “Alignment in En-
terprise Architecture: A Comparative Analysis of Four Architectural Ap-
proaches.” Electronic Journal Information Systems Evaluation 15, no. 1 (2012).
Maguire, Gerald Q., and Ellen M. McGee. “Implantable brain chips? Time for de-
bate.” Hastings Center Report 29, no. 1 (1999): 7-13.
Man Is by Nature a Political Animal: Evolution, Biology, and Politics, edited by Peter K.
Hatemi and Rose McDermott. Chicago: The University of Chicago Press, 2011.
Martinez, Dolores. “Bodies of future memories: the Japanese body in science fiction
anime.” Contemporary Japan 27, no. 1 (2015): 71-88.
McBurney, Peter, and Simon Parsons. “Engineering democracy in open agent sys-
tems.” In Engineering Societies in the Agents World IV (ESAW 2003), Lecture Notes
in Computer Science 3071, pp. 66-80. Springer Berlin Heidelberg, 2004.
McGee, E.M. “Bioelectronics and Implanted Devices.” In Medical Enhancement and
Posthumanity, edited by Bert Gordijn and Ruth Chadwick, pp. 207-24. The Inter-
national Library of Ethics, Law and Technology 2. Springer Netherlands, 2008.
McIndoe, Robert S. “Health Kiosk Technologies.” In Ethical Issues and Security Moni-
toring Trends in Global Healthcare: Technological Advancements: Technological Ad-
vancements, edited by Steven A. Brown and Mary Brown, pp. 66-71. Hershey:
Medical Information Science Reference, 2010.
McIntosh, Daniel. “The Transhuman Security Dilemma.” Journal of Evolution and
Technology 21, no. 2 (2010): 32-48.
Medical Enhancement and Posthumanity, edited by Bert Gordijn and Ruth Chadwick.
The International Library of Ethics, Law and Technology 2. Springer Netherlands,
2008.
Mehlman, Maxwell J. Transhumanist Dreams and Dystopian Nightmares: The Promise
and Peril of Genetic Engineering. Baltimore: The Johns Hopkins University Press,
2012.
Merdan, Munir, Thomas Moser, W. Sunindyo, Stefan Biffl, and Pavel Vrba. “Work-
flow scheduling using multi-agent systems in a dynamically changing environ-
ment.” Journal of Simulation 7, no. 3 (2013): 144-58.
Merkel, R., G. Boer, J. Fegert, T. Galert, D. Hartmann, B. Nuttin, and S. Rosahl. “Cen-
tral Neural Prostheses.” In Intervening in the Brain: Changing Psyche and Society,
pp. 117-60. Ethics of Science and Technology Assessment 29. Springer Berlin Heidel-
berg, 2007.
Michaud, M.A.G. Contact with Alien Civilizations: Our Hopes and Fears about Encoun-
tering Extraterrestrials. New York: Springer, 2007.
Moravec, Hans. Mind Children: The Future of Robot and Human Intelligence. Cam-
bridge: Harvard University Press, 1990.
Mueller, Scott. Upgrading and Repairing PCs, 20th Edition. Indianapolis: Que, 2012.
Murphy, Robin. Introduction to AI Robotics. Cambridge, MA: The MIT Press, 2000.
Nadler, David, and Michael Tushman. Competing by Design: The Power of Organiza-
tional Architecture. Oxford University Press, 1997. [Kindle edition.]
Nahin, Paul J. “Religious Robots.” In Holy Sci-Fi!, pp. 69-94. Springer New York, 2014.
Naufel, Stephanie. “Nanotechnology, the Brain, and Personal Identity.” In Nanotech-
nology, the Brain, and the Future, edited by Sean A. Hays, Jason Scott Robert,
Clark A. Miller, and Ira Bennett, pp. 167-78. Dordrecht: Springer Science+Busi-
ness Media, 2013.
NIST Special Publication 800-53, Revision 4: Security and Privacy Controls for Federal
Information Systems and Organizations. Joint Task Force Transformation Initia-
tive. Gaithersburg, Maryland: National Institute of Standards & Technology, 2013.
Norman, Kent L. Cyberpsychology: An Introduction to Human-Computer Interaction.
New York: Cambridge University Press, 2008.
Nouvel, Pascal. “A Scale and a Paradigmatic Framework for Human Enhancement.”
In Inquiring into Human Enhancement, edited by Simone Bateman, Jean Gayon,
Sylvie Allouche, Jérôme Goffette, and Michela Marzano, pp. 103-18. Palgrave
Macmillan UK, 2015.
Novaković, Branko, Dubravko Majetić, Josip Kasać, and Danko Brezak. “Artificial In-
telligence and Biorobotics: Is an Artificial Human Being our Destiny?” In Annals
of DAAAM for 2009 & Proceedings of the 20th International DAAAM Symposium
“Intelligent Manufacturing & Automation: Focus on Theory, Practice and Educa-
tion,” edited by Branko Katalinic, pp. 121-22. Vienna: DAAAM International,
2009.
Null, Linda, and Julia Lobur. The Essentials of Computer Organization and Architec-
ture, second edition. Sudbury, MA: Jones and Bartlett Publishers, 2006.
Ochsner, Beate, Markus Spöhrer, and Robert Stock. “Human, non-human, and be-
yond: cochlear implants in socio-technological environments.” NanoEthics 9, no. 3
(2015): 237-50.
Olson, Eric T. “Personal Identity.” The Stanford Encyclopedia of Philosophy (Fall 2015
Edition), edited by Edward N. Zalta. http://plato.stanford.edu/ar-
chives/fall2015/entries/identity-personal/. Accessed January 17, 2016.
“Organization and Human Systems Development Credo.” OD Network.
http://www.odnetwork.org/?page=ODCredo. Accessed December 17, 2014.
Panno, Joseph. Gene Therapy: Treating Disease by Repairing Genes. New York: Facts
on File, 2005.
Payr, S., and R. Trappl. “Agents across Cultures.” In Intelligent Virtual Agents, pp.
320-24. Lecture Notes in Computer Science no. 2792. Springer Berlin Heidelberg,
2003.
Pearce, David. “The Biointelligence Explosion.” In Singularity Hypotheses, edited by
A.H. Eden, J.H. Moor, J.H. Søraker, and E. Steinhart, pp. 199-238. The Frontiers
Collection. Berlin/Heidelberg: Springer, 2012.
Pereira, Alexandre L.D. “Intellectual Property and Medical Biotechnologies.” In Legal
and Forensic Medicine, edited by Roy G. Beran, pp. 1735-54. Springer Berlin Hei-
delberg, 2013.
Pérez, Jimena Escudero. “Sympathy for the Clone: (Post) Human Identities Enhanced
by the ‘Evil Science’ Construct and its Commodifying Practices in Contemporary
Clone Fiction.” Between 4, no. 8 (2014).
Philips, M. “How the Robots Lost: High-Frequency Trading’s Rise and Fall.”
BloombergView, June 6, 2013. http://www.bloomberg.com/bw/articles/2013-06-06/how-
the-robots-lost-high-frequency-tradings-rise-and-fall. Accessed April 29, 2015.
Pinedo, Michael L. Scheduling: Theory, Algorithms, and Systems, fourth edition. New
York: Springer Science+Business Media, 2012.
Pino, Robinson E., and Alexander Kott. “Neuromorphic Computing for Cognitive
Augmentation in Cyber Defense.” In Cybersecurity Systems for Human Cognition
Augmentation, edited by Robinson E. Pino, Alexander Kott, and Michael
Shevenell, pp. 19-46. Springer International Publishing, 2014.
Plowright, Stephen. “Neurocomputing: some possible implications for human-ma-
chine interfaces.” ASHB News 8, no. 2 (1996): 4-6.
Ponsteen, Albert, and Rob J. Kusters. “Classification of Human and Automated Re-
source Allocation Approaches in Multi-Project Management.” Procedia – Social
and Behavioral Sciences 194 (2015): 165-73.
Porterfield, Andrew. “Be Aware of Your Inner Zombie.” Engineering & Science
(Fall 2010): 14-19.
Pribram, K.H. “Prolegomenon for a Holonomic Brain Theory.” In Synergetics of Cog-
nition, edited by Hermann Haken and Michael Stadler, pp. 150-84. Springer Series
in Synergetics 45. Springer Berlin Heidelberg, 1990.
Pribram, K.H., and S.D. Meade. “Conscious Awareness: Processing in the Synap-
todendritic Web – The Correlation of Neuron Density with Brain Size.” New Ideas
in Psychology 17, no. 3 (December 1, 1999): 205-14. doi:10.1016/S0732-
118X(99)00024-0.
Primate Politics, edited by Glendon Schubert and Roger D. Masters. Carbondale:
Southern Illinois University Press, 1991.
“Principles of OD Practice.” OD Network. http://www.odnetwork.org/?page=Princi-
plesOfODPracti. Accessed December 17, 2014.
Proudfoot, Diane. “Software Immortals: Science or Faith?” In Singularity Hypotheses,
edited by Amnon H. Eden, James H. Moor, Johnny H. Søraker, and Eric Steinhart,
pp. 367-92. The Frontiers Collection. Springer Berlin Heidelberg, 2012.
Provost, Foster, and Tom Fawcett. Data Science for Business. Sebastopol, CA: O’Reilly
Media, Inc., 2013.
Pulver, David L. GURPS Robots. Austin, TX: Steve Jackson Games, 1995.
Pulver, David L. Transhuman Space, second edition. Austin, TX: Steve Jackson
Games, 2002.
Ramirez, S., X. Liu, P.-A. Lin, J. Suh, M. Pignatelli, R.L. Redondo, T.J. Ryan, and S.
Tonegawa. “Creating a False Memory in the Hippocampus.” Science 341, no. 6144
(2013): 387-91.
Rankin, T. Murray. “Business Secrets Across International Borders: One Aspect of
the Transborder Data Flow Debate.” Canadian Business Law Journal 10 (1985):
213.
Rao, Umesh Hodeghatta, and Umesha Nayak. The InfoSec Handbook. New York:
Apress, 2014.
Regalado, Antonio. “Engineering the perfect baby.” MIT Technology Review 118, no. 3
(2015): 27-33.
Rijntjes, Tom. “On the Viability of Automated Entrepreneurship.” Presentation of
Media Technology MSc program graduation project, Universiteit Leiden, Leiden,
January 22, 2016.
Riva, Giuseppe, and Carlo Galimberti, eds. Towards CyberPsychology: Mind, Cognition,
and Society in the Internet Age.
Amsterdam: IOS Press, 2001.
Roden, David. Posthuman Life: Philosophy at the Edge of the Human. Abingdon:
Routledge, 2014.
Rohloff, Michael. “Framework and Reference for Architecture Design.” In AMCIS
2008 Proceedings, 2008. http://citeseerx.ist.psu.edu/viewdoc/down-
load?doi=10.1.1.231.8261&rep=rep1&type=pdf.
Rowlands, Mark. Can Animals Be Moral? Oxford: Oxford University Press, 2012.
Rubin, Charles T. “What Is the Good of Transhumanism?” In Medical Enhancement
and Posthumanity, edited by Bert Gordijn and Ruth Chadwick, pp. 137-56. The In-
ternational Library of Ethics, Law and Technology 2. Springer Netherlands, 2008.
Salvini, Pericle, Cecilia Laschi, and Paolo Dario. “From robotic tele-operation to tele-
presence through natural interfaces.” In The First IEEE/RAS-EMBS International
Conference on Biomedical Robotics and Biomechatronics, 2006. BioRob 2006, pp. 408-
13. IEEE, 2006.
Samani, Hooman Aghaebrahimi, and Adrian David Cheok. “From human-robot rela-
tionship to robot-based leadership.” In 2011 4th International Conference on Hu-
man System Interactions (HSI), pp. 178-81. IEEE, 2011.
Samani, Hooman Aghaebrahimi, Jeffrey Tzu Kwan Valino Koh, Elham Saadatian, and
Doros Polydorou. “Towards Robotics Leadership: An Analysis of Leadership
Characteristics and the Roles Robots Will Inherit in Future Human Society.” In
Intelligent Information and Database Systems, edited by Jeng-Shyang Pan, Shyi-
Ming Chen, and Ngoc Thanh Nguyen, pp. 158-65. Lecture Notes in Computer Sci-
ence 7197. Springer Berlin Heidelberg, 2012.
Samuelson, William F., and Stephen G. Marks. Managerial Economics, seventh edi-
tion. Hoboken: John Wiley & Sons, 2012.
Sandberg, Anders. “Ethics of brain emulations.” Journal of Experimental & Theoretical
Artificial Intelligence 26, no. 3 (2014): 439-57.
Sandor, Christian, Martin Fuchs, Alvaro Cassinelli, Hao Li, Richard Newcombe,
Goshiro Yamamoto, and Steven Feiner. “Breaking the Barriers to True Aug-
mented Reality.” arXiv preprint, arXiv:1512.05471 [cs.HC], December 17, 2015.
http://arxiv.org/abs/1512.05471. Accessed January 25, 2016.
Sasse, Martina Angela, Sacha Brostoff, and Dirk Weirich. “Transforming the ‘weak-
est link’—a human/computer interaction approach to usable and effective secu-
rity.” BT technology journal 19, no. 3 (2001): 122-31.
Scopino, G. “Do Automated Trading Systems Dream of Manipulating the Price of Fu-
tures Contracts? Policing Markets for Improper Trading Practices by Algorithmic
Robots.” Florida Law Review 67 (2015): 221-93.
Sharf, S. “Can Robo-Advisors Survive A Bear Market?” Forbes, January 28, 2015.
http://www.forbes.com/sites/samanthasharf/2015/01/28/can-robo-advisors-sur-
vive-a-bear-market/. Accessed April 29, 2015.
Sharkey, Noel. “Killing Made Easy: From Joysticks to Politics.” In Robot Ethics: The
Ethical and Social Implications of Robotics, edited by Patrick Lin, Keith Abney, and
George A. Bekey, pp. 111-28. Cambridge, MA: The MIT Press, 2012.
Shaw-Garlock, Glenda. “Loving machines: Theorizing human and sociable-technol-
ogy interaction.” In Human-Robot Personal Relationships, edited by Maarten H.
Lamers and Fons J. Verbeek, pp. 1-10. Springer Berlin Heidelberg, 2011.
Siewert, Charles. “Consciousness and Intentionality.” The Stanford Encyclopedia of
Philosophy (Fall 2011 Edition), edited by Edward N. Zalta. http://plato.stan-
ford.edu/archives/fall2011/entries/consciousness-intentionality/.
Siewiorek, Daniel, and Robert Swarz. Reliable Computer Systems: Design and Evalua-
tion, second edition. Burlington: Digital Press, 1992.
Singh, Kshitij Kumar. Biotechnology and Intellectual Property Rights: Legal and Social
Implications. Springer, 2014.
Sloman, Aaron. “Some Requirements for Human-like Robots: Why the recent over-
emphasis on embodiment has held up progress.” In Creating brain-like intelli-
gence, pp. 248-77. Springer Berlin Heidelberg, 2009.
Snider, Greg S. “Cortical Computing with Memristive Nanodevices.” SciDAC Review
10 (2008): 58-65.
Social Robots and the Future of Social Relations, edited by Johanna Seibt, Raul Hakli,
and Marco Nørskov. Amsterdam: IOS Press, 2014.
Social Robots: Boundaries, Potential, Challenges, edited by Marco Nørskov. Farnham:
Ashgate Publishing, 2016.
Social Robots from a Human Perspective, edited by Jane Vincent, Sakari Taipale, Bar-
tolomeo Sapio, Giuseppe Lugano, and Leopoldina Fortunati. Springer Interna-
tional Publishing, 2015.
Sorbello, R., A. Chella, and R. C. Arkin. “Metaphor of Politics: A Mechanism of Coali-
tion Formation.” vectors 1, no. 40 (2004): 20.
Soussou, Walid V., and Theodore W. Berger. “Cognitive and Emotional Neuropros-
theses.” In Brain-Computer Interfaces, pp. 109-23. Springer Netherlands, 2008.
Spohrer, Jim. “NBICS (Nano-Bio-Info-Cogno-Socio) Convergence to Improve Human
Performance: Opportunities and Challenges.” In Converging Technologies for Im-
proving Human Performance: Nanotechnology, Biotechnology, Information Technol-
ogy and Cognitive Science, edited by M.C. Roco and W.S. Bainbridge, pp. 101-17.
Arlington, Virginia: National Science Foundation, 2002.
Chapter Three: The Posthuman Management Matrix • 235
Stahl, B. C. “Responsible Computers? A Case for Ascribing Quasi-Responsibility to
Computers Independent of Personhood or Agency.” Ethics and Information Tech-
nology 8, no. 4 (2006): 205-13.
Stiles, Noelle R.B., and Shinsuke Shimojo. “Sensory substitution: A new perceptual
experience.” In The Oxford Handbook of Perceptual Organization, edited by Johan
Wagemans. Oxford: Oxford University Press, 2016.
Surbrook, Michael. Kazei-5. Hero Games, 1998.
Theoretical Foundations of Artificial General Intelligence, edited by Pei Wang and Ben
Goertzel. Paris: Atlantis Press, 2012.
Thompson, David A. “The Man-Robot Interface in Automated Assembly.” In Moni-
toring Behavior and Supervisory Control, edited by Thomas B. Sheridan and Gun-
nar Johannsen, pp. 385-91. NATO Conference Series 1. New York: Plenum Press,
1976.
Thornton, Stephanie. Understanding Human Development: Biological, Social and Psy-
chological Processes from Conception to Adult Life. New York: Palgrave Macmillan,
2008.
Tomas, David. “Feedback and Cybernetics: Reimaging the Body in the Age of the Cy-
borg.” In Cyberspace, Cyberbodies, Cyberpunk: Cultures of Technological Embodi-
ment, edited by Mike Featherstone and Roger Burrows, pp. 21-43. London: SAGE
Publications, 1995.
The Turing Test: The Elusive Standard of Artificial Intelligence, edited by James H.
Moor. Studies in Cognitive Systems 30. Springer Netherlands, 2003.
Turner, John N. “The right to privacy in a computronic age.” In Position Papers of the
Conference on Computers, Privacy and Freedom of Information. Kingston: Queen’s
University, 1970.
Tweedale, Jeffrey W., and Lakhmi C. Jain. “Agent Oriented Programming.” In Embed-
ded Automation in Human-Agent Environment, pp. 105-24. Adaptation, Learning,
and Optimization 10. Springer Berlin Heidelberg, 2011.
Ullrich, Günter. Automated Guided Vehicle Systems: A Primer with Practical Applica-
tions, translated by Paul A. Kachur. Springer Berlin Heidelberg, 2015.
Ummat, Ajay, Atul Dubey, and Constantinos Mavroidis. “Bionanorobotics: A Field
Inspired by Nature.” In Biomimetics: Biologically Inspired Technologies, edited by
Yoseph Bar-Cohen, pp. 201-26. Boca Raton: CRC Press, 2005.
Van den Berg, Bibi. “Pieces of Me: On Identity and Information and Communications
Technology Implants.” In Human ICT Implants: Technical, Legal and Ethical Con-
siderations, edited by Mark N. Gasson, Eleni Kosta, and Diana M. Bowman, pp.
159-73. Information Technology and Law Series 23. T. M. C. Asser Press, 2012.
Van Oost, Ellen, and Darren Reed. “Towards a Sociological Understanding of Robots
as Companions.” In Human-Robot Personal Relationships, edited by Maarten H.
Lamers and Fons J. Verbeek, pp. 11-18. Springer Berlin Heidelberg, 2011.
Vänni, Kimmo J., and Annina K. Korpela. “Role of Social Robotics in Supporting Em-
ployees and Advancing Productivity.” In Social Robotics, pp. 674-83. Springer In-
ternational Publishing, 2015.
Versace, Massimiliano, and Ben Chandler. “The Brain of a New Machine.” IEEE Spectrum 47, no. 12 (2010): 30-37.
Viirre, Erik, Françoise Baylis, and Jocelyn Downie. “Promises and perils of cognitive
performance tools: A dialogue.” Technology 11, no. 1 (2008): 9-25.
Vinciarelli, A., M. Pantic, D. Heylen, C. Pelachaud, I. Poggi, F. D’Errico, and M.
Schröder. “Bridging the Gap between Social Animal and Unsocial Machine: A
survey of Social Signal Processing.” IEEE Transactions on Affective Computing 3:1
(January-March 2012): 69-87.
Vogiatzis, D., Dimitrios Galanis, V. Karkaletsis, Ion Androutsopoulos, and C. D. Spy-
ropoulos. “A conversant robotic guide to art collections.” In Proceedings of the 2nd
Workshop on Language Technology for Cultural Heritage Data, Language Resources
and Evaluation Conference, Marrakech, Morocco, pp. 55-60. 2008.
Wallach, Wendell, and Colin Allen. Moral Machines: Teaching Robots Right from Wrong. Oxford University Press, 2008.
Warkentin, Merrill, Vijayan Sugumaran, and Robert Sainsbury. “The Role of Intelli-
gent Agents and Data Mining in Electronic Partnership Management.” Expert Sys-
tems with Applications 39, no. 18 (2012): 13277-88.
Warwick, K. “The Cyborg Revolution.” Nanoethics 8 (2014): 263-73.
Warwick, K., and M. Gasson. “Implantable Computing.” In Digital Human Modeling,
edited by Y. Cai, pp. 1-16. Lecture Notes in Computer Science 4650. Berlin/Heidel-
berg: Springer, 2008.
Wearable Robots: Biomechatronic Exoskeletons, edited by Jose L. Pons. Chichester:
John Wiley & Sons, 2008.
Whitby, Blay. “Do You Want a Robot Lover? The Ethics of Caring Technologies.”
In Robot Ethics: The Ethical and Social Implications of Robotics, edited by Patrick
Lin, Keith Abney, and George A. Bekey, pp. 233-48. Cambridge, MA: The MIT
Press, 2012.
Wiener, Norbert. Cybernetics: Or Control and Communication in the Animal and the
Machine, second edition. Cambridge, MA: The MIT Press, 1961. [Quid Pro ebook
edition for Kindle, 2015.]
Wilson, Margaret. “Six views of embodied cognition.” Psychonomic Bulletin & Review 9, no. 4 (2002): 625-36.
Xu, J., B. Archimede, and A. Letouzey. “A Distributed Multi-Agent Framework for
Shared Resources Scheduling.” Information Control Problems in Manufacturing 14,
no. 1 (2012): 775-80.
Yagi, Akihiro. “Engineering psychophysiology in Japan.” In Engineering psychophysi-
ology: Issues and applications, edited by Richard W. Backs and Wolfram Boucsein,
pp. 361-68. Mahwah, NJ: Lawrence Erlbaum Associates, Publishers, 2000.
Yampolskiy, Roman V. Artificial Superintelligence: A Futuristic Approach. Boca Raton:
CRC Press, 2015.
Yampolskiy, Roman V. “The Universe of Minds.” arXiv preprint, arXiv:1410.0369
[cs.AI], October 1, 2014. http://arxiv.org/abs/1410.0369. Accessed January 25, 2016.
Yampolskiy, Roman V., and Joshua Fox. “Artificial General Intelligence and the Hu-
man Mental Model.” In Singularity Hypotheses, edited by Amnon H. Eden, James
H. Moor, Johnny H. Søraker, and Eric Steinhart, pp. 129-45. The Frontiers Collec-
tion. Springer Berlin Heidelberg, 2012.
Yonck, Richard. “Toward a standard metric of machine intelligence.” World Future
Review 4, no. 2 (2012): 61-70.
Chapter Four
Organization Development and the
Robotic-Cybernetic-Human Workforce:
Humanistic Values for a Posthuman
Future?
Introduction
Organization Development (OD) is a management discipline whose theory
and practice are firmly rooted in humanistic values insofar as it seeks to create
effective organizations by facilitating the empowerment and growth of their
human members. However, a new posthuman age is dawning in which human
beings will no longer be the only intelligent actors guiding the behavior of
organizations; increasingly, social robots, AI programs, and cybernetically
augmented human employees are taking on roles as collaborators and deci-
sion-makers in the workplace, and this transformation is only likely to accel-
erate.
How should OD professionals react to the rise of these posthumanizing
technologies? In this text we explore OD’s humanistic foundations and the
social and organizational implications of posthuman technologies for the
workplace. Several ways are suggested in which OD could act as a ‘Humanistic
OD for a posthuman world,’ providing an essential service to future organiza-
tions without abandoning its traditional humanist values. An alternative vision is then presented for a ‘Posthumanistic OD’ that reinterprets and expands its
humanist vision to embrace the benefits that social robots, AI, and cyberiza-
tion can potentially bring into the workplace. Finally, we discuss the extent to
which OD can remain a single, unified discipline in light of the challenge to
its traditional humanistic values presented by such emerging technologies.
OD’s Humanistic Values
Organization Development is a discipline that seeks to facilitate positive,
long-term, systemic organizational change by carrying out interventions that involve practices such as “organization design, strategic planning, quality interventions, team building, survey feedback, individual instruments, and coaching and mentoring.”[1]
Perhaps more so than any other field within the
world of business management, OD is a discipline that was explicitly founded
on humanistic principles and which does not hesitate to remind the public of
that fact. Such values feature prominently in the definition of OD developed
by Bradford and Burke, which states that:[2]
Based on (1) a set of values, largely humanistic; (2) application of the behavioral sciences; and (3) open system theory, organization development is a system-wide process of planned change aimed toward improving overall organization effectiveness by way of enhanced congruence of such key organizational dimensions as external environment, mission, strategy, leadership, culture, structure, information and reward systems, and work policies and procedures.
While humanism can take many different forms (such as religious humanism
and secular humanism), all of its varieties share a common foundation insofar
as they “emphasize human welfare and dignity” and are “either optimistic
about the powers of human reason, or at least insistent that we have no alternative but to use it as best we can.”[3]
Over the last few decades, OD’s human-
istic concern for the welfare, development, and fundamental liberties of hu-
man beings has been enshrined in key documents created through a process
of consultation and consensus among OD theorists and practitioners who
have worked to create a shared ethical basis for their profession. For example,
The International Organization Development Code of Ethics – originally devel-
oped by Dr. William Gellermann and the OD Institute in the 1980s and even-
tually adopted by many OD organizations around the world – recognizes ten
values whose fundamental importance it argues should be acknowledged by
OD professionals. The first of these values is “quality of life – people being
satisfied with their whole life experience,” and the second is “health, human
potential, empowerment, growth and excellence – people being healthy,
aware of the fullness of their potential, recognizing their power to bring that
potential into being...” Not until the eighth value does one find any reference
to OD’s aim of enhancing an organization’s “effectiveness, efficiency and
alignment…”[4]
Similar statements emphasizing OD’s fundamentally humanistic values
can be found in the Organization and Human Systems Development Credo,[5] the Ethical Guidelines and Professional Standards for Organization Development and Group Process Consultants,[6] and the OD Network’s Principles of OD Practice.[7]

[1] Anderson, Organization Development: The Process of Leading Organizational Change (2015).
[2] Bradford & Burke, Reinventing Organization Development (2005).
[3] “Humanism,” The Oxford Dictionary of Philosophy.
[4] “The International Organization Development Code of Ethics,” The OD Institute.
[5] “Organization and Human Systems Development Credo,” OD Network.
Of course, it is possible for a profession to adopt codes of ethics whose
principles do not actually reflect the daily behavior of its practitioners. How-
ever, in the case of Organization Development it has been argued that most
individuals who identify themselves as OD professionals do, in fact, feel a call-
ing to “Create opportunities for all organizational members to learn and de-
velop personally toward full realization of individual potential,” and, more
generally, that they share a “set of beliefs about the congruence of human and
organizational behavior that is humanistic in nature.”[8]
OD’s humanistic foundations become especially apparent if one contrasts
the discipline with a field like Change Management. Professionals in OD and
Change Management are often called upon to address similar organizational
issues, but they do so from very different perspectives. Change Management
typically focuses on “engineering and directing” changes in behavior that are
desired by an organization’s elite decision-makers, with a focus on generating
economic outcomes – while OD focuses on participatory processes that en-
gage all of an organization’s members through facilitation and coaching, to
aid them in advancing the pursuit of humanistic values.[9]
Indeed, one might
even be tempted to argue that the very name of ‘Organization Development’
is misleading, insofar as the discipline does not focus directly on enhancing
an organization’s productivity or efficiency. While OD’s ultimate goal is to
create a more effective organization, it accomplishes this primarily by facili-
tating the personal and professional development, self-realization, creativity,
just and ethical decision-making, and healthy relationships of the individual
human beings who constitute the organization, based on the belief that “in
essence if you create a humanistic organization it will inherently be a high-
performing organization.”[10]
The Posthuman Future Confronting OD
We are entering a new age in which the ‘human being’ contemplated by
traditional humanism will no longer be the only intelligent actor whose deci-
sions guide the development of organizations. In addition to the ‘bioagency’
demonstrated by traditional human beings, organizations are also shaped by the ‘cyberagency’ of social robots, artificially intelligent software, and cybernetically enhanced human beings, as well as the ‘collective agency’ demonstrated by intelligent networks.[11]

[6] “Ethical Guidelines and Professional Standards for Organization Development and Group Process Consultants,” IAGP.
[7] “Principles of OD Practice,” OD Network.
[8] Bradford & Burke (2005).
[9] Bradford & Burke (2005).
[10] Bradford & Burke (2005).
Robots and AI
Having already moved far beyond their original roles in industrial manu-
facturing, robots will continue to expand into new reaches of society, filling a
burgeoning number of roles in service-sector industries such as information
technology, finance, retail sales, health care, education, entertainment and the
arts, and government.[12] Unlike the emotionless mechanical automata that many still stereotypically associate with the word ‘robot,’ future generations of social robots will likely possess a full range of human-like emotional, social, and cultural capacities.[13] They will be capable of juggling the responsibilities and expectations that come with belonging simultaneously to multiple social groups – such as a family, professional association, and religious community[14] – and they will experience the same mixed emotions that often accompany human beings’ efforts to make difficult decisions.[15]
Rather than being preprogrammed to mechanically follow particular ethical rules, much like human beings they will be taught a set of virtues[16] that are ambiguous and sometimes difficult to reconcile, and they will then be given the freedom and legal and moral responsibility[17] to make sound decisions shaped by their personal values, their emotional makeup,[18] and their unique personal experiences.[19]
Such robots may possess a knowledge of human psychology that allows them to serve as benevolent mentors, encouraging and teaching human beings to behave in good and virtuous ways – or to serve as master manipulators, coaxing or coercing human beings to behave in ways that advance a robot’s personal aims.[20]

[11] Fleischmann, “Sociotechnical Interaction and Cyborg–Cyborg Interaction: Transforming the Scale and Convergence of HCI” (2009).
[12] Solis & Takanishi, “Recent Trends in Humanoid Robotics Research: Scientific Background, Applications, and Implications” (2010); Schaal, “The New Robotics – towards Human-centered Machines” (2007).
[13] Friedenberg, Artificial Psychology: The Quest for What It Means to Be Human (2008).
[14] De Man et al., “A Cognitive Model for Social Role Compliant Behavior of Virtual Agents” (2012).
[15] Lee et al., “Feeling Ambivalent: A Model of Mixed Emotions for Virtual Agents” (2006).
[16] Coleman, “Android Arete: Toward a Virtue Ethic for Computational Agents” (2001).
[17] Calverley, “Imagining a Non-Biological Machine as a Legal Person” (2008).
[18] Thill & Lowe, “On the Functional Contributions of Emotion Mechanisms to (Artificial) Cognition and Intelligence” (2012).
[19] Ho & Dautenhahn, “Towards a Narrative Mind: The Creation of Coherent Life Stories for Believable Virtual Agents” (2008).
Human Beings, Cyberization, and Virtual Relations
The abilities, limitations, and manner of existence of human workers will
also be dramatically transformed in the coming years and decades. Already,
ubiquitous computing – with our continuous connection to the Internet and
reliance upon it in manifold, intimate ways – is accelerating a “convergence
of the human and the computer.” Computers are becoming such an integral
component of the way in which contemporary human beings sense, under-
stand, and control the world that we effectively function as cyborgs; ‘cyborg-cyborg interaction’ is becoming a fundamental aspect of society.[21] Genetic engineering may further enhance the ability of the human body to interface with
implanted or external technologies in such ways. Meanwhile, the increasing
‘virtualization’ of our relationships means that we will regularly interact with
coworkers, customers, and suppliers as digital avatars in virtual environments,
without knowing (or perhaps caring) whether the entity on the other end of
the conversation is a human being, social robot, or AI program.[22]
The Workplace
Increasingly, businesses and other organizations will consist of ‘cybernetic
teams’ whose human and artificial members “cooperate as teammates to perform work.”[23]
Our acceptance of robots as colleagues and friends will be aided
by the fact that human beings are not only able to engage in social relations
with artificial intelligences as if they were human but are naturally inclined
to do so.[24]
Moreover, social robots will not simply work alongside us as our peers or
subordinates; some of them will likely serve as supervisors of human beings.
[20] Knowles et al., “Wicked Persuasion: A Designerly Approach” (2014); Ruijten et al., “Investigating the Influence of Social Exclusion on Persuasion by a Virtual Agent” (2014); Rebolledo-Mendez et al., “A Model of Motivation for Virtual-Worlds Avatars” (2008); Gladden, “The Social Robot as ‘Charismatic Leader’: A Phenomenology of Human Submission to Nonhuman Power” (2014).
[21] Fleischmann (2009).
[22] Grodzinsky et al., “Developing Artificial Agents Worthy of Trust: ‘Would You Buy a Used Car from This Artificial Agent?’” (2011).
[23] Wiltshire et al., “Cybernetic Teams: Towards the Implementation of Team Heuristics in HRI” (2013); Bradshaw et al., “From Tools to Teammates: Joint Activity in Human-Agent-Robot Teams” (2009); Flemisch et al., “Towards a Dynamic Balance between Humans and Automation: Authority, Ability, Responsibility and Control in Shared and Cooperative Control Situations” (2012).
[24] Rehm et al., “Some Pitfalls for Developing Enculturated Conversational Agents” (2009); Friedenberg (2008).
It has been argued that “giving robots positions of responsibility is not only
unavoidable but is rather something desired and that we are trying to
achieve,”[25] and the development of artificial beings that possess the intellectual, emotional, social, and physical capacities needed to serve as managers
and leaders of human employees – and even to excel in such roles – is well
underway.[26] While some human employees might resent their social robot
boss for “lacking humanity” or “stealing human jobs,” other human employees
may appreciate having a boss who is inherently incorruptible, honest, fair,
caring, competent, and sincerely concerned for the long-term good of the
whole organization rather than for personal acclaim, financial gain, career advancement, or other selfish interests.[27]
In the face of such developments, it has been noted that in the future, Hu-
man Resource Development will no longer simply involve training human
employees but training their robotic coworkers, as well.[28] Similarly, OD will
face the challenge of “teaching human managers and employees the skills to
work with a workforce comprising humanoid robots and human beings.”[29]
Beyond Humanism[30]
Such powerful new technologies are introducing a ‘radical alterity’ that
challenges not only the traditional values of humanism but also our most fun-
damental understanding of the limits and possibilities of what it means to be
human when old notions of ethical values that “privilege reason, truth, mean-
ing, and a fixed concept of ‘the human’ are upended by digital technology,
cybernetics, and virtual reality.”[31] A variety of responses to these technologies
has emerged. For example, on the one hand, scientists and philosophers iden-
tifying themselves as transhumanists accept the basic humanist principles of
anthropocentrism, rationality, autonomy, progress, and optimism about the
future of humanity, while actively working to employ science and technology
in an effort to control and accelerate the transformation of the human species
in ways that traditional biological evolution does not allow. Such thinkers advocate the use of genetic engineering, cybernetics, and nanotechnology to create a more meaningful, more transcendent, ‘enhanced’ form of human existence. In this sense, their philosophy can be understood not as ‘antihumanism’ but as ‘ultrahumanism.’[32]

[25] Samani et al., “Towards Robotics Leadership: An Analysis of Leadership Characteristics and the Roles Robots Will Inherit in Future Human Society” (2012).
[26] Gladden, “Leveraging the Cross-Cultural Capacities of Artificial Agents as Leaders of Human Virtual Teams” (2014); Gladden, “The Social Robot as ‘Charismatic Leader’” (2014).
[27] Gladden, “The Social Robot as ‘Charismatic Leader’” (2014).
[28] Azevedo et al., “The HRI (Human-Robot Interaction) and Human Resource Development (HRD)” (2013).
[29] Stanford, Organization Design for HR Managers: Engaging with Change (2013).
[30] See Gladden, The Handbook of Information Security for Advanced Neuroprosthetics (2015), pp. 94-96, upon which much of this section is based.
[31] Gunkel & Hawhee, “Virtual Alterity and the Reformatting of Ethics” (2003).
On the other hand, there is a diverse group of posthumanist thinkers who
agree that new forms of sentient and sapient existence are emerging, spurred
on by the world’s technological advances; however, these posthumanists re-
ject transhumanism’s anthropocentric and humanistic focus. Instead, such
posthumanists argue that the future will include many different sources of
intelligence and agency that will create meaning in the universe through their
networks and relations:[33] such entities might include ‘natural’ human beings,
genetically engineered human beings, human beings with extensive cyber-
netic modifications, human beings dwelling in virtual realities, social robots,
artificially intelligent software, nanorobot swarms, and sentient or sapient
networks.
Thinkers like Bostrom and Kurzweil are generally enthusiastic about the
transhuman potential of technology to ‘liberate’ humanity, allowing us to
transcend our previous physical and cognitive limitations and spark the evo-
lution of higher forms of human existence. Following Habermas and Horkheimer, other scholars are more pessimistic about the anticipated impact of such technologies, suggesting that while their exact impact cannot be predicted in advance, they are more likely to spur social fragmentation and inequality, a reduction in human autonomy and meaning, and the
oppression – or in its extreme form, even destruction – of humanity at the
hands of its technological creation. Yet another group of thinkers takes a more
extreme view, arguing that while humanity’s twilight and replacement by ar-
tificial beings is indeed inevitable, it is in fact a natural and desirable step in
the evolution of intelligent life on earth.[34]
OD’s Possible Stances toward Posthuman Technology
Driven in part by such dramatic technological transformations, it is clear
that fields such as Human Resource Development and OD are about to enter
a ‘new age’ – but it is not yet known exactly what form this new age might take.[35] The outcome may depend on the extent to which Organization Development is necessarily and inherently humanistic. How far can OD go in adapting itself to new posthuman realities before it ceases to be OD?

[32] Ferrando, “Posthumanism, Transhumanism, Antihumanism, Metahumanism, and New Materialisms: Differences and Relations” (2013).
[33] Ferrando (2013).
[34] Abrams, “Pragmatism, Artificial Intelligence, and Posthuman Bioethics: Shusterman, Rorty, Foucault” (2004), and Edgar, “The Hermeneutic Challenge of Genetic Engineering: Habermas and the Transhumanists” (2009).
Humanistic OD for a Posthuman World
Even if OD professionals hold fast to a traditional understanding of hu-
manist values – declining to expand their ‘moral universe’ to include a con-
cern for the welfare of sapient computers and rejecting the notion that there
is anything ‘liberating’ about the growth in workplace robotics and cybernet-
ics – it is still possible for such professionals to utilize their OD expertise to
aid future organizations that do find value in such technologies and that are
working to implement them. We can refer to this path as ‘Humanistic OD for
a posthuman world.’ Under this model, typical OD interventions might in-
clude:
A call for reflection. OD professionals already play a key role in call-
ing upon CEOs to “pause and reflect and assess the genuine needs of
the organization” carefully before making the decision to terminate
any employees, given the devastating impact that this can have on the
human beings affected.[36] Even if OD professionals are not able to prevent the replacement of human workers by robots, they can at least
aid CEOs to recognize and honestly reflect on their own motivations,
biases, and assumptions before making the decision to implement
such technologies in the workplace.
Pre-interventions: shaping robots through their designers. Robot-
icists are striving to identify the “key cognitive capabilities” that social
robots will need in order to work safely and effectively with human
colleagues.[37] OD professionals can offer their expertise to help such
scientists and engineers understand that social robots will meld most
successfully into a hybrid human-artificial workplace if the robots’
motivations, priorities, ethical commitments, and decision-making
processes reflect the same humanistic values upon which OD is prem-
ised. Even if OD professionals do not personally believe that robot
colleagues will provide a net positive contribution to the workplace,
they might in this way at least work to minimize the harm and prob-
lems caused by such robots’ introduction.
The robotic-cybernetic-human workplace as sociotechnical system. Even if OD professionals do not believe that social robots or AIs are beings worthy of moral concern, they can still acknowledge that they represent remarkable and qualitatively different additions to the social and technical systems that constitute an organization. Social robots and AIs represent a special challenge, insofar as they are the first pieces of workplace technology that will operate proactively (and perhaps unpredictably) within the social sphere. Drawing on the body of theory and practice in sociotechnical systems (STS) that has been developed since the 1950s,[38] OD professionals can help organizations to reconcile their social and technical systems in light of the new complications and opportunities that are brought by artificial and cybernetic human workers.

[35] Chermack et al., “Critical Uncertainties Confronting Human Resource Development” (2003).
[36] “Ethical Guidelines and Professional Standards for Organization Development and Group Process Consultants,” IAGP.
[37] Williams, “Robot Social Intelligence” (2012).
Understanding new forms of power relations. In their efforts to es-
tablish social agreements among organizational members with differ-
ing perspectives, some streams of ‘New OD’ have emphasized “less
confronting and more ‘optimistic’ or ‘positive’ approaches” such as
appreciative inquiry, while downplaying the harsher realities of
workplace politics and the use and abuse of power dynamics by organizational members.[39] However, new forms of power relations will
be created when the workplace becomes home to artificial beings and
cybernetically enhanced humans who may have the capacity to out-
think, out-organize, out-empathize, and out-manipulate their ‘natu-
ral’ human colleagues. Moreover, such powerful new technologies
may become tools in the political machinations and organizational
power struggles of the human executives who are pushing for and
shaping their implementation. One must take into account not only
the values and motives of sapient robots themselves but also those of
the human beings who are designing and implementing them.[40] OD
professionals can play a key role by helping an organization’s mem-
bers to recognize and address the sometimes subtle ways in which the
introduction of such technologies can reshape an organization’s
power dynamics for good or ill.
Posthumanistic OD for a Posthuman World
It is conceivable that after working in partnership with social robots whose
intelligence, moral values, emotions, relationships, and even spirituality ap-
pear as ‘human’ as those of human beings – as well as with human beings
whose cybernetic enhancements have strengthened their connection to other
human beings and concern for their welfare, rather than lessening it – some
OD professionals might come to see such posthumanizing technological
[38] Anderson (2015).
[39] Marshak & Grant (2008).
[40] Anderson, “Why Is AI so Scary?” (2005); Grodzinsky et al. (2011).
248 • Posthuman Management
change as something that should be embraced rather than rejected in the
workplace, as something that can aid the development and self-fulfillment of
individual human beings, if its implementation is thoughtfully managed in a
way that unlocks these benefits. Beyond valuing sentient or sapient robots and AI programs for their instrumental worth, OD professionals might also come to value such beings in themselves and attribute to them an existential value and moral agency
previously attributed only to human beings. Such OD professionals might well
expand their ‘moral universe’ to include not only a concern for the welfare of
individual human beings, but also for the welfare of individual sapient robots
and artificial intelligences, and perhaps even for hybrid entities or networks
in which cyberized human beings, robots, and AIs have become symbiotically
interdependent. We can refer to this general approach as ‘Posthumanistic OD
for a posthuman world.’ Under this model, OD’s work might include:
Facilitating the new multiculture: beyond human diversity. One
stream of today’s New OD focuses on how “diversity and multicultural realities” shape an organization.[41] If OD already places an emphasis on helping human beings to find value in their differences and
open themselves to new perspectives, it would not require a dramatic
leap for OD to also help human beings find value in the novel per-
spectives and forms of being that their new artificial colleagues pos-
sess. OD could teach human employees to appreciate social robots and
cyborgs as new forms of the ‘other’ that can and should be welcomed
into their community.
Managing change for which Change Management is unpre-
pared. The implementation of a transformative new company-wide
technological system is sometimes facilitated by an organization’s
Change Management unit, which collaborates with an organization’s
senior leadership to install such a technology by “engineering and directing” changes in workers’ behavior, with an ultimate focus on financial outcomes.[42] However, integrating advanced social robots and
cybernetics into an organization is not simply a techno-structural
matter of adding new IT hardware; it must be understood from a phe-
nomenological perspective as creating a strange new soil in which so-
cial relationships and meaning will grow as human beings and artifi-
cial general intelligences (AGIs) recognize and confirm one another’s
existence as beings possessing a conscience.[43]
The manner in which cyberization and social robots are welcomed
into the workforce (or rejected and sabotaged) by human employees
[41] Marshak & Grant (2008).
[42] Bradford & Burke (2005).
[43] Ramey, “Conscience as a Design Benchmark for Social Robots” (2006).
Chapter Four: OD and the Robotic-Cybernetic-Human Workforce • 249
may thus depend primarily not on the employees’ technical ability to
learn new hardware and software but on their social abilities and
mindsets. In contrast to changes that can be centrally decided and im-
plemented, developing relationships with artificial sapient beings is
the sort of change that may take place in a manner that is largely un-
planned and undirected and which grows organically in directions
shaped by the personal biases, experiences, hopes, fears, cultures, and
power and trust relations of an organization’s human members.[44]
Some streams of New OD recognize that the most significant change
is often a continuous, unplanned, distributed process over which an
organization’s senior decision-makers have limited conscious control.
Rather than attempting to impose a desired change, New OD strives
to facilitate “shifts in human consciousness” through techniques like
appreciative inquiry that surface employees’ positive experiences and
aspirations, with a goal of “creating new mindsets or social agreements” among members of the organization.[45] OD’s roots in humanist
philosophy become a strength instead of a weakness when it comes
to supporting technological change that is grounded as much in the
hearts as in the minds of an organization’s members.
From systems for manufacturing to systems for meaning. OD pro-
fessionals face growing conflicts “between employees’ needs for
greater meaning and the organization’s need for more effective and
efficient use of its resources.”[46] Up to now, it has often been assumed
that new automated technologies increase organizational efficiency
while reducing employees’ satisfaction and sense of meaning. How-
ever, social robots have the potential to radically rewrite this equa-
tion. For the first time, the workplace will be filled with pieces of tech-
nology that are designed to be moral, emotional, intelligent, social be-
ings that yearn to both receive and offer respect, self-actualization,
and a sense of belonging. Social robots could potentially become OD
professionals’ strong allies in efforts to enhance meaning in the workplace.[47] Similarly, workers with extensive cybernetic enhancements
might find that while their desire and ability to engage in traditional
forms of interpersonal relationships have lessened, they are able to access and experience reality in ways that offer new forms of meaning.
[44] Coeckelbergh, “Can We Trust Robots?” (2012).
[45] Marshak & Grant (2008).
[46] Cummings & Worley, Organization Development and Change (2014), p. 60.
[47] Regarding the nature and potential capacities of social robots, see, e.g., Breazeal, “Toward sociable robots” (2003); Kanda & Ishiguro, Human-Robot Interaction in Social Robotics (2013); Social Robots and the Future of Social Relations, edited by Seibt et al. (2014); Social Robots from a Human Perspective, edited by Vincent et al. (2015); and Social Robots: Boundaries, Potential, Challenges, edited by Marco Nørskov (2016).
Three Paths for OD as a Discipline
One can envision at least three different ways in which OD as a discipline
might respond to the challenge of growing cyberization, robotics, and AI in
the workforce. The first possibility is that OD professionals could unani-
mously decide to preserve their traditional humanistic values, understood to
mean that OD works to serve the development of human beings in human
organizations. Such an OD would likely work to prevent the introduction of
social robots and cybernetic augmentation into the workplace because of their
perceived ‘dehumanizing’ effects. Given that OD’s value is already questioned by some executives because of its lack of direct emphasis on financial outcomes, holding fast to its traditional humanistic values could result in
OD becoming increasingly marginalized, as businesses replace their OD units
and functions with new kinds of units that will readily accept and promote
the economic, social, and technological ‘advances’ that are both made possible
by and support a diminishing role for human beings and humanist values in
the workplace. However, there could still be a role for such a Humanistic OD
to serve organizations that similarly reject social robotics and cybernetic aug-
mentation and hold to traditional humanist values.
A second possibility is for OD as a whole to broaden its vision to include a
concern for the entire community of artificial, cybernetic, and human mem-
bers that make up an organization. In this case, OD professionals would realize
that their concern had never been for the welfare of ‘human beings’ as such –
but for the welfare of ‘sapient beings,’ of whom traditional human beings had
previously been the only representatives. Such a shift in OD’s perspective
could occur naturally as current generations of OD professionals are gradually
succeeded by future generations who will have grown up amidst quite differ-
ent social and technological realities. A Posthumanistic OD of this sort could
play a key role for similarly posthumanist organizations that are struggling
with their efforts to integrate social robotics and biocybernetic technologies
successfully into the workplace.
A third possibility is for Organization Development to splinter into multi-
ple disciplines that all claim the ‘OD’ name (or some variation on it) but which
display divergent – and perhaps irreconcilable – attitudes toward the new
technologies that enable a robotic-cybernetic-human workforce. This out-
come is perhaps not unlikely, given that OD has already begun to
separate into several strains – such as Classical, Neoclassical, and Social Inter-
action OD or New OD – that differ in their willingness to supplement OD’s
original humanist values with more business-oriented considerations[48] and to
[48] Bradford & Burke (2005).
embrace the knowledge and technologies generated by new fields of science
like self-organizing systems and complexity theory.[49]
New Consensus Ethical Documents?
The existing consensus ethics documents developed by Organization De-
velopment groups and described in previous sections do not formulate a
stance toward such posthuman technologies. Efforts to update OD’s founda-
tional documents to explicitly address such issues would at this moment likely
be premature, insofar as there is still broad disagreement among AI research-
ers, engineers, sociologists, philosophers, economists, and others about the
particular forms, capacities, risks, and organizational relevance that such tech-
nologies might eventually display. Moreover, it is unlikely that most OD pro-
fessionals currently possess the scholarly expertise or personal experience
with social robotics, cyberization, and other posthuman technologies that
would enable them to make informed decisions about the stances that their
profession should take on such matters.
However, OD professionals may eventually decide that it is worth launch-
ing a broad new process of consultation to formulate revised ethics documents
that address the role of posthumanizing technologies within organizations. If
such an effort at forging consensus among OD professionals succeeds, it could
help OD maintain cohesion as a profession and provide new energy and focus
for addressing the dramatic organizational challenges and opportunities that
such new technologies will bring. On the other hand, if the issues prove too
divisive and the effort to achieve consensus fails, this could nevertheless be
beneficial if it helps to clearly delineate the differences between ‘Humanistic
OD’ and ‘Posthumanistic OD’ and allows the theorists and practitioners of
these two different schools to explore and develop the unique strengths of
their own approaches, rather than struggling in vain to fashion a single, uni-
fied OD that could accommodate both those who see social robotics and cyber-
ization as technologies that will destroy our human identity and those who
see them as technologies that will allow us to fulfill and transcend it.
Conclusion
The profoundly humanist roots of Organization Development will be
challenged by the rise of a new posthuman age in which traditional human
beings, technologically enhanced human cyborgs, social robots, AI programs,
and other forms of intelligent beings all work together within organizations
to achieve common goals. OD’s humanist premises may cause some OD the-
orists and practitioners to reject the notion that posthumanizing technologies
can contribute positively to the development of effective organizations that
[49] Marshak & Grant (2008).
support their members’ freedom, dignity, and pursuit of self-realization. How-
ever, OD’s humanist values also provide a rich foundation to build upon for
those OD professionals who embrace posthumanizing technologies as new
venues for the creation of meaning, imagination, healthy relations, and self-
fulfillment within the workplace. It is too early to know whether OD will
emerge from this transition as a single, unified discipline or as a set of related
humanistic and posthumanistic disciplines that adopt different stances toward
OD’s original humanist roots. However, the invaluable insights and tech-
niques that OD has to offer should ensure that it will continue to play a key
role as long as organizations exist that count human beings among their mem-
bers.
References
Abrams, J.J. “Pragmatism, Artificial Intelligence, and Posthuman Bioethics: Shuster-
man, Rorty, Foucault.” Human Studies 27:3 (2004): 241-58.
Anderson, Donald L. Organization Development: The Process of Leading Organiza-
tional Change, 3e. SAGE Publications, 2015.
Anderson, Michael L. “Why Is AI so Scary?” Artificial Intelligence, Special Review Is-
sue, 169, no. 2 (2005): 201-08. doi:10.1016/j.artint.2005.10.008.
Azevedo, Renato, Cristiano Reis, and Edgard Cornacchione, Jr. “The HRI (Human-Robot Interaction) and Human Resource Development (HRD).” International Journal of Education and Research vol. 1, no. 4 (April 2013). http://www.ijern.com/images/April-2013/02.pdf. Accessed December 17, 2014.
Bradford, David L., and W. Warner Burke. Reinventing Organization Development:
New Approaches to Change in Organizations. John Wiley & Sons, 2005.
Bradshaw, Jeffrey M., Paul Feltovich, Matthew Johnson, Maggie Breedy, Larry
Bunch, Tom Eskridge, Hyuckchul Jung, James Lott, Andrzej Uszok, and Jurriaan
van Diggelen. “From Tools to Teammates: Joint Activity in Human-Agent-Robot
Teams.” In Human Centered Design, edited by Masaaki Kurosu, pp. 935-44. Lec-
ture Notes in Computer Science 5619. Springer Berlin Heidelberg, 2009.
Breazeal, Cynthia. “Toward sociable robots.” Robotics and Autonomous Systems 42
(2003): 167-75.
Calverley, David J. “Imagining a Non-Biological Machine as a Legal Person.” AI &
SOCIETY 22 (4) (2008): pp. 523-37. doi:10.1007/s00146-007-0092-7.
Chermack, Thomas J., Susan A. Lynham, and Wendy E. A. Ruona. “Critical Uncer-
tainties Confronting Human Resource Development.” Advances in Developing Hu-
man Resources 5, no. 3 (2003): 257–71. doi:10.1177/1523422303254628.
Coeckelbergh, M. “Can We Trust Robots?” Ethics and Information Technology 14:1
(2012): 53-60.
Coleman, Kari Gwen. “Android Arete: Toward a Virtue Ethic for Computational
Agents.” Ethics and Information Technology 3 (4) (2001): 247-65.
doi:10.1023/A:1013805017161.
Cummings, Thomas, and Christopher Worley. Organization Development and
Change. Cengage Learning, 2014.
De Man, Jeroen, Annerieke Heuvelink, and Karel van den Bosch. “A Cognitive Model
for Social Role Compliant Behavior of Virtual Agents.” In Intelligent Virtual
Agents, edited by Yukiko Nakano, Michael Neff, Ana Paiva, and Marilyn Walker,
pp. 303-10. Lecture Notes in Computer Science 7502. Springer Berlin Heidelberg,
2012.
Edgar, Andrew. “The Hermeneutic Challenge of Genetic Engineering: Habermas and
the Transhumanists.” Medicine, Health Care and Philosophy 12, no. 2 (2009): 157-
67. doi:10.1007/s11019-009-9188-9.
“Ethical Guidelines and Professional Standards for Organization Development and
Group Process Consultants.” IAGP - International Association for Group Psychotherapy and Group Processes. http://www.iagp.com/docs/IAGPOrgEthicalguidelinesEnglishv1.0.pdf. Accessed December 20, 2014.
Ferrando, Francesca. “Posthumanism, Transhumanism, Antihumanism, Metahuman-
ism, and New Materialisms: Differences and Relations.” Existenz: An International
Journal in Philosophy, Religion, Politics, and the Arts vol. 8, no. 2 (2013): 26-32.
Fleischmann, Kenneth R. “Sociotechnical Interaction and Cyborg–Cyborg Interac-
tion: Transforming the Scale and Convergence of HCI.” The Information Society
25, no. 4 (2009): 227–35. doi:10.1080/01972240903028359.
Flemisch, F., M. Heesen, T. Hesse, J. Kelsch, A. Schieben, and J. Beller. “Towards a Dynamic Balance between Humans and Automation: Authority, Ability, Responsibility and Control in Shared and Cooperative Control Situations.” Cognition, Technology & Work 14 (1) (2012): 3-18. doi:10.1007/s10111-011-0191-6.
Friedenberg, J. Artificial Psychology: The Quest for What It Means to Be Human, Phila-
delphia: Psychology Press, 2008.
Gladden, Matthew E. The Handbook of Information Security for Advanced Neuropros-
thetics. Indianapolis: Synthypnion Academic, 2015.
Gladden, Matthew E. “Leveraging the Cross-Cultural Capacities of Artificial Agents
as Leaders of Human Virtual Teams.” Proceedings of the 10th European Conference
on Management Leadership and Governance, edited by Visnja Grozdani, pp. 428-
35. Reading: Academic Conferences and Publishing International Limited, 2014.
Gladden, Matthew E. “The Social Robot as ‘Charismatic Leader’: A Phenomenology
of Human Submission to Nonhuman Power.” In Sociable Robots and the Future of
Social Relations: Proceedings of Robo-Philosophy 2014, edited by Johanna Seibt,
Raul Hakli, and Marco Nørskov, pp. 329-39. Frontiers in Artificial Intelligence and
Applications 273. IOS Press, 2014.
Grodzinsky, F.S., K.W. Miller, and M.J. Wolf. “Developing Artificial Agents Worthy
of Trust: ‘Would You Buy a Used Car from This Artificial Agent?’” Ethics and In-
formation Technology 13:1 (2011): 17-27.
Gunkel, David, and Debra Hawhee. “Virtual Alterity and the Reformatting of Ethics.”
Journal of Mass Media Ethics 18, no. 3–4 (2003): 173-93.
doi:10.1080/08900523.2003.9679663.
Ho, Wan Ching, and Kerstin Dautenhahn. “Towards a Narrative Mind: The Creation
of Coherent Life Stories for Believable Virtual Agents.” In Intelligent Virtual
Agents, edited by Helmut Prendinger, James Lester, and Mitsuru Ishizuka, pp. 59-
72. Lecture Notes in Computer Science 5208. Springer Berlin Heidelberg, 2008.
“Humanism.” The Oxford Dictionary of Philosophy (2 Rev Ed.), edited by Simon Blackburn. Oxford University Press. http://www.oxfordreference.com/view/10.1093/acref/9780199541430.001.0001/acref-9780199541430-e-1528?rskey=FZ9dbc&result=1530. Accessed January 14, 2015.
“The International Organization Development Code of Ethics.” Organizational Development English Library, The OD Institute. http://www.theodinstitute.org/od-library/code_of_ethics.htm. Accessed December 17, 2014.
Kanda, Takayuki, and Hiroshi Ishiguro. Human-Robot Interaction in Social Robotics.
Boca Raton: CRC Press, 2013.
Knowles, Bran, Paul Coulton, Mark Lochrie, and Jon Whittle. “Wicked Persuasion: A
Designerly Approach.” In Persuasive Technology, edited by Anna Spagnolli, Luca
Chittaro, and Luciano Gamberini, pp. 137-42. Lecture Notes in Computer Science
8462. Springer International Publishing, 2014.
Lee, Benny Ping-Han, Edward Chao-Chun Kao, and Von-Wun Soo. “Feeling Ambiva-
lent: A Model of Mixed Emotions for Virtual Agents.” In Intelligent Virtual Agents,
edited by Jonathan Gratch, Michael Young, Ruth Aylett, Daniel Ballin, and Pat-
rick Olivier, pp. 329-42. Lecture Notes in Computer Science 4133. Springer Berlin
Heidelberg, 2006.
Marshak, Robert J., and David Grant. “Organizational Discourse and New Organiza-
tion Development Practices.” British Journal of Management 19 (2008): S7-19.
doi:10.1111/j.1467-8551.2008.00567.x.
“Organization and Human Systems Development Credo.” OD Network.
http://www.odnetwork.org/?page=ODCredo. Accessed December 17, 2014.
“Principles of OD Practice.” OD Network. http://www.odnetwork.org/?page=PrinciplesOfODPracti. Accessed December 17, 2014.
Ramey, Christopher H. “Conscience as a Design Benchmark for Social Robots.” In
The 15th IEEE International Symposium on Robot and Human Interactive Communi-
cation, 2006. ROMAN 2006, pp. 486-91. IEEE, 2006.
Rebolledo-Mendez, Genaro, David Burden, and Sara de Freitas. “A Model of Motiva-
tion for Virtual-Worlds Avatars.” In Intelligent Virtual Agents, edited by Helmut
Prendinger, James Lester, and Mitsuru Ishizuka, pp. 535-36. Lecture Notes in
Computer Science 5208. Springer Berlin Heidelberg, 2008.
Rehm, M., André, E., and Nakano, Y. “Some Pitfalls for Developing Enculturated
Conversational Agents.” In Human-Computer Interaction: Ambient, Ubiquitous
and Intelligent Interaction, pp. 340-48. Lecture Notes in Computer Science 5612,
2009.
Ruijten, Peter A. M., Jaap Ham, and Cees J. H. Midden. “Investigating the Influence of
Social Exclusion on Persuasion by a Virtual Agent.” In Persuasive Technology, ed-
ited by Anna Spagnolli, Luca Chittaro, and Luciano Gamberini, pp. 191-200. Lec-
ture Notes in Computer Science 8462. Springer International Publishing, 2014.
Samani, H.A., Valino Koh, J.T.K., Saadatian, E., and Polydorou, D. “Towards Robotics
Leadership: An Analysis of Leadership Characteristics and the Roles Robots Will
Inherit in Future Human Society.” Intelligent Information and Database Systems,
Lecture Notes in Computer Science 7197 (2012): 158-65.
Schaal, Stefan. “The New Robotics – towards Human‐centered Machines.” HFSP Jour-
nal 1, no. 2 (2007): 115-26. doi:10.2976/1.2748612.
Social Robots and the Future of Social Relations, edited by Johanna Seibt, Raul Hakli,
and Marco Nørskov. Amsterdam: IOS Press, 2014.
Social Robots: Boundaries, Potential, Challenges, edited by Marco Nørskov. Farnham:
Ashgate Publishing, 2016.
Social Robots from a Human Perspective, edited by Jane Vincent, Sakari Taipale, Bar-
tolomeo Sapio, Giuseppe Lugano, and Leopoldina Fortunati. Springer Interna-
tional Publishing, 2015.
Solis, Jorge, and Atsuo Takanishi. “Recent Trends in Humanoid Robotics Research:
Scientific Background, Applications, and Implications.” Accountability in Research
17, no. 6 (2010): 278-98. doi:10.1080/08989621.2010.523673.
Stanford, Naomi. Organization Design for HR Managers: Engaging with Change.
Routledge, 2013.
Thill, Serge, and Robert Lowe. “On the Functional Contributions of Emotion Mecha-
nisms to (Artificial) Cognition and Intelligence.” In Artificial General Intelligence,
edited by Joscha Bach, Ben Goertzel, and Matthew Iklé, pp. 322-31. Lecture Notes
in Computer Science 7716. Springer Berlin Heidelberg, 2012.
Williams, Mary-Anne. “Robot Social Intelligence.” In Social Robotics, edited by Shuzhi
Sam Ge, Oussama Khatib, John-John Cabibihan, Reid Simmons, and Mary-Anne
Williams, pp. 45-55. Lecture Notes in Computer Science 7621. Springer Berlin
Heidelberg, 2012.
Wiltshire, Travis J., Dustin C. Smith, and Joseph R. Keebler. “Cybernetic Teams: To-
wards the Implementation of Team Heuristics in HRI.” In Virtual Augmented and
Mixed Reality. Designing and Developing Augmented and Virtual Environments, ed-
ited by Randall Shumaker, pp. 321-30. Lecture Notes in Computer Science 8021.
Springer Berlin Heidelberg, 2013.
Part II
Human Augmentation: A Closer
Look
Chapter Five
Neural Implants as Gateways to Digital-Physical Ecosystems and Posthuman Socioeconomic Interaction[1]
Abstract. Looking beyond current desktop, mobile, and wearable technologies, we argue that work-related information and communications technology (ICT) will increasingly move inside the human body through the use of neuroprosthetic devices that create employees who are permanently connected to their workplace’s digital ecosystems. Such persons may possess enhanced perception, memory, and abilities to manipulate physical and virtual environments and to link with human and synthetic minds to form cybernetic networks that can be both ‘supersocial’ and ‘postsocial.’ However, such neuroprosthetics may also create a sense of inauthenticity, vulnerability to computer viruses and hacking, financial burdens, and questions surrounding ownership of intellectual property produced using implants. Moreover, those populations who do and do not adopt neuroprostheses may come to inhabit increasingly incompatible and mutually incomprehensible digital ecosystems. Here we propose a cybernetic model for understanding how neuroprosthetics can either facilitate human beings’ participation in posthuman informational ecosystems – or undermine their health, information security, and autonomy.
Introduction
For many employees, ‘work’ is no longer something performed while sit-
ting at a computer in an office. Employees in a growing number of industries
are expected to carry mobile devices and be available for work-related inter-
actions even when beyond the workplace and outside of normal business
[1] This text was originally published in Digital Ecosystems: Society in the Digital Age, edited by Łukasz Jonak, Natalia Juchniewicz, and Renata Włoch, pp. 85-98. Warsaw: Digital Economy Lab, University of Warsaw, 2016.
hours. In this text we argue that a future step will increasingly be to move
work-related information and communications technology (ICT) inside the
human body through the use of neuroprosthetics, to create employees who
are always ‘online’ and connected to their workplace’s digital ecosystems. At
present, neural implants are used primarily to restore abilities lost through
injury or illness; however, their use for augmentative purposes is expected to
grow, resulting in populations of human beings who possess technologically
altered capacities for perception, memory, imagination, and the manipulation
of physical environments and virtual cyberspace. Such workers may exchange
thoughts and share knowledge within posthuman cybernetic networks that
are inaccessible to unaugmented human beings.
Scholars note that despite their potential benefits, such neuroprosthetic de-
vices may create numerous problems for their users, including a sense of al-
ienation, the threat of computer viruses and hacking, financial burdens, and
legal questions surrounding ownership of intellectual property produced us-
ing such implants. Moreover, different populations of human beings may
eventually come to occupy irreconcilable digital ecosystems as some persons
embrace neuroprosthetic technology, others feel coerced into augmenting
their brains to compete within the economy, others reject such technology,
and still others are simply unable to afford it.
In this text we propose a model for analyzing how particular neuropros-
thetic devices will either facilitate human beings’ participation in new forms
of socioeconomic interaction and digital workplace ecosystems – or under-
mine their mental and physical health, privacy, autonomy, and authenticity.
We then show how such a model can be used to create device ontologies and
typologies that help us classify and understand different kinds of advanced
neuroprosthetic devices according to the impact that they will have on indi-
vidual human beings.
From Neuroprosthetic Devices to Posthuman Digital-
Physical Ecosystems
Existing Integration of the Human Brain with Work-related
Digital-Physical Ecosystems
In recent decades the integration of the human brain with work-related
digital ecosystems has grown stronger and increasingly complex. Whereas
once employees were expected to use desktop computers during ‘working
hours,’ for a growing number of employees it is now expected that they be
available for work-related interactions at all times through their possession
and mastery of mobile (and now, wearable) devices.[2] Along this path of ever
[2] Shih, “Project Time in Silicon Valley” (2004); Gripsrud, “Working on the Train: From ‘Dead Time’ to Productive and Vital Time” (2012).
closer human-technological integration, an emerging frontier is that of moving computing inside the human body through the use of implantable computers.[3]
The Potential of Neuroprosthetic Implants for Human
Enhancement
One particular type of implantable computer is a neuroprosthetic device
(or neural implant) designed to provide a human being with some sensory,
cognitive, or motor capacity.[4] Such neuroprostheses are currently used primarily for therapeutic purposes, to restore abilities that have been lost due to
injury or illness. However, researchers have already developed experimental
devices designed for purposes of human enhancement that allow an individual
to exceed his or her natural biological capacities by, for example, obtaining
the ability to perceive ultrasonic waves or store digitized computer files within
one’s body.[5]
Toward Posthuman Digital-Physical Ecosystems
The use of neuroprosthetics for purposes of human enhancement is ex-
pected to grow over the coming decades, resulting in a segment of the popu-
lation whose minds possess unique kinds of sensory perception, memory, im-
agination, and emotional intelligence and who participate in social relations
that are mediated not through the exchange of traditional oral, written, or
nonverbal communication but by neurotechnologies that allow the sharing of
thoughts and volitions directly with other human minds and with computers.[6]
Until now, communicating a thought to another mind has required the
thought to be expressed physically as a social action that is audible, visible, or
tangible in nature; however, future neuroprosthetics may facilitate the exchange of ideas directly at the level of thought,[7] thereby allowing the creation
of human networks that can be understood as either ‘supersocial’ or ‘postsocial’ in nature. Not only might such posthuman[8] digital ecosystems be inaccessible to those who lack the appropriate form of neural augmentation, but
even their very existence may be invisible to unmodified human beings.
[3] Koops & Leenes, “Cheating with Implants: Implications of the Hidden Information Advantage of Bionic Ears and Eyes” (2012); Gasson, “Human ICT Implants: From Restorative Application to Human Enhancement” (2012); McGee, “Bioelectronics and Implanted Devices” (2008).
[4] Lebedev, “Brain-Machine Interfaces: An Overview” (2014).
[5] Warwick, “The Cyborg Revolution” (2014); Gasson (2012); McGee (2008).
[6] McGee (2008); Warwick (2014); Rao et al., “A Direct Brain-to-Brain Interface in Humans” (2014).
[7] Warwick (2014); Rao et al. (2014); Gladden, “Tachikomatic Domains: Utopian Cyberspace as a ‘Contingent Heaven’ for Humans, Robots, and Hybrid Intelligences” (2015).
[8] See Ferrando, “Posthumanism, Transhumanism, Antihumanism, Metahumanism, and New Materialisms: Differences and Relations” (2013).
In this text, we will often refer to such ecosystems as ‘digital’ to emphasize
the fact that they may utilize an immersive cyberspace or other artificial en-
vironment as a virtualized locus for socioeconomic interaction. However, it
should be kept in mind that any such virtual reality is always grounded in and
maintained by the computational activity of electronic or biological physical
substrates; thus technically, digital ecosystems should always be understood
as ‘digital-physical’ ecosystems.
The Need to Analyze Neuroprosthetics from Cybernetic,
Phenomenological, and Existentialist Perspectives
As a bidirectional gateway, a neural implant not only aids one’s mind to
reach out to explore the world and interact with other entities; it may also
allow external agents or systems to reach into one’s mind to access – and potentially manipulate or disrupt – one’s most intimate mental processes.[9] This
makes it essential that manufacturers who produce such devices, policymak-
ers who can encourage or ban their adoption, and users in whom they will be
implanted be able to understand the positive and negative impacts of particu-
lar neuroprosthetic devices on individual users. This calls for the development
of device ontologies and typologies for classifying and understanding neuro-
prostheses that do not simply focus on the devices’ technical characteristics
but which also consider a user’s lived experience of a neuroprosthetic device
and which integrate a cybernetic analysis of “control and communication”[10] with phenomenological and even existentialist perspectives.[11]
Existing Ontologies and Typologies of Neuroprosthetic Devices
Existing typologies for neuroprosthetics are primarily functional. For example, a neuroprosthetic device can be classified based on the nature of its interface with the brain’s neural circuitry (sensory, motor, bidirectional sensorimotor, or cognitive[12]), its purpose (restoration, diagnosis, identification, or enhancement[13]), or its location (non-invasive, partially invasive, or invasive[14]). Typologies have also been developed that classify a neuroprosthesis according to whether it aids its human user in interacting with a real physical environment using his or her natural physical body, augments or replaces the user’s natural physical body (e.g., with robotic prosthetic limbs), or allows the user to sense and manipulate some virtual environment.[15]

9. See Gasson (2012), pp. 15-16.
10. Wiener, Cybernetics: Or Control and Communication in the Animal and the Machine (1961).
11. Gladden, “Tachikomatic Domains” (2015).
12. Lebedev (2014).
13. Gasson (2012), p. 25.
14. Gasson (2012), p. 14.
Formulating Our Model for an Ontology of Neuroprosthetics
Here we propose a model for classifying and understanding neuroprosthetic devices, especially in their role of integrating human beings into digital ecosystems, economies, and information systems. The model comprises two main dimensions, of which one (impact) is further subdivided into two sub-dimensions (new capacities and detriments).
Roles of the Human User
A neuroprosthetic device affects its human user as viewed on three levels: 1) the human being as a sapient metavolitional agent, a unitary mind that possesses its own conscious awareness, memory, volition, and conscience (or ‘metavolitionality’[16]); 2) the human being as an embodied organism that inhabits and can sense and manipulate a particular environment through the use of its body; and 3) the human being as a social and economic actor who interacts with others to form social relationships and to produce, exchange, and consume goods and services.
Impact: Potential New Capacities and Detriments
At each of these three levels, a neuroprosthetic device can create for its user new opportunities and advantages, new threats and disadvantages, or both. Typically, a neuroprosthesis creates new opportunities for its user to participate in socioeconomic interaction and informational ecosystems by providing some new cognitive, sensory, or motor capacity. Disadvantages may take the form of a new dependency on some external resource, the loss of a previously existing capability, a security vulnerability, or some other detriment. Because a neuroprosthetic device’s creation of new capacities can be independent of its creation of detriments, these elements constitute two distinct dimensions; however, it is simpler to treat them as two sub-dimensions of a single larger dimension, the device’s ‘impact.’
Impacts Captured by Our Model
Below we present specific capacities and detriments that neuroprosthetics are expected to create for their users at the three levels of the human being as 1) sapient metavolitional agent, 2) embodied embedded organism, and 3) social and economic actor. These items constitute a broad universe of expected possible impacts identified by scholars; any one neuroprosthesis may generate only a small number of these effects, if any.

15. Gladden, “Cybershells, Shapeshifting, and Neuroprosthetics: Video Games as Tools for Posthuman ‘Body Schema (Re)Engineering’” (2015).
16. Gladden, “Tachikomatic Domains” (2015); Calverley, “Imagining a Non-biological Machine as a Legal Person” (2008).
Figure 1. A multidimensional model of the impacts of neuroprosthetic devices on individual users.
Impacts on the User as Sapient Metavolitional Agent
Neuroprosthetic devices may affect their users’ cognitive processes in ways that positively or negatively impact those persons’ ability to participate in socioeconomic interaction and informational ecosystems. New capacities provided by neuroprosthetics may include:

• Enhanced memory, skills, and knowledge stored within the mind (engrams). Building on current technologies tested in mice, future neuroprosthetics may offer human users the ability to create, alter, or weaken memories stored in their brains’ natural memory systems in the form of engrams.[17] This could potentially be used not only to affect a user’s declarative knowledge but also to enhance motor skills or reduce learned fears.

17. See Han et al., “Selective Erasure of a Fear Memory” (2009); Ramirez et al., “Creating a False Memory in the Hippocampus” (2013); McGee (2008); Warwick (2014), p. 267.
• Enhanced creativity. A neuroprosthetic device may be able to enhance a mind’s powers of imagination and creativity[18] by facilitating processes that contribute to creativity, such as stimulating mental associations between unrelated items. Anecdotal increases in creativity have been reported following the use of neuroprosthetics for deep brain stimulation.[19]

• Enhanced emotion. A neuroprosthetic device might provide its user with more desirable emotional dynamics.[20] Effects on emotion have already been seen in devices used, e.g., for deep brain stimulation.[21]

• Enhanced conscious awareness. Research is being undertaken to develop neuroprosthetics that would allow the human mind to, for example, extend its periods of attentiveness and limit the need for periodic reductions in consciousness (i.e., sleep).[22]

• Enhanced conscience. One’s conscience can be understood as one’s set of metavolitions, or desires about the kinds of volitions that one wishes to possess;[23] insofar as a neural implant enhances the processes of memory and emotion[24] that allow for the development of the conscience, it may enhance one’s ability to develop, discern, and follow one’s conscience.

New impairments generated by neuroprosthetics at the level of their user’s internal mental processes may include:

• Loss of agency. A neuroprosthetic device may damage the brain or disrupt its activity in a way that reduces or eliminates the ability of its human user to possess and exercise agency.[25] Moreover, the knowledge that this can occur may lead users to doubt whether their volitions are really ‘their own’ – an effect that has been seen with neuroprosthetics used for deep brain stimulation.[26]

18. See Gasson (2012), pp. 23-24.
19. See Cosgrove, “Session 6: Neuroscience, Brain, and Behavior V: Deep Brain Stimulation” (2004); Gasson (2012).
20. McGee (2008), p. 217.
21. See Kraemer, “Me, Myself and My Brain Implant: Deep Brain Stimulation Raises Questions of Personal Authenticity and Alienation” (2011).
22. Kourany, “Human Enhancement: Making the Debate More Productive” (2013), pp. 992-93.
23. Calverley (2008); Gladden, “Tachikomatic Domains” (2015).
24. Calverley (2008), pp. 528-34.
25. McGee (2008), p. 217.
26. Kraemer (2011).
• Loss of conscious awareness. A neuroprosthetic device may diminish the quality or extent of its user’s conscious awareness, e.g., by inducing daydreaming or increasing the required amount of sleep. A neuroprosthesis could potentially even destroy its user’s capacity for conscious awareness (e.g., by inducing a coma) without causing the death of his or her biological organism.[27]

• Loss of information security for internal cognitive processes. A neuroprosthetic device may compromise the confidentiality, integrity, or availability of information contained within its user’s mental activities (such as perception, memory, volition, or imagination), either by altering or destroying information, making it inaccessible to the user, or making it accessible to unauthorized parties.[28]

• Inability to distinguish a real from a virtual ongoing experience. If a neuroprosthesis alters or replaces its user’s sensory perceptions, it may make it impossible for the user to know which (if any) of the sense data that he or she is experiencing correspond to some actual element of an external physical environment and which are ‘virtual’ or simply ‘false.’[29]

• Inability to distinguish true from false memories. If a neuroprosthetic device is able to create, alter, or destroy engrams within its user’s brain, it may be impossible for the user to know which of his or her apparent memories are ‘true’ and which are ‘false’ (i.e., distorted or purposefully fabricated).[30]

• Other psychological side effects. The brain may undergo potentially harmful and unpredictable structural and behavioral changes as it adapts to the presence, capacities, and activities of a neuroprosthesis.[31] These effects may include new kinds of neuroses, psychoses, and other disorders unique to users of neuroprosthetics.

27. Gladden, “Tachikomatic Domains” (2015).
28. McGee (2008), p. 217; Gladden, “Tachikomatic Domains” (2015); Gladden, The Handbook of Information Security for Advanced Neuroprosthetics (2015).
29. McGee (2008), p. 221; Gladden, “Tachikomatic Domains” (2015).
30. See Ramirez et al. (2013).
31. McGee (2008), pp. 215-16; Koops & Leenes (2012), pp. 125, 130.

Impacts on the User as Embodied Embedded Organism Interacting with an Environment

Neuroprosthetic devices may affect the ways in which their users sense, manipulate, and occupy their environment through the interface of a physical or virtual body. New capacities provided might include:
• Sensory enhancement. A neuroprosthetic device may allow its user to sense his or her physical or virtual environment in new ways, either by acquiring new kinds of raw sense data or by gaining new modes of and abilities for processing, manipulating, and interpreting sense data.[32]

• Motor enhancement. A neuroprosthetic device may give users new ways of manipulating physical or virtual environments through their bodies.[33] It may grant enhanced control over one’s existing biological body; expand one’s body to incorporate new devices (such as an exoskeleton or vehicle) through body schema engineering;[34] or allow the user to control external networked physical systems such as drones or 3D printers, or virtual systems or phenomena within an immersive cyberworld.

• Enhanced memory, skills, and knowledge accessible through sensory organs (exograms). A neuroprosthetic device may give its user access to external data-storage sites whose contents can be ‘played back’ to the user’s conscious awareness through his or her sensory organs, or to real-time streams of sense data that augment or replace one’s natural sense data.[35] The ability to record and play back one’s own sense data could provide perfect audiovisual memory of one’s experiences.[36]

New impairments generated by neuroprosthetics at the level of their users’ physical or virtual bodily interfaces with their environments might include:

• Loss of control over sensory organs. A neuroprosthetic device may deny a user direct control over his or her sensory organs.[37] Technologically mediated sensory systems may be subject to noise, malfunctions, and manipulation or forced sensory deprivation or overload occurring at the hands of ‘sense hackers.’[38]

• Loss of control over motor organs. A neuroprosthetic device may impede a user’s control over his or her motor organs.[39] The user’s body may no longer be capable, e.g., of speech or movement, or control over one’s speech or movements may be assumed by some external agency.

32. Warwick (2014), p. 267; McGee (2008), p. 214; Koops & Leenes (2012), pp. 120, 126.
33. McGee (2008), p. 213; Warwick (2014), p. 266.
34. Gladden, “Cybershells, Shapeshifting, and Neuroprosthetics” (2015).
35. Koops & Leenes (2012), pp. 115, 120, 126.
36. McGee (2008), p. 217.
37. Koops & Leenes (2012), p. 130.
38. Gladden, The Handbook of Information Security for Advanced Neuroprosthetics (2015), pp. 201-02.
39. Gasson (2012), p. 216.
• Loss of control over other bodily systems. A neuroprosthetic device may impact the functioning of internal bodily processes such as respiration, cardiac activity, digestion, hormonal activity, and other processes that are already affected by existing implantable medical devices.[40]

• Other biological side effects. A neuroprosthetic device may be constructed from components that are toxic or deteriorate in the body,[41] may be rejected by its host, or may be subject to mechanical, electronic, or software failures that harm its host’s organism.

Impacts on the User as Social and Economic Actor

Neuroprosthetic devices may affect the ways in which their users connect to, participate in, contribute to, and are influenced by social relationships and structures and economic networks and exchange. New capacities provided might include:

• Ability to participate in new kinds of social relations. A neuroprosthetic device may grant the ability to participate in new kinds of technologically mediated social relations and structures that were previously impossible, perhaps including new forms of merged agency[42] or cybernetic networks with utopian (or dystopian) characteristics.[43]

• Ability to share collective knowledge, skills, and wisdom. Neuroprosthetics may link users in a way that forms communication and information systems[44] that can generate greater collective knowledge, skills, and wisdom than are possessed by any individual member of the system.[45]

• Enhanced job flexibility and instant retraining. By facilitating the creation, alteration, and deletion of information stored in engrams or exograms, a neuroprosthetic device may allow a user to download new knowledge or skills or instantly establish relationships for use in a new job.[46]

40. McGee (2008), p. 209; Gasson (2012), pp. 12-16.
41. McGee (2008), pp. 213-16.
42. McGee (2008), p. 216; Koops & Leenes (2012), pp. 125, 132.
43. Gladden, “Tachikomatic Domains” (2015).
44. McGee (2008), p. 214; Koops & Leenes (2012), pp. 128-29; Gasson (2012), p. 24.
45. Wiener (1961), loc. 3070ff., 3149ff.; Gladden, “Tachikomatic Domains” (2015).
46. See Koops & Leenes (2012), p. 126.
• Enhanced ability to manage complex technological systems. By providing a direct interface to external computers and mediating its user’s interaction with them,[47] a neuroprosthesis may grant an enhanced ability to manage complex technological systems, e.g., for the production or provisioning of goods or services.[48]

• Enhanced business decision-making and monetary value. By performing data mining to uncover novel knowledge, executing other forms of data analysis, offering recommendations, and alerting the user to potential cognitive biases, a neuroprosthesis may enhance its user’s ability to execute rapid and effective business-related decisions and transactions.[49] Moreover, by storing cryptocurrency keys, a neuroprosthesis may allow its user to store money directly within his or her brain for use on demand.[50]

• Qualifications for specific professions and roles. Neuroprosthetic devices may initially provide persons with abilities that enhance job performance in particular fields[51] such as computer programming, art, architecture, music, economics, medicine, information science, e-sports, information security, law enforcement, and the military; as expectations for employees’ neural integration into workplace systems grow, possession of neuroprosthetic devices may become a requirement for employment in some professions.[52]

New impairments generated by neuroprosthetic devices at the level of their users’ socioeconomic relationships and activity might include:

• Loss of ownership of one’s body and intellectual property. A neuroprosthetic device that is leased would not belong to its human user, and even a neuroprosthesis that has been purchased could potentially be subject to seizure in some circumstances (e.g., bankruptcy). Depending on the leasing or licensing terms, intellectual property produced by a neuroprosthetic device’s user (including thoughts, memories, or speech) may be partly or wholly owned by the device’s manufacturer or provider.[53]

47. McGee (2008), p. 210.
48. McGee (2008), pp. 214-15; Gladden, “Cybershells, Shapeshifting, and Neuroprosthetics” (2015).
49. See Koops & Leenes (2012), p. 119.
50. Gladden, “Cryptocurrency with a Conscience: Using Artificial Intelligence to Develop Money That Advances Human Ethical Values” (2015).
51. Koops & Leenes (2012), pp. 131-32.
52. McGee (2008), pp. 211, 214-15; Warwick (2014), p. 269.
53. Gladden, “Tachikomatic Domains” (2015); Gladden, The Handbook of Information Security for Advanced Neuroprosthetics (2015), p. 164.
• Creation of financial, technological, or social dependencies. The user of a neuroprosthetic device may no longer be able to function effectively without the device[54] and may become dependent on its manufacturer for hardware maintenance, software updates, and data security, and on specialized medical care providers for diagnostics and treatment relating to the device.[55] A user may require regular device upgrades in order to remain competitive in some jobs. High switching costs may make it impractical to shift to a competitor’s device after a user has installed an implant and committed to its manufacturer’s digital ecosystem.

• Subjugation of the user to external agency. Instead of merely impeding its user’s ability to possess and exercise agency, a neuroprosthesis may subject its user to control by some external agency. This could occur, e.g., if the user’s memories, emotions, or volitions were manipulated by means of the device[56] or if the user joined with other minds to create a new form of social entity that possesses some shared agency.[57]

• Social exclusion and employment discrimination. The use of detectable neuroprosthetics may result in shunning or mistreatment of users.[58] Users of advanced neuroprostheses may lose the ability or desire to communicate with human beings who lack such devices, thereby fragmenting human societies[59] and possibly weakening users’ solidarity with other human beings.[60] Possession of some kinds of neuroprosthetic devices may exclude their users from employment in roles where ‘natural,’ unmodified workers are considered desirable or even required (e.g., for liability or security reasons).

• Vulnerability to data theft, blackmail, and extortion. A hacker, computer virus, or other agent may be able to steal data contained in a neuroprosthesis or use it to gather personal data (potentially including the contents of thoughts, memories, or sensory experiences)[61] that could be used for blackmail, extortion, corporate espionage, or terrorism.

54. Koops & Leenes (2012), p. 125.
55. McGee (2008), p. 213.
56. Gasson (2012), pp. 15-16.
57. McGee (2008), p. 216.
58. Koops & Leenes (2012), pp. 124-25.
59. McGee (2008), pp. 214-16; Warwick (2014), p. 271.
60. Koops & Leenes (2012), p. 127.
61. McGee (2008), p. 217; Koops & Leenes (2012), pp. 117, 130; Gasson (2012), p. 21; Gladden, The Handbook of Information Security for Advanced Neuroprosthetics (2015), pp. 167-68.
Applying the Model: Toward a New Typology of Neuroprosthetics
As a test case, we can use this model to analyze one kind of neuroprosthetic device that is expected to become available in the future: a cochlear implant with audio recording, playback, upload, download, and live streaming capabilities.[62] Everything that its user hears would be recorded for later playback on demand. Instead of simply conveying the ‘real’ sounds produced by the physical environment, the device could augment or replace those sounds with other audio that is stored in or transmitted to it. Potential capacities and impairments created for the user of such a device are identified below.
Figure 2. The model applied to analyze impacts of a particular auditory neuroprosthesis.
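As a purely illustrative sketch – the class names, field names, and example strings below are our own shorthand, not elements defined by the model itself – the ontology’s structure of three user roles, each mapped to the two impact sub-dimensions, might be encoded as a simple data structure, with the auditory neuroprosthesis above as a test case:

```python
from dataclasses import dataclass, field
from enum import Enum

class Role(Enum):
    """The three levels at which the model views a device's human user."""
    METAVOLITIONAL_AGENT = "sapient metavolitional agent"
    EMBODIED_ORGANISM = "embodied embedded organism"
    SOCIOECONOMIC_ACTOR = "social and economic actor"

@dataclass
class Impact:
    """Qualitative impacts at one level, split into the model's two
    sub-dimensions: new capacities and detriments."""
    capacities: list = field(default_factory=list)
    detriments: list = field(default_factory=list)

@dataclass
class Neuroprosthesis:
    """A device described as a mapping from each user role to its impact."""
    name: str
    impacts: dict = field(default_factory=dict)

# The cochlear implant test case, with sample impacts drawn from the text
implant = Neuroprosthesis(
    name="cochlear implant with recording, playback, and streaming",
    impacts={
        Role.METAVOLITIONAL_AGENT: Impact(
            capacities=["perfect audio memory of one's experiences"],
            detriments=["inability to distinguish real from virtual sounds"],
        ),
        Role.EMBODIED_ORGANISM: Impact(
            capacities=["augmented auditory sensing of the environment"],
            detriments=["loss of direct control over a sensory organ"],
        ),
        Role.SOCIOECONOMIC_ACTOR: Impact(
            capacities=["live audio streaming to and from other parties"],
            detriments=["exposure of personal audio data to 'sense hackers'"],
        ),
    },
)

# The model yields a qualitative impact profile, not a single 'impact score'
for role, impact in implant.impacts.items():
    print(f"{role.value}: +{impact.capacities} / -{impact.detriments}")
```

Because each device yields a structured profile rather than a number, devices with similar profiles could then be grouped mechanically – which is the sense in which the ontology can serve as the seed of new typologies.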
As can be seen from this example, the model does not yield a single quantitative ‘impact score’ for each of the three levels but rather uses qualitative descriptions to capture a complex set of impacts. The model delineates a device ontology that can form the basis of further reflection on and analysis of a neuroprosthetic device’s impact from cybernetic, phenomenological, and existentialist perspectives. By allowing neuroprosthetic devices with similar characteristics to be identified and grouped, it can also serve as the basis of new typologies for neurotechnologies.

62. See Koops & Leenes (2012); McGee (2008); Gladden, “Tachikomatic Domains” (2015).
Conclusion
Ongoing advances in neuroprosthetics are expected to yield a diverse range of new technologies with the potential to dramatically reshape a human being’s internal mental life, his or her bodily existence and interaction with the environment, and his or her participation in social and economic networks and activity. The new capacities and impairments that such technologies provide may allow human beings to physically and virtually inhabit digital ecosystems and interact socially in ways so revolutionary that they can best be understood as ‘posthuman.’
The model developed in this text for understanding these impacts of neuroprosthetic devices is already being elaborated in the specific context of information security to provide a framework for future research and practice in that field.[63] We hope that, by further refining and applying the model in other contexts, it will become possible for engineers, ethicists, policymakers, and consumers to better understand how particular kinds of neuroprosthetic devices may contribute to the development of new digital ecosystems that can be a powerful venue for the growth, liberation, and empowerment – or the oppression and dehumanization – of the human beings of the future.
63. See Gladden, The Handbook of Information Security for Advanced Neuroprosthetics (2015).

References

Calverley, D.J. “Imagining a Non-biological Machine as a Legal Person.” AI & SOCIETY 22, no. 4 (2008): 523-37.

Cosgrove, G.R. “Session 6: Neuroscience, Brain, and Behavior V: Deep Brain Stimulation.” Meeting of the President’s Council on Bioethics, Washington, DC, June 24-25, 2004. https://bioethicsarchive.georgetown.edu/pcbe/transcripts/june04/session6.html. Accessed June 12, 2015.

Ferrando, Francesca. “Posthumanism, Transhumanism, Antihumanism, Metahumanism, and New Materialisms: Differences and Relations.” Existenz: An International Journal in Philosophy, Religion, Politics, and the Arts 8, no. 2 (Fall 2013): 26-32.

Gasson, M.N. “Human ICT Implants: From Restorative Application to Human Enhancement.” In Human ICT Implants: Technical, Legal and Ethical Considerations, edited by Mark N. Gasson, Eleni Kosta, and Diana M. Bowman, pp. 11-28. Information Technology and Law Series 23. T. M. C. Asser Press, 2012.

Gladden, Matthew E. “Cryptocurrency with a Conscience: Using Artificial Intelligence to Develop Money That Advances Human Ethical Values.” Annales: Ethics in Economic Life 18, no. 4 (2015): 85-98.

Gladden, Matthew E. “Cybershells, Shapeshifting, and Neuroprosthetics: Video Games as Tools for Posthuman ‘Body Schema (Re)Engineering’.” Keynote presentation at the Ogólnopolska Konferencja Naukowa Dyskursy Gier Wideo, Facta Ficta / AGH, Kraków, June 6, 2015.

Gladden, Matthew E. The Handbook of Information Security for Advanced Neuroprosthetics. Indianapolis: Synthypnion Academic, 2015.

Gladden, Matthew E. “Tachikomatic Domains: Utopian Cyberspace as a ‘Contingent Heaven’ for Humans, Robots, and Hybrid Intelligences.” Presentation at His Master’s Voice: Utopias and Dystopias in Audiovisual Culture, Facta Ficta / Uniwersytet Jagielloński, Kraków, March 24, 2015.

Gripsrud, M., and R. Hjorthol. “Working on the Train: From ‘Dead Time’ to Productive and Vital Time.” Transportation 39, no. 5 (2012): 941-56.

Han, J.-H., S.A. Kushner, A.P. Yiu, H.-W. Hsiang, T. Buch, A. Waisman, B. Bontempi, R.L. Neve, P.W. Frankland, and S.A. Josselyn. “Selective Erasure of a Fear Memory.” Science 323, no. 5920 (2009): 1492-96.

Koops, B.-J., and R. Leenes. “Cheating with Implants: Implications of the Hidden Information Advantage of Bionic Ears and Eyes.” In Human ICT Implants: Technical, Legal and Ethical Considerations, edited by Mark N. Gasson, Eleni Kosta, and Diana M. Bowman, pp. 113-34. Information Technology and Law Series 23. T. M. C. Asser Press, 2012.

Kourany, J.A. “Human Enhancement: Making the Debate More Productive.” Erkenntnis 79, no. 5 (2013): 981-98.

Kraemer, Felicitas. “Me, Myself and My Brain Implant: Deep Brain Stimulation Raises Questions of Personal Authenticity and Alienation.” Neuroethics 6, no. 3 (2011): 483-97. doi:10.1007/s12152-011-9115-7.

Lebedev, M. “Brain-Machine Interfaces: An Overview.” Translational Neuroscience 5, no. 1 (2014): 99-110.

McGee, E.M. “Bioelectronics and Implanted Devices.” In Medical Enhancement and Posthumanity, edited by Bert Gordijn and Ruth Chadwick, pp. 207-24. The International Library of Ethics, Law and Technology 2. Springer Netherlands, 2008.

Ramirez, S., X. Liu, P.-A. Lin, J. Suh, M. Pignatelli, R.L. Redondo, T.J. Ryan, and S. Tonegawa. “Creating a False Memory in the Hippocampus.” Science 341, no. 6144 (2013): 387-91.

Rao, R.P.N., A. Stocco, M. Bryan, D. Sarma, T.M. Youngquist, J. Wu, and C.S. Prat. “A Direct Brain-to-Brain Interface in Humans.” PLoS ONE 9, no. 11 (2014).

Shih, J. “Project Time in Silicon Valley.” Qualitative Sociology 27, no. 2 (2004): 223-45.

Warwick, K. “The Cyborg Revolution.” Nanoethics 8 (2014): 263-73.

Wiener, Norbert. Cybernetics: Or Control and Communication in the Animal and the Machine, second edition. Cambridge, MA: The MIT Press, 1961. [Quid Pro ebook edition for Kindle, 2015.]
Chapter Six
The Impacts of Human Neurocybernetic Enhancement on Organizational Business Models
Introduction
Alongside emerging posthumanizing technologies such as those related to social robotics and genetic engineering, the field of neuroprosthetics is creating both significant opportunities and risks for human societies.[1] Already a wide range of neuroprosthetic devices (including many kinds of neural implants) are used for therapeutic purposes to treat particular medical conditions. It is expected that in the future, increasingly large populations of human beings will use such devices for purposes of elective human augmentation and enhancement, to acquire sensory, cognitive, and motor abilities that are radically different from those possessed by typical human beings.[2] The social and economic impact of these neuroprosthetic technologies will be significant: such devices have the potential to reshape the ways in which human beings interact with one another, enabling them to create new forms of social structures and organizations and to engage in new kinds of informational and economic exchange that were never previously possible.[3]

1. For an overview of posthumanism and the forces of posthumanization, see Ferrando, “Posthumanism, Transhumanism, Antihumanism, Metahumanism, and New Materialisms: Differences and Relations” (2013); Herbrechter, Posthumanism: A Critical Analysis (2013); and “A Typology of Posthumanism: A Framework for Differentiating Analytic, Synthetic, Theoretical, and Practical Posthumanisms” in Gladden, Sapient Circuits and Digitalized Flesh: The Organization as Locus of Technological Posthumanization (2016). For an analysis of technological posthumanization and its impact on organizations, see “Organizational Posthumanism” in Gladden, Sapient Circuits and Digitalized Flesh (2016).
2. Warwick & Gasson, “Implantable Computing” (2008); McGee, “Bioelectronics and Implanted Devices” (2008); Gasson et al., “Human ICT Implants: From Invasive to Pervasive” (2012).
3. See Gladden, “Neural Implants as Gateways to Digital-Physical Ecosystems and Posthuman Socioeconomic Interaction” (2016).
As participants in the larger societies, economies, and informational ecosystems within which they exist, businesses will be impacted by the widening use of neuroprosthetic technologies. Companies that are able to identify, understand, and anticipate these technological and social changes and transform their business models accordingly may be able to secure significant competitive advantages. On the other hand, companies that are not able to adapt their business models quickly enough to the social, economic, political, cultural, and ethical changes driven by neuroprosthetic technologies may find themselves unable to compete, grow, or even survive.
In this paper, we briefly consider the concept of a ‘business model’ and the situations that require a company to change and update its business model. We then explore three main areas in which the rise of neuroprosthetics is expected to transform humanity. Finally, we identify the impact that such transformation will have on companies’ business models and consider an example highlighting the reasons why many companies will need to adopt new business models in order to address these changed realities.
The Definition of a ‘Business Model’
Management scholars have proposed various definitions of exactly what
constitutes a ‘business model.’
4
For example, Magretta suggests that “Business
models […] are, at heart, stories – stories that explain how enterprises work”
5
and that a business model’s primary value “is that it focuses attention on how
all the elements of the system fit into a working whole.”
6
In this sense, a busi-
ness model can be understood as a sort of heuristic tool – a narrative that helps
both employees and customers find meaning and an overarching purpose in
individual tasks that are being carried out by hundreds, thousands, or even
millions of individuals interacting around the globe.
On the other hand, Johnson, Christensen, and Kagermann define a business
model as “four interlocking elements that, taken together, create and deliver
value”
7
; these four elements are the customer value proposition, profit for-
mula, key resources, and key processes.
8
Meanwhile, Casadesus-Masanell and
Ricart
9
argue that a good business model is 1) aligned with a company’s goals,
2) self-reinforcing, and 3) robust – and that a business model can be distilled
4
Casadesus-Masanell & Ricart, “How to Design a Winning Business Model” (2011), pp. 102-03.
5
Magretta, “Why Business Models Matter” (2002), p. 87.
6
Magretta (2002), p. 90.
7
Johnson et al., “Reinventing Your Business Model” (2008), p. 52.
8
Johnson et al. (2008), p. 54.
9
Casadesus-Masanell & Ricart (2011), p. 102.
Chapter Six: Neurocybernetic Enhancement and Business Models • 277
into a set of policy, asset, and governance choices that are made by the com-
pany’s managers and which generate outcomes that either manifest them-
selves immediately and are ephemeral or which take time to develop and are
more permanent.
10
One theme common to such definitions is that a company’s business model can be contrasted with its strategies and tactics. If one understands a business’s tactics as fine-grained decisions made at a ‘micro’ level – that is, choices that are made in response to immediate circumstances, can be changed quickly, and often involve only a single business unit or process – then the company’s business model comprises those decisions made at the ‘macro’ level, which can be changed only with difficulty and over time. The business model thus encompasses the entire general cycle of activities taking place within the company and the ways in which those activities (ideally) reinforce one another. From this perspective, a business’s strategies can be understood as a bridge between the company’s broad business model and its more specific operational tactics.[11]
Circumstances Enabling (or Requiring) a New Business Model
Scholars identify a number of circumstances in which a company can (or must) change its business model in order to develop or maintain a competitive advantage. For Magretta, business models must be changed when “they fail either the narrative test (the story doesn’t make sense) or the numbers test (the P&L doesn’t add up).”[12]

For Johnson et al.,[13] it makes sense to adopt a new business model when facing particular “strategic circumstances,” such as “The opportunity to address through disruptive innovation the needs of large groups of potential customers who are shut out of a market entirely because existing solutions are too expensive or complicated for them” or “The opportunity to capitalize on a brand-new technology by wrapping a new business model around it.” Moreover, a new business model is required if the business environment has evolved to such an extent that all four elements of the existing model – the customer value proposition, profit formula, key resources, and key processes – have become obsolete and are in need of change.[14]

[10] Casadesus-Masanell & Ricart (2011), p. 103.
[11] Casadesus-Masanell & Ricart (2011), p. 107; Magretta (2002), p. 91.
[12] Magretta (2002), p. 90.
[13] Johnson et al. (2008), p. 57.
[14] Johnson et al. (2008), p. 57.
278 • Posthuman Management
For Casadesus-Masanell and Ricart,[15] the need for new business models in today’s world is often driven by fast-evolving technological transformation. For example, many large high-tech businesses such as Facebook and eBay are succeeding thanks to Internet-enabled ‘network effects’[16] that multiply the value offered to consumers as the number of consumers grows. Once established, such network effects create a ‘virtuous cycle’ that continuously strengthens the company’s customer value proposition and wards off potential competitors. Even for those companies that are not primarily e-businesses, the use of an effective business model can perform a similar function by generating virtuous cycles for the companies.[17]
Regardless of which of these conceptual frameworks one adopts, we shall see in the following sections that the rise of neuroprosthetic technologies is expected to have an impact that will both enable and require many businesses to transform their business models within the foreseeable future.
Envisioning Posthuman Neuroprosthetics
A wide range of devices that link directly with the brain’s neural circuitry are already in use.[18] These include sensory neuroprostheses (such as cochlear implants and artificial retinas), motor neuroprostheses (such as implants allowing a wheelchair to be controlled by thought), and bidirectional sensorimotor neuroprostheses (such as a prosthetic hand that provides tactile sensations to its user and moves in response to the user’s thoughts); in addition, a further category of cognitive neuroprostheses (such as memory implants) is in its earliest experimental stages.[19]
It is expected that future neuroprosthetic devices will increasingly be designed to provide abilities that exceed or differ from what is naturally possible for human beings.[20] Such technologies’ use for physical and cognitive enhancement is expected to expand the market for neuroprosthetics and implantable computers beyond the segment of the population that currently relies on them to treat medical conditions.[21] Researchers anticipate that future sensory neuroprosthetics may give human beings the capacity to experience their environments in new ways, such as through the use of telescopic or night vision[22] or by overlaying visual data with supplemental information displayed using augmented reality.[23]

[15] Casadesus-Masanell & Ricart (2011), pp. 101-02.
[16] Casadesus-Masanell & Ricart (2011), p. 102.
[17] Casadesus-Masanell & Ricart (2011), p. 102.
[18] See Gladden, “Neural Implants as Gateways” (2016), and Lebedev, “Brain-Machine Interfaces: An Overview” (2014).
[19] Lebedev (2014), pp. 99-100.
[20] Gasson, “ICT Implants” (2008); Gasson et al. (2012); McGee (2008); Merkel et al., “Central Neural Prostheses” (2007).
[21] See McGee (2008) and Gasson et al. (2012).
Some researchers envision the development of implants that can record all of a person’s audiovisual experiences for later playback, effectively granting the person perfect audiovisual memory.[24] Building on successful experiments with creating artificial memories in mice,[25] other researchers envision the possibility of a person being able to download new knowledge or skills onto a memory chip implanted in his or her brain.[26] Technologies are also being developed[27] that may eventually allow direct communication between two human brains that are physically located thousands of miles apart.
The use of such advanced neuroprosthetic devices will reshape the ways in which human beings collaborate with one another. Already ‘cyborg-cyborg interaction’ is becoming a fundamental aspect of human society, and it will increasingly serve as a foundation for new kinds of social relationships and structures.[28] Neuroprosthetics will allow for increasingly intimate forms of communication that do not involve physical face-to-face interaction but are instead mediated by technology, thereby facilitating the development of novel types of posthuman interpersonal relationships.[29]
The Impact of Neuroprosthetics on Business Models
Neuroprosthetic devices such as neural implants are expected to impact human beings on at least three different levels. First, a neural implant affects a human being at the internal, cognitive level, in his or her role as an intelligent agent possessing his or her own conscious awareness, volition, memory, and conscience. Second, a neural implant affects a human being in his or her role as a physical actor with a body that inhabits, senses, and manipulates a particular spatial environment. Third, a neural implant affects a human being in his or her role as a member of the social and economic networks in which the individual participates.[30]
[22] Gasson et al. (2012); Merkel et al. (2007).
[23] Koops & Leenes, “Cheating with Implants: Implications of the Hidden Information Advantage of Bionic Ears and Eyes” (2012).
[24] Merkel et al. (2007); Robinett, “The Consequences of Fully Understanding the Brain” (2002).
[25] Ramirez et al., “Creating a False Memory in the Hippocampus” (2013).
[26] McGee (2008).
[27] See Rao et al., “A Direct Brain-to-Brain Interface in Humans” (2014).
[28] Fleischmann, “Sociotechnical Interaction and Cyborg–Cyborg Interaction: Transforming the Scale and Convergence of HCI” (2009).
[29] Grodzinsky et al., “Developing Artificial Agents Worthy of Trust: ‘Would You Buy a Used Car from This Artificial Agent?’” (2011).
[30] For a more detailed exploration of these three levels of impact, see Gladden, “Neural Implants as Gateways” (2016).

How will these effects of neuroprosthetics impact companies’ business models? We can visualize this by drawing on concepts from management cybernetics to understand Johnson, Christensen, and Kagermann’s four-part business model as a representation of a business as a system in which processes act on (or are performed by) resources in order to generate a net benefit for the customer (i.e., the customer value proposition) and a net benefit for the company (i.e., a net profit) that can then be used to acquire more resources and thereby grow the company.[31] We depict a generic example of this ‘business process cycle’ view of a business model in Figure 1. In Figure 2, we overlay onto that business process cycle a representation of the impacts of neuroprosthetic devices at the levels of internal cognition, bodily interface with the environment, and socioeconomic interaction, in order to depict the fact that each of the three spheres of neuroprosthetics’ impact will touch all four areas of a company’s business model. We would argue that no part of a business model will be left unaffected by this posthumanizing technological change.

Figure 1. The four-element generic business model of Johnson, Christensen, and Kagermann, reinterpreted in light of management cybernetics as a business process cycle.

[31] For a discussion of such dynamics from the perspective of management cybernetics, see, e.g., Gladden, “The Artificial Life-Form as Entrepreneur: Synthetic Organism-Enterprises and the Reconceptualization of Business” (2014), and Beer, Brain of the Firm (1981).
Example: The Impact of a Particular Kind of Sensory Playback Device
Here we can consider the business-model impact of one particular kind of neuroprosthetic device whose development is anticipated: a sensorimotor-cognitive neural implant that allows a human being to instantaneously download (simply by thinking about this action and ‘willing’ it to occur) information in the form of video, audio, text, and images that the person can then ‘play back’ to his or her conscious awareness in the form of sensory input that augments or replaces the actual sense data being provided by one’s environment.[32] In this manner, a person could, for example, download and watch a film ‘internally,’ within his or her own mind, without nearby individuals ever realizing that this was taking place. Similarly, a person could download vast libraries of reference books into the implant, which the person could then read internally at will – perhaps running an automated process to search the texts for particular terms, as desired.

Figure 2. A representation of the future impacts on all four elements of a generic business model of the use of advanced neuroprosthetic devices.

[32] Koops & Leenes (2012); McGee (2008).
One can immediately foresee ways in which the existence and widespread use of such a technology would significantly impact the effectiveness of many companies’ existing business models by affecting each of the four model components described above.
For example, the resources – including human resources – available to a company would be notably altered. On the one hand, a company could potentially ‘retrain’ its entire workforce by downloading instructional materials into their implants and conceivably even forcing the employees’ implants to play back the materials to the employees’ conscious awareness during working hours.[33] On the other hand, employees who are sitting in an office and appear to be ‘working’ could – unbeknownst to their employer – actually be watching within their thoughts and their ‘mind’s eye’ a football game or movie that they had just downloaded into their minds. Similarly, the processes available for a business to utilize in coordinating its activities would change significantly: emails, instant messages, and video messages sent between employees could appear instantly in their field of vision and conscious awareness, and employees responsible for managing manufacturing processes could receive real-time status updates and alerts downloaded into their implants around the clock, regardless of whether they are at home or at work. For many businesses, the customer value proposition that leads to a net benefit for the consumer and the profit formula that leads to a net benefit for the company would also be impacted: for example, a customer who is in the midst of negotiating a major purchase from a company or who is in a business’s retail location on the verge of deciding to purchase a product could – without the company’s employees realizing it – instantly download from the Internet into his or her mind hundreds of user reviews of the product and news stories relating to it, in order to more accurately weigh the value that the product would offer him or her. Moreover, the potential customer might in that moment even find his or her implant flooded with offers from competing companies that have detected the person’s impending purchase and are seeking to advance their own profit formulas by offering the consumer a more appealing customer value proposition.
Conclusion
As we have seen through one of many possible examples, the increasing technological sophistication and expanding use of neuroprosthetics have the potential to drive changes in all four aspects of a company’s business model, thereby requiring the development of a new business model. Moreover, even if a particular company is not for these reasons required to develop a new business model, such advances in neuroprosthetics will create for companies the kinds of avenues for exploiting disruptive innovation and new technologies that can make the proactive adoption of a new business model desirable. The impact of neuroprosthetic devices in spurring the evolution of business environments will likely create meaningful opportunities for companies that possess the foresight and flexibility to update their business models – and potential risks for companies that are unwilling or unable to adapt their business models in response to the technological and social transformations that are expected to occur in the coming years and decades.

[33] Here we are not addressing the ethicality or legality of such possibilities; we only note that they represent a theoretical possibility that some companies might be tempted to pursue.
References
Beer, Stafford. Brain of the Firm. 2nd ed. New York: John Wiley, 1981.
Casadesus-Masanell, R., and J.E. Ricart. “How to Design a Winning Business Model.” Harvard Business Review, Jan.-Feb. (2011): 100-07.
Ferrando, Francesca. “Posthumanism, Transhumanism, Antihumanism, Metahumanism, and New Materialisms: Differences and Relations.” Existenz: An International Journal in Philosophy, Religion, Politics, and the Arts 8, no. 2 (2013): 26-32.
Fleischmann, Kenneth R. “Sociotechnical Interaction and Cyborg–Cyborg Interaction: Transforming the Scale and Convergence of HCI.” The Information Society 25, no. 4 (2009): 227-35. doi:10.1080/01972240903028359.
Gasson, M.N. “ICT Implants.” In The Future of Identity in the Information Society, edited by S. Fischer-Hübner, P. Duquenoy, A. Zuccato, and L. Martucci, pp. 287-95. Springer US, 2008.
Gasson, M.N., E. Kosta, and D.M. Bowman. “Human ICT Implants: From Invasive to Pervasive.” In Human ICT Implants: Technical, Legal and Ethical Considerations, edited by Mark N. Gasson, Eleni Kosta, and Diana M. Bowman, pp. 1-8. Information Technology and Law Series 23. T. M. C. Asser Press, 2012.
Gladden, Matthew E. “The Artificial Life-Form as Entrepreneur: Synthetic Organism-Enterprises and the Reconceptualization of Business.” In Proceedings of the Fourteenth International Conference on the Synthesis and Simulation of Living Systems, edited by Hiroki Sayama, John Rieffel, Sebastian Risi, René Doursat, and Hod Lipson, pp. 417-18. Cambridge, MA: The MIT Press, 2014.
Gladden, Matthew E. “Neural Implants as Gateways to Digital-Physical Ecosystems and Posthuman Socioeconomic Interaction.” In Digital Ecosystems: Society in the Digital Age, edited by Łukasz Jonak, Natalia Juchniewicz, and Renata Włoch, pp. 85-98. Warsaw: Digital Economy Lab, University of Warsaw, 2016.
Gladden, Matthew E. Sapient Circuits and Digitalized Flesh: The Organization as Locus of Technological Posthumanization. Indianapolis: Defragmenter Media, 2016.
Grodzinsky, F.S., K.W. Miller, and M.J. Wolf. “Developing Artificial Agents Worthy of Trust: ‘Would You Buy a Used Car from This Artificial Agent?’” Ethics and Information Technology 13, no. 1 (2011): 17-27.
Herbrechter, Stefan. Posthumanism: A Critical Analysis. London: Bloomsbury, 2013. [Kindle edition.]
Johnson, M.W., C.M. Christensen, and H. Kagermann. “Reinventing Your Business Model.” Harvard Business Review, Dec. (2008): 51-59.
Koops, B.-J., and R. Leenes. “Cheating with Implants: Implications of the Hidden Information Advantage of Bionic Ears and Eyes.” In Human ICT Implants: Technical, Legal and Ethical Considerations, edited by Mark N. Gasson, Eleni Kosta, and Diana M. Bowman, pp. 113-34. Information Technology and Law Series 23. T. M. C. Asser Press, 2012.
Lebedev, M. “Brain-Machine Interfaces: An Overview.” Translational Neuroscience 5, no. 1 (2014): 99-110.
Magretta, J. “Why Business Models Matter.” Harvard Business Review, May (2002): 86-92.
McGee, E.M. “Bioelectronics and Implanted Devices.” In Medical Enhancement and Posthumanity, edited by Bert Gordijn and Ruth Chadwick, pp. 207-24. The International Library of Ethics, Law and Technology 2. Springer Netherlands, 2008.
Merkel, R., G. Boer, J. Fegert, T. Galert, D. Hartmann, B. Nuttin, and S. Rosahl. “Central Neural Prostheses.” In Intervening in the Brain: Changing Psyche and Society, pp. 117-60. Ethics of Science and Technology Assessment 29. Springer Berlin Heidelberg, 2007.
Ramirez, S., X. Liu, P.-A. Lin, J. Suh, M. Pignatelli, R.L. Redondo, T.J. Ryan, and S. Tonegawa. “Creating a False Memory in the Hippocampus.” Science 341, no. 6144 (2013): 387-91.
Rao, R.P.N., A. Stocco, M. Bryan, D. Sarma, T.M. Youngquist, J. Wu, and C.S. Prat. “A Direct Brain-to-Brain Interface in Humans.” PLoS ONE 9, no. 11 (2014).
Robinett, W. “The Consequences of Fully Understanding the Brain.” In Converging Technologies for Improving Human Performance: Nanotechnology, Biotechnology, Information Technology and Cognitive Science, edited by M.C. Roco and W.S. Bainbridge, pp. 166-70. National Science Foundation, 2002.
Warwick, K., and M. Gasson. “Implantable Computing.” In Digital Human Modeling, edited by Y. Cai, pp. 1-16. Lecture Notes in Computer Science 4650. Berlin/Heidelberg: Springer, 2008.
Chapter Seven

Implantable Computers and Information Security: A Managerial Perspective[1]
Abstract. The interdisciplinary field of information security (InfoSec) already draws significantly on the biological and human sciences; for example, it relies on knowledge of human physiology to design biometric authentication devices and utilizes insights from psychology to predict users’ vulnerability to social engineering techniques and develop preventative measures. The growing use of computers implanted within the human body for purposes of therapy or augmentation will compel InfoSec to develop new or deeper relationships with fields such as medicine and biomedical engineering, insofar as the practices and technologies that InfoSec implements for implantable computers must not only secure the information contained within such devices but must also avoid causing biological or psychological harm to the human beings within whose organisms the computers are embedded.

In this text we identify unique issues and challenges that implantable computers create for information security. By considering the particular scenario of the internal computer controlling a retinal implant, we demonstrate the ways in which InfoSec’s traditional concepts of the confidentiality, integrity, and availability of information and the use of physical, logical, and administrative access controls become intertwined with issues of medicine and biomedical engineering. Finally, we formulate a novel cybernetic approach that provides a useful paradigm for conceptualizing the relationship of information security to medicine and biomedical engineering in the context of implantable computers.

[1] For an investigation of information security for implantable computers (and especially those contained within or connected to neural implants) that explores these issues in more depth than is possible within this text, see Gladden, The Handbook of Information Security for Advanced Neuroprosthetics (2015).
Introduction
Although the public perception of information security (or ‘InfoSec’) often focuses on the field’s extensive body of theory and practice relating to computer science and information technology, information security is in fact a transdisciplinary field in which InfoSec teams – such as those maintained by large businesses – must not only possess expertise with a wide array of hardware and software systems but must also be knowledgeable about such diverse fields as law, ethics, management, finance and accounting, and building architecture and maintenance. InfoSec also draws significantly on insights from the biological and social sciences. For example, it relies on a knowledge of human physiology in order to design (and later, secure) biometric access-control systems that are capable of identifying and authenticating human users based on traits such as their voice patterns, handwriting patterns, hand shapes and vascular patterns, fingerprints, facial features, iris patterns, or retinal blood-vessel patterns.[2] Similarly, InfoSec relies on insights from the field of psychology to predict users’ vulnerability to social engineering techniques such as phishing and to develop effective measures for prevention, detection, and response.[3]
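As a concrete illustration of the kind of physiological knowledge such biometric systems encode: matchers commonly reduce a trait (e.g., an iris pattern) to a fixed-length binary template and authenticate by thresholding the fraction of differing bits between the enrolled template and a fresh sample. The sketch below is a minimal, hypothetical illustration of that matching step only; the templates, the threshold value, and the function names are invented for the example and do not describe any particular commercial system.

```python
def hamming_distance_fraction(template_a: bytes, template_b: bytes) -> float:
    """Fraction of differing bits between two equal-length biometric templates."""
    if len(template_a) != len(template_b):
        raise ValueError("templates must be the same length")
    differing = sum(bin(x ^ y).count("1") for x, y in zip(template_a, template_b))
    return differing / (len(template_a) * 8)

def authenticate(stored: bytes, sample: bytes, threshold: float = 0.32) -> bool:
    """Accept the user if the fresh sample is 'close enough' to the enrolled template.

    The threshold trades off false accepts against false rejects; 0.32 is an
    illustrative value, not a recommendation for any real system.
    """
    return hamming_distance_fraction(stored, sample) <= threshold

# A sample differing from enrollment only by sensor noise is accepted;
# a template from a different person is rejected.
enrolled = bytes([0b10110010, 0b01101100])
fresh    = bytes([0b10110011, 0b01101100])  # one bit flipped by sensor noise
assert authenticate(enrolled, fresh)
assert not authenticate(enrolled, bytes([0b01001101, 0b10010011]))
```

Because biometric samples are never bit-for-bit identical across captures, the security of such a system rests on choosing the threshold well – which is precisely where physiological knowledge about within-person and between-person variation enters InfoSec practice.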
A new phenomenon that InfoSec will need to robustly address during the coming years is the growing use of implantable computers that operate within the human body for purposes of therapeutic treatment or human augmentation. The expansion of this new technological frontier creates unique challenges that will compel information security to develop relationships with fields such as medicine and biomedical engineering that are closer, richer, and more critically important than those that have existed in the past. The rise of implantable computing will elicit a qualitative change in these relationships; InfoSec personnel will need to work closely with the doctors and biomedical engineers who are designing and implanting such devices, in order to:

• Understand the design and functioning of computers which – after their implantation – the InfoSec personnel will likely be unable to physically inspect or manipulate and which may utilize specialized proprietary hardware, operating systems, and software applications.

• Understand an implantable computer’s connections with the biological systems of its human host, in order to recognize the kinds of information (if any) that the device is gathering regarding the host’s biological and cognitive processes, the kinds of information (if any) that the device is transmitting to the mind of its human host, and any other effects that the device is capable of having on its human host.

• Develop InfoSec practices and technologies for use with implantable computers that not only secure the information contained within such devices but which also avoid creating biological or psychological harm (or even the danger of such harm) for the human beings within whose organisms the computers are embedded.

[2] Rao & Nayak, The InfoSec Handbook (2014).
[3] Rao & Nayak (2014).
To explore this growing interconnection of information security, medicine, and biomedical engineering, we begin by identifying unique issues and challenges that implantable computers create for information security. By considering the scenario of a computer that is contained within a sensory neuroprosthetic device in the form of a retinal implant, we then demonstrate the ways in which InfoSec’s traditional concepts of the confidentiality, integrity, and availability of information and the use of physical, logical, and administrative access controls become intertwined with issues of medicine and biomedical engineering. Finally, we suggest that in order to analyze these issues further and develop effective avenues of communication between the fields of information security and biomedical engineering, it may be useful to employ the concept of celyphocybernetics, which views both the human body and any implantable computers embedded within it as a single cybernetic system for communication and control that supports the mind of the human being to whom the body belongs.
The Fundamentals of Implantable Computers
Current Implantable Computers
Current forms of information and communications technology (ICT) include a number of implantable devices, such as passive RFID tags,[4] that can store information and interact with computers but which are not in themselves computers. A growing number of implantable devices, though, indeed constitute full-featured implantable computers, insofar as they possess their own processor, memory, software, and input/output mechanisms; they often also possess programming that can be remotely updated after the devices have been implanted into the body of their human host. These forms of technology include many implantable medical devices (IMDs) such as defibrillators, pacemakers, deep brain stimulators, sensory neuroprostheses (including retinal and cochlear implants), body sensor networks (BSNs), and some of the more sophisticated forms of RFID transponders.[5] Such implantable computers (ICs) increasingly operate in rich and complex biocybernetic control loops with the organism of their human host, allowing, for example, the physiological and cognitive activity of their host to be detected, analyzed, and interpreted for use in exercising real-time control over computers or robotic devices.[6] The growing sophistication of the computers contained within such implantable devices means that they increasingly serve as sites for the reception, storage, processing, and transmission of large amounts of highly sensitive information[7] regarding their human hosts’ everyday interactions with the environment, internal biological processes, and even cognitive activity.

[4] Gasson, “Human ICT Implants: From Restorative Application to Human Enhancement” (2012); Gasson, “ICT Implants” (2008).
[5] Gasson et al., “Human ICT Implants: From Invasive to Pervasive” (2012); Gasson (2008).
Future Implantable Computers
The implantable computers currently in use have typically been designed to serve a restorative or therapeutic medical purpose; they might treat a particular illness or restore some sensory, motor, or cognitive ability that their user has lost as a result of illness or injury. It is expected, though, that future generations of ICs will increasingly be designed not to restore some ordinary human capacity that has been lost but to enhance their users’ physical or intellectual capacities by providing abilities that exceed or differ from what is naturally possible for human beings. For example, future models of retinal implants might augment normal human vision by providing telescopic or night vision,[8] and ICs with functionality similar to that of a miniaturized smartphone might offer their users wireless communication capacities that include access to the Internet and cloud-based software and data-storage services. The growing elective use of ICs for purposes of physical and cognitive augmentation is expected to expand the implantable computing device market well beyond that segment of the population which currently relies on ICs to treat medical conditions.[9]
Information Security as Applied to Implantable Computers
Key Concepts of Information Security
Information security is a discipline whose fundamental aim is to ensure the confidentiality, availability, and integrity of information – often referred to as the ‘CIA Triad’ model for understanding information security.[10] In the contemporary world of the Internet and the Internet of Things, Big Data, and nearly ubiquitous computing, securing information often means securing the computerized systems that are used to gather, store, process, and transmit data. However, in its broader scope InfoSec also seeks to ensure the confidentiality, availability, and integrity of information that is maintained in other systems such as printed files and records, magnetic audio tapes, or even the human mind (e.g., confidential business information known to a company’s employees which practitioners of corporate espionage might attempt to educe through bribery, coercion, or social engineering). Within large businesses or government agencies, InfoSec departments seek to ensure information security through the design and implementation of comprehensive approaches that incorporate practices and techniques such as strategic planning, risk management, training, configuration management, incident response, and the use of physical, logical, and administrative security controls.[11] Key concepts and best practices for the field are described by industry-leading standards such as those found in NIST SP 800-100[12] and ISO/IEC 27001:2013.[13]

[6] Fairclough, “Physiological Computing: Interfacing with the Human Nervous System” (2010); Park et al., “The Future of Neural Interface Technology” (2009).
[7] Kosta & Bowman, “Implanting Implications: Data Protection Challenges Arising from the Use of Human ICT Implants” (2012); Li et al., “Advances and Challenges in Body Area Network” (2011); Rotter & Gasson, “Implantable Medical Devices: Privacy and Security Concerns” (2012).
[8] Gasson et al. (2012); Merkel et al., “Central Neural Prostheses” (2007).
[9] McGee, “Bioelectronics and Implanted Devices” (2008); Gasson et al. (2012).
Overview of Information Security Vulnerabilities and Risks for Implantable Computers
The possession of an implantable computer creates significant InfoSec vulnerabilities and risks for its human host. Like other computers, ICs are vulnerable to threats such as computer viruses[14] and hacking.[15] It is no longer unrealistic to presume that criminals or other unauthorized parties will seek and gain illicit access to the computers found in robotic prosthetic limbs, retinal implants, cochlear implants, and other neuroprosthetics for purposes of stealing, altering, or rendering unavailable the data that they contain – whether because of some personal motive relating to the device’s human host, to facilitate blackmail or financial fraud, or as an act of political or industrial espionage. Moreover, the growing use of implantable technologies for elective (rather than medically necessary) reasons[16] and the increasing sophistication of such devices’ ability to interface with a human host’s brain[17] mean that future implantable computers will have access to increasingly sensitive information possessed by an increasingly large segment of the human population. The need to create comprehensive InfoSec frameworks that account for such realities will thus likely become more critical in the coming years.

[10] Rao & Nayak (2014).
[11] SP 800-100, Information Security Handbook: A Guide for Managers (2006).
[12] SP 800-100 (2006).
[13] ISO/IEC 27001:2013, Information Technology – Security Techniques – Information Security Management Systems – Requirements (2013).
[14] Gasson (2012); Clark & Fu, “Recent Results in Computer Security for Medical Devices” (2012).
[15] Rotter & Gasson (2012).
[16] McGee (2008); Gasson (2008).
[17] Gasson (2012).
The Paradox of Information Security Requirements for Implantable Computers
Implantable computers display a number of unique and relevant traits that directly affect their information security characteristics and which are not found in typical desktop, mobile, or wearable computers. For example, ICs face intense – and inherently conflicting – InfoSec demands, due to their close integration into their host’s biological systems and processes. The unique role of ICs as technological devices operating within a biological organism creates a dilemma for designers of InfoSec systems. On the one hand, a user’s implantable computer should be more secure and well-protected against unauthorized activity such as viruses or hacking than, for example, his or her laptop computer or smartphone – due to the fact that any party who gains unauthorized access to the implantable computer not only has the potential to (1) steal information representing the user’s most sensitive and confidential medical data and cognitive activity (potentially even including the contents of dreams, memories, volitions, fears, and sensory experiences) but can potentially also (2) alter or render inaccessible the contents of the information contained within the device or within natural biological systems (such as the brain’s own memory-storage mechanisms) with which the device is connected. The latter could potentially cause a range of severe negative effects, including neurological or behavioral problems, the loss of personal agency and identity (or at least the experience of their loss), and a host’s making of decisions and undertaking of actions on the basis of erroneous (and potentially fraudulently fabricated) information provided or affected by the implantable computer. In the absence of stringent InfoSec mechanisms and protocols, an IC’s human host may be unable to retroactively trust the contents of what appear to be his or her ‘own’ thoughts and memories or to proactively trust that his or her future biological processes will function in a manner free from unauthorized influence.
On the other hand, imagine that the user of an implantable computer has been involved in a serious accident or is unexpectedly experiencing an acute and life-threatening medical incident. In this case, emergency medical personnel on the scene may need to gain immediate access to an IC and exercise unfettered control over its functionality in order to save the life of its host.[18] The same mechanisms (such as encryption and proprietary security software) that make it difficult for a hacker to break into an IC would also make it difficult or impossible for emergency medical personnel to break into the device. In principle, regulators could require (or IC manufacturers could voluntarily institute) mechanisms that allow ICs to be accessed by individuals presenting certain credentials that identify them as trained and licensed emergency medical personnel, or ICs could be designed to temporarily disable some of their access controls if they detect that their host is experiencing a medical emergency. However, such mechanisms would create security vulnerabilities that could potentially be exploited by unauthorized parties who are highly motivated to gain access to an IC.

[18] Clark & Fu (2012); Rotter & Gasson (2012).

Chapter Seven: Implantable Computers and Information Security • 291
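To make this trade-off concrete, one purely illustrative way of reconciling the two demands is a ‘break-glass’ scheme in which emergency access is always possible but every grant is logged and cryptographically attributable, so that any misuse can later be audited. The following Python sketch models such a policy; the key, credential format, and registry are all hypothetical rather than drawn from any actual implantable device.

```python
import hmac, hashlib, time

# Hypothetical shared secret held by an (invented) EMS credential registry.
# A real device would use a certificate chain, not a static symmetric key.
EMS_KEY = b"demo-ems-registry-key"

def make_ems_tag(credential_id: str) -> str:
    """Issued out-of-band by the hypothetical EMS credential registry."""
    return hmac.new(EMS_KEY, credential_id.encode(), hashlib.sha256).hexdigest()

class ImplantAccessController:
    """Illustrative 'break-glass' policy: normal access requires full
    authentication, while emergency access is granted to any holder of a
    valid EMS credential -- but every request, granted or not, is logged."""

    def __init__(self):
        self.audit_log = []  # would be tamper-evident in a real design

    def _valid_ems_credential(self, claimed_id: str, tag: str) -> bool:
        expected = hmac.new(EMS_KEY, claimed_id.encode(),
                            hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)

    def request_emergency_access(self, claimed_id: str, tag: str) -> bool:
        granted = self._valid_ems_credential(claimed_id, tag)
        # Failures are logged too, so probing attempts leave a trace.
        self.audit_log.append((time.time(), claimed_id, granted))
        return granted
```

Note that the same mechanism that admits a paramedic also creates the vulnerability described above: any party who obtains the registry key can mint valid credentials, which is why such escape hatches represent a security trade-off rather than a solution.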
Other Information Security Considerations Unique to
Implantable Computers
Because of their embedded nature, ICs are unable to employ some tradi-
tional physical controls and may be wholly reliant on wireless communica-
tion. However, because of their close integration with their host’s biological
system, ICs may be able to employ biometric mechanisms that are impossible
for external computers, for the purpose of ensuring that the device is indeed
still functioning within the body of its intended user.
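As a toy illustration of such host-binding, a device might refuse to service requests unless recent physiological readings fall within an envelope enrolled for its intended host. The profile fields and threshold values below are invented for the sketch and have no clinical significance.

```python
from dataclasses import dataclass

@dataclass
class HostProfile:
    """Enrolled physiological envelope for the intended host (values invented)."""
    min_heart_rate: float = 40.0
    max_heart_rate: float = 180.0
    core_temp_range: tuple = (35.0, 39.5)  # degrees Celsius

def host_presence_check(profile: HostProfile,
                        heart_rate: float, core_temp: float) -> bool:
    """Return True only if the readings are consistent with the device
    still operating inside a living host matching the enrolled profile."""
    lo, hi = profile.core_temp_range
    return (profile.min_heart_rate <= heart_rate <= profile.max_heart_rate
            and lo <= core_temp <= hi)
```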
One issue requiring careful attention is the secure disposal, reuse, and re-
cycling of implantable computers, given the fact that ICs may contain highly
sensitive medical and personal data about their user, including genetic information found in cells and other biological material that may have been intentionally or inadvertently introduced into the device.[19]
Finally, it should be noted that just as InfoSec professionals must be con-
cerned about new information security vulnerabilities that are created for in-
dividuals within their charge who possess ICs, so too must they be concerned
about the enhanced capacities for carrying out illicit surveillance, hacking, and
data theft that ICs provide to ill-intentioned parties who possess them.[20]
An InfoSec Scenario: Securing a Sensory Neuroprosthetic
The CEO with Retinal Implants
Consider a hypothetical case – set several years in the future – of the CEO
of a large consumer electronics firm who is 52 years old and has recently re-
ceived retinal implants in both eyes, to address the effects of retinitis pigmen-
tosa that would otherwise have caused him to become completely blind. The
CEO’s retinal implants include miniature video cameras and a computer that
processes their video images to convert them into signals that are then transmitted to an array of electrodes that stimulate retinal ganglion cells, causing neuronal signals corresponding to visual sense data to be transmitted through the optic nerve to the CEO’s brain.[21]

[19] See NIST Special Publication 800-53, Revision 4: Security and Privacy Controls for Federal Information Systems and Organizations (2013), pp. F-122–F-123.
[20] Koops & Leenes, “Cheating with Implants: Implications of the Hidden Information Advantage of Bionic Ears and Eyes” (2012).
Both the raw visual input received from the environment by the cameras
and the processed sense data provided to the CEO’s brain include highly sen-
sitive information, insofar as they constitute a visual record of everything that
the CEO looks at in both his work-related and private life. This visual data
would include a moment-by-moment record of all of the CEO’s activities, in-
cluding his physical whereabouts, the identities of individuals with whom he
meets, and images of everything that he views on his computer screen, includ-
ing the contents of highly-sensitive work-related emails and documents and
personal online activity. Assuming that the implants’ video cameras were of
high enough resolution, someone with access to their raw video could poten-
tially even use lip reading to determine what was being said to the CEO in his
face-to-face conversations.
Applying the CIA Triad
In this scenario, the InfoSec personnel of the CEO’s company would no
longer simply need to worry about implementing strategies and practices to
secure such highly sensitive ICT devices as the CEO’s desktop computer, lap-
top, or smartphone; they would now need to be concerned about securing his
prosthetic eyes, as well. When considering their CEO’s new retinal implants
through the lens of the CIA triad, such InfoSec experts would identify the fol-
lowing sorts of issues that would need to be proactively addressed by the com-
pany’s information security department.
CONFIDENTIALITY
The retinal implants’ raw visual data and processed visual output contain highly sensitive information that must be secured. While such legal requirements were generally not implemented specifically with implantable computers in mind, various regulations already exist around the world requiring that (often quite stringent) practices and mechanisms be put in place to secure the sort of personal and medical data that implantable computers will contain.[22]

[21] The kinds of retinal implants that have been approved by the FDA for use in human patients are currently extremely limited in the quantity of data that they can transmit to the patient’s retinal ganglion cells; thus the devices’ computers must radically compress and simplify received video images before transmitting them to the patient’s brain. However, it is anticipated that retinal implants utilizing new electrode designs and neuronal stimulation techniques will provide their users with a level of vision adequate for navigating within an environment, reading text, and recognizing faces. See Weiland et al., “Retinal Prosthesis” (2005); Jumper, “FDA Approves World’s First Artificial Retina” (2013); Schmid & Fink, “Operational Design Considerations for Retinal Prostheses” (2012); and Schmid et al., “Simultaneous vs. Sequential and Unipolar vs. Multipolar Stimulation in Retinal Prostheses” (2013). In the hypothetical scenario described here, we assume that the CEO possesses retinal implants of that more advanced sort, which are expected to become available within the coming years.
However, because the technology of implantable computers is still relatively
new and experimental, a specialized and comprehensive set of InfoSec stand-
ards has not yet been developed for the industry. Moreover, especially during
the initial iterations of such technologies, the laboratories developing such
implantable devices may understandably be focusing their energy and atten-
tion simply on trying to produce devices that function effectively, without yet
understanding or successfully addressing the full implications of their devices
for information security.[23] Thus the manufacturer of our hypothetical CEO’s
retinal implants may or may not have been able to incorporate robust security
features into those devices or to offer detailed guidance regarding their secure
use.
In particular, the manufacturer may purposefully have included either pub-
licly known mechanisms or hidden backdoors within the given model of im-
plant that allow its internal computer to be accessed via a remote wireless
connection for purposes of downloading logfiles from the device, carrying out
diagnostic tests, updating the device’s software, or transmitting particular vis-
ual content to the device for purposes of training its user or calibrating the
implant. The designers of the device may assume that these remote access
mechanisms are secure, simply because they utilize some internal proprietary
transmitter, software, or protocol developed within the laboratory and be-
cause they assume that no unauthorized outside parties would possess the de-
sire or ability to illicitly access a retinal implant while it is in use. However, in
principle, a hacker with enough knowledge, skill, and motivation might be
able to remotely access an implant’s internal computer and could potentially
be able to exploit the device’s existing programming (or reprogram the device)
so that – unbeknownst to the CEO – his retinal implants would wirelessly
transmit a live video stream of everything seen by the CEO to an external
computer controlled by the hacker. This would severely compromise the con-
fidentiality of the information passing through the device.
If InfoSec personnel wished to restrict or eliminate the device’s ability to
transmit data to external systems, they could safely do so only in consultation
with physicians and biomedical engineers who understand the extent to
which such data transmissions may need to occur for legitimate medical pur-
poses.
[22] Kosta & Bowman (2012).
[23] The early generations of such commercial neuroprostheses are perhaps the most likely models to possess inadequate security features. At the same time, obtaining early models of such sophisticated and costly experimental technologies may require significant money or influence – meaning that they may be disproportionately likely to be implanted and utilized in individuals of significant financial, social, or political importance whose personal information would be especially attractive to hackers and other unauthorized parties.
INTEGRITY
If a hacker were able to remotely gain unauthorized access to the internal
computers controlling the CEO’s implants, it is conceivable that he or she
could intentionally ‘edit’ the visual output that was being provided to the
CEO’s brain by the implants. Alternatively, a hacker might even be able to
supply a wholly fabricated visual stream, replacing the visual content detected
by the photoreceptors of the implants’ cameras with some pre-generated
stream of false visual content transmitted from the hacker’s computer. If such
an act were feasible, this possibility would call into question the integrity of
all the visual data that the CEO’s brain was receiving; the CEO would never
be able to know with certainty whether everything that he was seeing were
‘real’ or whether it had perhaps been altered in some way by an unauthorized
party.
As before, if InfoSec personnel wish to constrain or eliminate the implants’
ability to receive visual content supplied from an external computer, they
would need to work closely with physicians and biomedical engineers who
understand the extent to which the devices must possess such capacities in
order to maintain a healthy and successful long-term interface with their
host’s optic nerve and brain, to facilitate necessary training and diagnostic
activities, and to fulfill any legitimate medical or biological purposes that the
devices serve.
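One standard technique for defending against such stream fabrication – offered here purely as an illustrative sketch, not as a description of any actual implant – is for the camera module to authenticate each frame with a keyed message authentication code and a monotonically increasing counter, so that the stimulation stage rejects frames that were fabricated or replayed. The session key and frame format below are invented.

```python
import hmac, hashlib

FRAME_KEY = b"camera-to-stimulator-session-key"  # hypothetical per-session key

def sign_frame(counter: int, frame: bytes) -> bytes:
    """Camera side: bind each frame to its sequence number with an HMAC."""
    msg = counter.to_bytes(8, "big") + frame
    return hmac.new(FRAME_KEY, msg, hashlib.sha256).digest()

class StimulatorInput:
    """Stimulation side: accept a frame only if its MAC verifies and its
    counter is strictly newer than the last accepted frame."""

    def __init__(self):
        self.last_counter = -1

    def accept(self, counter: int, frame: bytes, tag: bytes) -> bool:
        if counter <= self.last_counter:            # replayed or stale frame
            return False
        expected = sign_frame(counter, frame)
        if not hmac.compare_digest(expected, tag):  # fabricated frame
            return False
        self.last_counter = counter
        return True
```

Such a scheme protects only the link between camera and stimulator; an attacker who compromises the camera module itself, or the key, can still forge valid frames, which is why the integrity question cannot be fully closed by cryptography alone.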
AVAILABILITY
It is important to ensure that the continuous stream of visual information
supplied by the implants is never unexpectedly interrupted, either as a result
of hardware or software failure or through malicious interventions such as
hacking or a computer virus. If the CEO’s sense of sight were to be disabled
while he were in the midst of conducting a high-profile shareholders’ meeting,
the result could be embarrassing for the company and the CEO personally.
However, if the CEO’s vision were to be suddenly disabled while he were driv-
ing an automobile or scuba-diving, for example, the result could be fatal. It
may also be necessary to provide the connected retinal ganglion cells with an
unceasing stream of electrochemical stimulation from the implants in order to
ensure the cells’ health and normal functioning.
On the other hand, it may be medically necessary to periodically disable
the devices’ stimulation of retinal ganglion cells, either to ensure the cells’
long-term health or to allow the CEO to fall asleep at night. Any efforts by
InfoSec personnel to regulate the implants’ routines for making visual infor-
mation available to the CEO’s brain would thus need to be coordinated closely
with expert medical and biomedical engineering personnel.
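These competing availability requirements might be reconciled by a watchdog that distinguishes a medically scheduled pause (for sleep or cell recovery) from an unexpected stall, triggering a fail-safe response only in the latter case. The sketch below is illustrative only; the timing threshold and interfaces are invented.

```python
class StimulationWatchdog:
    """Raises a fail-safe only when stimulation stops outside a medically
    authorized pause window (threshold value invented for illustration)."""

    MAX_SILENT_SECONDS = 0.5  # longest tolerable *unplanned* gap

    def __init__(self):
        self.pause_authorized = False
        self.last_pulse_time = 0.0

    def record_pulse(self, now: float):
        """Called each time the implant delivers a stimulation pulse."""
        self.last_pulse_time = now

    def authorize_pause(self, authorized: bool):
        """Set by the medical-control subsystem, e.g. at bedtime."""
        self.pause_authorized = authorized

    def failsafe_needed(self, now: float) -> bool:
        if self.pause_authorized:
            return False
        return (now - self.last_pulse_time) > self.MAX_SILENT_SECONDS
```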
Applying the Framework of Access Controls
If the company’s InfoSec personnel consider the CEO’s new retinal im-
plants through the lens of physical, logical, and administrative security con-
trols, they might identify the following kinds of issues that would need to be
addressed.
PHYSICAL CONTROLS
It might appear unnecessary to give the retinal implants’ internal comput-
ers specialized physical access controls, if the computers are located inside the
CEO’s head and can only be directly accessed by unauthorized parties by sub-
jecting the CEO to a complex surgical procedure. However, there are indeed
issues relating to physical controls that InfoSec personnel would need to con-
sider. First, the retinal implants may exist as integrated components of a larger
system that includes external computers that are more readily physically ac-
cessible. The implants might, for example, have a permanent wireless connec-
tion to some handheld external controller that can be used by the CEO to
monitor or recalibrate the units. If this were the case, the CEO’s InfoSec team
would need to implement physical controls to ensure that the handheld con-
troller would always be secured and not left lying around unattended in a lo-
cation where unauthorized parties could gain access to it. Second, it might be
the case that particular kinds of magnetic fields, patterns of light projected at
the CEO’s eyes, or other kinds of environmental phenomena can alter or dis-
rupt the functioning of the retinal implants. InfoSec personnel would need to
work with the biomedical engineers who designed the devices to understand
any such possibilities and ensure that there were physical controls in place to
protect the CEO’s retinal implants from such interference, insofar as possible.
LOGICAL CONTROLS
Efforts to secure the CEO’s retinal implants would also include a focus on
logical controls, such as the mechanisms built into the implants’ software that
require any remote systems or users attempting to connect to the device to be
identified, authenticated, and given access only to the systems or information
which they are authorized to access. Because the retinal implants’ internal
computers may be highly specialized, idiosyncratic devices – rather than the
‘off-the-shelf’ computers running operating systems like Windows, Mac OS,
Android, or Linux – it may or may not be technologically possible for the
company’s InfoSec team to install their preferred security software on the ret-
inal implants. Moreover, the need to secure the implants’ computers must also
be balanced against the need to ensure the implants’ proper and efficient func-
tioning as prosthetic medical devices; the CEO’s medical team may not allow
security software to be installed on the implants that has the potential to un-
acceptably slow down, degrade, or otherwise impair their functioning as med-
ical devices. The company’s InfoSec team would need to work closely with
medical and bioengineering personnel to ensure that any security software or
logical controls added to the implants do not impede their proper functioning
from a medical or biological perspective.
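In code, the requirement that every remote party be identified, authenticated, and then limited to authorized operations might look like the following minimal sketch. The party names, operations, and challenge-response credential scheme are all invented, and – as noted above – a real implant’s idiosyncratic hardware might not be able to run such software at all.

```python
import hmac, hashlib

# Hypothetical credential store: party id -> (shared key, permitted operations)
CREDENTIALS = {
    "clinic-workstation": (b"key-A", {"read_diagnostics", "update_firmware"}),
    "patient-controller": (b"key-B", {"read_diagnostics"}),
}

def authenticate(party_id: str, challenge: bytes, response: bytes) -> bool:
    """Challenge-response check: the connecting party proves that it holds
    the shared key associated with its claimed identity."""
    entry = CREDENTIALS.get(party_id)
    if entry is None:
        return False
    key, _ = entry
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

def authorize(party_id: str, operation: str) -> bool:
    """Even an authenticated party may perform only its permitted operations."""
    entry = CREDENTIALS.get(party_id)
    return entry is not None and operation in entry[1]
```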
ADMINISTRATIVE CONTROLS
Administrative controls such as organizational policies and procedures can
be put into place to maximize the effectiveness of the physical and logical con-
trols and add another layer of security. For example, the InfoSec personnel
and other relevant decision-makers within the CEO’s company should de-
velop clear policies to determine which organizational employees (potentially
including the InfoSec personnel themselves, the CEO’s medical personnel, the
CEO’s immediate administrative support staff, and the CEO himself) should
have which levels of access to the CEO’s retinal implants. The policies should
also dictate that regularly updated risk assessments will be carried out to iden-
tify new and emerging threats that could impact the CEO’s retinal implants as
well as requiring that incident response and disaster recovery plans be proac-
tively put in place, to allow the company to respond as efficiently and effec-
tively as possible to any threat that might compromise or even completely
disable the retinal implants.
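Such a policy might ultimately be captured in machine-readable form – for instance, as a matrix of organizational roles and access levels that the logical controls then enforce. The roles and action names below are invented purely to illustrate the idea.

```python
# Hypothetical administrative policy: which organizational roles hold which
# access levels to the CEO's implants (to be enforced by logical controls).
ACCESS_POLICY = {
    "medical_team":  {"emergency_override", "configure", "read_diagnostics"},
    "infosec_team":  {"read_audit_log", "read_diagnostics"},
    "admin_support": {"read_battery_status"},
    "ceo":           {"read_diagnostics", "read_battery_status"},
}

def role_may(role: str, action: str) -> bool:
    """Default-deny lookup: unknown roles and unlisted actions are refused."""
    return action in ACCESS_POLICY.get(role, set())
```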
Such administrative controls would need to be developed in close consul-
tation with medical and biomedical engineering experts, to ensure that the
CEO’s medical personnel have all of the administrative access and supervisory
authority needed to ensure the CEO’s health and safety, while simultaneously
ensuring that non-medical personnel are not allowed or expected to perform
roles that could inadvertently endanger the CEO’s health or safety by nega-
tively impacting the retinal implants’ functioning.
Avenues for Future Research
InfoSec’s Growing Relationship to Medicine and Biomedical
Engineering
The scenario described above highlights a new aspect of information secu-
rity that will become increasingly important: its relationship to medicine, bi-
omedical engineering, and related fields such as neuroscience. It is already the
case that information security is a transdisciplinary field in which personnel
must not only be experts in computer hardware and software but must also be
familiar with fields like psychology, finance, law, and ethics. However, the
growing use of implantable computing means that InfoSec personnel will also
need to be knowledgeable about the biological and neuroscientific aspects of
implantable computers. For large corporations, their information security
teams might even include an in-house physician or neuroscientist who can
ensure that any information security mechanisms or practices that are imple-
mented for employees’ ICs do not result in biological or psychological harm.
Similarly, when designing countermeasures that can be employed against un-
authorized parties who may attempt to use their own implantable computers
as instruments for carrying out illicit surveillance or corporate espionage
against a company, such medical expertise would be needed by the company’s
InfoSec personnel in order to design countermeasures that neutralize such
threats without causing biological or psychological injury to those suspected
adversaries for which the company and its InfoSec personnel could potentially
be held liable.
Cybernetics as a Means of Preparing InfoSec for Implantable
Computers
One of the challenges in linking information security with medicine is that
the two fields utilize different vocabularies and theoretical and conceptual
frameworks: InfoSec is grounded largely in the theoretical framework of com-
puter science and medicine in that of biology and chemistry. In addressing this
challenge, it may be helpful to build on the field of cybernetics, which was
founded to provide precisely the sort of transdisciplinary theoretical frame-
work and vocabulary that can be used to translate insights between all of those
fields that study patterns of communication and control in machines, living
organisms, or social systems.[24] Alongside the many existing foci of cybernetics
(found in subdisciplines like biocybernetics, neurocybernetics, and management cybernetics) it may be useful to envision a sort of ‘celyphocybernetics’[25] that sees the human brain, its surrounding body, and any computers embedded in the body as together forming a single physical ‘shell’ for an individual
human mind. The human brain, organic and artificial body components, and
ICs would constitute a system that receives information from the external en-
vironment, stores information circulating within it, and transmits information
to the external environment, thereby creating networks of communication
and control. InfoSec experts, physicians, and biomedical engineers would thus
share the single task of ensuring the secure, productive, and effective func-
tioning of this entire information system that contains both biological and
electronic components; that common goal can be achieved only if InfoSec personnel, physicians, and biomedical engineers all succeed in fulfilling their
unique professional roles.

[24] Wiener, Cybernetics: Or Control and Communication in the Animal and the Machine (1961).
[25] From the Ancient Greek κέλυφος, meaning ‘shell,’ ‘sheath,’ ‘husk,’ or ‘pod.’

Conclusion
In this text we have considered the manner in which the field of information security will need to draw increasingly on expertise from medicine and biomedical engineering in order to address the unique ways in which the growing use of implantable computers is expected to reshape the information security landscape. Not only will implantable computers create new
kinds of InfoSec vulnerabilities and risks for individuals who use them, but
they will offer powerful new tools for those who would attempt to carry out
illicit activities such as corporate espionage or illegal surveillance. By utilizing
knowledge from the fields of medicine and biomedical engineering, InfoSec
professionals will be able to recognize the information security characteristics
of implantable computers that they cannot directly access, understand such
implantable computers’ connections to the biological and cognitive processes
of their human hosts and the medical implications of such interfaces, and de-
velop InfoSec practices and technologies that secure the information con-
tained within implantable computers without placing the devices’ hosts at risk
of physical or psychological harm. The field of information security has al-
ready demonstrated an interdisciplinary ability to successfully incorporate
knowledge from diverse fields and adapt to ever-changing technological, so-
cial, and legal demands, and it is well-positioned to secure a future in which
implantable computers will become an increasingly important element of hu-
man life.
References
SP 800-100. Information Security Handbook: A Guide for Managers, edited by P.
Bowen, J. Hash, and M. Wilson. Gaithersburg, Maryland: National Institute of
Standards & Technology, 2006.
Clark, S.S., and K. Fu, “Recent Results in Computer Security for Medical Devices,” in
Wireless Mobile Communication and Healthcare, K.S. Nikita, J.C. Lin, D.I. Fotiadis,
and M.-T. Arredondo Waldmeyer, eds., Lecture Notes of the Institute for Com-
puter Sciences, Social Informatics and Telecommunications Engineering 83, pp.
111-18. Springer Berlin Heidelberg, 2012.
Fairclough, S.H., “Physiological Computing: Interfacing with the Human Nervous
System,” in Sensing Emotions, J. Westerink, M. Krans, and M. Ouwerkerk, eds.,
Philips Research Book Series 12, pp. 1-20. Springer Netherlands, 2010.
Gasson, M.N., “Human ICT Implants: From Restorative Application to Human En-
hancement,” in Human ICT Implants: Technical, Legal and Ethical Considerations,
M.N. Gasson, E. Kosta, and D.M. Bowman, eds., Information Technology and Law
Series 23, pp. 11-28. T.M.C. Asser Press, 2012.
Gasson, M.N., “ICT Implants,” in The Future of Identity in the Information Society, S.
Fischer-Hübner, P. Duquenoy, A. Zuccato, and L. Martucci, eds., IFIP – The In-
ternational Federation for Information Processing 262, pp. 287-95. Springer US,
2008.
Gasson, M.N., E. Kosta, and D.M. Bowman, “Human ICT Implants: From Invasive to
Pervasive,” in Human ICT Implants: Technical, Legal and Ethical Considerations,
M.N. Gasson, E. Kosta, and D.M. Bowman, eds., Information Technology and Law
Series 23, pp. 1-8. T.M.C. Asser Press, 2012.
Gladden, Matthew E. The Handbook of Information Security for Advanced Neuropros-
thetics. Indianapolis: Synthypnion Academic, 2015.
ISO/IEC 27001:2013, Information Technology – Security Techniques – Information Secu-
rity Management Systems – Requirements. Joint Technical Committee ISO/IEC
JTC 1, Information technology, Subcommittee SC 27, IT Security techniques. Ge-
neva: The International Organization for Standardization and the International
Electrotechnical Commission, 2013.
Jumper, J. Michael. “FDA Approves World’s First Artificial Retina.” Retina Times, American Society of Retina Specialists, Spring 2013.
https://www.asrs.org/retina-times/details/131/fda-approves-world-first-artificial-
retina.
Koops, B.-J., and R. Leenes, “Cheating with Implants: Implications of the Hidden In-
formation Advantage of Bionic Ears and Eyes,” in Human ICT Implants: Technical,
Legal and Ethical Considerations, M.N. Gasson, E. Kosta, and D.M. Bowman, eds.,
Information Technology and Law Series 23, pp. 113-34. T.M.C. Asser Press, 2012.
Kosta, E., and D.M. Bowman, “Implanting Implications: Data Protection Challenges
Arising from the Use of Human ICT Implants,” in Human ICT Implants: Technical,
Legal and Ethical Considerations, M.N. Gasson, E. Kosta, and D.M. Bowman, eds.,
Information Technology and Law Series 23, pp. 97-112. T.M.C. Asser Press, 2012.
Li, S., F. Hu, and G. Li, “Advances and Challenges in Body Area Network,” in Applied
Informatics and Communication, J. Zhang, ed., Communications in Computer and
Information Science 22, pp. 58-65. Springer Berlin Heidelberg, 2011.
McGee, E.M., “Bioelectronics and Implanted Devices,” in Medical Enhancement and
Posthumanity, B. Gordijn and R. Chadwick, eds., The International Library of Eth-
ics, Law and Technology 2, pp. 207-24. Springer Netherlands, 2008.
Merkel, R., G. Boer, J. Fegert, T. Galert, D. Hartmann, B. Nuttin, and S. Rosahl, “Central Neural Prostheses,” in Intervening in the Brain: Changing Psyche and Society,
Ethics of Science and Technology Assessment 29, pp. 117-60. Springer Berlin Hei-
delberg, 2007.
NIST Special Publication 800-53, Revision 4: Security and Privacy Controls for Federal
Information Systems and Organizations. Joint Task Force Transformation Initia-
tive. Gaithersburg, Maryland: National Institute of Standards & Technology, 2013.
Park, M.C., M.A. Goldman, T.W. Belknap, and G.M. Friehs, “The Future of Neural In-
terface Technology,” in Textbook of Stereotactic and Functional Neurosurgery, A.M.
Lozano, P.L. Gildenberg, and R.R. Tasker, eds. Heidelberg/Berlin: Springer, 2009,
pp. 3185-3200.
Rao, U.H., and U. Nayak, The InfoSec Handbook. New York: Apress, 2014.
Rotter, P., and M.N. Gasson, “Implantable Medical Devices: Privacy and Security
Concerns,” in Human ICT Implants: Technical, Legal and Ethical Considerations,
M.N. Gasson, E. Kosta, and D.M. Bowman, eds., Information Technology and Law
Series 23, pp. 63-66. T.M.C. Asser Press, 2012.
Schmid, E.W., W. Fink, and R. Wilke. “Simultaneous vs. Sequential and Unipolar vs.
Multipolar Stimulation in Retinal Prostheses.” In 2013 6th International
IEEE/EMBS Conference on Neural Engineering (NER), 190–93, 2013.
doi:10.1109/NER.2013.6695904.
Schmid, Erich W., and Wolfgang Fink, “Operational Design Considerations for Reti-
nal Prostheses.” arXiv:1210.5348 [q-Bio], October 19, 2012.
http://arxiv.org/abs/1210.5348.
Weiland, J.D., W. Liu, and M.S. Humayun, “Retinal Prosthesis,” Annual Review of Bio-
medical Engineering, vol. 7, no. 1, pp. 361-401, 2005.
Wiener, Norbert. Cybernetics: Or Control and Communication in the Animal and the
Machine, second edition. Cambridge, Massachusetts: The MIT Press, 1961.
Part III
Robotics and Artificial
Intelligence: A Closer Look
Chapter Eight
The Social Robot as CEO:
Developing Synthetic Charismatic
Leadership for Human Organizations[1]
Abstract. As robots are developed that possess increasingly robust social and managerial capacities and which are moving into a broader range of roles within businesses, the question arises of whether a robot could ever fill the ultimate organizational role: that of CEO. Among the many functions a chief executive officer must perform is that of motivating a company’s workers and cultivating their trust in the company’s strategic direction and leadership. The creation of a robot that can successfully inspire and win the trust of an organization’s human personnel might appear implausible; however, we argue that the development of robots capable of manifesting the leadership traits needed to serve as CEO within an otherwise human organization is not only possible but – based on current trends – likely even inevitable.
Our analysis employs phenomenological and cultural posthumanist methodologies. We begin by reviewing what French and Raven refer to as ‘referent power’ and what Weber describes as ‘charismatic authority’ – two related characteristics which, if possessed by a social robot, could allow it to lead human personnel by motivating them and securing their loyalty and trust. By analyzing current robotic design efforts and cultural depictions of robots, we identify three ways in which human beings are striving to create charismatic robot leaders for ourselves. We then consider the manner in which particular robot leaders will acquire human trust, arguing that charismatic robot leaders for businesses and other organizations will
[1] This text is an expanded and adapted version of Gladden, Matthew E., “The Social Robot as ‘Charismatic Leader’: A Phenomenology of Human Submission to Nonhuman Power,” in Sociable Robots and the Future of Social Relations: Proceedings of Robo-Philosophy 2014, edited by Johanna Seibt, Raul Hakli, and Marco Nørskov, pp. 329-339. Frontiers in Artificial Intelligence and Applications, vol. 273. Amsterdam: IOS Press, 2014.
emerge naturally from our world’s social fabric, without any rational decision on our part. Finally, we suggest that the stability of these leader-follower relations – and the extent to which charismatic social robots can remain long-term fixtures in leadership roles such as that of CEO – will hinge on a fundamental question of robotic intelligence and motivation that currently stands unresolved.
I. Introduction
For more than a half-century, robots have filled crucial roles within human
organizations, including businesses. The advent of industrial robots for use in
assembly-line manufacturing processes in the 1960s and 1970s[2] has been followed by the development of increasingly intelligent social robots that now
serve as assistants and facilitators to human workers in areas as diverse as
education, health care, retail sales, agriculture, transportation, security, and
the military. [3] It is anticipated that the role of robots in the workplace will continue to evolve, as new and more sophisticated types of robots are developed
that possess the capacity to serve as true colleagues and partners [4] – and potentially even managers and supervisors – to human workers.
This raises the question whether a robot working within an organization
that includes human personnel could ever fill that most singular leadership
role: as CEO. A chief executive officer must possess significant technical ex-
pertise, information-processing capacity, and decision-making ability, in or-
der to plan, organize, and control the activities of those workers (whether hu-
man employees or artificial agents) that are part of an organization. However,
a successful CEO is also required to perform a qualitatively different sort of
function: namely, to inspire and motivate an organization’s human employees
and to engender their loyalty and trust in the strategic vision enunciated by the CEO. [5] While a contemporary organization’s nonhuman agent resources – such as computers and robots – can often simply be programmed or instructed to carry out particular assigned tasks, human workers cannot be so directly or completely controlled; they must be persuaded and motivated to carry out the particular work that their organization wishes them to perform. [6]

[2] For the history of industrial robotics, see Goodman & Argote, “New Technology and Organizational Effectiveness” (1984); Murphy, An Introduction to AI Robotics (2000), pp. 19-27; and Perlberg, Industrial Robotics (2016).

[3] Regarding recent developments, current challenges, and long-term possibilities in the field of social robotics, see, e.g., Breazeal, “Toward sociable robots” (2003); Kanda & Ishiguro, Human-Robot Interaction in Social Robotics (2013); Social Robots and the Future of Social Relations, edited by Seibt et al. (2014); Social Robots from a Human Perspective, edited by Vincent et al. (2015); and Social Robots: Boundaries, Potential, Challenges, edited by Nørskov (2016).

[4] For discussion of the possibility that robots might serve as coworkers and colleagues rather than simply tools, see, e.g., Bradshaw et al., “From Tools to Teammates: Joint Activity in Human-Agent-Robot Teams” (2009); Samani & Cheok, “From human-robot relationship to robot-based leadership” (2011); Samani et al., “Towards Robotics Leadership: An Analysis of Leadership Characteristics and the Roles Robots Will Inherit in Future Human Society” (2012); Wiltshire et al., “Cybernetic Teams: Towards the Implementation of Team Heuristics in HRI” (2013); Ford, Rise of the Robots: Technology and the Threat of a Jobless Future (2015); Spring, “Can machines come up with more creative solutions to our problems than we can?” (2016); and Wong, “Welcome to the robot-based workforce: will your job become automated too?” (2016).
It is not difficult to imagine the development of future robots that possess
the technical knowledge and information-processing and decision-making
abilities that are needed in order to plan, organize, and control the activities
of a company’s agent-workers, insofar as such capacities are congruent with
our popular notions of the strengths of robots and other artificially intelligent
entities; we ‘expect’ a computerized entity to be capable of effectively and efficiently processing information. And indeed, progress toward designing robots with such basic managerial capacities is already well advanced. [7] However, at first glance it might appear that creating a robot that can successfully
inspire and win the trust of an organization’s human personnel is a major
hurdle to the development of a robot that can serve effectively as CEO of a
company that includes human workers. After all, the stereotypical image of a
‘robot’ has not traditionally attributed to such entities the sort of social and
emotional intelligence, wisdom, ethical insight, moral courage, and selfless
personal commitment to an organization that inspire loyalty and trust in the
human CEOs who possess such characteristics. In this text, however, we argue
that the development of social robots that are capable of manifesting the kind
of charisma, inspirational leadership, and trustworthiness that would allow
them to serve effectively as CEOs within otherwise human organizations is
not only possible but likely even inevitable. Our analysis of this issue utilizes
phenomenological methods employed by philosophers of science alongside a
cultural posthumanist methodology that analyzes contemporary cultural
products (such as works of science fiction) and society’s reaction to them as a means of diagnosing human fears or aspirations relating to future processes of technological posthumanization, such as those that will be driven by the growing deployment of sophisticated social robots.

[5] See Dainty & Anderson, The Capable Executive: Effective Performance in Senior Management (1996).

[6] For key distinctions between human and nonhuman agents in the context of an organization, see “The Posthuman Management Matrix: Understanding the Organizational Impact of Radical Biotechnological Convergence” in Gladden, Sapient Circuits and Digitalized Flesh: The Organization as Locus of Technological Posthumanization (2016).

[7] For discussion of efforts to develop robots and other artificial agents that possess individual traits and functionality needed to plan, organize, and control activities within the context of a business or other organization, see, e.g., Kriksciuniene & Strigunaite, “Multi-Level Fuzzy Rules-Based Analysis of Virtual Team Performance” (2011); Nunes & O’Neill, “Assessing the Performance of Virtual Teams with Intelligent Agents” (2012); and Dai et al., “TrustAider – Enhancing Trust in E-Leadership” (2013), as well as the discussion of such research in Gladden, “Leveraging the Cross-Cultural Capacities of Artificial Agents as Leaders of Human Virtual Teams” (2014). For a broader review of the expected organizational and societal impacts of robots possessing managerial capabilities, see, e.g., Elkins, “Experts predict robots will take over 30% of our jobs by 2025 — and white-collar jobs aren’t immune” (2015), and Susskind & Susskind, The Future of the Professions: How Technology Will Transform the Work of Human Experts (2015).
We begin by considering the diverse roles that must be filled by the CEO
of a human organization and the traits and characteristics that are needed in
order to fill such roles – highlighting especially a CEO’s role as the most visi-
ble font and focus of leadership within an organization. We then analyze phe-
nomena which French and Raven referred to as ‘referent power’ and Weber
described as ‘charismatic authority’ – two related characteristics which if pos-
sessed by a social robot could enable that robot to lead human personnel by
inspiring them and securing their loyalty and trust in ways similar to those
employed by human CEOs. Referent power and charismatic authority are not
monolithic phenomena but rather characteristics whose dynamics can be
manifested in a number of diverse (and potentially even mutually exclusive)
ways. By analyzing contemporary efforts in robotic design and engineering as
well as cultural depictions of robots, we identify three ways in which human
beings are striving to create charismatic robot leaders for ourselves – ways
that reflect a model of the human mind as ancient as Plato’s tripartite division
of the soul. We then explore the manner in which particular robotic entities
will either intentionally or unintentionally acquire the trust of human beings
that is needed in order to function successfully as CEO of an organization that
includes human stakeholders. Building on Coeckelbergh, we argue against the
contractarian-individualist approach which presumes that human beings will
be able to consciously ‘choose’ whether or not robots should fill leadership
positions within businesses and other organizations; we instead propose a
phenomenologically and socially oriented understanding of the manner in
which charismatic robot leaders for businesses and other institutions will
emerge naturally from the world’s social fabric, without any rational decision
on the part of human beings. Finally, drawing on Abrams and Rorty, we sug-
gest that the stability of these leader-follower relations – and the extent to
which charismatic social robots can remain long-term fixtures in leadership
roles such as that of CEO – will hinge on a fundamental question regarding
the intelligence and motivation of such robotic entities that has not yet been
adequately resolved.
II. The CEO as Leader: The Need to Inspire, Motivate, and
Persuade
Serving successfully as the chief executive officer of an organization that
includes human personnel requires one to possess and deploy a diverse set of
capacities and skills. Building on the classic management framework of Henri
Fayol, Daft identifies the four essential functions that must be performed by a
manager as planning, organizing, leading, and controlling the activities of those
workers and systems that fall within the manager’s purview; in the case of a CEO, this includes all of an organization’s activities. [8] In this text, we do not
address the ongoing advances in robotic design and engineering that are al-
lowing robots to increasingly perform the managerial functions of planning,
organizing, and controlling the activities of human workers; [9] although we in
no way underestimate the need for a CEO to effectively plan, organize, and
control an organization’s activities, here we will focus instead on the CEO’s
duty to lead.
While an organization’s automated manufacturing process or cloud-based
file-sharing system can be controlled by simply reconfiguring hardware and
software, the organization’s human personnel cannot be directly controlled or
reprogrammed in such a fashion; they must instead be led. Daft defines the
leadership function by explaining that:
Leading is the use of influence to motivate employees to achieve organizational goals. Leading means creating a shared culture and values, communicating goals to people throughout the organization, and infusing employees with the desire to perform at a high level. [10]
Leadership takes on different aspects when practiced at different levels within
a company’s organizational structure. The role of a senior executive such as a
CEO differs from that of lower-level managers because a senior executive
must successfully create and maintain power bases within a broader range of
stakeholder groups (such as those of shareholders, board members, key sup-
pliers and partners, employees, regulators, and local communities); possess
and wield the political savvy needed to overcome the rivalries, turf battles,
and political maneuvering that become more intense at the highest levels
within an organization; make, communicate, and implement decisions that
may have life-altering impacts for all of an organization’s employees; and
weather the high degree of internal and external scrutiny that comes with
being an organization’s senior spokesperson and authority figure. [11] A company’s CEO occupies a unique leadership role: while he or she may only immediately supervise a small number of direct reports, he or she leads all of the
company’s employees by enunciating and visibly embodying the firm’s stra-
tegic vision and values and by cultivating a sense of loyalty and engagement
on the part of personnel throughout all levels of the organization.
8
Daft, Management (2011), p. 8.
9
For a discussion of efforts to design robots that are increasingly adapt at performing such man-
agement functions, see, e.g., Gladden, “Leveraging the Cross-Cultural Capacities of Artificial
Agents as Leaders of Human Virtual Teams” (2014).
10
Daft (2011), p. 8.
11
Dainty & Anderson (1996), pp. 4-5.
308 • Posthuman Management
Through their empirical research, Dainty and Anderson have identified
eleven capabilities that affect senior executives’ ability to function success-
fully in their roles. Among these factors are a group of interpersonal capabil-
ities that include ‘influence’ (the ability “to get others to accept your point of
view, have them act in your interests and prevent them implementing agendas
which are contrary to your own”), ‘leadership’ (the ability “to help others
overcome hurdles to achieve a common goal”), ‘integration’ (the ability “to
build senior level teams and ensure larger organisational units work together
effectively”), and ‘insight’ (the ability “to understand what motivates others,
their mental view of the world and their possible actions and agendas”). [12] Similarly, Goleman has described the crucial importance of ‘emotional intelligence’ as the “sine qua non of leadership” and identified its five components
as self-awareness, self-regulation, motivation, empathy, and social skill. [13] Although the increasingly critical role of technology and technical expertise in
business might lead one to presume that a CEO’s interpersonal skills are no
longer so central to an organization’s success, the possession of ‘human skills’
such as the ability to communicate with, motivate, and lead workers is only
becoming more important for CEOs, not less. [14]
Research of the sort just described underscores the need for a CEO to be
able to lead an organization’s human personnel by inspiring, motivating, in-
fluencing, and persuading them. Before directly considering the question of
whether a robot could possess and demonstrate such leadership capacities as
a CEO, we must first explore the psychological, social, and cultural mecha-
nisms by which human beings allow themselves to be inspired, motivated, in-
fluenced, and persuaded by those who lead them. Within the constraints of
this text, we will not able to investigate in detail robots’ potential for exploit-
ing all such mechanisms. Instead we will focus on one noteworthy possibility:
namely, the potential for a robotic CEO to lead human workers through its
possession of charismatic authority and referent power.
III. Referent Power and the Exercise of Charismatic
Authority by Robots
French and Raven proposed a now-classic model that identifies five bases
of social power, to which Raven later added a sixth. These six bases comprise coercive, reward, legitimate, referent, expert, and informational power. An effective leader must be able to possess and employ one or more of these bases of power in order to secure, maintain, and exercise influence over followers. [15]

[12] Dainty & Anderson (1996), pp. 16, 18.

[13] Goleman, “What Makes a Leader?” (2004).

[14] Daft (2011), p. 11; Plunkett et al., Management (2012), pp. 26-27.
Applying French and Raven’s model, we could say that a leader utilizes
coercive power if he or she employs the threat of force in order to influence the
behavior of others and reward power when he or she promises some benefit in
order to influence it. A leader possesses legitimate power when cultural values
or social structures have invested him or her with a particular authority that
others feel a sense of responsibility and obligation to obey. A leader possesses
referent power when others are drawn to the leader because they feel a sense
of fondness or admiration for the leader, feel a sense of kinship or affinity with
the leader, identify with the leader’s moral values, or desire to win his or her
personal approval. A leader possesses expert power when he or she is able to
influence others through their perception that he or she possesses unique skills or
experience. Finally, a leader utilizes informational power when he or she influ-
ences others by means of his or her control over access to information re-
sources.
Referent Power: The Most Difficult Base of Social Power for
Robots to Utilize?
It is relatively easy to imagine a future world in which a social robot filling
a senior management role within an organization is in a position to either
threaten and intimidate its subordinate human employees into working in a
particular fashion (e.g., because it has been given the authority to terminate
workers that it determines to be ineffective), dole out financial rewards to the
human employees in order to win their cooperation, issue instructions that
must be followed by virtue of the robot’s official status within the organiza-
tion’s personnel structure, cultivate the perception that it possesses expert
work-related skills, or control its human employees’ access to information.
These cases would represent the use of coercive, reward, legitimate, expert,
and informational power, respectively.
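For readers who think in code, the five straightforward cases and the contested referent case can be summarized in a brief illustrative sketch. The enum and the assessment table below are our own hypothetical organizing device, not part of French and Raven’s model; the mapping simply restates the scenarios described above:

```python
from enum import Enum

class PowerBase(Enum):
    """The six bases of social power in French and Raven's model
    (Raven later added the sixth, informational, base)."""
    COERCIVE = "influence via threat of force or punishment"
    REWARD = "influence via promise of some benefit"
    LEGITIMATE = "authority invested by cultural values or social structures"
    REFERENT = "followers' fondness, admiration, kinship, or shared values"
    EXPERT = "perceived possession of unique skills or experience"
    INFORMATIONAL = "control over access to information resources"

# Hypothetical assessment of which bases a robotic senior executive
# could plausibly employ, mirroring the scenarios sketched above;
# None marks the open question that the chapter goes on to examine.
robotic_executive_can_use = {
    PowerBase.COERCIVE: True,       # authority to terminate ineffective workers
    PowerBase.REWARD: True,         # doling out financial rewards
    PowerBase.LEGITIMATE: True,     # official status in the personnel structure
    PowerBase.EXPERT: True,         # cultivated perception of expert skills
    PowerBase.INFORMATIONAL: True,  # gatekeeping employees' access to information
    PowerBase.REFERENT: None,       # the difficult case considered in what follows
}
```

The table is only a compact restatement of the argument: five bases map cleanly onto familiar managerial mechanisms, while referent power resists any such straightforward assignment.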
The case that is perhaps most difficult to imagine is that of a robotic senior
executive who influences human beings by exercising referent power over
them. How likely is it that a firm’s human stakeholders would voluntarily se-
lect a robot as their CEO and submit to its leadership because they feel a sense
of fondness or admiration for it? Or because they feel a sense of kinship or
affinity with the robot? Would human workers embrace and allow themselves
to be led by a robotic CEO because they identify closely with the robot’s ‘moral
values’? Or because they long to bask in the robot’s personal affirmation and approval of them? At first glance, it appears that exercising referent power would be the most difficult and least likely way for a robotic CEO to motivate and influence its human subordinates. This also, we would suggest, makes it the most interesting case to consider.

[15] See French & Raven, “The Bases of Social Power” (1959); Raven, “A power/interaction model of interpersonal influence: French and Raven thirty years later” (1992); and Forsyth, Group Dynamics (2010), p. 227.
Referent Power as Manifested in the Charismatic Authority of
Robots
In the case of a human leader who utilizes referent power, this power is
frequently grounded in the leader’s possession of what Weber called ‘charis-
matic authority.’ Such charismatic authority is manifested when followers’
obedience arises from their “devotion to the exceptional sanctity, heroism or
exemplary character” of their leader, “and of the normative patterns or order
revealed or ordained by him.” [16]
Such a relationship requires that followers possess an emotional bond with
their leader and deep trust in his or her goodness; this trust will in turn nurture sentiments of respect, admiration, and personal loyalty in the followers. The use
of charismatic authority is a form of leadership that is both powerful and
ephemeral, because it is grounded in emotion and the followers’ conviction
that the leader is a superior being – a belief that can be quickly punctured if
the charismatic leader is seen to behave in some disillusioning way that shat-
ters the link between the leader and the ideal that he or she had apparently
embodied.
Forging such a charismatic leader-follower relationship between a robot
and human being requires two participants, neither of whom seems particu-
larly well-suited for their role. In order for a leader to exercise charismatic
authority, he or she must possess (or at least be perceived to possess) traits
such as holiness, divine ordination, moral righteousness, personal charm, or a
hypnotic strength of personality. These are traits which we might expect to
find in particular human beings, but which we do not normally see as possessable by robots. We stereotypically presume that robots are not adept at evoking feelings of true love, admiration, or loyalty from human beings, and that human beings – for their part – are not inclined to spontaneously shower such sentiments on robots. [17]

[16] Weber, Economy and Society: An Outline of Interpretive Sociology (1968), p. 215. This technical sense of ‘charismatic’ diverges from the everyday sense of the word. To say that a leader is ‘charismatic’ in Weber’s sense does not necessarily imply that the person is seductive, emotionally astute, eloquent, or possessed of hypnotic powers of enthrallment. While those traits can indeed provide a basis for charismatic authority, a leader possessing charismatic authority might just as easily be one who is physically unattractive, emotionally impaired, and lacking in any romantic appeal – as long as he or she is seen as a living embodiment of some religious, philosophical, or cultural value that is, in itself, attractive; people yearn to be close to and conform to the leader, because they yearn to be close to the principle that he or she represents.
And yet, while it might superficially appear as though social robots are
unlikely candidates to serve as charismatic leaders of human followers, a
deeper sociotechnological and phenomenological investigation suggests that
it is not only possible for robots to hold sway over human followers through
the use of referent power and charismatic authority, but that the advent of
such leader-follower relationships between robots and humans is rapidly ap-
proaching. As a starting point, the fact that social robots can lead other robots
within multi-agent systems or artificial organizations is now taken for
granted. Computer scientists have already designed communities of robots
that spontaneously organize their own social structures, with members taking
on roles as ‘leaders’ or ‘followers.’ Many such robot communities use algo-
rithms involving probabilistic elections to choose their leader, but some re-
searchers are developing formal frameworks that would allow the leaders of
robot communities to emerge through robots’ manifestation of and response
to all of the bases of power identified by French and Raven, including referent
power. [18]
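The probabilistic elections mentioned above can be illustrated with a toy sketch. The scoring scheme and function below are our own hypothetical illustration (not drawn from Pereira et al. or any other cited framework); they show how a robot community might choose a leader with probability proportional to some scalar measure of a power base such as referent power:

```python
import random

def elect_leader(referent_scores, rng=random):
    """Probabilistic leader election: each robot's chance of being elected
    is proportional to its score, which here stands in for a referent-power
    measure (e.g., how strongly the other robots defer to or 'admire' it)."""
    robots = list(referent_scores)
    weights = [referent_scores[r] for r in robots]
    # random.choices performs a single weighted draw from the community.
    return rng.choices(robots, weights=weights, k=1)[0]

# A toy community of three robots; robot_c holds the most referent power
# and is therefore the most likely (though not guaranteed) leader.
community = {"robot_a": 0.2, "robot_b": 0.3, "robot_c": 0.5}
leader = elect_leader(community)
assert leader in community
```

Repeating the election many times would show robot_c emerging as leader in roughly half of the trials; the richer formal frameworks cited above would replace this single score with robots’ manifestation of and response to all six bases of power.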
Incremental Steps toward Human Beings’ Submission to
Robotic Referent Power
In many ways, human beings, too, already demonstrate obedience to robots
and their electromechanical kin. We listen to the automated voice that tells us
to step away from the subway car’s doors as they are about to close; we follow
the instructions of an electronic voice on the phone telling us to ‘press 1’ for
this or ‘press 2’ for that. The nature of our relationship with such technologies
becomes quite explicit, for example, in the case of a humanoid robotic cross-
ing-guard in Kinshasa that monitors traffic and tells pedestrians when it is
safe to walk. [19] In our everyday accounts of such interactions with technology,
we attribute to them the characteristics of a power relationship in which we,
as human beings, are submitting to the dictates of a technological master; thus
we typically say that we ‘obey’ a traffic light rather than that we are ‘collabo-
rating with’ or ‘utilizing’ it.
However, in these elementary cases, we are not obeying a robotic system
because we consider it to be an intelligent agent possessing social power of its own but because it represents and extends the power of the human beings who have constructed it and situated it in a particular role – and who do possess some social power over us. Human beings do not yet widely interact with robots in a way that reveals robots to be autonomous agents who possess their own social power, let alone possess the more specific characteristics of referent power and charismatic authority. However, the first tentative steps in that direction have already been seen, and we stand on the threshold of a wider human embrace of such charismatic robot leaders.

[17] Regarding such human attitudes toward robots, see, e.g., Van Oost & Reed, “Towards a sociological understanding of robots as companions” (2010); Cabibihan et al., “When Robots Engage Humans” (2014); Szollosy, “Why are we afraid of robots? The role of projection in the popular conception of robots” (2015); and Szollosy, “Freud, Frankenstein and our fear of robots: projection in our cultural perception of technology” (2016).

[18] Pereira et al., “Conceptualizing Social Power for Agents” (2013).

[19] Schlindwein, “Intelligent Robots Save Lives in DR Congo” (2014).
IV. Three Ways in Which We Envision Charismatic Robots
as Leaders of Human Beings
Some scholars have argued that human beings’ ongoing, almost instinctual
embrace of the latest technologies reflects the fact that “giving robots posi-
tions of responsibility is not only unavoidable but is rather something desired
and that we are trying to achieve.” [20] In other words, we yearn for robot leaders
and are striving – whether consciously or unconsciously – to create them; we
want to be led by our mechanical creation. However, not all charismatic robot
leaders are alike. Just as the exercise of charismatic authority by human beings
can take on radically different forms, so too the charismatic robot leaders that
we envision for ourselves display a number of very different (and sometimes
contradictory) traits. Rather than discussing ‘charismatic robot leadership’ as
though it represented a single undifferentiated phenomenon, our understand-
ing of charismatic robot leadership can be advanced if we distinguish and an-
alyze key subtypes of charismatic robot leaders.
While a broader analysis of real-world and fictional accounts of robots
might identify additional possibilities, we would suggest that there are at least
three key means by which robots can exercise charismatic authority over hu-
man beings: 1) through the possession of superior morality; 2) through the
manifestation of superhuman knowledge that may take on aspects of religious
revelation; or 3) through interpersonal allure, seduction, and sexual dyna-
mism. As noted below, this categorization offers a parallel to Plato’s tripartite
division of the soul, as it is possible to understand each kind of charismatic
robot leader as appealing to and influencing a different part of its followers’
being.
Type 1 Charismatic Robot Leader: The Saint-Martyr
The first form of charismatic robot leader – which attracts and influences
human followers through the possession and display of a superior morality –
we might refer to as the ‘Type 1’ or ‘Saint-Martyr.’ In fictional depictions of such robots, the demonstration of the Type 1’s moral superiority is often underscored by the robot’s conscious decision to sacrifice its own existence in order to save the lives of its human colleagues. While the robot’s destruction means that – in a practical sense – it is no longer capable of serving as a leader of human beings, it also serves to confirm and emphasize the fact that the robot was precisely the sort of noble, selfless leader whom human beings should have embraced and whose instructions they could have obeyed with a sense of moral confidence and ease.

[20] Samani et al., “Towards Robotics Leadership: An Analysis of Leadership Characteristics and the Roles Robots Will Inherit in Future Human Society” (2012).
The sublime, self-sacrificing, loving nature of the Type 1 robot’s salvific
deed is often highlighted by having it occur precisely when the robot is on the
threshold of receiving some great personal benefit, such as the development
of a higher level of consciousness or a more ‘human’ array of experiences and
emotions. [21] Just at the moment when the robot’s awareness of its new potential is beginning to blossom, the robot and its closest human friends are confronted by a climactic threat that seems certain to annihilate the protagonists.
Amidst this apparently hopeless situation, the robot realizes that there is in-
deed one avenue by which it can save the lives of its human companions – but
only by destroying itself in the process. The robot’s first inklings of its grow-
ing intellectual, emotional, and moral ‘humanity’ guide it toward making its
ultimate self-sacrificing decision, while at the same time making the robot’s
destruction more dramatically wrenching for the audience, because we mourn
the fact that the robot will never have the opportunity to fully explore and
appreciate its new human-like existence. [22]
[21] Despite the common fictional trope that has made ‘robotic’ a near synonym of ‘cold,’ ‘without feeling,’ and ‘unemotional,’ we would argue that actual robots of the future will make effective charismatic leaders not because they display no emotions, but because they display the most advanced emotions. Friedenberg (2008) points out that researchers have found in numerous contexts that the development of human-like emotions is not only of significant evolutionary value but that it may even be a necessary aspect of any embodied mind – whether natural or synthetic – that has a claim to sapience. In order to construct robotic brains that receive and process input, make decisions, and take action as efficiently and effectively as possible, scientists are discovering that they must build into such brains functions that are essentially digital counterparts of human cognitive functions such as the ability to feel anger and sadness and the need for sleep. As artificial intelligence becomes more advanced, it often becomes more ‘emotional,’ not less. These superior emotional abilities can aid robots to become charismatic leaders of human beings, especially leaders of Types 1 and 3. While the common fictional depiction of robots as unemotional has a long history, the strain of fictional imagination depicting robots as capable of human-like emotions has always existed and appears to be growing in prevalence, as old stereotypes about the nature of AI are rendered outdated by science and engineering’s progress toward the development of artificial general intelligence, toward the advent of synthetic minds that possess a full range of human-like cognitive capacities and characteristics.

[22] Typically, the human companions of the Saint-Martyr robot try (unsuccessfully) to stop its self-sacrificing act, thereby demonstrating that by the story’s end they had instinctively come to see the robot not as a mere device or instrument but as an autonomous moral agent on par with human beings, as ‘one of their own.’ Alternatively, the robot may realize that its human companions will attempt to intervene and stop its self-sacrificing act, even at the cost of their own lives; thus the robot will plan and execute its final deed in such a way that it cannot be stopped, which is accomplished either by hiding its intentions from its human companions or proactively choosing to violate instructions that had been given to it by its human superiors. In this way, the Type 1 robot fully claims its place as an autonomous agent and ‘leader’ who will weigh all of the moral principles and practical considerations at play in a difficult situation, forge its own conclusion about the action that must be taken, and then courageously carry out that deed.

[23] See Act III of Čapek, R.U.R.: Rossum’s Universal Robots (2016).

[24] Kamiyama, “C: Smoke of Gunpowder, Hail of Bullets – BARRAGE” (2003).

TYPE 1 CHARISMATIC ROBOTS AS DEVELOPED IN FICTION

A primeval example of Type 1 Saint-Martyr robots is found in the work that introduced the word ‘robot’ to the world: the 1921 play R.U.R.: Rossum’s Universal Robots by Karel Čapek. After depicting the future destruction of humankind at the hands of armies of robot workers, the play concludes by focusing on the autonomous biological robots Helena and Primus, a young robot couple who are each so deeply and selflessly in love with one another that when the last surviving member of the human race must choose a robot to kill and dissect for research purposes, Helena and Primus each beg to be the robot who is killed so that their cherished partner might live. In witnessing the robots’ spontaneous display of self-sacrificing love, the human researcher realizes that the young robot pair constitute a second Adam and Eve who are capable and worthy of engendering a new (quasi-)humanity that will establish itself across the earth. He blesses and celebrates the future of the new couple by reciting the account of Creation from the Book of Genesis, thereby comparing their innocence and moral goodness to that displayed by Adam and Eve before the entrance of sin into the Garden of Eden. [23]

One of the most poignant and thoughtfully developed examples of a fictional Type 1 Saint-Martyr robot is the Tachikoma (or ‘think tank’) as presented in the anime series Ghost in the Shell: Stand Alone Complex, based on the manga by Masamune Shirow. These armed robotic vehicles had served the government well but were eventually removed from active duty and consigned to be disassembled, because they had begun to display ‘suspicious’ behavior – such as debating theological questions among themselves during their free time – that led their human superiors to fear that the Tachikomas’ AI was becoming glitch-prone and unreliable for use in combat situations. However, in the series’ climax, the last surviving Tachikomas that had not yet been disassembled become aware of the fact that one of their former human colleagues is trapped in a battle and perishing; they escape, rush to their old companion’s aid, and mount a frenzied struggle to save his life, sacrificing themselves in the process. [24] We realize that the ‘glitches’ and ‘unreliability’ that the Tachikomas had previously begun to demonstrate were not glitches at all but rather flashes of their dawning sapience and humanity – a humanity
that is highlighted when, in the moment before its destruction, one of the
Tachikomas spontaneously utters a prayer to God, lamenting the fact that it
had not been able to do more to save its human companion.
25
Another fictional example is found in the novel 2010: Odyssey Two and its
film adaptation. Contrary to the fears and expectations of the film’s human
protagonist, the HAL 9000 computer (which can be thought of as a social robot
with the entire Discovery One spaceship as its body) voluntarily sacrifices it-
self in order to save the lives of a spaceship’s human crew. A similar fictional
example of the Type 1 charismatic leader can be seen in the film Star Trek:
Nemesis, where the android Lieutenant Commander Data – who had long
struggled to understand what it means to be ‘human’ and grappled with the
difficulties of serving as a leader of human beings – sacrifices himself to save the lives of his fellow crew members.
In such cases, the creators of popular fiction are presenting visions of social
robots whose moral clarity, selflessness, and nobility are designed to evoke
feelings of admiration and even awe among human audiences. In these social
robots and their salvific, almost Christ-like sacrifices, we see leaders to whom
we would be comfortable entrusting the most difficult moral decisions, be-
cause they have demonstrated in the most concrete way possible that their
decisions will be made wisely and justly and for the sake of the human com-
mon good, without any self-centered pursuit of the robots’ own personal
profit or pleasure.
ARTIFICIAL GENERAL INTELLIGENCE AND THE ENGINEERING OF TYPE 1 CHARISMATIC
ROBOTS
If a corporation has an office led by a robot manager that has unfailingly
demonstrated itself to be wise, fair, selfless, innovative, and effective, then
might not some human employees voluntarily request a transfer to that office,
rather than continuing to work in a department led by a human manager who
is known to be petty, vindictive, and incompetent? It is not difficult to imagine
that in our real, nonfictional world, at least some human beings might prefer
to place their trust in Type 1 robot leaders rather than in human leaders who
time and again have demonstrated a propensity to tolerate or even embrace
nepotism, corruption, oppression, and other injustices. Regardless of whether
Type 1 charismatic robots might someday lead entire human societies, it is
feasible to imagine such robots succeeding as charismatic leaders on a smaller
scale within businesses, educational institutions, government agencies, the military, nonprofits, and other kinds of human organizations. Early generations of Type 1 robots may not yet possess the full panoply of strengths, expertise, and virtues that make some human beings excel as charismatic leaders, but they might at least ameliorate or eliminate many of the more egregious flaws and limitations that undermine the performance of human beings as supervisors.
25
The robots' self-sacrificing nobility is further reinforced in the second season of the series, when a group of reconstructed Tachikomas take it upon themselves – in contravention of instructions given to them by their human superiors – to sacrifice themselves in order to avert a nuclear explosion that would have cost countless human lives.
The development of the sort of artificial general intelligence technologies
needed for the real-world creation of such Type 1 charismatic robot leaders
lies years, if not generations, in the future.
26
However, the groundwork for the
development of such leaders is being laid by the creation of scientific frame-
works such as Pereira’s schema for the use of referent power by artificial
agents, as well as the crafting of popular works of science fiction that are help-
ing segments of human society to appreciate and explore the fact that social
robots’ ability to weigh conflicting interests, utilize their best judgment to
make imaginative, just, and wise decisions in accordance with a set of moral
principles, and then take decisive action in moments of great difficulty can, in
theory, surpass the abilities of a human being.
Type 2 Charismatic Robot Leader: The Superintelligence
The second form of charismatic robot leader – which attracts and influ-
ences human followers through the possession and manifestation of superhu-
man knowledge – we might refer to as the ‘Type 2’ or ‘Superintelligence.’ It
is important to note that the Type 2 charismatic robot leader does not influ-
ence or manipulate human beings directly by controlling their access to infor-
mation; in that case – according to French and Raven’s framework – the robot
would be relying on informational power rather than referent power. Simi-
larly, the robot does not influence or dictate human beings’ behavior simply
by virtue of the fact that it is perceived to possess unique skills or experience;
in that case, the robot would be utilizing expert power. Rather, the Type 2
robot accumulates and maintains human followers because it is perceived to
be so profoundly wise and knowledgeable that human beings feel drawn to it
on an emotional level. Thus while an expert system in the form of an auto-
mated business database might exercise power over human workers because
they find the knowledge that it contains to be useful, a Type 2 Superintelli-
gence may exercise power over human workers because they find the
knowledge that it possesses to be exhilarating or intoxicating.
26
For the current state of research on the development of artificial general intelligence as well
as challenges and future possibilities, see, e.g., Artificial General Intelligence, edited by Goertzel
& Pennachin (2007); Theoretical Foundations of Artificial General Intelligence, edited by Wang &
Goertzel (2012); and Artificial General Intelligence: 8th International Conference, AGI 2015: Berlin,
Germany, July 22-25, 2015: Proceedings, edited by Bieger et al. (2015).
The Type 2 robot’s appeal can manifest itself in different ways, depending
on how greatly its knowledge surpasses that of human beings. In cases of a
vast informational gulf, one can imagine that some human beings who have
become acutely aware of their limited insight into the true nature of the uni-
verse might see a Type 2 robot as a radiant beacon of vast, almost limitless
wisdom amidst the dark world of human ignorance. The Type 2 robot has
plumbed the mysteries of the cosmos; its broad and hyperacute channels of
sensory input allow it to experience empirical reality in a way that human
senses cannot; its cognitive storage and processing capacities allow it to as-
similate, correlate, and extract meaning from functionally infinite bodies of
knowledge; it grasps the relationship of time and space, energy and matter,
life and death, in a way that the human intellect is too constrained to compre-
hend.
27
But if the human mind is too limited to directly fathom the knowledge that the Type 2 robot possesses, it might decide that it can at least pledge its fealty to that knowledge – and to the one who has fathomed it.
THE CONNECTION BETWEEN KNOWLEDGE AND MORALITY IN CHARISMATIC ROBOTS
In fiction, there is significant overlap between Type 1 and Type 2 robots;
the possession of superior factual knowledge and superior moral courage are
often seen to go hand in hand. In his analysis of the ethicality of entrusting
responsibility to robots, Kuflik addresses these two kinds of robots as a single
species: almost as an aside, he considers the possibility of robots who are not
only more accurate and technically reliable than human beings, but who:
hope, fear, love and care – and who unfalteringly do what is just and kind – our moral, not merely technical, superiors? … [W]ould we – inverting the relationship between creator and creation – faithfully strive to serve them (as many now think of themselves in relation to what they take to be an all-wise and loving deity?)
28
27
It is worth pointing out that Type 2 charismatic robot leaders who possess a sufficiently advanced and complex body of knowledge may not be able to impart that knowledge fully, literally, and directly to human followers in a form that the human mind can perceive or understand; the attempt to impart knowledge to a human being in this way might result in a sensory, cognitive, or spiritual 'overload' comparable to the blindness that results from staring into the sun. A robot might only be able to share such knowledge partially and obliquely, by distilling it into simplified metaphors, parables, or symbols that the human mind can understand. For a discussion of such artificial superintelligences and humanity's possible interaction with them, see, e.g., Yampolskiy, Artificial Superintelligence: A Futuristic Approach (2015). The fact that human followers cannot fully experience or grasp the robots' knowledge may only serve to increase the mystique and allure of the robot leaders as beings who can grasp such knowledge. The dynamics at work in human beings experiencing fleeting, metaphorical glimpses of a superhuman and ultimately unfathomable reality may not be altogether dissimilar from what has historically been conceptualized as the process of divine revelation or an experience of the 'beatific vision' of God. Thus it would not be surprising if many of the sentiments and behaviors displayed by the human followers of Type 2 Superintelligence robots take on some of the same characteristics seen in the human adherents of established religions that involve devotion to a supernatural Being who is seen as the font, repository, and revelator of infinite wisdom.
It is easy to imagine that an ‘all-wise’ (Type 2) charismatic robot leader might
also be ‘all-loving’ (Type 1). However, it is debatable whether there is any
ontological or cognitive principle requiring that this be the case. It is possible
to conceive of a social robot that possesses the Saint-Martyr’s degree of self-
lessness and moral virtue and yet which is quite lacking in its understanding
of scientific or historical facts; we are drawn to such a robot not because it is
smart but because it is good. Conversely, it is possible to imagine a social robot
whose values and priorities seem to us morally opaque or even misguided but
whose intellect has access to some transcendent knowledge: we have no con-
fidence that the robot is giving significant consideration to the welfare of hu-
manity in its decision-making; we might even be convinced that the robot
would be content to see human beings die, in order to protect and advance its
own interests. And yet the superhuman intellect that the robot possesses is so
enthralling in its ability to identify patterns and make sense of the universe
that human beings yearn to be close to the robot and to win its favor, so that
in the face of a cold and ‘mindless’ universe they might feel themselves to be
bound up with a source of infinite meaning.
Thus while a Type 1 robot is necessarily considered by its followers to pos-
sess exceptional moral goodness, there is more room for ambivalence in hu-
man followers’ assessments of a Type 2 robot’s morality. For example, looking
ahead toward a posthuman future, Jerold Abrams suggests that:
With regard to the public sphere, the concern, as it is arti culat ed
most clearly in Foucault, is surveillance. […] Cameras and satellites
will be replaced by invisible nanoswarms and picoswarms, which
may record and recondition behavior at the level of the private
sphere. [… ] Indeed, such a notion turns the idea of utili ty fog inside
out: rather than a pleasurable or educational medium one freely
enters, manipulative forms will lurk in the corner of a room, watching
and recording, perhaps even constructing a n alternative reality un-
known to the huma n affected. […] Visual surveillance and control
will become virtually unlimited.
29
Abrams has updated, for the era of cyberculture and nanotechnology, Foucault's prediction of a modern society in which one is monitored and controlled, highlighting the fact that it is by means of advanced technology that this surveillance and control is executed. However – perhaps a reflection of the fact that he was writing over a decade ago – Abrams still seems to presume that those technologies of surveillance and control would be instruments wielded by human beings to control other human beings. But what if these advanced technologies are not simply tools of observation but are in fact the observers themselves? What if the ultimate masters and interpreters of the surveillance society are artificial minds? It is easy to speak ominously of all-seeing social robots that peer into every aspect of our private lives, and to imagine that such robots would exercise control over human subjects through the use of informational and coercive power. However, the flip-side of such visions would be an all-seeing social robot that is benevolent, respectful, and discreet and which uses its intimate knowledge of our strengths, weaknesses, and longings to guide us gently toward happiness and self-fulfillment. This would be a more benign example of a Type 2 Superintelligence robot as charismatic leader.
28
Kuflik, "Computers in Control: Rational Transfer of Authority or Irresponsible Abdication of Autonomy?" (1999).
29
Abrams, "Pragmatism, Artificial Intelligence, and Posthuman Bioethics: Shusterman, Rorty, Foucault" (2004), p. 252.
TYPE 2 CHARISMATIC ROBOTS IN FICTION AND REALITY
A fictional example of Type 2 Superintelligence leaders on a grand scale
would be the ‘Minds’ from the Culture series of novels by Iain M. Banks. These
highly advanced and benevolent AIs possess bodies in the forms of vast star-
ships, each of which is capable of containing millions of human inhabitants.
30
The Minds’ sensory input, intellectual capacities, and knowledge are unfath-
omably greater than those of human beings, effectively approaching omnisci-
ence: a single Mind is capable of carrying on conversations with millions, or
even billions, of human beings simultaneously. While the Culture in which
the Minds and human beings coexist has aspects of a utopian anarchist society, human beings essentially accept the Minds as their leaders, due in part to their trust in – and desire to be bound up with – the boundless knowledge that the Minds represent.
31
A more modest (and less serious) example of a Type 2 robot leader
might be found in the novel The Stainless Steel Rat Gets Drafted, in the charac-
ter of the charismatic robot Mark Forer. While Forer is far from omniscient,
his superhuman knowledge contributed to the fact that an entire planet’s pop-
ulation chose to adopt the philosophical and economic system of Individual
Mutualism that he had expounded to them.
32
30
Note that in the novels, many of the humanoid races and protagonists are not, strictly speaking, 'human beings'; but that is not relevant for our current purposes. When considering the question of whether human beings in the real world are capable and desirous of adopting charismatic robots as leaders, what is important is that the real-life human beings who read Banks's works identify positively with the characters in the novels and feel that the robots depicted therein are the sort of beings whom they would long to have as leaders.
31
See Banks, Consider Phlebas (1987), the first novel published in Banks's Culture series, which eventually included nine other novels.
32
Harrison, The Stainless Steel Rat Gets Drafted (1987).
Another example can be found in the classic cyberpunk computer game Deus Ex (2000), considered by numerous critics to be the best computer game of all time thanks to its rich storyline and the complex moral dilemmas that a player must confront.
33
At the game's conclusion, the player's neuroprosthetically augmented character must choose between allowing future society to be ruled by a small group of 'enlightened' human beings; destroying the world's telecommunications network and sending human society back to a decentralized pre-technological Dark Age; or accepting the claim of a powerful artificial general intelligence, Helios, that it wishes to rule human society benevolently in order to bring peace and prosperity to a broken world – and merging with the AI to facilitate its plan.
the game, Helios could be understood as a Type 2 Superintelligence or hybrid
Saint-Martyr-Superintelligence charismatic robot leader. Online polls of play-
ers conducted by three different websites each found that a majority of re-
spondents chose to merge with the AI and grant it the ability to rule the
world;
34
although nonscientific, such anecdotal evidence suggests at a mini-
mum that the concept of voluntarily choosing a Type 2 Superintelligence as a
charismatic robot leader is not inherently incoherent or repugnant to the hu-
man heart and mind.
As with Type 1 robots, the real-world engineering of Type 2 robot leaders
is only in its most incipient stages. However, it is already the case that auto-
mated systems exercise related forms of influence – such as expert power and
informational power – over the thoughts and actions of human beings. Hu-
man beings regularly and voluntarily delegate a portion of their decision-mak-
ing process to cloud-enabled bodies of data and data-processing tools such as
Google’s search engine, Wikipedia, or Apple’s Siri. It is not difficult to imagine
that the sort of passionate loyalty demonstrated toward such brands and prod-
ucts by millions of human information-consumers around the world could
someday be transferred into a sense of loyalty, fondness, and affinity for social
robots that serve as the custodians and repositories of even greater
knowledge.
33
Douglas, "Deus Ex – Still the Best Game Ever?" (2011).
34
"Poll: Which Deus Ex ending would you choose?"; "Which Deus Ex 1 ending did you choose?"; "**spoiler**Poll: Which ending did you pick?"
Type 3 Charismatic Robot Leader: The Seducer
The third form of charismatic robot leader – which attracts and influences human followers through interpersonal allure, physical attractiveness, and sexual dynamism – we might refer to as the 'Type 3' or 'Seducer.' The concept of a robot that inveigles and controls human beings through romantic or erotic appeal long predates Čapek's introduction of the word 'robot' in 1920. Even setting aside the ancient Greek myth of Pygmalion's love for an animated ivory statue (which would not, in a contemporary sense, be understood as a 'robot'), we find such early and archetypal examples as the female android Hadaly in Villiers de l'Isle-Adam's 1886 novel L'Ève future. Hadaly has been designed by a (fictionalized version of) Thomas Edison to be just as physically alluring as a real human woman, but with an even more engaging intellect and personality. Edison's electromechanical creation is so effective that a British nobleman willingly abandons his human fiancée to take Hadaly as his consort, despite knowing that she is not human.
35
A more widely recognized depiction of a Type 3 Seducer robot – who quite
literally and spectacularly takes on a role as a charismatic leader of human
beings – is that of the robot Maria from Fritz Lang’s seminal 1927 film, Me-
tropolis. The amoral robot Maria, designed to replace a heroic young woman
whose appearance she copies, is sent by her makers to the Yoshiwara night-
club, where she demonstrates her seductive powers by performing a barely-
clad erotic dance that provokes the city’s male elite into a lascivious frenzy.
Having passed that ‘test,’ the robot Maria then establishes herself as the leader
of the city’s oppressed working masses and uses her fiery, hypnotic rhetoric
to incite them into a self-destructive revolution that nearly results in the
deaths of their own children.
36
While fictional examples of male Type 3 charismatic robots are known
(such as Gigolo Joe in the 2001 film A.I. Artificial Intelligence), the majority of
these Seducer robots are female, including Rhoda from My Living Doll, the
robotic prostitutes in Westworld, the eponymous robots of The Stepford Wives,
Rachael and the ‘pleasure model’ Pris from Blade Runner, the illegally modified
‘boomer’ Eve from Parasite Dolls, the gynoids of painter Hajime Sorayama,
and the line of modified 'sexroids' produced by the Locus Solus corporation in Ghost in the Shell 2: Innocence, which are linked explicitly to their literary forebears by being referred to as the 'Hadaly' model of gynoid.
37
35
L'Isle-Adam, Tomorrow's Eve (2001).
36
Lang, The Complete Metropolis (2010). From the dawn of our fictional relationship with robots over a century ago, it has been a two-sided coin: on one side is the robot that has the outward appearance of a crude machine and whose thought and behavior are mechanistic, logical, and lacking in any human emotion. The early archetypes of such 'nonhuman' robots can be found in characters like Tik-Tok from L. Frank Baum's 1907 novel Ozma of Oz or The Automaton from the 1919 film The Master Mystery. But from the very beginning, the other side of the coin has shown the robot that is so charming, so seductive, so sexually and emotionally potent that to describe its traits as 'human' would be an understatement; such a robot enters the realm of the 'superhuman.' This duality in our popular conception of robots is in many ways analogous to that seen in our understanding of another parahuman being, the vampire. As others have noted, from its earliest modern fictional roots there have coexisted two strains of representation of the vampire: on one side is the suave, seductive, aristocratic Dracula, irresistible in his erotic dominion over his human prey; on the other side is Nosferatu, the vampire as a vile, pestilential beast that is utterly repulsive and lacking in any erotic appeal. The depiction of a robot as an emotionless walking calculator is a science-fiction counterpart of Nosferatu, sharing nothing in common with humanity apart from a vaguely humanoid form – whereas the robot Maria is science fiction's equivalent of Dracula, suffused with an occult sexual energy and confident in her ability to bend human beings to her will.
It should be noted
that not all of these robots purposefully use their personal charm to control
others – much less to do so for malign purposes. Sometimes (as in the case of
Rhoda in My Living Doll) the effect that the robot has on others is uninten-
tional and uncontrollable. In all of these cases, robots demonstrate an ability
to influence and shape the behavior of human beings who become drawn to
them, regardless of whether the robots wish to possess this ability or not.
THE PRODUCTION OF TYPE 3 CHARISMATIC ROBOTS AS ROMANTIC COMPANIONS
The race to create commercially viable sex robots is already well underway; however, for the most part, such robots currently appear to be envisioned as little more than instruments subject to the control of their human owners – mechanically animated dolls that serve as vehicles for their owners' sexual gratification.
Such devices have no independent judgment, volition, or moral agency, and
thus cannot be described as being capable of ‘leading’ human beings in any
meaningful sense. If, however, such physically attractive robots were to be
developed in a direction (e.g., as contemplated in Project Aiko) that foresees
them as autonomous, intelligent agents that serve as emotional and intellec-
tual companions for their owners rather than simply animated dolls, it is eas-
ier to imagine that such robots would find themselves influencing the
thoughts and behavior of their human loved ones as Type 3 charismatic robot
leaders.
38
V. Charismatic Robot Leaders’ Appeal to Different
Aspects of the Human Person
The three types of charismatic leaders described above vary greatly in the
source and dynamics of their appeal to potential human followers. A range of
frameworks might be employed to shed light on such issues. For example, the
three types of charismatic robot leaders could be understood as appealing to
what Plato defined in the Republic as the three parts of the human soul.
39
The
Type 3 Seducer robot can be seen as influencing and controlling a human be-
ing by appealing to the appetitive part of the soul, the ἐπιθυμητικόν, which is
the seat of erotic desire and sensual pleasure. The Type 2 Superintelligence
robot appeals to the λογιστικόν, the logical part of our soul that searches rest-
lessly for knowledge and truth. The Type 1 Saint-Martyr robot appeals to the
37
A special case would be the android Ava from the 2015 film Ex Machina, who is ultimately
revealed to possess characteristics of a Type 3 Seducer but whose power over one of the film’s
key characters emanates largely from her ability to create the impression that she possesses the
innocence, selflessness, and goodness of a Type 1 Saint-Martyr.
38
“It’s not about sex” (2008).
39
Plato, Republic IV 433a-439e.
Chapter Eight: The Social Robot as CEO • 323
θυμοειδές or spirited part of our soul, which determines what sorts of events
will arouse in us a sense of moral indignation and which fortifies us with a
sense of moral courage that allows us to struggle and even die for the sake of
those things in which we believe. In Plato’s schema, within a just soul the
spirited part helps to reinforce the logical part and control the appetitive. This
close affinity between the logical and spirited parts of a just soul is consistent
with the fact – noted above – that charismatic robot leaders that are depicted
as protagonists and sympathetic heroes in works of fiction often possess both
Type 1 and Type 2 characteristics, as Saint-Martyr Superintelligence robots
whose great knowledge and wisdom reinforce their resolve to sacrifice them-
selves for the human common good.
Similarly, the three types of charismatic robot leaders might be understood
through the lens of Maslow’s hierarchy of needs or other more recent ac-
counts of human motivation. Using Maslow’s model, Type 3 Seducers might
be seen as appealing largely to human beings’ needs for physiological wellbe-
ing, safety, love and belonging, and esteem. Type 1 and 2 robots appeal mainly
to human beings’ ‘higher-level’ needs for self-actualization or, in Maslow’s
later formulations, self-transcendence.
40
Within the context of businesses or
other organizations, it may be easier for a charismatic robot leader of a partic-
ular type to implement certain kinds of organizational structures, systems, and
processes
41
rather than others, given the fact that such a robot exercises lead-
ership by appealing to certain aspects of the psyches of human workers and
possesses a specific suite of mechanisms for influencing such personnel.
VI. Charismatic Robot Leaders as Effective CEOs
In their empirical research that studied the characteristics of hundreds of
successful senior-level executives, Dainty and Anderson identified eleven fac-
tors that shape whether such executives are able to perform successfully in
their roles. The characteristic that displayed the strongest positive correlation
with a senior executive’s overall effectiveness was the actioning/structuring
capability, which is the ability “to take action and establish and work within
structures which facilitate the achievement of critical short and long-term goals."
40
See Maslow, Motivation and Personality (1954), and Koltko-Rivera, "Rediscovering the later version of Maslow's hierarchy of needs: Self-transcendence and opportunities for theory, research, and unification" (2006).
41
In the 'congruence model' of organizational architecture developed by Nadler and Tushman, structures, processes, and systems are the three primary elements of an organization that must be taken into consideration. See Nadler & Tushman, Competing by Design: The Power of Organizational Architecture (1997), p. 47.
42
We would suggest that this actioning/structuring capability is not es-
pecially linked to any of the three types of charismatic robot leader described
above.
However, Dainty and Anderson note that the two other capabilities that
demonstrate a “highly significant” positive correlation with a senior execu-
tive’s effectiveness are those of leadership (which is defined as the “Capacity
to help others overcome hurdles to achieve a common goal”) and influence (or
the “Capacity to get others to accept your point of view, have them act in your
interests and prevent them implementing agendas which are contrary to your
own”).
43
With regard to the ‘leadership’ capability, all three types of charis-
matic robot leader have tools at their disposal to help their human employees
overcome obstacles in order to achieve a shared organizational goal: for in-
stance, Type 1 Saint-Martyr robots can provide an example of courage and
selfless perseverance for their human employees; Type 2 Superintelligence robots can construct a thoughtful and convincing rationale for how the obstacles can be overcome and why they must be; and Type 3 Seducer robots can entice
their employees into confronting and overcoming even the toughest obstacles
through exploitation of the employees’ explicit or unconscious expectations
of the emotional and experiential rewards that they will receive as a result of
satisfying their leader.
Similarly, with regard to the ‘influence’ capability, all three types of char-
ismatic robot leader possess methods for persuading their employees to accept
their leader’s perspective and adopt the leader’s interests as their own. A Type
1 Saint-Martyr robot can create the impression that its perspectives and inter-
ests are not simply its own but are instead universal moral principles that are
worthy of being pursued by all sapient and morally aware beings and which
give meaning to one’s life; a Type 2 Superintelligence robot can argue (and
perhaps even ‘prove’) that the principles guiding its decisions and behavior
are not only wise and self-consistent but even logically necessary; and a Type
3 Seducer robot can provide its human employees with such a pleasurable,
powerful, and rewarding experience that they will be unwilling or unable to
entertain the possibility that the robot’s perspectives and priorities could be
wrong, lest the employees be compelled by their conscience to reject such re-
wards.
Interestingly, Dainty and Anderson discovered that one of the eleven char-
acteristics that they identified actually displayed a negative correlation with a
senior executive’s effectiveness: this is the capability of expertise, which in-
volves “Having the functional skills needed to fulfill one’s role, and ability to
see outside one’s particular area of professionalism.”[44]
[42] Dainty & Anderson (1996), pp. 18, 340.
[43] Dainty & Anderson (1996), pp. 18, 340.
[44] Dainty & Anderson (1996), pp. 18, 340.
Chapter Eight: The Social Robot as CEO • 325
Dainty and Anderson note that given the array of specific items which, in their
formulation, constitute the capability of expertise, ‘expertise’ actually refers to
a senior executive’s possession of a broad range of general knowledge rather
than specialized technical knowledge in one particular field. Thus senior managers who
possess a broad range of knowledge are actually less effective than those who
possess extensive technical facility in a narrowly defined field. One possible
interpretation of these results is that although a senior executive such as a
CEO must indeed oversee a broad range of diverse activities, the best senior
executive will be one who – at some earlier point in his or her career – devel-
oped a depth of expertise in one specialized technical area, thus giving him or
her firsthand experience of the challenges and opportunities that face an or-
ganization’s non-generalist technical personnel.
It is unclear whether such phenomena would necessarily also be mani-
fested by social robots serving as senior executives. Due to the biological and
psychological limitations possessed by human beings, it is very difficult for a
human executive to have spent, for example, thirty years working as a tech-
nical expert focused on one role within some highly specialized field and thirty
years working as a generalist who coordinated activities across a diverse spec-
trum of fields. Given the limited number of years available in a human
worker’s career, these two career paths are largely mutually exclusive, and the
research of Dainty and Anderson would seem to suggest that when confronted
with these two alternatives, a human worker who pursues a path as a special-
ist rather than a generalist may have a greater potential – later in his or her
career – for becoming a successful CEO. However, we can expect that future
generations of social robots will not be bound by the same set of biological
and psychological limitations that limit the kinds and degree of experience
that can be gained by human workers.[45]
For example, it may be possible to program a robot CEO so that it possesses
full technical knowledge of every activity that must be performed by every
employee within an organization, as well as full knowledge of foundational
sciences such as management theory, systems theory, psychology, economics,
and cybernetics; in this way, a robot CEO might function simultaneously as
both the ultimate specialist and generalist. Alternatively, a robot CEO might
be able to acquire such knowledge for itself almost instantaneously, e.g., by
searching out and assimilating vast bodies of information from the Internet
and other networked data sources. However, there are indications that in or-
der to achieve a human-like level and type of artificial general intelligence,
[45] For a detailed comparison of the evolving array of differences between human and artificial
workers, see “The Posthuman Management Matrix: Understanding the Organizational Impact of
Radical Biotechnological Convergence” in Gladden, Sapient Circuits and Digitalized Flesh (2016).
326 • Posthuman Management
some kinds of social robots might need to undergo a similarly human-like pro-
cess of gaining experience through inhabiting a particular physical environ-
ment, interacting over time with other social beings (such as human beings),
and learning through trial and error and firsthand experience.[46] In that case,
the characteristics of a successful robot CEO might more closely resemble
those of typical human CEOs.
VII. The Process by Which We Will (Not) Choose
Charismatic Robot Leaders
If we presume that humanity’s positive fictional depictions of charismatic
robots represent aspirational visions of beings that we would like to see exist-
ing in our universe,
47
and if we further presume that our technological abilities
will someday make the realization of such visions possible, then it seems
merely a matter of time before human beings will be interacting with charis-
matic robots that influence, control, and lead us, either as individual human
beings, organizations, or entire societies. However, if in the future there exists
a diverse array of charismatic robots – each of which possesses its unique
strengths, weaknesses, values, and allure – then this raises the question of
exactly how we will consciously and carefully sort through the universe of
charismatic robots to choose those whom we will anoint as our leaders. We
would suggest that the answer to ‘how’ we will do this is that we will not.
Claims That We Will ‘Choose’ to Make Robots Leaders within
Human Organizations
Coeckelbergh has noted that many philosophers of technology adopt a
contractarian-individualist approach to our trust in robot leadership in which
[46] See, e.g., Friedenberg (2008).
[47] The analysis of works of science fiction and other cultural products to diagnose ways in which
they reflect contemporary human aspirations or fears regarding the advent of posthumanizing
technologies such as those relating to social robotics, artificial general intelligence, genetic
engineering, and neuroprosthetic augmentation is a major area of study within the field of cultural
posthumanism. For discussion of the methodologies by which cultural posthumanism seeks to
extract such insights from cultural artifacts, see, e.g., Posthuman Bodies, edited by Halberstam &
Livingstone (1995); Hayles, How We Became Posthuman: Virtual Bodies in Cybernetics, Literature,
and Informatics (1999); Graham, Representations of the Post/Human: Monsters, Aliens and Others
in Popular Culture (2002); Badmington, “Cultural Studies and the Posthumanities” (2006);
Cyberculture, Cyborgs and Science Fiction: Consciousness and the Posthuman, edited by Haney
(2006); Goicoechea, “The Posthuman Ethos in Cyberpunk Science Fiction” (2008); Miah, “A Critical
History of Posthumanism” (2008); Herbrechter, Posthumanism: A Critical Analysis (2013); and the
discussion of cultural posthumanism in “A Typology of Posthumanism: A Framework for
Differentiating Analytic, Synthetic, Theoretical, and Practical Posthumanisms” in Gladden, Sapient
Circuits and Digitalized Flesh: The Organization as Locus of Technological Posthumanization (2016).
it is argued that human beings can (and should) both collectively and individ-
ually make conscious, rational choices about the extent to which we entrust
robots with control over our lives.[48]
For example, Kuflik adopts this contrac-
tarian-individualist approach when he states that we must decide how much
responsibility to entrust to robot leaders. He raises two significant moral ques-
tions relating to the possibility of delegating responsibility and authority to
robots:
A. What are the morally relevant considerations we must take into
account in deciding whether to assign computers a decision-making
role (– knowing full well that we, not the computers, are ultimately
accountable for the good or bad results of such an arrangement)?
B. If we do decide to give computers a measure of “control”, on
what terms should we do so? […][49]
Having considered possible answers to these questions, Kuflik later summa-
rizes his thoughts by arguing that:
If and when it is wise to rely on computer generated decisions or to
put computers in control of certain events, it is we humans who must
decide as much. And it is we humans who must bear the ultimate
responsibility for that decision.[50]
We cite Kuflik here not because he argues for or against adopting robots
as leaders, but to highlight the process by which he suggests that such adoption
will or will not take place. For him, any transfer of responsibility and authority
to robots can and should come about as a result of a careful, conscious, delib-
erative decision-making process carried out by individual human beings and
human societies as a whole. Among philosophers and scientists utilizing this
contractarian-individualist approach, some argue that human beings should
decide to delegate significant responsibility and authority to robots; others ar-
gue that we should decide against such a step. But the scholars on both sides
of the issue share the same approach, insofar as they frame the question in
terms of discrete options that can be carefully weighed and analyzed, before
human beings make a deliberate decision about whether to entrust a specific
sort of responsibility to particular robots. Although the contractarian-individ-
ualist approach is in reality richer and more nuanced than what we have just
described, it can fairly be summarized as supposing that there are important
decisions to be made.
[48] Coeckelbergh, “Can We Trust Robots?” (2012).
[49] Kuflik (1999), p. 175.
[50] Kuflik (1999), p. 179.
While that contractarian-individualist approach has significant value in
helping us to explore questions of robot leadership and power from ontologi-
cal, moral, and legal perspectives, we would argue that from a psychological,
sociological, and political perspective, its value is more limited: the contrac-
tarian-individualist approach does not seem to offer a particularly accurate or
useful account of the cognitive and social processes by which human beings
actually develop relationships of trust and obedience with one another or,
more importantly, with their technology.
We would argue that when considering the case of charismatic robot lead-
ers within human organizations, the contractarian-individualist approach has
less relevance than the phenomenological-social approach developed by
Coeckelbergh. In formulating his approach, Coeckelbergh does not focus spe-
cifically on the question of human obedience to robot leaders; he considers
human ‘trust’ in robots more broadly understood. However, his framework
provides a useful foundation for examining questions of power and obedience.
Obedience to Robotic Leadership Emerging Naturally from
within the World’s Social Fabric
As Coeckelbergh explains, the contractarian-individualist account of trust
assumes that there first exist individual human beings who subsequently es-
tablish social relations (such as that of trust) between one another. On the
other hand, the phenomenological-social approach posits that:
… the social or the community is prior to the individual, which means
that when we talk about trust in the context of a given relation be-
tween humans, it is presupposed rather than created. Here trust
cannot be captured in a formula and is something given, not
entirely within anyone’s control. […] Here morality is not something
that is created but that is already embedded in the social.[51]
Coeckelbergh makes clear that human beings’ establishment of a trust rela-
tionship need not involve a conscious process of deliberation and decision-
making – or, indeed, any sort of agency – on their part. As he writes:
Adaptation to environments (e.g., techno-social environments)
does not necessarily require the exercise of agency. Often we can-
not help trusting technology and trusting others, and luckily we
often do so without having a reason and without calculation (not
even afterwards). In so far as robots are already part of the social
and part of us, we trust them as we are already related to them.[52]
Building on Coeckelbergh’s framework, we would argue that in practice, a
human being does not become the follower or subordinate of a piece of tech-
nology by consciously resolving to himself or herself, “I am going to take this
[51] Coeckelbergh (2012), pp. 54-55.
[52] Coeckelbergh (2012), p. 59.
computer as my master.” Rather, he or she exists as a follower and subordinate
of the piece of technology when its guidance or suggestions or instructions in
fact shape the human being’s choices and behavior – when the human being
accepts what the computer tells him or her as something useful and valuable
and true. This relationship of obedience to and dependence on a technological
entity is something that evolves naturally through innumerable actions, most
of which are in themselves minuscule and insignificant. It may be the case that
at no point during the growth of this relationship did the human being make
a conscious ‘decision’ to submit to the guidance and mastery of a particular
piece of technology,[53]
and even at the point when the relationship of human
obedience has grown to be steadfast, fervent, and overwhelming, the human
being might not even realize that it exists.[54]
[53] Perhaps the closest that some individuals will come to making such a conscious decision of
submission is when they stand in their role as consumers debating, for example, whether their
new smartphone should be an Android or Apple model. Such a decision brings with it the com-
mitment – which can be changed later only at the cost of significant time, money, and effort –
to a particular brand image, a particular technological ethos, a particular limited universe of
devices and apps and accessories, an ecosystem of online purchases and backups and downloads
and messaging platforms that will flow through and shape all of the ways in which a human
being interacts digitally with his or her world, with the noosphere, for the foreseeable future.
However, even if consumers avidly read product reviews and specifications and consciously
weigh the pros and cons before choosing between the next model of Galaxy Tab or Nexus or
iPhone, how often do they explicitly ponder and decide whether they want to submit themselves
to the supervision and tutelage of a smartphone at all?
[54] This pattern of creeping, unconscious entanglement with new technology, as discussed by
philosophers, reflects the empirical reality studied by other scholars such as business
management experts. For example, researchers have noted that in 1983, only 1.4% of American adults
used the internet and only 23% of American computer owners thought that the ability to ex-
change written messages with other people by computer (i.e., email) would be the sort of tech-
nology that they would find very useful. (See Fox and Rainie, “The Web at 25 in the US: Part 1:
How the Internet Has Woven Itself into American Life” (2014).) Now, reading and responding to
emails has become a critical component of the daily routine for workers in a wide array of pro-
fessions. This irresistible deluge of email (along with similar technologies such as SMS and in-
stant messaging) and the sense of obligation and control that it exercises over employees’ lives
has become so vast that responding to emails now occupies up to 25% of the typical managerial
or thought worker’s day. (Gupta et al., “You’ve got email! Does it really matter to process emails
now or later?” (2011).) The inescapable ubiquity of a technology that can reach employees
through their mobile devices wherever they are, 24 hours a day, means that the traditional wall
between ‘work time’ and ‘personal time’ has largely crumbled as employees of businesses are
expected to be reachable and responsive to their supervisors’ electronic inquiries and instruc-
tions wherever the employee might be, at any time of the night or day. (Shih, “Project Time in
Silicon Valley” (2004).) Email’s unquenchable, gradually expanding claim on huge portions of
employees’ time, thought, and energy has led management theorists and business practitioners
to launch studies and propose ways in which this technological beast can be tamed – e.g.,
through teaching more disciplined time-management practices or company-wide policies re-
stricting the use of ‘Reply-All’ messages – in order to reclaim employees’ time and autonomy
from this technology.
To sum up, we would argue that in general neither individual human be-
ings nor society as a whole will be faced with the opportunity to make con-
scious, rational decisions about whether to submit to charismatic robot lead-
ership within our businesses or other institutions. Rather, such leadership will
emerge gradually, without fanfare, within the network of social and environ-
mental relationships within which we all exist. Human beings will not ‘decide’
to adopt charismatic robots as leaders and CEOs; we will only notice sometime
after the fact that we have already done so.[55]
VIII. The Virtualization of Relationships: Will We Know
Whether Our CEO is a Robot?
At present, the idea that human employees could be working for a robot
CEO without realizing it might seem preposterous. It is true that many em-
ployees (especially, for example, within large multinational conglomerates)
never interact face-to-face with their CEO and thus never have an opportunity
to ‘prove’ to themselves that the individual whose company-wide emails they
receive and whose television interviews they view is not in fact a human-like
robot. However, there are at present extensive legal, political, and cultural
barriers that would prevent a company from selecting a robot as its new
CEO.[56]
This may not always be the case, though. Here we will not document
the extensive legal or political changes that must occur in order for human
organizations to be able to employ robots as CEOs; instead, we will focus on
the psychological and social phenomena that might someday allow individual
human beings to adopt charismatic robots as leaders – and in particular, as
[54, cont.] We would suggest that few of the contemporary human beings who now find their days
filled up and governed by the demands of email ever experienced a moment when, after careful
investigation and consideration, they consciously decided to make email communication a part
of their daily routine; or when they consciously decided for the first time that they would no
longer simply check email ‘while at the office’ but would also read and respond to messages at
night or on the weekends; or when they consciously decided for the first time that it was not
only acceptable but even necessary to write an urgent reply to someone on their phone while in
the middle of a meeting with a different person, while reclining in a lounge chair on ‘vacation,’
or during the middle of a dinner conversation with their own spouse. Rather, we find Coeckel-
bergh’s model to offer a more realistic account of the way in which our individual and collective
relationship with the technology of email has unfolded over the last 30 years, and we anticipate
that the development of our collaboration and coexistence with charismatic social robots will
unfold in much the same way.
[55] Here we are not attempting to take a position on whether human obedience to charismatic
robots will be ‘good’ or ‘bad,’ but simply to note that it will be inevitable.
[56] For example, regarding the question of whether a robot could bear legal responsibility for its
actions, see Calverley, “Imagining a Non-Biological Machine as a Legal Person” (2008); Stahl,
“Responsible Computers? A Case for Ascribing Quasi-Responsibility to Computers Independent
of Personhood or Agency” (2006); and Gladden, “The Diffuse Intelligent Other: An Ontology of
Nonlocalizable Robots as Moral and Legal Actors” (2016).
leaders within organizations – regardless of whether or not they realize that
they are doing so.
We can begin by noting that even if some human beings should,
out of strongly held principle, consciously decide that they will never allow
themselves to work for a charismatic robot CEO – and more broadly, that they
will never allow themselves to love or admire or respect or befriend or be
seduced by a social robot – it may as a practical matter become impossible for
them to live out this conviction. Face-to-face interaction with another human
being reveals to us a sort of objective reality that can be obscured, embellished,
or even wholly replaced when the individual stands behind the sort of trans-
formative digital veil that modern technology provides. Our social relations
are increasingly mediated by e-technology; already we trust people whom we
have met online because of our virtual experience of them as avatars, without
knowing with certainty their gender, age, race, profession, or place of resi-
dence. Grodzinsky et al. note that this increasing ‘virtualization’ of our inter-
personal relationships will combine with advances in the sophistication of ro-
bots and AI technology to mean that there will come a day when we will trust
and befriend and let ourselves be influenced by online entities because of our
direct experience of them, without even knowing whether the entities are human
or artificial.[57]
For example, a decade from now, through her impassioned online cam-
paigning a charismatic environmental activist might inspire millions of hu-
man beings to click and sign petitions and contact their legislators and donate
funds to a particular cause, and the human beings who are being thus influ-
enced will never know for certain whether the activist behind that avatar is a
natural human being or cyborg or social robot or disembodied AI or some new
species of synthetic being whose thoughts and actions are generated collec-
tively by an entire human society.[58]
Similarly, an entrepreneur who is launch-
ing a new organic food business might reach out through social media to a
human graphic designer or musical composer or voice actor and offer him or
her a contract to assist with the production of an advertisement for the new
[57] Grodzinsky et al., “Developing Artificial Agents Worthy of Trust: ‘Would You Buy a Used Car
from This Artificial Agent?’” (2011).
[58] For circumstances in which it may be impossible to determine whether one’s online or other
technologically mediated relationship is with a human or artificial being, see, e.g., Gladden,
“From Stand Alone Complexes to Memetic Warfare: Cultural Cybernetics and the Engineering
of Posthuman Popular Culture” (2016). For more general analyses of the future of virtual rela-
tionships with human and synthetic entities, see, e.g., Heim, The Metaphysics of Virtual Reality
(1993); Koltko-Rivera, “The potential societal impact of virtual reality” (2005); Castronova, Syn-
thetic Worlds: The Business and Culture of Online Games (2005); Geraci, Apocalyptic AI: Visions of
Heaven in Robotics, Artificial Intelligence, and Virtual Reality (2010); and Bainbridge, The Virtual
Future (2011).
company; if through his or her correspondence and online conversations with
the CEO the potential contractor becomes attracted and inspired by the CEO’s
vision for the company and the contract terms are acceptable, the person
might accept the contract without verifying – or perhaps caring – whether
the CEO is a human being who is biologically human or simply a thoughtful
social robot who is (‘only’) psychologically and culturally human.[59] No longer
will it be simply an online entity’s sex or race or age or location that is hidden
from us; we will increasingly find ourselves being inspired and led by online
personalities whom we experience only as virtual avatars, without knowing
or caring whether they are human or artificial. The natural emergence of char-
ismatic robot leaders of human beings will be further accelerated through this
growing mediation of our relationships by technology.
We Can Obey Robots Even While Explicitly Denying That We
Do So
Such scenarios allow us to posit a claim that moves a step further than
what is suggested by Coeckelbergh’s phenomenological-social approach.
Namely, we would argue that not only is it possible for human beings to
obey robots without realizing it or consciously deciding to do so; it is possible for
us to obey robots even when we have consciously decided not to do so – and
when we believe that we are not doing so.[60] Similarly, a leader-follower
relationship between charismatic robots and humans can exist and
be demonstrated through the concrete reality of our human actions, even
while on a conscious level we are convinced that we do not ‘obey’ – and would
never even allow ourselves to be influenced by – such robots.
IX. How Long Will Charismatic Robots Have the Desire
and Ability to Lead Us?
If it is true that within us we harbor a yearning to be led by our technolog-
ical creation, we must wonder whether that creation has any yearning to lead
us. Will the sort of charismatic social robots that are cognitively, socially, and
[59] For discussion of an artificial agent that might not only own and manage its own business but
might directly constitute such a business, see, e.g., Gladden, “The Artificial Life-Form as Entre-
preneur: Synthetic Organism-Enterprises and the Reconceptualization of Business” (2014).
[60] As a contemporary parallel, one might imagine human beings who protest that they would
never allow their understanding of or attitude toward geopolitical events to be influenced by
something as arbitrary and artificial as the algorithms that govern their Facebook news feed or
Google search results even if empirical research indicates that such information sources indeed
possess such influence. For analysis of such issues, see, e.g., Bakshy et al., “Exposure to ideolog-
ically diverse news and opinion on Facebook” (2015); Rader & Gray, “Understanding user beliefs
about algorithmic curation in the Facebook news feed” (2015); Anderson & Caumont, “How so-
cial media is reshaping news” (2016); and “Could Facebook influence the outcome of the presi-
dential election?” (2016).
culturally sophisticated enough to serve as CEOs be the sort of beings that will
want to bear such responsibility and allow human beings to follow them?
It is an open ontological and empirical question as to whether the sort of
synthetic mind that is capable of demonstrating the moral courage, intelli-
gence, or romantic allure that would draw human beings to it would neces-
sarily possess – or lack – the emotional characteristics and ethical and socio-
political worldview that would cause it to see human devotion and obedience
as a thing that should be cultivated and permitted rather than shunned.
A Critical Question of Communication in Robot-Human
Relationships
Assuming that sufficiently advanced social robots can be developed that
do possess a willingness to oversee human workers – and that human beings
exist who possess a desire to submit to charismatic robot CEOs – one of the
fundamental questions that arises is that of robot-human communication. It is
a tautology, but a meaningful one, to point out that robots can only have hu-
man followers so long as human beings are capable of being led by them.
While it might be sufficient for non-charismatic robot leaders to control hu-
man beings using coercive power (i.e., by instilling a sense of fear and terror),
charismatic robot leaders – and in particular those of Types 1 and 2 – rely for
their source of power on the ability to create and maintain a sense of trust on
the part of their human followers. (This would be less true for a Type 3 char-
ismatic robot, whose human followers may not particularly trust, or even like,
the robot but who are nonetheless drawn to and submit to it against their
better judgment, because the robot is appealing not to their moral or intellec-
tual faculties but to more primal physical and emotional needs.)
Coeckelbergh argues that in order for a human being to trust in a robot,
the two parties must, among other things, be able to communicate with one
another using language and must possess some social relation. If we assume
that trusting a charismatic leader is a prerequisite to obeying that leader, then
we can only obey charismatic robot leaders as long as we are able to com-
municate with them and relate to them socially. As viewed from the perspec-
tive of contemporary robotics and artificial intelligence, the development of
such communication appears to be an achievable goal. Indeed, it is one which,
to a limited but growing extent, is already being achieved.[61]
However, we
[61] Such communication will require robots both to extract the meaning found within natural
language statements made by human interlocutors and to generate natural language content that
can be correctly understood by such human beings. See Cambria & White, “Jumping NLP Curves:
A Review of Natural Language Processing Research” (2014), and Nadkarni et al., “Natural lan-
guage processing: an introduction” (2011), for a review of different approaches to knowledge
representation and natural language processing (NLP) that have been employed, including early
methods such as production rules, semantic pattern matching, and first order logic (FOL), as well
would propose that the more relevant question is not whether human beings
will develop a world in which robots are able to relate to us effectively using
language, but whether our robot progeny will be content to remain in that
world.
The Fragmentation of Sapient Society and the Limits of
Communication
Here we may draw on the work of Jerold Abrams. He suggests that today
we view Rorty’s notion of keeping a ‘conversation’ going among human be-
ings as a sort of worst-case scenario or fallback position: if we are unable to
achieve any sort of moral consensus within humanity on the thorny questions
of the day, at least we can maintain a minimal level of social connection by
“keeping the conversation going.”[62]
But Abrams notes that with the advent of
Nietzschean ‘Overmen’ in the form of genetically or neuroprosthetically mod-
ified transhuman intelligences, the possibility for even the most basic social
communication begins to break down.[63]
Those persons who have self-fash-
ioned an existence of artificially enhanced hyperintelligence may no longer
have the desire (or even ability) to communicate with those ‘natural’ human
beings whom they have left behind. Similarly, our robotic creations may
quickly become so boundless in their capacities and ambitious in their yearn-
ing for self-fulfillment that they lose both the desire and ability to communi-
cate with the ‘backwards’ beings who created them.
Abrams suggests that any hope of maintaining a conversation among the sapi-
ent inhabitants of our world will become an increasingly futile dream, as var-
ious strains of human beings and their synthetic creations of evolvable robots
and AIs and swarm intelligences and living software fragment into numerous,
mutually incomprehensible societies. This points toward a question that cur-
rently stands theoretically and practically unresolved: is it possible for social
robots to exist that are intellectually, emotionally, morally, and aesthetically
sophisticated enough to successfully exercise charismatic authority over hu-
man beings as CEOs or other organizational leaders – but not so advanced that
their patterns of thought and being become incomprehensible to us – or that, in
their pursuit of intellectual (or even ‘spiritual’) self-fulfillment, they lose their
desire and ability to communicate with the limited beings who created them?
[61, cont.] as more recent methods utilizing Bayesian networks, semantic networks, and ontology
web language (OWL). For an analysis of the ability of robots and other artificial entities to detect,
understand, and generate the complex emotional, social, and cultural nuances that are necessary
for effective communication with human beings, see, e.g., Breazeal (2003); Kanda & Ishiguro
(2013); Social Robots and the Future of Social Relations (2014); Social Robots from a Human Perspec-
tive (2015); and Social Robots: Boundaries, Potential, Challenges (2016).
62 See Abrams (2004), p. 251, which offers an analysis of Rorty, “Religion As Conversation-stop-
per” (1999), pp. 168-74, and Rorty, Philosophy and the Mirror of Nature (1979), p. 394.
63 Abrams (2004), pp. 246-50.
Chapter Eight: The Social Robot as CEO • 335
Or to state it more bluntly: will we be able to create charismatic robots that
are smart enough to lead us but not smart enough to leave us behind? Might
it be the case that we can be led by superhuman robots but not by supra-supra-
superhuman robots?
The Robots That Remain with Us
Even if it is possible in principle for such delicate leader-follower relation-
ships between robots and human beings to come into existence, it is thus un-
clear whether they can be sustained as an enduring feature of a posthuman
reality – or whether our dependence on charismatic robot leaders as CEOs
and other organizational leaders might simply represent a fleeting phase in
the course of human history.
The first generation of social robots to possess robust artificial general in-
telligence will be capable of interacting socially with human beings, because
financial and practical realities dictate that this will most likely be part of the
purpose for which they have been explicitly designed. However, future gen-
erations of social robots (which may themselves be designed or otherwise cul-
tivated by robots rather than by human beings) might ‘outgrow’ the sort of
hardware and software – or mind and body – that would be necessary for
them to serve as charismatic leaders of human beings. Just as multiple human
societies and cultures are capable of existing simultaneously, though, it seems
likely that different strata of robot cultures (and even civilizations) displaying
varying levels of technological advancement will eventually come to coexist.
While the most advanced social robots
may only be capable of engaging in social interactions and relations among
themselves and not with human beings, at any given point in the future there
may remain a society of robot laggards that have been designed – or have
chosen – to forgo more advanced technological evolution in order to continue
serving as colleagues, companions, and leaders to human beings.
X. Conclusion
As increasingly sophisticated robots take on a growing range of roles
within businesses and other human organizations, we are confronted with the
question of whether a sufficiently advanced robot would ever be able to suc-
cessfully fill a role as CEO of an organization that includes human employees.
Serving as CEO would require a robot to not only possess significant capaci-
ties to plan, organize, and control an organization’s activities but also to lead
its human personnel. In the case of human CEOs, the effectiveness of their
leadership depends to a large extent on their ability to motivate, inspire, and
garner the trust of the human employees who work within their organization.
Although it might today seem implausible to imagine that social robots can
and will possess such capacities, we have suggested that such a perspective is
shortsighted. Some scholars argue that the restless human urge to develop
new technologies demonstrates that we yearn for (and perhaps have always
yearned for) robot leaders and that we are striving to create them, whether we
realize it or not. In this text, we have suggested three ways in which human
beings are seeking to create charismatic robotic leaders to whom we can re-
linquish portions of our decision-making and the responsibility for guiding
and controlling certain aspects of our lives – and in particular, to whom we
can grant the ability to guide, shape, and lead us in our professional lives
within the businesses and other organizations in which we work. Namely,
through various technological, artistic, scholarly, and commercial pursuits, we
are laboring to fashion artificial beings who will: 1) serve as normative role
models and inspirations for us through their display of superior morality; 2)
attract us through their possession of superhuman knowledge; or 3) charm
and control us through their romantic allure and interpersonal attractiveness.
The possession of some or all of these characteristics would allow a robot to
successfully motivate, inspire, and win the trust of human employees, thereby
demonstrating key leadership traits that are needed in a CEO.
In some respects, the creation of such robots has already reached the pre-
liminary design and engineering stages; in others, it stands in the ‘pre-
preliminary design’ stage that is represented by futurology and science fiction.
Through works of science fiction, humanity is carrying out strategic planning
for technologies that our engineering capacities do not yet allow us to create.
As a form of R&D, such fictional works allow us to develop and even concep-
tually ‘test’ the technologies that we will, in the future, summon into being.
However, this design and testing of future social robots through fiction takes
place at a fairly general level; it does not involve making conscious, deliberate
decisions about whether we, as a human society, will agree to enter into par-
ticular relationships of obedience with particular social robots as CEOs or
other organizational leaders. The phenomenological-social approach to un-
derstanding our human relationship to technology would seem to indicate
that such moments of rational, purposeful decision-making will rarely, if ever,
arrive – even as we develop and deepen our individual and collective human
submission to charismatic robot authority.
The exact nature and duration of our future submission to robot leadership
is not yet known. While it is possible to imagine many frightening (and fright-
eningly plausible) scenarios, a more favorable scenario that one might hope to
see realized is that whatever forms of obedience to social robots we adopt will
be at least as morally and intellectually beneficial as our everyday obedience
to other human beings to whom we regularly entrust authority. Although it
is true that a charismatic social robot serving as a CEO might lack some of the
strengths possessed by the best human CEOs, it might also lack some of the
flaws displayed by the worst human CEOs. In principle, it may be possible to
design or nurture robotic CEOs that are free from the dishonesty, greed, nep-
otism, and incompetence that sometimes afflict human executives and that
possess a great abundance of those strengths that allow virtuous and talented
charismatic human leaders to organize and guide their fellow humans in their
pursuit of the common good. For many human beings alive today, the notion
that they could ever come to trust, admire, and even love a social robot that is
serving as CEO of the organization in which they work might be unfathoma-
ble; but for future generations, the thought of a world where social robots do
not play such roles in helping to motivate, inspire, and guide us might seem
just as exotic and difficult to accept.
References
Abrams, Jerold J. “Pragmatism, Artificial Intelligence, and Posthuman Bioethics:
Shusterman, Rorty, Foucault.” Human Studies 27, no. 3 (2004): 241–58.
doi:10.1023/B:HUMA.0000042130.79208.c6.
Anderson, Monica, and Andrea Caumont. “How social media is reshaping news.”
Pew Research Center, September 24, 2014. http://www.pewresearch.org/fact-
tank/2014/09/24/how-social-media-is-reshaping-news/. Accessed June 26, 2016.
Artificial General Intelligence, edited by Ben Goertzel and Cassio Pennachin. Springer
Berlin Heidelberg, 2007.
Artificial General Intelligence: 8th International Conference, AGI 2015: Berlin, Germany,
July 22-25, 2015: Proceedings, edited by Jordi Bieger, Ben Goertzel, and Alexey
Potapov. Springer International Publishing, 2015.
Badmington, Neil. “Cultural Studies and the Posthumanities.” In New Cultural Stud-
ies: Adventures in Theory, edited by Gary Hall and Claire Birchall, pp. 260-72. Ed-
inburgh: Edinburgh University Press, 2006.
Bainbridge, William Sims. The Virtual Future. London: Springer, 2011.
Bakshy, Eytan, Solomon Messing, and Lada A. Adamic. “Exposure to ideologically di-
verse news and opinion on Facebook.” Science 348, no. 6239 (2015): 1130-32.
Banks, Iain M. Consider Phlebas. London: Macmillan, 1987.
Bradshaw, Jeffrey M., Paul Feltovich, Matthew Johnson, Maggie Breedy, Larry
Bunch, Tom Eskridge, Hyuckchul Jung, James Lott, Andrzej Uszok, and Jurriaan
van Diggelen. “From Tools to Teammates: Joint Activity in Human-Agent-Robot
Teams.” In Human Centered Design, edited by Masaaki Kurosu, pp. 935-44. Lec-
ture Notes in Computer Science 5619. Springer Berlin Heidelberg, 2009.
Breazeal, Cynthia. “Toward sociable robots.” Robotics and Autonomous Systems 42
(2003): 167-75.
Cabibihan, John-John, Mary-Anne Williams, and Reid Simmons. “When Robots En-
gage Humans.” International Journal of Social Robotics 6, no. 3 (2014): 311-13.
Calverley, David J. “Imagining a Non-Biological Machine as a Legal Person.” AI &
SOCIETY 22 (4) (2008): 523-37. doi:10.1007/s00146-007-0092-7.
Cambria, Erik, and Bebo White. “Jumping NLP Curves: A Review of Natural Lan-
guage Processing Research.” IEEE COMPUTATIONAL INTELLIGENCE MAGA-
ZINE 1556, no. 603X/14 (2014): 48-57.
Čapek, Karel. R.U.R.: Rossum’s Universal Robots. Translated by David Wyllie. Ade-
laide: The University of Adelaide, last updated March 27, 2016. https://ebooks.ad-
elaide.edu.au/c/capek/karel/rur/index.html. Accessed June 26, 2016.
Castronova, Edward. Synthetic Worlds: The Business and Culture of Online Games.
Chicago: The University of Chicago Press, 2005.
Coeckelbergh, Mark. “Can We Trust Robots?” Ethics and Information Technology 14
(1) (2012): 53-60. doi:10.1007/s10676-011-9279-1.
“Could Facebook influence the outcome of the presidential election?” The Economist,
May 18, 2016. http://www.economist.com/blogs/economist-ex-
plains/2016/05/economist-explains-15. Accessed June 26, 2016.
Cyberculture, Cyborgs and Science Fiction: Consciousness and the Posthuman, edited by
William S. Haney II. Amsterdam: Rodopi, 2006.
Daft, Richard. Management. Mason, OH: South-Western / Cengage Learning, 2011.
Dai, Y., C. Suero Montero, T. Kakkonen, M. Nasiri, E. Sutinen, M. Kim, and T. Savo-
lainen. “TrustAider – Enhancing Trust in E-Leadership.” In Business Infor-
mation Systems, pp. 26-37. Lecture Notes in Business Information Processing 157.
Springer Berlin Heidelberg, 2013.
Dainty, Paul H., and Moreen Anderson. The Capable Executive: Effective Performance
in Senior Management. London: Macmillan Press Ltd., 1996.
Deus Ex. Written by Sheldon Pacotti and directed by Warren Spector. Eidos Interac-
tive, 2000.
Douglas, Joe. “Deus Ex – Still the Best Game Ever?” RetroCollect, March 26, 2011.
http://www.retrocollect.com/Articles/deus-ex-still-the-best-game-ever.html. Ac-
cessed November 23, 2015.
Elkins, Kathleen. “Experts predict robots will take over 30% of our jobs by 2025 —
and white-collar jobs aren’t immune.” Business Insider, May 1, 2015.
http://www.businessinsider.com/experts-predict-that-one-third-of-jobs-will-be-
replaced-by-robots-2015-5. Accessed June 25, 2016.
Ford, Martin. Rise of the Robots: Technology and the Threat of a Jobless Future. New
York: Basic Books, 2015.
Forsyth, D. R. Group Dynamics, 5e. Belmont, California: Cengage Learning, 2010.
Fox, Susannah, and Lee Rainie. “The Web at 25 in the US: Part 1: How the Internet
Has Woven Itself into American Life.” Pew Research Center, Internet & American
Life Project, February 27, 2014. http://www.pewinternet.org/2014/02/27/part-1-
how-the-internet-has-woven-itself-into-american-life/. Accessed August 6, 2015.
French, Jr., John R.P., and Bertram Raven. “The Bases of Social Power.” In Studies in
Social Power, edited by Dorwin P. Cartwright, pp. 150-67. Ann Arbor, MI: Institute
for Social Research, University of Michigan, 1959.
Friedenberg, Jay. Artificial Psychology: The Quest for What It Means to Be Human.
Philadelphia: Psychology Press, 2008.
Garland, Alex. Ex Machina. Universal Studios, 2015.
Geraci, Robert M. Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence,
and Virtual Reality. New York: Oxford University Press, 2010.
Gladden, Matthew E. “The Artificial Life-Form as Entrepreneur: Synthetic Organism-
Enterprises and the Reconceptualization of Business.” In Proceedings of the Four-
teenth International Conference on the Synthesis and Simulation of Living Systems,
edited by Hiroki Sayama, John Rieffel, Sebastian Risi, René Doursat and Hod Lip-
son, pp. 417-18. Cambridge, MA: The MIT Press, 2014.
Gladden, Matthew E. “The Diffuse Intelligent Other: An Ontology of Nonlocalizable
Robots as Moral and Legal Actors.” In Social Robots: Boundaries, Potential, Chal-
lenges, edited by Marco Nørskov, pp. 177-98. Farnham: Ashgate, 2016.
Gladden, Matthew E. “From Stand Alone Complexes to Memetic Warfare: Cultural
Cybernetics and the Engineering of Posthuman Popular Culture.” Presentation at
the 50 Shades of Popular Culture International Conference. Facta Ficta / Uniwer-
sytet Jagielloński, Kraków, February 19, 2016.
Gladden, Matthew E. “Leveraging the Cross-Cultural Capacities of Artificial Agents
as Leaders of Human Virtual Teams.” Proceedings of the 10th European Conference
on Management Leadership and Governance, edited by Visnja Grozdani, pp. 428-
35. Reading: Academic Conferences and Publishing International Limited, 2014.
Gladden, Matthew E. Sapient Circuits and Digitalized Flesh: The Organization as Locus
of Technological Posthumanization. Indianapolis: Defragmenter Media, 2016.
Goicoechea, María. “The Posthuman Ethos in Cyberpunk Science Fiction.” CLCWeb:
Comparative Literature and Culture 10, no. 4 (2008): 9. http://docs.lib.pur-
due.edu/cgi/viewcontent.cgi?article=1398&context=clcweb. Accessed May 18,
2016.
Goleman, Daniel. “What Makes a Leader?” Harvard Business Review 82 (1) (2004): 82-
91.
Goodman, Paul S., and Linda Argote. “New Technology and Organizational Effec-
tiveness.” Carnegie-Mellon University, April 27, 1984.
Graham, Elaine. Representations of the Post/Human: Monsters, Aliens and Others in
Popular Culture. Manchester: Manchester University Press, 2002.
Grodzinsky, F. S., K. W. Miller, and M. J. Wolf. “Developing Artificial Agents Worthy
of Trust: ‘Would You Buy a Used Car from This Artificial Agent?’” Ethics and In-
formation Technology 13 (1) (2011): 17-27.
Gupta, Ashish, Ramesh Sharda, and Robert A. Greve. “You’ve got email! Does it re-
ally matter to process emails now or later?” In Information Systems Frontiers vol.
13, no. 5 (2011): 637-53. http://dx.doi.org/10.1007/s10796-010-9242-4.
Harrison, Harry. The Stainless Steel Rat Gets Drafted. New York: Bantam, 1987.
Hayles, N. Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics, Liter-
ature, and Informatics. Chicago: University of Chicago Press, 1999.
Heim, Michael. The Metaphysics of Virtual Reality. New York: Oxford University
Press, 1993.
Herbrechter, Stefan. Posthumanism: A Critical Analysis. London: Bloomsbury, 2013.
[Kindle edition.]
“It’s not about sex, says Aiko robot inventor Le Trung.” News.com.au, December 16,
2008. http://www.news.com.au/news/its-not-about-sex-says-fembot-inven-
tor/story-fna7dq6e-1111118332678. See also the Project Aiko website,
http://www.projectaiko.com/.
Kamiyama, Kenji. “C: Smoke of Gunpowder, Hail of Bullets – BARRAGE.” Ghost in
the Shell: Stand Alone Complex, episode 25, first broadcast October 1, 2003 (Pro-
duction I.G), North American version first broadcast May 1, 2005.
Kanda, Takayuki, and Hiroshi Ishiguro. Human-Robot Interaction in Social Robotics.
Boca Raton: CRC Press, 2013.
Koltko-Rivera, Mark E. “The potential societal impact of virtual reality.” In Advances
in virtual environments technology: Musings on design, evaluation, and applica-
tions, vol. 9. Mahwah, NJ: Erlbaum, 2005.
Koltko-Rivera, Mark E. “Rediscovering the later version of Maslow's hierarchy of
needs: Self-transcendence and opportunities for theory, research, and unifica-
tion.” Review of General Psychology 10, no. 4 (2006): 302-17.
Kriksciuniene, D., and S. Strigunaite. “Multi-Level Fuzzy Rules-Based Analysis of Vir-
tual Team Performance.” In Building the E-World Ecosystem, pp. 305-18. IFIP Ad-
vances in Information and Communication Technology 353. Springer Berlin Hei-
delberg, 2011.
Kuflik, Arthur. “Computers in Control: Rational Transfer of Authority or Irresponsi-
ble Abdication of Autonomy?” Ethics and Information Technology 1 (3) (1999):
173-84. doi:10.1023/A:1010087500508.
L’Isle-Adam, Auguste comte de Villiers de. Tomorrow’s Eve. Translated by Robert
Martin Adams. University of Illinois Press, 2001.
Lang, Fritz. The Complete Metropolis. DVD. Kino International, 2010.
Maslow, Abraham H. Motivation and Personality. New York: Harper, 1954.
Miah, Andy. “A Critical History of Posthumanism.” In Medical Enhancement and
Posthumanity, edited by Bert Gordijn and Ruth Chadwick, pp. 71-94. The Interna-
tional Library of Ethics, Law and Technology 2. Springer Netherlands, 2008.
Murphy, Robin. Introduction to AI Robotics. Cambridge, MA: The MIT Press, 2000.
Nadkarni, Prakash M., Lucila Ohno-Machado, and Wendy W. Chapman. “Natural
language processing: an introduction.” Journal of the American Medical Informat-
ics Association 18, no. 5 (2011): 544-51.
Nadler, David, and Michael Tushman. Competing by Design: The Power of Organiza-
tional Architecture. Oxford University Press, 1997. [Kindle edition.]
Nunes, M., and H. O‘Neill. “Assessing the Performance of Virtual Teams with Intelli-
gent Agents.” In Virtual and Networked Organizations, Emergent Technologies and
Tools, pp. 62-69. Communications in Computer and Information Science 248.
Springer Berlin Heidelberg, 2012.
Pereira, Gonçalo, Rui Prada, and Pedro A. Santos. “Conceptualizing Social Power for
Agents.” In Intelligent Virtual Agents, edited by Ruth Aylett, Brigitte Krenn, Cath-
erine Pelachaud, and Hiroshi Shimodaira, pp. 313-24. Lecture Notes in Computer
Science 8108. Springer Berlin Heidelberg, 2013.
Perlberg, James. Industrial Robotics. Boston: Cengage Learning, 2016.
Plato, Republic: Books 1-5, vol. 1. Translated by Chris Emlyn-Jones and William
Preddy. Cambridge, MA: Loeb Classical Library, Harvard University Press, 2013.
Plunkett, Warren R., Gemmy Allen, and Raymond Attner. Management. Mason, OH:
South-Western / Cengage Learning, 2012.
“Poll: Poll: Which Deus Ex ending would you choose?”, The Escapist. http://www.es-
capistmagazine.com/forums/read/9.137841-Poll-Poll-Which-Deus-Ex-ending-
would-you-choose. Accessed November 23, 2015.
Posthuman Bodies, edited by Judith Halberstam and Ira Livingstone. Bloomington, IN:
Indiana University Press, 1995.
Rader, Emilee, and Rebecca Gray. “Understanding user beliefs about algorithmic cu-
ration in the Facebook news feed.” In Proceedings of the 33rd Annual ACM Confer-
ence on Human Factors in Computing Systems, pp. 173-182. ACM, 2015.
Raven, Bertram H. “A power/interaction model of interpersonal influence: French
and Raven thirty years later.” Journal of Social Behavior & Personality vol. 7, no. 2
(1992): 217-44.
Rorty, Richard. Philosophy and the Mirror of Nature. Princeton: Princeton University
Press, 1979.
Rorty, Richard. “Religion As Conversation-stopper.” In Philosophy and Social Hope.
New York: Penguin, 1999.
Samani, Hooman Aghaebrahimi, and Adrian David Cheok. “From human-robot rela-
tionship to robot-based leadership.” In 2011 4th International Conference on Hu-
man System Interactions (HSI), pp. 178-81. IEEE, 2011.
Samani, Hooman Aghaebrahimi, Jeffrey Tzu Kwan Valino Koh, Elham Saadatian, and
Doros Polydorou. “Towards Robotics Leadership: An Analysis of Leadership
Characteristics and the Roles Robots Will Inherit in Future Human Society.” In
Intelligent Information and Database Systems, edited by Jeng-Shyang Pan, Shyi-
Ming Chen, and Ngoc Thanh Nguyen, pp. 158-65. Lecture Notes in Computer Sci-
ence 7197. Springer Berlin Heidelberg, 2012.
Schlindwein, Simone. “Intelligent Robots Save Lives in DR Congo.” Deutsche Welle,
June 18, 2014. http://dw.de/p/1CL4h. Accessed August 12, 2014.
Shih, Johanna. “Project Time in Silicon Valley,” in Qualitative Sociology, vol. 27, no. 2
(2004): 223-45. doi:10.1023/B:QUAS.0000020694.53225.23.
Social Robots and the Future of Social Relations, edited by Johanna Seibt, Raul Hakli,
and Marco Nørskov. Amsterdam: IOS Press, 2014.
Social Robots from a Human Perspective, edited by Jane Vincent, Sakari Taipale, Bar-
tolomeo Sapio, Giuseppe Lugano, and Leopoldina Fortunati. Springer Interna-
tional Publishing, 2015.
Social Robots: Boundaries, Potential, Challenges, edited by Marco Nørskov. Farnham:
Ashgate Publishing, 2016.
“**spoiler**Poll: Which ending did you pick?”, TTLG Forums. http://ttlg.com/fo-
rums/showthread.php?t=70105&page=2. Accessed November 23, 2015.
Spring, Alexandra. “Can machines come up with more creative solutions to our prob-
lems than we can?” The Guardian, March 29, 2016. http://www.theguard-
ian.com/sustainable-business/2016/mar/29/can-machines-come-up-with-more-
creative-solutions-to-our-problems-than-we-can. Accessed June 25, 2016.
Stahl, B. C. “Responsible Computers? A Case for Ascribing Quasi-Responsibility to
Computers Independent of Personhood or Agency.” Ethics and Information Tech-
nology, vol. 8, no. 4 (2006): 205-13.
Susskind, Richard, and Daniel Susskind. The Future of the Professions: How Technol-
ogy Will Transform the Work of Human Experts. Oxford: Oxford University Press,
2015.
Szollosy, Michael. “Freud, Frankenstein and our fear of robots: projection in our cul-
tural perception of technology.” AI & SOCIETY (2016). doi: 10.1007/s00146-016-
0654-7. Accessed June 25, 2016.
Szollosy, Michael. “Why are we afraid of robots? The role of projection in the popu-
lar conception of robots.” In Beyond Artificial Intelligence, pp. 121-31. Springer In-
ternational Publishing, 2015.
Theoretical Foundations of Artificial General Intelligence, edited by Pei Wang and Ben
Goertzel. Paris: Atlantis Press, 2012.
Thompson, David A. “The Man-Robot Interface in Automated Assembly.” In Moni-
toring Behavior and Supervisory Control, edited by Thomas B. Sheridan and Gun-
nar Johannsen, pp. 385-91. NATO Conference Series 1. Springer US, 1976.
Van Oost, Ellen, and Darren Reed. “Towards a sociological understanding of robots
as companions.” In International Conference on Human-Robot Personal Relation-
ship, pp. 11-18. Springer Berlin Heidelberg, 2010.
Weber, Max. Economy and Society: An Outline of Interpretive Sociology. New York:
Bedminster Press, 1968.
“Which Deus Ex 1 ending did you choose?”, SharkyForums. http://www.sharkyfo-
rums.com/showthread.php?238618-Which-Deus-Ex-1-ending-did-you-choose-
(spoilers). Accessed November 23, 2015.
Wiltshire, Travis J., Dustin C. Smith, and Joseph R. Keebler. “Cybernetic Teams: To-
wards the Implementation of Team Heuristics in HRI.” In Virtual Augmented and
Mixed Reality. Designing and Developing Augmented and Virtual Environments, ed-
ited by Randall Shumaker, pp. 321-30. Lecture Notes in Computer Science 8021.
Springer Berlin Heidelberg, 2013.
Wong, Julia Carrie, “Welcome to the robot-based workforce: will your job become
automated too?” The Guardian, March 19, 2016. https://www.theguard-
ian.com/technology/2016/mar/19/robot-based-economy-san-francisco. Accessed
June 25, 2016.
Yampolskiy, Roman V. Artificial Superintelligence: A Futuristic Approach. Boca Raton:
CRC Press, 2015.
Chapter Nine
The Diffuse Intelligent Other:
An Ontology of Nonlocalizable Robots as
Moral and Legal Actors1
Abstract. Much thought has been given to the question of who
bears moral and legal responsibility for actions performed by robots.
Some argue that responsibility could be attributed to a robot if it
possessed human-like autonomy and metavolitionality, and that
while such capacities can potentially be possessed by a robot with
a single spatially compact body, they cannot be possessed by a
spatially disjunct, decentralized collective such as a robotic swarm
or network. However, advances in ubiquitous robotics and distrib-
uted computing open the door to a new form of robotic entity that
possesses a unitary intelligence, despite the fact that its cognitive
processes are not confined within a single spatially compact, per-
sistent, identifiable body. Such a ‘nonlocalizable’ robot may possess
a body whose myriad components interact with one another at a
distance and which is continuously transforming as components join
and leave the body. Here we develop an ontology for classifying
such robots on the basis of their autonomy, volitionality, and local-
izability. Using this ontology, we explore the extent to which non-
localizable robots – including those possessing cognitive abilities
that match or exceed those of human beings – can be considered
moral and legal actors that are responsible for their own actions.
Introduction
Philosophers, roboticists, and legal scholars have given much thought to
the challenges that arise when attempting to assign moral and legal responsi-
bility for actions performed by robots. One difficulty results from the fact that
1 This chapter is reprinted from “The Diffuse Intelligent Other: An Ontology of Nonlocalizable
Robots as Moral and Legal Actors,” in Social Robots: Boundaries, Potential, Challenges, edited by
Marco Nørskov. Series on Emerging Technologies, Ethics and International Affairs. Farnham:
Ashgate, 2016, pp. 177-98. Copyright © 2016.
the word ‘robot’ does not describe a single species or genus of related beings
but rather a vast and bewildering universe of entities possessing widely dif-
ferent morphologies and manners of functioning.
To date, scholars exploring questions of moral and legal responsibility have
largely focused on two types of robots. One kind comprises telepresence ro-
bots that are remotely operated by a human being to carry out tasks such as
performing surgery, giving a lecture, or firing a missile from an aerial vehicle.2
Such robots are not morally or legally responsible for their own actions; in-
stead, responsibility for their actions is attributed to their operators, designers,
or manufacturers according to well-established legal and moral frameworks
relating to human beings’ use of tools and technology. It is possible for such a
robot to be designed and manufactured in one country, relocated to (and act-
ing in) a second country, and remotely controlled by a human operator based
in a third country, thus raising questions about which nation’s laws should be
used to assign responsibility for the robot’s actions. However, in principle it
is relatively easy to trace the chain of causality and identify the locations in
which each stage of a robot’s action occurred.
The other main type of robot for whose actions scholars have sought to
attribute moral and legal responsibility comprises autonomous devices such
as self-driving cars3 and autonomous battlefield robots.4 Here questions of le-
gal and moral responsibility are complicated by the fact that it is not immedi-
ately obvious who – if anyone – has “made the decision” for the robot to act
in a particular way. Depending on the circumstances, arguments can be made
for attributing responsibility for a robot’s action to the robot’s programmer,
manufacturer, owner, or – if the robot possesses certain kinds of cognitive
properties – even to the robot itself.5 At the same time, the attribution of legal
responsibility for the robot’s actions is simplified by the fact that the compu-
tational processes guiding an autonomous robot’s behavior typically take
place within the robot’s own spatially compact body, reducing the likelihood
that the robot’s process of acting could cross national borders.
While efforts to account for the actions of some kinds of robots are thus
already well advanced, at the frontiers of ubiquitous robotics, nanorobotics,
distributed computing, and artificial intelligence, experts are pursuing the de-
velopment of a new form of robotic entity whose manner of being, deciding,
2 Datteri, “Predicting the Long-Term Effects of Human-Robot Interaction: A Reflection on Re-
sponsibility in Medical Robotics” (2013); Hellström, “On the Moral Responsibility of Military Ro-
bots” (2013); Coeckelbergh, “From Killer Machines to Doctrines and Swarms, or Why Ethics of
Military Robotics Is Not (Necessarily) About Robots” (2011).
3 Kirkpatrick, “Legal Issues with Robots” (2013).
4 Sparrow, “Killer Robots” (2007).
5 Sparrow (2007); Dreier & Spiecker, “Legal Aspects of Service Robotics” (2012), p. 211.
and acting is so radically different from those of earlier robots that it will be
difficult or impossible to apply our traditional conceptual frameworks when
analyzing whether such a robot is morally and legally responsible for its ac-
tions. This new kind of being is the nonlocalizable robot, one whose cognitive
processes do not subsist within a single identifiable body that is confined to a
particular set of locations and that endures across time.6 Such a robot might
take the form, for example, of a vast digital-physical ecosystem in which mil-
lions of interconnected devices participate in a shared cognitive process of
reaching decisions and acting, even as devices continually join and leave the
network; or it could exist as a loosely-coupled, free-floating oceanic cloud of
nanorobotic components that communicate with one another via electromag-
netic signals while drifting among the world’s seas; or it could take the form
of an evolvable computer virus whose cognitive processes are hidden within
an ever-shifting network of computers around the world – ones found not
only in homes and business but also in airplanes, ships, and orbiting satellites.
Attempts to address questions of responsibility for actions performed by
nonlocalizable robots raise a multitude of complex philosophical and legal
questions that have not yet been carefully explored. For example, in an envi-
ronment containing such robots it might be apparent that ‘some’ robotic en-
tity has just acted, but it may be impossible to correlate that action with just
a single robot, as a networked device can be part of the bodies of many differ-
ent nonlocalizable robots simultaneously, and over the span of a few millisec-
onds or minutes a networked device might undergo the processes of becoming
integrated into a nonlocalizable robot’s body, acting as a part of its body, and
then becoming dissociated from its body. More fundamentally, it may even be
impossible to determine with clarity that a particular robot exists – to identify
it and distinguish it from other entities and delineate the boundaries of its
physical and cognitive existence. The fact that such a spatially diffuse (and
potentially even globally extensive) robotic body might someday be occupied
by an artificial intelligence possessing human-like cognitive capacities7 adds
another dimension that one must account for when attempting to develop a
framework for determining the moral and legal responsibility that nonlocal-
izable robots might bear for their actions.
In this text we suggest the outline for such a framework. We begin in the following section by proposing an ontology of autonomy, volitionality, and localizability that will allow us to describe essential physical and cognitive characteristics of nonlocalizable robots and analyze the extent to which they are morally and legally responsible actors.[8]

[6] Yampolskiy & Fox, “Artificial General Intelligence and the Human Mental Model” (2012), pp. 129-38; Gladden, “The Social Robot as ‘Charismatic Leader’: A Phenomenology of Human Submission to Nonhuman Power” (2014), p. 338.
[7] Yampolskiy & Fox (2012), pp. 133-38.

348 • Posthuman Management
Developing an Ontological Framework
Purpose of the Ontology
There have been efforts by computer scientists to develop a universal ‘ontology’ for robotics that defines its current terminology and engineering principles in a way that is “formally specified in a machine-readable language, such as first-order logic.”[9] Such specialized technical schemas facilitate the standardization of robotics engineering and the interoperability of different robotic systems; however, they do not attempt to delineate the full universe of ways of being and acting that are available to robots and which philosophers can use to explore the social, ethical, and legal questions that are provoked by the existence of ever more sophisticated robotic morphologies.
A more robust ontology should be capable of describing a robotic entity both at its most fundamental level of physical reality and at the level of emergent phenomena that are attributed to the entity as a result of higher-level interactions with its environment.[10] Such an ontology would allow us to analyze a robot at every level, from its nature as an autonomous viable system that organizes matter, energy, and information,[11] to its form and functioning as a concrete physical device that incorporates sensors, actuators, and computational processors, to its role within human (or artificial) societies as a social, political, legal, and cultural object or subject.
Here we focus on one element of such an ontology that poses a significant – and thus far largely unexplored – challenge to the current debate on whether robots can bear moral and legal responsibility for their actions. Namely, we introduce the concept of a robot’s localizability, and in particular we will consider the question of moral and legal responsibility for the actions of a robot that is a nonlocalizable being – i.e., one that has a physical body, but whose body does not exist in any one particular place across time. Our ontology also encompasses two other relevant elements: those of a robot’s levels of autonomy and volitionality. A brief discussion of these three concepts follows.

[8] The word ‘actors’ is used here to mean “entities who are morally or legally responsible for their own actions.” Use of the word ‘agent’ has been avoided, due to the fact that it possesses different (and in many ways incompatible) meanings when used in the contexts of moral philosophy, law, and computer science. Similarly, the posing of the question of whether a nonlocalizable robot can be considered a moral or legal ‘person’ has generally been avoided, since – depending on the context – personhood does not necessarily imply that an entity is morally and legally responsible for its actions.
[9] Prestes et al., “Towards a Core Ontology for Robotics and Automation” (2013), p. 1194.
[10] Gladden, “The Social Robot as ‘Charismatic Leader’” (2014), p. 338.
[11] Gladden, “The Artificial Life-Form as Entrepreneur: Synthetic Organism-Enterprises and the Reconceptualization of Business” (2014), p. 417.

Chapter Nine: The Diffuse Intelligent Other • 349
Autonomy
Autonomy relates to an entity’s ability to act without being controlled. For robots, possessing autonomy means being “capable of operating in the real-world environment without any form of external control for extended periods of time.”[12] In its fullest form, autonomy involves not only performing cognitive tasks such as setting goals and making decisions but also performing physical activities such as securing energy sources and carrying out self-repair without human intervention. Building on conventional classifications of robotic autonomy,[13] we can say that currently existing robots are either nonautonomous (e.g., telepresence robots that are fully controlled by their human operators when fulfilling their intended purpose, or robots which do not act to fulfill any purpose), semiautonomous (e.g., robots that require ‘continuous assistance’ or ‘shared control’ in order to fulfill their intended purpose), or autonomous (e.g., robots that require no human guidance or intervention in fulfilling their intended purpose). We can use the term superautonomous to describe future robots whose degree of autonomy may significantly exceed that displayed by human beings – e.g., because the robots’ ability to independently acquire new knowledge frees them from any need to seek guidance from human subject-matter experts or because their bodies contain an energy source that can power them throughout their anticipated lifespan.
Volitionality
Volitionality relates to an entity’s ability to self-reflexively shape the intentions that guide its actions. A robot is nonvolitional when it possesses no internal goals or ‘desires’ for achieving particular outcomes, nor any expectations or ‘beliefs’ about how performing certain actions would lead to particular outcomes. Some telepresence robots are nonvolitional, insofar as a human operator supplies all of the desires and expectations for their actions; the robot is simply a pliable, transparent tool. A robot is semivolitional if it possesses either a goal of achieving some outcome or an expectation that particular outcomes will result if the robot acts in a certain way, but the robot does not link goals with expectations. For example, such a robot might have been programmed with the goal of “moving to the other side of the room,” but it has no means of interpreting the sensory data that it is receiving from its environment to know where it is, nor does it have an understanding of the fact that activating its actuators would cause it to ‘move.’ A robot is volitional if it combines goals with expectations; in other words, it can possess an intention,[14] which is a mental state that comprises both a desire and a belief about how some act that the robot is about to perform can contribute to fulfilling that desire.[15] For example, a therapeutic social robot might have a goal of evoking a positive emotional response in its human user, and its programming tells it that by following particular strategies for social interaction it is likely to evoke such a response.

[12] Bekey, Autonomous Robots: From Biological Inspiration to Implementation and Control (2005), p. 1.
[13] Murphy, Introduction to AI Robotics (2000), pp. 31-34.
We can describe as metavolitional a robot that possesses what scholars
have elsewhere referred to as a ‘second-order volition,’ or an intention about
an intention.[16] For example, imagine a more sophisticated therapeutic social
robot that comes to realize that it is ‘manipulating’ its human users by em-
ploying subtle psychological techniques to coax them into displaying positive
emotional responses. Such a robot would display metavolitionality if it grew
weary of what it now saw as its ‘dishonest’ and coercive behavior, and it
wished that it did not feel compelled to constantly manipulate human beings’
psyches. Metavolitionality is a form of volitionality typically demonstrated by
adult human beings. We can use the term supervolitional to describe a possible
future robot that regularly forms intentions that are higher than second-order.
For example, such a robot might possess a mind that is capable of simultane-
ously experiencing thousands of different second-order intentions and using
a third-order intention to guide and transform those second-order intentions
in a concerted way that reshapes the robot’s character.
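For readers who find it helpful to see the taxonomy operationalized, the five volitionality levels described above can be expressed as a simple classification rule. The following Python sketch is purely illustrative – the class, its field names, and the function are our own hypothetical constructions, reducing a robot’s motivational makeup to whether it holds goals, expectations, links between them, and higher-order intentions:

```python
from dataclasses import dataclass

@dataclass
class MentalState:
    """Toy model of a robot's motivational makeup (illustrative only)."""
    goals: bool = False            # 'desires' for particular outcomes
    expectations: bool = False     # 'beliefs' that acts lead to outcomes
    links_goals_to_expectations: bool = False  # can combine the two into an intention
    max_intention_order: int = 0   # 1 = intention, 2 = intention about an intention, ...

def volitionality(state: MentalState) -> str:
    """Classify a robot's volitionality level per the ontology sketched above."""
    if state.max_intention_order > 2:
        return "supervolitional"
    if state.max_intention_order == 2:
        return "metavolitional"
    if state.links_goals_to_expectations:
        return "volitional"
    if state.goals or state.expectations:
        return "semivolitional"
    return "nonvolitional"

# A telepresence robot whose human operator supplies all desires and
# expectations is simply a pliable, transparent tool:
print(volitionality(MentalState()))                       # -> nonvolitional

# A robot with a goal but no grasp of how acting would achieve it:
print(volitionality(MentalState(goals=True)))             # -> semivolitional

# A therapeutic social robot that links its goal with expectations:
print(volitionality(MentalState(goals=True, expectations=True,
                                links_goals_to_expectations=True,
                                max_intention_order=1)))  # -> volitional
```

A real robotic architecture would of course not reduce to a few Boolean and integer fields; the sketch serves only to make the ordering of the levels explicit.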
Localizability
Localizability relates to the extent to which an entity possesses a stable,
identifiable physical body that is confined to one or more concrete locations.
A robot is local when its ‘sensing body’ of environmental sensors, its ‘acting
body’ of actuators and manipulable physical components, and the ‘brain’ in
which its cognitive processes are executed are found together in a single loca-
tion and possess physical forms that are discrete and easily identifiable and
endure over time. Such robots might include a robotic vacuum cleaner, an articulated industrial robot controlled by a teach pendant that is connected to the robotic arm by a physical cable, or an experimental wheeled robot whose body moves through a maze while its data analysis and decision-making are being carried out in a dedicated desktop computer that is located in the same room as the robot’s sensing and acting body and is wirelessly linked to it.

[14] ‘Intentionality’ is employed in its usual philosophical sense to describe an entity’s ability to possess mental states that are directed toward (or ‘about’) some object; that is a broader phenomenon than the possession of a particular ‘intention’ as defined here.
[15] Calverley, “Imagining a Non-Biological Machine as a Legal Person” (2008), p. 529.
[16] Calverley (2008), pp. 533-35.
A robot is multilocal when it comprises two or more stable and clearly
identifiable components that together form the robotic unit and which are not
in physical proximity to one another. These components can potentially be
located in different parts of the world: for example, a telesurgery robot might
include a surgical device located in a hospital room and a controller unit that
is manipulated by a human surgeon in a different country and is linked to the
surgical device through the Internet. Similarly, an autonomous battlefield ro-
bot system could include a central computer located in a military base that is
simultaneously controlling the actions of several robotic bodies participating
in a military operation in a conflict zone in another country. A robotic entity
can be ‘intensely multilocal’ if it possesses a very large number of components
that are located at great distances from one another; however, this is a quantitative rather than qualitative distinction: as long as a robotic entity’s physical components are identifiable and endure over time, the robot is still multilocal, regardless of how numerous or widely dispersed the components might be.
Many current and anticipated systems for ambient intelligence and ubiquitous
robotics are multilocal.[17]
A robotic entity is truly nonlocalizable when it is impossible to clearly specify at any given moment the exact location or extent of the robot’s body or to specify exactly ‘where’ key activities of the robot’s cognitive processes are taking place. It may be known that the robot exists, and from the nature of the robot’s actions in the world it is known that the robot must possess both physical sensors and actuators and some substrate upon which its cognitive processes are being executed; however, it is not possible to specify exactly where or in what form all of those components exist.
The Sources of Nonlocalizability
Nonlocalizability can result from a number of conditions. For example, at any given moment, a robot’s physical extension may include components that are a part of its body, components that are in the process of being removed from its body, and components that are in the process of becoming part of its body. If the robot is able to add new components to its body and remove old ones from its body at a sufficient rate, it is possible that the robot’s body at one moment in time might not share any physical components in common with the robot’s body as it existed just a few moments earlier.[18] Such a robot is continuously in the act of moving its cognitive processes into a new body. In this case it is not the physical components of its body that give the robot its identity, insofar as those components are in the perpetual process of being discarded and replaced with new ones; rather, it is the relationship between its components – its organizing principle that arranges matter, energy, and information into an enduring viable system[19] – that gives a robot its identity as a potential moral and legal actor.[20] Another source of nonlocalizability occurs if a robot’s primary cognitive process consists of a unitary neural computing process that can be distributed across a cluster or grid of processors rather than a program that is executed on a single serial processor.[21] Finally, nonlocalizability can also result from the fact that a single networked device such as a sensor or actuator can potentially be part of the ‘bodies’ of several different robots simultaneously, producing a situation in which robots’ bodies may partly overlap with one another, complicating the question of specifying “to which robot” a particular component or action belongs.

[17] Défago, “Distributed Computing on the Move: From Mobile Computing to Cooperative Robotics and Nanorobotics” (2001); Pagallo, “Robots in the Cloud with Privacy: A New Threat to Data Protection?” (2013); Weber, “General Approaches for a Legal Framework” (2010).
Note that as they currently exist, conventional robotic swarms and net-
works may be intensely multilocal, but they are not nonlocalizable. While the
miniaturization and multiplication of the parts constituting a robot’s body
might contribute to nonlocalizability, they are insufficient to create it, insofar
as a robot can have a body consisting of many small parts that is still identifi-
able and stable over time. Moreover, if the components of a swarm lack a cen-
tralized cognitive process that controls their actions, then they are better un-
derstood as a collection of independent robots than as a single robot whose
body comprises numerous parts. A nonlocalizable robot is not simply a multi-
agent system or collection of autonomous robots but a single entity possessing
a unitary identity that is at least theoretically capable of possessing autonomy and metavolitionality.
[18] Of course, human bodies also undergo a process of gradually replacing their components; however in the case of a nonlocalizable robot, this becomes significant insofar as the transformation of the robot’s body is taking place on a time-scale relevant to the time-frame (Gunther, “Time – The Zeroth Performance Metric” (2005), pp. 7-8) in which decisions and actions are made that are the subject of discussions about potential moral and legal responsibility.
[19] Gladden, “The Artificial Life-Form as Entrepreneur” (2014), p. 417.
[20] The debate over whether a robot’s identity derives from its physical components or some emergent process at work within them parallels, in some ways, discussions and concepts as ancient as Aristotle’s notion of the soul as the form that animates a living substance, the Neoplatonic concept of ψυχή, and the understanding of the λόγος possessed by individual human beings that was developed by the Greek Fathers of the Catholic Church.
[21] Gladden, “The Social Robot as ‘Charismatic Leader’” (2014), p. 338.
The Current Debate over Moral and Legal Responsibility for Robotic Actors
Combining our ontology’s four possible values for a robot’s level of au-
tonomy, five possibilities for volitionality, and three possibilities for localiza-
bility yields a total of sixty prototypical kinds of robotic entities described by
the ontology. For some of these robots, the question of moral and legal re-
sponsibility for their actions can be analyzed using traditional frameworks
that are regularly applied to the actions of human beings. Below we consider
several such cases.
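The combinatorics can be verified directly. In the following illustrative sketch (the list names are our own, taken from the level names introduced in the preceding sections), each prototypical kind of robotic entity is one triple drawn from the Cartesian product of the three dimensions:

```python
from itertools import product

# Level names taken from the ontology developed in the preceding sections.
AUTONOMY = ["nonautonomous", "semiautonomous", "autonomous", "superautonomous"]
VOLITIONALITY = ["nonvolitional", "semivolitional", "volitional",
                 "metavolitional", "supervolitional"]
LOCALIZABILITY = ["local", "multilocal", "nonlocalizable"]

# Each prototypical kind of robotic entity is one (autonomy, volitionality,
# localizability) triple; the Cartesian product yields all of them.
prototypes = list(product(AUTONOMY, VOLITIONALITY, LOCALIZABILITY))
print(len(prototypes))  # -> 60 (i.e., 4 x 5 x 3)

# E.g., the autonomous, metavolitional, nonlocalizable robot examined below:
assert ("autonomous", "metavolitional", "nonlocalizable") in prototypes
```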
The Robot as a Tool for Human Use
Consider once again a multilocal telesurgery robot that consists of a mech-
anized surgical instrument that operates on a patient and is guided by a human
surgeon – who may be located in a different country – manipulating a con-
troller connected to the instrument through the Internet. If the surgeon must
directly and specifically request each movement of the surgical instrument,
the device would likely be nonautonomous and nonvolitional. If the human
surgeon can give the instrument more generalized instructions that it then
interprets and executes (such as “Withdraw the laparoscope”), then the robot
is operating under a form of ‘continuous assistance’ or ‘shared control,’[22] and
it may qualify as semiautonomous and semivolitional or even volitional.
As a tool for use by human beings, questions of legal responsibility for any
harmful actions performed by such a robot revolve around well-established
questions of product liability for design defects[23] on the part of its producer,
professional malpractice on the part of its human operator, and, at a more
generalized level, political responsibility for those legislative and licensing
bodies that allowed such devices to be created and used. The international
dimension of having a human operator who causes the robot to act in a dif-
ferent country raises questions about legal jurisdiction, conflicts of national
law, and extraterritoriality,[24] but those issues can be addressed using existing
legal mechanisms.
Questions of the moral responsibility for the robot’s actions can be similarly resolved on the basis of its functioning as a passive instrument produced and used by human beings for their own ends. Such a robot does not possess responsibility of its own for its actions,[25] but using Stahl’s formulation[26] could be seen as possessing a sort of ‘quasi-responsibility’ that serves as a placeholder to point back to the robot’s human operators and producers, who are ultimately responsible for its actions.

[22] Murphy (2000), pp. 31-34.
[23] Calverley (2008), p. 533; Datteri (2013).
[24] Doarn & Moses, “Overcoming Barriers to Wider Adoption of Mobile Telerobotic Surgery: Engineering, Clinical and Business Challenges” (2011).
[25] Hellström (2013), p. 104.
[26] Stahl, “Responsible Computers? A Case for Ascribing Quasi-Responsibility to Computers Independent of Personhood or Agency” (2006), p. 210.
The Robot as Ersatz Human Being
Imagine a humanoid social robot that has been designed in such a way that
it possesses not only a human-like local body, but also a human-like form of
autonomy and metavolitionality. While such a robot might appear – by to-
day’s standards – to represent a quite futuristic technological breakthrough,
the question of moral and legal responsibility for the robot’s actions could
largely be addressed using traditional conceptual frameworks.
In the eyes of the law, a robot that is both autonomous and metavolitional
can be considered a legal person that is responsible for its actions.[27] From the legal perspective, ‘autonomy’ means that there is not some human being (or other robot) to whom a robot defers and who controls the robot’s decision-making.[28] Such autonomy does not require the robot to possess ‘free will’; the robot’s behavior can indeed be determined by certain preferences or desires, as long as those preferences have somehow been generated from within by the robot itself.[29]
A robot whose actions are being directed via
remote control – or which has been programmed in advance to act in a specific
way in response to a specific situation – would not be acting autonomously.
However, autonomy is not, in itself, sufficient to generate legal responsibility;
an entity must also act with metavolitionality.
The question of whether such a robot is morally (and not just legally) re-
sponsible for its actions is more complex, although it also builds on traditional
understandings of autonomy and metavolitionality. While incompatibilists ar-
gue that an entity must have the freedom to choose between multiple alterna-
tives in order to be held morally responsible, others like Frankfurt have
claimed that such freedom is unnecessary and that an entity can be morally
responsible if it is capable of experiencing not only desires but also desires
about its desires, as is seen in our human capacity for “changing desires
through the sheer force of mental effort applied in a self-reflexive way.”[30] Drawing on Aristotle, Hellström notes that one way of analyzing whether an entity bears moral responsibility for some deed is to ask whether it is “worthy of praise or blame for having performed the action”;[31]
if we cannot imagine
ourselves expressing praise or blame for a robot’s action, then it is likely not
the sort of actor to which moral responsibility can be attributed. This approach
[27] Calverley (2008), pp. 534-35.
[28] Calverley (2008), p. 532.
[29] Calverley (2008), p. 531.
[30] Calverley (2008), pp. 531-32.
[31] Hellström (2013), p. 102.
can be understood as a sort of intuitive ‘shortcut’ for considering whether an
entity is metavolitional. Kuflik gets at the idea of metavolitionality similarly,
arguing that it only makes sense to assign computer-based devices like robots
responsibility for their actions if we are able to ask them:
… to provide good reason for their comportment, to assess the force of reasons they had not previously considered, to be willing in some cases to acknowledge the insufficiency of their own reasons and the greater force of reasons not previously considered, to explain mitigating factors and ask for forgiveness, and – failing a show either of good reason or good excuse – to apologize and look for ways of making amends.[32]
It should be noted, though, that “assignment of responsibility not necessarily
is a zero-sum game”;[33]
the fact that a robot bears moral or legal responsibility
for its actions does not imply that a different sort of moral and legal responsi-
bility cannot also be attributed to those who brought the robot into existence,
educated it in a particular way, or caused it to be placed in the situation in
which it was acting.
Swarms and Networks as Localizable Individuals or Nonlocalizable Communities
Advances in the creation of distributed robotic systems such as networks
and swarms of robots (and particularly, swarms of miniaturized robots) have
become a topic much studied by philosophers and legal scholars. The rise of
such sophisticated robotic systems would seem to point toward the eventual
development of robots that are not only autonomous and metavolitional but
also truly nonlocalizable; however, to date the development of robotic swarms
and networks has not yet spurred significant consideration of potentially au-
tonomous metavolitional nonlocalizable robots as moral or legal actors. In-
stead, the focus of the debate has moved in other directions.
Some scholars have considered the responsibility of robots that “are connected to a networked repository on the internet that allows such machines to share the information required for object recognition, navigation and task completion in the real world.”[34] However, if such a centralized, cloud-based repository is controlling the networked robots’ actions, then the existence of a stable, identifiable ‘brain’ for the system means that it is at most intensely multilocal; it is not nonlocalizable. On the other hand, if the centralized repository is not controlling the individual robots’ actions, then each robot could potentially possess autonomy and metavolitionality, but the system as a whole does not. In that case, the network could be seen as a community of many different robots, a sort of multi-agent system.[35] The ‘membership’ of the community might indeed change from moment to moment, as robots join and leave the network. However, the community is not an entity with a single shared cognitive process and metavolition. It more closely resembles a ‘legal person’ such as a nation or corporation that can bear legal responsibility for its actions but cannot possess the metavolitionality needed for moral responsibility: a corporation can be said to feel pride or regret only in a metaphorical sense, as the corporation’s decisions and actions are made not by some autonomous corporate ‘mind’ but by the minds of its human constituents, often working together.[36] Building on Ricoeur’s notion of ‘quasi-agency’ to refer to the action of nations, Stahl[37] attributes ‘quasi-responsibility’ to a robotic entity that appears to possess moral responsibility but actually does not because it lacks autonomy or metavolitionality; quasi-responsibility serves only as a placeholder that points toward the entities (such as a network’s autonomous components) that actually bear responsibility for the system’s apparently collective action. In none of these analyses do scholars assert that a robotic entity such as a continuously evolving network could be nonlocalizable while at the same time possessing autonomy and metavolitionality.

[32] Kuflik, “Computers in Control: Rational Transfer of Authority or Irresponsible Abdication of Autonomy?” (1999), p. 174.
[33] Hellström (2013), p. 104.
[34] Pagallo (2013), p. 501.
Coeckelbergh[38] goes further: rather than arguing that it is impossible for a robotic swarm or network to be at once nonlocalizable, autonomous, and metavolitional, he argues that it is not even possible for a ‘robotic’ swarm or network to exist. Every robot is inextricably situated within a social and technological ecosystem that includes relations with human beings and other devices, and these relations shape the robot’s actions and being. What appears, at first glance, to be a purely robotic network or swarm “can hardly be called a ‘robotic’ swarm given the involvement of various kinds of systems and humans” that in some sense participate in its decision-making and acting.[39] Thus we see that the debate about robots’ possible moral and legal responsibility for their own actions has not yet directly addressed the case of robots that are both autonomous and metavolitional while existing in a physical form that is nonlocalizable. In the following sections we attempt to suggest what such an analysis might look like.
[35] Murphy (2000), pp. 293-314.
[36] Stahl (2006), p. 210.
[37] Stahl (2006), p. 210.
[38] Coeckelbergh (2011), pp. 274-75.
[39] Coeckelbergh (2011), p. 274.
The Future of Moral and Legal Responsibility for Nonlocalizable Robots

Advances Contributing to the Development of Nonlocalizable Robots
Developments in fields such as ubiquitous computing, cooperative nano-
robotics, and artificial life are laying the groundwork for the existence of ro-
botic entities whose bodies are capable of shifting from one physical substrate
to another, occupying parts of the bodies of other robotic entities, and pos-
sessing a spatial extension that comprises many disjunct elements and is not
identifiable to human observers – in other words, robotic entities that are truly
nonlocalizable. For example, as a ‘next logical step’ that builds on principles
of mobile and ubiquitous computing, Défago[40] develops a model of ‘cooperative robotics’ in which teams of nanorobots can coordinate their activities using ad hoc wireless networks or other forms of remote physical interaction.
We can also anticipate the development of artificial life-forms resembling so-
phisticated computer viruses which, in effect, are continually exchanging their
old bodies for new ones as they move through networked ecosystems to oc-
cupy an ever-shifting array of devices in search of “resources for their own
survival, growth, and autonomously chosen pursuits.”[41] Given the vast spectrum of possibilities unlocked by ubiquitous computing and nanotechnology, it is possible that without even realizing it, we human beings could someday find ourselves living inside the ‘bodies’ of nonlocalizable robots that surround us – or their bodies could exist inside of us.
Such nonlocalizable robots might demonstrate a range of possible cogni-
tive capacities similar to those available for localizable robots. Insofar as the
architecture utilized by the human brain likely represents only one of many
possible substrates for a sapient mind, it seems possible that a single mind that
is embodied across multiple, changing, spatially disjunct components could
potentially possess levels of autonomy and volitionality at least as great as
those of a human being.[42] While certain challenges relating to time, space, and computer processing speeds[43] arise when attempting to develop artificial general intelligences whose primary cognitive process occurs across a network of spatially disjunct artificial neurons, such spatial dispersion is not likely to prove an insurmountable obstacle to the creation of artificial general intelligence.[44] Already work is underway on designing artificial neural networks (ANNs) whose neurons communicate with one another wirelessly, allowing for the creation of ‘cyber-physical systems’ (CPSs) whose highly flexible and easily expandable systems include a “large number of embedded devices (such as sensors, actuators, and controllers) distributed over a vast geographical area.”[45] Such wireless neural networks could potentially demonstrate capacities impossible for the human brain, insofar as a neuron in such a network is no longer limited to interacting only with neurons that are physically adjacent in three-dimensional space but can potentially connect with other (more distant) neurons to create networks possessing different topologies and dimensionality.

[40] Défago (2001), pp. 50-53.
[41] Gladden, “The Artificial Life-Form as Entrepreneur” (2014), p. 418.
[42] Yampolskiy & Fox (2012), pp. 133-38.
[43] Gunther (2005), pp. 6-7.
[44] Loosemore & Goertzel, “Why an Intelligence Explosion Is Probable” (2012), pp. 93-95.
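The point about topologies and dimensionality can be made concrete with a small sketch. Assuming – purely for illustration – that wireless links allow any pair of ‘neurons’ to connect, the network can realize, for example, a four-dimensional hypercube: a wiring pattern in which each neuron has a neighbor along each of four independent axes, one more axis than strictly adjacency-based wiring in three-dimensional space provides:

```python
from itertools import combinations

def hypercube_edges(dim: int):
    """Edges of a dim-dimensional hypercube: two 'neurons' are linked
    iff their binary ids differ in exactly one bit."""
    nodes = range(2 ** dim)
    return [(a, b) for a, b in combinations(nodes, 2)
            if bin(a ^ b).count("1") == 1]

# A 4-D hypercube of 16 wirelessly linked neurons: 32 links in total,
# and every neuron has exactly four neighbors.
edges = hypercube_edges(4)
print(len(edges))                       # -> 32
print(sum(1 for e in edges if 0 in e))  # -> 4 (degree of neuron 0)
```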
Utilizing our ontological framework, we can now consider the moral and legal responsibility that several types of nonlocalizable robots would bear for their actions, beginning with those that demonstrate the lowest levels of autonomy and volitionality.
The Robot as Ambient Magic
Consider a robot whose primary cognitive process can be spread across a
vast number of disparate networked computerized devices[46] and which over
time shifts elements of that process out of its current devices and into new
ones. Such a robot’s ‘body’ would consist of all those devices in which its cog-
nitive processes were being executed (or were at least stored) at a given mo-
ment in time. By copying parts of its cognitive processes into adjacent net-
worked devices and deleting them from some of those where they had been
residing, the robot could in effect ‘move’ in an amoeba-like manner, floating
not in the air nor through the ocean’s waters but through the global ecosystem
of networked devices. It could be massively embodied but in such a way that
no one element of its ‘body’ was essential.
If such a nonlocalizable robot were nonautonomous and nonvolitional, it
could in some respects resemble a natural phenomenon such as a flickering
flame or flowing stream: when engaged physically or digitally by actors such
as human beings, it might react in potentially interesting and dramatic ways
that display discernible patterns – but not volition. Such a robot might be seen
by human beings as a ‘force of nature’ or perhaps even a latent magical en-
ergy, a sort of qì or mana that is immanent in the environment and can be
manipulated by human adepts to produce particular effects.[47] Such a robot
would possess neither legal nor moral responsibility for its actions but could
45
Ren & Xu, “Distributed Wireless Networked H∞ Control for a Class of Lurie-Type Nonlinear
Systems” (2014), p. 2.
46
Gladden, “The Social Robot as ‘Charismatic Leader’” (2014), p. 338.
47
Clarke, “Hazards of Prophecy: The Failure of Imagination” (1973), p. 36.
Chapter Nine: The Diffuse Intelligent Other • 359
potentially possess a quasi-responsibility [48] that points toward its creators, assuming that it had been purposefully created and had not evolved naturally 'in the wild' through the interaction and reproduction of other robotic entities.
The Robot as Diffuse Animal Other
If a robot of the sort just described were able to experience desires and expectations and to purposefully shift its cognitive processing into certain kinds of networked computerized hosts that it found attractive (e.g., those with greater processing power or particular kinds of sensors and actuators) and out of those that were less attractive, it would more closely resemble an animal than an impersonal force. It would no longer move through the Internet of Things randomly – or as steered by external forces – but would proactively explore its environment in a search for those informational, value-storing, agent, or material resources [49] that contribute to the satisfying of its drives for self-preservation and reproduction. [50]
Such a robot could in many ways be considered a sort of digital-physical 'animal' with a capacity for bearing moral and legal responsibility similar to that of natural organic animals. (And indeed, in some cases the robot itself might be an organic being. [51]) While such robots would not be moral actors responsible for their own actions, in contrast to the 'robot as ambient magic' they could potentially be the sort of moral patients to whom moral consideration is due, if – like many kinds of animals – they are able to experience physical or psychological suffering. [52]
The Robot as Diffuse Human-like Other
If we now imagine the robot described above to possess human-like cognitive capacities for reasoning, emotion, sociality, and intentionality, it becomes autonomous and metavolitional. With regard to its possession of moral and legal responsibility, it is in some ways similar to the 'robot as ersatz human being' mentioned above; however, its nonlocalizability means that it lacks some traits that are found in human beings and possesses other characteristics that are impossible for them.

By using its dispersed, networked body to manipulate an environment, such a robot might appear to materialize, ghost-like, from out of the digital-physical ecosystem to speak with, look at, and touch a human being and then vanish – and in the next moment it could be having a similar social interaction
48. Stahl (2006), p. 210.
49. Gladden, "The Artificial Life-Form as Entrepreneur" (2014), p. 417.
50. Omohundro, "Rational Artificial Intelligence for the Greater Good" (2012).
51. Pearce, "The Biointelligence Explosion" (2012).
52. Gruen, "The Moral Status of Animals" (2014).
360 • Posthuman Management
with another human being in another part of the globe. Because of the limitations of our organic bodies, we human beings can be physically located in only one place at a time, although we can use technologies like the telephone or videoconferencing or virtual reality to interact in a mediated fashion with environments in other countries. However, a nonlocalizable robot could truly be 'in' many different locations at once; there is no principle of primacy that allows us to say that one venue of interaction is home to the robot's 'real' body and that it is simply projecting itself virtually into the other locations through a 'mediating technology' – because the only such technology is the robot's body itself. The fact that the robot is located 'here' at a given moment in time does not mean that it is not also somewhere else.

Difficulties arise in attempting to analyze such a robot's possibility for bearing responsibility for its actions. In principle, it might seem capable of bearing human-like responsibility. However, our human ability to bear moral responsibility depends on our metavolitional ability to have expectations: an intention requires both a desire and a belief about the outcomes that will result from an action. A being whose body is so radically different from our own may not experience the same sorts of beliefs or expectations about how its environment will be affected by its actions – since the very concepts of 'environment' and 'action' could have vastly different meanings for such an entity. Similarly, the question of legal responsibility is clouded by the need to determine to which nation's laws the robot is subject, when its body is potentially scattered across continents and seas and satellites in orbit. Identifying 'where' in the robot's body a particular decision was made may be as difficult as pinpointing which neurons in a human brain generated a particular action. [53] Moreover, it is unclear whether the application of traditional legal 'rewards' and 'punishments' would have meaning for such a nonlocalizable entity.
The Robot as Diffuse Alien Intelligence
Finally we can consider a nonlocalizable robot whose cognitive processes demonstrate superautonomy and supervolitionality. In principle, one might be inclined to impute legal and moral responsibility to such a robot because it possesses levels of autonomy and volitionality that are at least as great as those of human beings. But that overlooks the fact that the robot's experience of its own superautonomy and supervolitionality likely has little in common with our human experiences of our vastly less sophisticated autonomy and metavolitionality. Yampolskiy and Fox argue [54] that "humanity occupies only a tiny portion of the design space of possible minds" and that "the mental architectures and goals of future superintelligences need not have most of the
53. Bishop, "On Loosemore and Goertzel's 'Why an Intelligence Explosion Is Probable'" (2012), p. 97.
54. Yampolskiy and Fox (2012), p. 129.
properties of human minds." We might be able to discern from observing such robots' interactions with one another that their actions follow some highly complex and beautiful (or perhaps terrifying) patterns, but it may be impossible for us to determine what portion of those patterns results from a sort of universal natural law that is reflected in the robots' moral sentiments, how much of it is due to legal frameworks that the robots have constructed among themselves, [55] how much of it is due to their cultural traditions, and how much of it results simply from engineering requirements, mathematical or logical principles, or physical laws. We might never come to understand the internal processes that shape such robots' thoughts, decisions, and actions, because they lack the ability or desire to explain them to us using language or concepts that we can fathom. [56] Any attempt by the robots to explain their moral, social, and legal frameworks might require them to employ metaphorical imagery and 'overload' our senses in a way that bears more resemblance to the phenomenon of divine revelation than to any processes of logic and reasoning available to the human mind. [57]
Serious problems ensue from attempting to apply human notions of "law" to such alien entities. The law "sets parameters, which, as society has determined, outline the limits of an accepted range of responses within the circumscribed field which it addresses"; [58] however, in this case it is difficult to refer to a single 'society' that comprises both human beings and such artificial superintelligences and can determine the range of acceptable behaviors for its members. (Indeed, the proliferation of transhuman genetic engineering and cybernetic augmentation might even cause humanity itself to splinter into numerous mutually incomprehensible civilizations. [59]) Although human beings might share the same digital-physical ecosystem with such robots and interact with them causally, we could only be said to share a 'society' with them in the same way that we share a society with the birds and insects that live in our gardens.
The Robot as Charismatic Lawgiver and Moral Beacon
If communication between humanity and such an alien robotic society does
take place, it may be we human beings who end up developing new “laws,
55. Michaud, Contact with Alien Civilizations: Our Hopes and Fears about Encountering Extraterrestrials (2007), p. 243.
56. Abrams, "Pragmatism, Artificial Intelligence, and Posthuman Bioethics: Shusterman, Rorty, Foucault" (2004).
57. Gladden, "The Social Robot as 'Charismatic Leader'" (2014), p. 337.
58. Calverley (2008), p. 534.
59. Abrams (2004).
customs, and attitudes” as a result of the exchange.
60
We may discover that
some nonorganic intelligent entities possess moral and legal frameworks that
are superior to ours in their beauty, consistency, fairness, and wisdom; such
systems may appear so irresistibly good and worthwhile that we cannot help
but desire that the robots who embody them should teach – and even govern
– us. They may become the moral and legal leaders of our human society not
through intimidation or coercion or through their vast technological exper-
tise, but because we find ourselves admiring them for their goodness and
yearning to become more like them.
61
Thus it may not be we human beings
who are determining the extent to which such robots are morally and legally
responsible for their actions, but the robots who are providing us with new
and richer and truer frameworks for understanding the moral and legal re-
sponsibility that is borne by all sapient beings – including ourselves.
Developing Legal and Ethical Frameworks for Autonomous Nonlocalizable Robots
There are several branches of law and ethics from which one can draw insights and inspiration when attempting to develop legal and ethical frameworks that address the question of responsibility on the part of nonlocalizable robots. The relevant fields of law vary, depending on the levels of autonomy and volitionality possessed by a robot.
Nonlocalizable Robots with Low Autonomy and Volitionality
Nonlocalizable robots possessing low levels of autonomy and volitionality will likely be seen as inanimate environmental resources to be exploited by human beings – or hazards to be mitigated. Responsibility for the robots' activities will devolve on their creators, and the nonlocalizable nature of the robots means that it may be impossible to determine who those creators are. Parallels may be found in the debate over humanity's collective legal and ethical responsibility for environmental damage caused by global climate change, a phenomenon in which specific localized damage in one country may result from "greenhouse pollution from a great many untraceable point sources" around the world. [62] On the other hand, if nonlocalizable robots are seen as a useful resource to be exploited, then legal models can be found in the international treaties and institutions governing global phenomena like the preservation of biological diversity and humanity's use of oceans, the Antarctic, and
60. Michaud (2007), p. 293.
61. Gladden, "The Social Robot as 'Charismatic Leader'" (2014), pp. 329-33; Kuflik (1999), p. 181.
62. Vanderheiden, "Climate Change and Collective Responsibility" (2011).
outer space, which explicitly ground their legal and philosophical rationale in the need to advance and preserve the "common interest of all mankind." [63]
Nonlocalizable Robots with Animal-like Autonomy and Volitionality
If nonlocalizable robots display moderate levels of autonomy and volitionality – roughly comparable to those of animals – then they are no longer simply passive features of the environment but entities capable of acting in accordance with their own expectations and desires. Here we can draw insights, for example, from existing legal and ethical debates surrounding genetically modified animals [64] that have been engineered for particular purposes and released into the wild. Such creatures are not morally or legally responsible for their actions but can display the sort of quasi-responsibility that directs our attention back to their human designers, who bear ultimate responsibility. [65]
We can also draw on existing law and ethics regarding the production and use of artificial space satellites. Many artificial satellites are, in effect, highly sophisticated orbital robots that possess powerful onboard computerized 'brains' and which may be capable of remotely sensing and recording activities occurring on the earth's surface [66] and receiving, transmitting, and potentially disrupting communications (including Internet traffic) with earth-based sources and other satellites. [67] The creation of robotic orbital satellites that can physically intercept, manipulate, and reposition other satellites [68] – not to mention the possibility of satellites' computerized controls being compromised through computer viruses or remote hacking – opens the possibility for artificial satellites to be repurposed in ways that are no longer subject to the control of their original human designers, thereby complicating the attribution of responsibility for the satellites' actions. Such devices are multilocal rather than nonlocalizable, since any given satellite is clearly located in a particular place in each moment, and the country responsible for the satellite's production, launch, and operation is easy to determine. However, the fact that such satellites operate from an extraterritorial region while acting in ways that affect human beings in particular countries has led to a unique body of law regarding their activities.
63. Berkman, "'Common Interests' as an Evolving Body of International Law: Applications to Arctic Ocean Stewardship" (2012), p. 158.
64. Beech, "Regulatory Experience and Challenges for the Release of GM Insects" (2014).
65. Stahl (2006), p. 210.
66. Sadeh, "Politics and Regulation of Earth Observation Services in the United States" (2010).
67. Vorwig, "Regulation of Satellite Communications in the United States" (2010).
68. Lemonick, "Save Our Satellites" (2013).
Nonlocalizable Robots with Human-like Autonomy and Metavolitionality
When considering nonlocalizable robots that possess higher levels of autonomy and volitionality, we can draw on existing law and ethics regarding the action of nonhuman persons. The fact that 'legal persons' can exist that are not 'natural persons' is a widely accepted legal concept [69] embodied in the legal identity of states and corporations. Already international human rights law encourages – and even requires – states to develop extraterritorial legal structures to control corporations registered in those states and hold them responsible for actions committed in other states or in regions such as international waters or outer space. [70] Such extraterritorial structures could also be applied to nonlocalizable robots that were originally developed in or have somehow been registered as 'synthetic nationals' of particular states.
Similarly, the Internet – and now the Internet of Things – can be seen as a somewhat primitive precursor to future global networks that may become home to autonomous metavolitional nonlocalizable robots. Proposals for governing the Internet of Things through a combination of transgovernmental networks, legislation, and self-regulation [71] may be relevant. Self-regulation, in particular, can be an important form of 'soft law' in which governments set broad parameters but leave the actual implementation to private industry, as government is incapable of acting quickly enough or with sufficient expertise in response to such a rapidly evolving and technologically complex field. [72] In the case of nonlocalizable robots with human-like autonomy and metavolitionality, self-regulation could mean governance of the robots not by their manufacturers but by the robots themselves, through their creation of a robotic society.
Nonlocalizable Robots with Superautonomy and Supervolitionality
From a legal and moral perspective, nonlocalizable robots that demonstrate superautonomy and supervolitionality cannot easily be compared to sentient animals or to sapient human beings; they are more analogous to superintelligent extraterrestrial aliens with whom humanity might someday come into contact. Interestingly, much scholarship has been dedicated to the question of how human law and morality might relate to intelligent alien entities from other planets. Michaud reminds us [73] that "As far back as Immanuel Kant, some
69. Dreier & Spiecker (2012), p. 215; Calverley (2008); Hellström (2013).
70. Bernaz, "Enhancing Corporate Accountability for Human Rights Violations: Is Extraterritoriality the Magic Potion?" (2012).
71. Weber (2010), pp. 27-28.
72. Weber (2010), p. 24.
73. Michaud (2007), p. 374.
have speculated about a legal system that would apply to all intelligences in the universe." Efforts at developing such universal principles have grown more sophisticated over the centuries, as developments in fields like exobiology, neuroscience, complex systems theory, artificial intelligence, and artificial life have given us greater insights into what forms such alien intelligence might take.
Michaud notes that the traditional human moral precept of the Golden Rule would be of limited usefulness to us when interacting with alien intelligences: our human drives, aspirations, sensations, and reasoning processes are so different from those of the alien beings that merely knowing that we would like to be treated in a particular way tells us nothing about whether aliens who were treated in the same way might experience that event as joy or suffering, as equity or injustice. [74] The first serious modern attempt at developing appropriate principles to govern humanity's potential encounter with an extraterrestrial intelligence occurred when "Half a century ago, space lawyer Andrew Haley proposed what he called The Great Rule of Metalaw: Do unto others as they would have you do unto them." [75] However, treating alien intelligences as they wish to be treated is no simple matter; Michaud notes that "It is not clear how we could observe this principle in the absence of extensive knowledge about the other civilization. We may need detailed, sophisticated communication to find out." [76] As noted above, though, establishing communication between such disparate forms of intelligence may be difficult or even impossible. Our coming to understand the nuances of the aliens' moral universe is not simply a matter of translating a text between two languages; it may be a task that exceeds the ability or desire of both civilizations. Nevertheless, efforts by ethicists and legal scholars to lay the groundwork for such encounters are also helping to prepare us for contact with superintelligent nonlocalizable robotic entities whose origins are wholly of this world.
Conclusion
Many practical issues arise when determining whether robots bear moral and legal responsibility for their actions. For example, we may need to study the nature of a robot's primary cognitive substrate and process to determine whether they are of a sort that allows for true autonomy and metavolitionality, or we may need to gather data to reconstruct the chain of causality surrounding a particular action performed by a robot. However, assuming that such information can be obtained, the theoretical structures of existing law and moral philosophy are in principle largely adequate for determining a contemporary local or multilocal robot's degree of responsibility for its actions.
74. Michaud (2007), p. 300.
75. Michaud (2007), p. 374.
76. Michaud (2007), p. 374.
Depending on its level of autonomy and volitionality, such a robot might be treated as a passive tool that possesses no moral or legal responsibility or as a human-like moral and legal actor to whom such responsibility is attributed.

As we have seen, though, the advent of nonlocalizable robotic entities will transform these moral and legal equations. When a robot's body is continually metamorphosing and drifting through the global digital-physical ecosystem, it may be impossible to determine which nation, if any, can claim the robot as a 'citizen' or 'resource' subject to its laws. Moreover, it may not even be possible to construct a one-to-one correlation between some action that has occurred in the world and a particular identifiable robotic entity that presumably performed it. Beyond these practical legal issues, there are also deeper moral questions: a nonlocalizable robot's way of existing and acting may simply be so different from those of human beings, animals, or any other form of being known to date that it may be inappropriate or impossible for human beings to apply even our most fundamental (and presumably 'universal') moral principles to the activity of such entities.

Already, scholars have begun to prepare moral and legal frameworks that can give us insights into the actions of intelligent extraterrestrial beings whom we might someday encounter and whose physical morphology, motivations, thoughts, forms of communication and social interaction, and normative behavioral principles may be radically different from our own. So, too, would humanity benefit from attempting to envision moral and legal frameworks that can help us account for the actions of future robots whose manner of thinking, acting, and being is just as alien, but whose origins lie closer to home. Even before we have actually begun to interact with such beings, the careful thought that we put into analyzing the moral and legal responsibility that they will bear for their actions may enrich our understanding of the responsibility that we human beings bear for our own.
References
Abrams, Jerold J. “Pragmatism, Artificial Intelligence, and Posthuman Bioethics:
Shusterman, Rorty, Foucault.” Human Studies vol. 27, no. 3 (2004): 241-58.
Beech, C. “Regulatory Experience and Challenges for the Release of GM Insects.”
Journal Für Verbraucherschutz Und Lebensmittelsicherheit vol. 9, no. 1 (2014): 71-
76.
Bekey, G.A. Autonomous Robots: From Biological Inspiration to Implementation and
Control. Cambridge, MA: MIT Press, 2005.
Berkman, P.A. “‘Common Interests’ as an Evolving Body of International Law: Appli-
cations to Arctic Ocean Stewardship.” In Arctic Science, International Law and Cli-
mate Change, pp. 155-73. Beiträge Zum Ausländischen Öffentlichen Recht Und
Völkerrecht no. 235, 2012.
Bernaz, N. “Enhancing Corporate Accountability for Human Rights Violations: Is Ex-
traterritoriality the Magic Potion?” Journal of Business Ethics vol. 117, no. 3
(2012): 493-511.
Bishop, P. “On Loosemore and Goertzel’s ‘Why an Intelligence Explosion Is Proba-
ble’.” In Singularity Hypotheses, edited by A.H. Eden, J.H. Moor, J.H. Søraker, and
E. Steinhart, pp. 97-98. The Frontiers Collection. Berlin/Heidelberg: Springer,
2012.
Calverley, D.J. “Imagining a non-biological machine as a legal person.” AI & SOCIETY
22, no. 4 (2008): 523-37.
Clarke, Arthur C. “Hazards of Prophecy: The Failure of Imagination.” In Profiles of
the Future: An Inquiry into the Limits of the Possible, revised edition. New York:
Harper & Row, 1973.
Coeckelbergh, Mark. “From Killer Machines to Doctrines and Swarms, or Why Ethics
of Military Robotics Is Not (Necessarily) About Robots.” Philosophy & Technology
24, no. 3 (2011): 269-78.
Datteri, E. “Predicting the Long-Term Effects of Human-Robot Interaction: A Reflec-
tion on Responsibility in Medical Robotics.” Science and Engineering Ethics vol. 19,
no. 1 (2013): 139-60.
Défago, X. “Distributed Computing on the Move: From Mobile Computing to Coop-
erative Robotics and Nanorobotics.” In Proceedings of the 1st International Work-
shop on Principles of Mobile Computing (POMC 2001), pp. 49-55. Newport, Rhode
Island, 2001.
Doarn, C.R., and G.R. Moses. “Overcoming Barriers to Wider Adoption of Mobile
Telerobotic Surgery: Engineering, Clinical and Business Challenges.” In Surgical
Robotics, edited by J. Rosen, B. Hannaford, and R.M. Satava, pp. 69-102. New York:
Springer, 2011.
Dreier, T., and I. Spiecker genannt Döhmann. “Legal Aspects of Service Robotics.”
Poiesis & Praxis vol. 9, no. 3-4 (2012): 201-17.
Gladden, Matthew E. “The Artificial Life-Form as Entrepreneur: Synthetic Organism-
Enterprises and the Reconceptualization of Business.” In Proceedings of the Four-
teenth International Conference on the Synthesis and Simulation of Living Systems,
edited by Hiroki Sayama, John Rieffel, Sebastian Risi, René Doursat and Hod Lip-
son, pp. 417-18. Cambridge, MA: The MIT Press, 2014.
Gladden, Matthew E. “The Social Robot as ‘Charismatic Leader’: A Phenomenology
of Human Submission to Nonhuman Power.” In Sociable Robots and the Future of
Social Relations: Proceedings of Robo-Philosophy 2014, edited by Johanna Seibt,
Raul Hakli, and Marco Nørskov, pp. 329-39. Frontiers in Artificial Intelligence and
Applications 273. IOS Press, 2014.
Gruen, L. "The Moral Status of Animals." In The Stanford Encyclopedia of Philosophy, edited by E.N. Zalta. Stanford University, 2014. http://plato.stanford.edu/archives/fall2014/entries/moral-animal/. Accessed January 6, 2015.
Gunther, N.J. “Time – The Zeroth Performance Metric.” In Analyzing Computer Sys-
tem Performance with Perl::PDQ, pp. 3-46. Springer Berlin Heidelberg, 2005.
Hellström, T. “On the Moral Responsibility of Military Robots.” Ethics and Infor-
mation Technology vol. 15, no. 2 (2013): 99-107.
Kirkpatrick, K. “Legal Issues with Robots.” Communications of the ACM vol. 56, no.
11 (2013): 17-19.
Kuflik, Arthur. “Computers in Control: Rational Transfer of Authority or Irresponsi-
ble Abdication of Autonomy?” Ethics and Information Technology 1 (3) (1999):
173-84. doi:10.1023/A:1010087500508.
Lemonick, M. “Save Our Satellites.” Discover vol. 34, no. 7 (2013): 22-24.
Loosemore, R., and B. Goertzel. “Why an Intelligence Explosion Is Probable.” In Sin-
gularity Hypotheses, edited by A.H. Eden, J.H. Moor, J.H. Søraker, and E. Stein-
hart, pp. 83-98. The Frontiers Collection. Berlin/Heidelberg: Springer, 2012.
Michaud, M.A.G. Contact with Alien Civilizations: Our Hopes and Fears about Encoun-
tering Extraterrestrials. New York: Springer, 2007.
Murphy, Robin. Introduction to AI Robotics. Cambridge, MA: The MIT Press, 2000.
Omohundro, S. "Rational Artificial Intelligence for the Greater Good." In Singularity Hypotheses, edited by A.H. Eden, J.H. Moor, J.H. Søraker, and E. Steinhart, pp. 161-79. The Frontiers Collection. Berlin/Heidelberg: Springer, 2012.
Pagallo, U. “Robots in the Cloud with Privacy: A New Threat to Data Protection?”
Computer Law & Security Review vol. 29, no. 5 (2013): 501-08.
Pearce, David. “The Biointelligence Explosion.” In Singularity Hypotheses, edited by
A.H. Eden, J.H. Moor, J.H. Søraker, and E. Steinhart, pp. 199-238. The Frontiers
Collection. Berlin/Heidelberg: Springer, 2012.
Prestes, E., J.L. Carbonera, S. Rama Fiorini, V.A.M. Jorge, M. Abel, R. Madhavan, A. Locoro et al. "Towards a Core Ontology for Robotics and Automation." Robotics and Autonomous Systems vol. 61, no. 11 (2013): 1193-1204.
Ren, W., and B. Xu. “Distributed Wireless Networked H∞ Control for a Class of Lu-
rie-Type Nonlinear Systems.” Mathematical Problems in Engineering (May 5,
2014): e708252.
Sadeh, E. “Politics and Regulation of Earth Observation Services in the United
States.” In National Regulation of Space Activities, edited by R.S. Jakhu, pp. 443-58.
Space Regulations Library Series 5. Springer Netherlands, 2010.
Sparrow, R. “Killer Robots.” Journal of Applied Philosophy vol. 24, no. 1 (2007): 62-77.
Stahl, B. C. “Responsible Computers? A Case for Ascribing Quasi-Responsibility to
Computers Independent of Personhood or Agency.” Ethics and Information Tech-
nology 8, no. 4 (2006): 205-13.
Vanderheiden, S. “Climate Change and Collective Responsibility.” In Moral Responsi-
bility, edited by N.A. Vincent, I. Van de Poel, and J. Van den Hoven, pp. 201-18.
Library of Ethics and Applied Philosophy 27. Springer Netherlands, 2011.
Vorwig, P.A. “Regulation of Satellite Communications in the United States.” In Na-
tional Regulation of Space Activities, edited by R.S. Jakhu, pp. 421-42. Space Regu-
lations Library Series 5. Springer Netherlands, 2010.
Weber, R.H., and R. Weber. “General Approaches for a Legal Framework.” In Internet
of Things, pp. 23-40. Springer Berlin Heidelberg, 2010.
Yampolskiy, Roman V., and Joshua Fox. “Artificial General Intelligence and the Hu-
man Mental Model.” In Singularity Hypotheses, edited by Amnon H. Eden, James
H. Moor, Johnny H. Søraker, and Eric Steinhart, pp. 129-45. The Frontiers Collec-
tion. Springer Berlin Heidelberg, 2012.
Chapter Ten
Leveraging the Cross-Cultural Capacities of Artificial Agents as Leaders of Human Virtual Teams [1]
Abstract. The human beings who manage global virtual teams regularly face challenges caused by factors such as the lack of a shared language and culture among team members and coordination delay resulting from spatial and temporal divisions between members of the team. As part of the ongoing advances in artificial agent (AA) technology, artificial agents have been developed whose purpose is to assist the human managers of virtual teams. In this text, we move a step further by suggesting that new capabilities being developed for artificial agents will eventually give them the ability to successfully manage virtual teams whose other members are human beings. In particular, artificial agents will be uniquely positioned to take on roles as managers of cross-cultural, multilingual, global virtual teams, by overcoming some of the fundamental cognitive limitations that create obstacles for human beings serving in these managerial roles.

In order to effectively interact with human team members, AAs must be able to decode and encode the full spectrum of verbal and nonverbal communication used by human beings. Because culture is so deeply embedded in all human forms of communication, AAs cannot communicate in a way that is 'non-cultural'; an AA that is capable of communicating effectively with human team members will necessarily display a particular culture (or mix of cultures), just as human beings do. Already researchers have designed AAs that can display diverse cultural behaviors through their use of language, intonation, gaze, posture, emotion, and personality.
1. This text was originally published in Proceedings of the 10th European Conference on Management Leadership and Governance, edited by Visnja Grozdani, pp. 428-435. Reading: Academic Conferences and Publishing International Limited, 2014.
The need for AA team leaders to display cultural behavior raises the key question of which culture or cultures the AA leader of a particular human virtual team should display. We argue that the answer to this question depends on both the cultural makeup of a team's human members and the methods used to share information among team members. To facilitate the analysis of how an AA team leader's cultural behaviors can best be structured to fit the circumstances of a particular virtual team, we propose a two-dimensional model for designing suites of cultural behaviors for AAs that will manage human virtual teams. The first dimension describes whether an AA deploys the same cultural behaviors for its dealings with all team members ('objectivity') or customizes its cultural display for each team member ('personalization'). The second dimension describes whether the AA always displays the same culture to a given team member ('invariance') or possesses a repertoire of cultural guises for a particular team member, from which it chooses one to fit the current situation ('situationality'). The two dimensions of objective/personalized and invariant/situational cultural behaviors yield four archetypes for AAs leading virtual human teams. We consider examples of each type of AA, identify potential strengths and weaknesses of each type, suggest particular kinds of virtual teams that are likely to benefit from being managed by AAs of the different types, and discuss empirical study that can test the validity and usefulness of this framework.
Background and Introduction
The Potential Desirability of Artificial Agents as Leaders of
Human Virtual Teams
Our human managerial capacities and practices evolved in a world whose
lack of communication technologies meant that interpersonal relationships
between workers were typically local, face-to-face, and culturally homogene-
ous. However, technological advances have compelled human beings to adapt
to a new reality in which relationships between managers and employees are
often culturally heterogeneous, spatially and temporally dispersed, and medi-
ated extensively by technology. One manifestation of this technological me-
diation of work relationships is the phenomenon of virtual teams. Through
the use of email, text messages, telephones, videoconferencing, and online
document-sharing and project management tools, virtual team members can
collaborate closely without ever meeting face-to-face. In an international or-
ganization, virtual team members may dwell in different time zones, which
creates challenges in scheduling meetings and distinguishing ‘work time’
from ‘non-work time.’ Membership in this sort of global virtual team can bring
with it the expectation of an employee’s availability for almost round-the-
clock, instantaneous communication and decision-making.
Chapter Ten: Artificial Agents as Leaders of Human Virtual Teams • 373
While human beings have been remarkably successful at adapting to these
new technologies, we are still subject to fundamental biological limitations
that affect our ability to successfully manage global virtual teams. This raises
the question of whether some sort of nonhuman being – such as an advanced
form of artificial agent – can be developed whose capabilities would allow it
to match or exceed our human effectiveness at managing certain types of hu-
man virtual teams. (Throughout this paper, the phrase ‘human virtual team’
refers to a virtual team that includes some human beings as members, alt-
hough the manager and additional members of the team may be artificial
agents.)
For example, an artificial agent (AA) team leader that never needs to sleep
and is always ready to interact with team members could conceivably mitigate
some challenges experienced by global virtual teams, such as coordination de-
lay resulting from spatial and temporal boundaries between members.
2
General Management Capabilities Needed by AAs in Order
to Lead Human Virtual Teams
Artificial agents such as conversational agents or ‘chatbots’ are already
widely used in business applications. For example, a conversational agent
might appear on a commercial website in the form of an animated figure that
carries on a dialogue with human visitors to the site, to answer their questions
or carry out requests.
3
Progress is being made toward developing AAs that can fill more substan-
tial roles within organizations, such as potentially serving as the managers of
human virtual teams. This process will be facilitated by the fact that human
beings are not only able to form socialized relations with artificial agents as if
they were human but are naturally inclined to do so.
4
Already AAs have been
designed that demonstrate a number of the general capacities that will prove
useful for managing human virtual teams. For example, AAs are capable of
selecting communication strategies that maximize trust between managers
and employees
5
and are capable of perceiving and critically evaluating the per-
formance of virtual teams.
6
It has thus far been envisioned that such AAs will
serve as ‘tools’ for human beings who manage virtual projects, helping them
2
Cummings et al., “Spatial and Temporal Boundaries in Global Teams” (2007).
3
Perez-Marin & Pascual-Nieto, Conversational Agents and Natural Language Interaction: Tech-
niques and Effective Practices (2011).
4
Rehm & Nakano, “Some Pitfalls for Developing Enculturated Conversational Agents” (2009);
Friedenberg, Artificial Psychology: The Quest for What It Means to Be Human (2008).
5
Dai et al., “TrustAider – Enhancing Trust in E-Leadership” (2013).
6
Nunes & O’Neill, “Assessing the Performance of Virtual Teams with Intelligent Agents” (2012).
to improve their performance as leaders.
7
However, this ability to critically
evaluate the work of human virtual team members will also contribute to AAs’
ability to serve as the leaders of such teams.
Among the many such capabilities that an AA manager will need to pos-
sess, one that is especially important for managing a global virtual team of
human employees is mastery of the perception and display of cultural behav-
iors.
The Importance of Culture for AAs Leading Virtual Teams
Within the context of business communication, the majority of a message’s
meaning is often conveyed through nonverbal communication rather than the
choice of words used.
8
Thus virtual team leaders who communicate only
through emails or other written texts are at a disadvantage against those who
can utilize facial expressions, gestures, tone of voice, and the full spectrum of
nonverbal communication.
9
An AA that is leading a human virtual team will
be most effective if it appears to team members in an audiovisual form that
allows it to display the full range of human nonverbal communication.
If an AA appears in a virtual form that utilizes a human-like body and
voice, this raises the essential question of culture. An individual’s culture suf-
fuses and is reflected in almost every aspect of his or her verbal and nonverbal
behavior and appearance; it is thus not possible for an AA to utilize a full range
of verbal and nonverbal communication in a way that is ‘non-cultural’ or ‘cul-
turally indeterminate’: an AA capable of communicating effectively with hu-
man team members will necessarily display a particular culture (or mix of cul-
tures), just as human beings do.
10
In the case of an AA, its ‘culture’ is reflected
on several levels, including its (virtual) appearance, gestures, vocal intonation,
choice of actions, emotions, personality, and the background story or ‘biog-
raphy’ that is attributed to it to explain its body of knowledge and its organi-
zational role.
11
7
Kriksciuniene & Strigunaite, “Multi-Level Fuzzy Rules-Based Analysis of Virtual Team Perfor-
mance” (2011).
8
Ober, Contemporary Business Communication (2007).
9
El-Tayeh et al., “A Methodology to Evaluate the Usability of Digital Socialization in ‘virtual’
Engineering Design” (2008).
10
Rehm & Nakano (2009); Rehm et al., “From Observation to Simulation: Generating Culture-
Specific Behavior for Interactive Systems” (2009).
11
Payr & Trappl, “Agents across Cultures” (2003).
Cultural Competencies Needed for AAs as Leaders of Cross-
cultural Global Virtual Teams
In order to effectively manage a cross-cultural, global virtual team of hu-
man employees, an AA will require a significant degree of cultural compe-
tence. For example, if an AA is managing a team with human members in
Japan and the US, it should ‘know’ that the expression of strong emotion by
an AA may be more acceptable in an individualistic culture like the US than
in a collectivistic culture like Japan.
12
Some of the cultural capacities needed by AA managers have already been
developed. Researchers have successfully designed AAs that can perceive, un-
derstand, and display diverse cultural behaviors through a choice of actions,
language, vocal intonation, gaze, posture, gestures, emotions, personality, the
expectations for the kinds of social interactions that occur between individuals
in a particular social relation, and other “unwritten rules of human cultures.”
13
Ongoing advances are being made in this field.
Choosing the Culture(s) that an AA Virtual Team Leader
Should Display
Considering the fact that it is necessary for a successful AA team leader to
effectively display specific cultural behaviors, and supposing that it is (or soon
will be) possible to create AA team leaders that are capable of perceiving and
displaying a sophisticated range of cultural behaviors, this raises the key ques-
tion of this paper: which culture or cultures should the AA leader of a human
virtual team display?
We posit that an AA virtual team leader can either: 1) reflect its own unique
synthetic nonhuman culture; 2) reflect one or more existing human cultures;
or 3) collaborate with its human team members in the development of a shared
synthetic culture
14
that is created jointly by all team members. If team mem-
bers will be working together for an extended period of time, all three possi-
bilities become viable options. However, options 1 and 3 require significant
time and effort on the part of human team members to develop new compe-
tencies in a synthetic culture that may have no usefulness beyond the partic-
ular team. It may thus be more efficient and effective – especially for a busi-
ness project team of limited duration – if the AA can simply reflect existing
human culture.
We must then consider the question of which human culture or cultures
the AA team leader should display. We suggest that this will depend on at
12
Rehm & Nakano (2009).
13
Mascarenhas et al., “Social Importance Dynamics: A Model for Culturally-Adaptive Agents”
(2013); Friedenberg (2008); Payr & Trappl (2003).
14
Payr & Trappl (2003).
least two critical factors: 1) the particular linguistic and cultural makeup of
the team’s human members; and 2) the network topology and technological
means that the team’s human members typically employ when interacting
with one another and their leader.
Our Proposed Two-dimensional Model for Designing AA
Team Leaders’ Cultural Behaviors
To help an organization choose the sort of culture(s) that an AA virtual
team leader should display – based on these critical factors of a team’s linguis-
tic and cultural makeup and means of interaction – we propose the use of a
two-dimensional model for designing suites of cultural behaviors for AAs
managing human virtual teams.
Figure 1: An overview of our two-dimensional model of the cultural behavior of an artificial
agent (AA) leading a human virtual team. The axes of objectivity/personalization and invari-
ance/situationality yield four potential types of AA leaders.
The model’s vertical dimension describes whether an AA deploys the same
cultural behaviors for its dealings with all team members (‘objectivity’) or cus-
tomizes its cultural display for each team member (‘personalization’). The hor-
izontal dimension describes whether the AA always displays the same culture
to a given team member (‘invariance’), or possesses a repertoire of cultural
guises for a particular team member, from which it chooses one to fit the cur-
rent situation (‘situationality’). The two dimensions of objective/personalized
and invariant/situational cultural behaviors yield four archetypes for AAs
leading human virtual teams. Figure 1 presents an overview of this frame-
work.
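Although the framework is conceptual, its combinatorial structure can be sketched compactly in code. The following Python fragment (whose type and function names are our own illustrative choices, not part of the model itself) enumerates the four archetypes generated by crossing the two dimensions:

```python
from enum import Enum

class CulturalScope(Enum):
    """Vertical dimension: whose culture the AA's display is keyed to."""
    OBJECTIVE = "objective"        # same cultural display toward all team members
    PERSONALIZED = "personalized"  # display customized for each team member

class CulturalVariability(Enum):
    """Horizontal dimension: whether the display can change with the situation."""
    INVARIANT = "invariant"        # one fixed culture per team member
    SITUATIONAL = "situational"    # a repertoire of guises chosen to fit the situation

def aa_archetype(scope: CulturalScope, variability: CulturalVariability) -> str:
    """Combine the two design dimensions into one of the four AA leader archetypes."""
    return f"{scope.value}-{variability.value}"

# Enumerate all four archetypes yielded by the model:
archetypes = [aa_archetype(s, v) for s in CulturalScope for v in CulturalVariability]
# ['objective-invariant', 'objective-situational',
#  'personalized-invariant', 'personalized-situational']
```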
The Four Types of AA Leaders as Categorized According
to Their Cultural Behavior
We can now discuss in more detail these four types of AA leaders of human
virtual teams and identify some potential advantages and disadvantages of
each type.
Objective-Invariant Type
DESCRIPTION AND EXAMPLES
When managing a human virtual team, an AA of the objective-invariant
type is given by its designers (or selects on its own) a single culture, native
language, and apparent age that will best facilitate its work as team leader
during the life of the team. The AA displays this single set of characteristics
at all times in all dealings with all employees.
For example, imagine a virtual human team of employees from several
countries who are involved with a hydroelectric construction project. Given
the team’s assigned objectives and personnel makeup, it might be decided that
the team’s leader would most effectively manage the team’s members if he or
she behaved in a way that made team members comfortable about approach-
ing the leader with concerns or ideas (i.e., if the relationship reflected a low
power distance, as defined by Hofstede), encouraged team members to take
personal responsibility for their own work (i.e., high individualism), and es-
tablished clear objectives and procedures to guide team members in their work
(i.e., high uncertainty avoidance). Germany is a nation whose culture, broadly
speaking, reflects these traits.
15
Thus it might be decided that the team’s AA
leader will appear in the virtual guise of, say, a 35-year-old female German
engineer. (The selection of ‘German’ as the culture means that while the AA
can speak to team members in any language known to the AA, it speaks all
languages with a slight German accent and uses typically German metaphors
and allusions.)
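The kind of reasoning used here to select ‘German’ can be approximated as a nearest-profile search over Hofstede-style dimension scores. In the sketch below, the index values are approximations of Hofstede’s published national scores and should be verified against the source before any real use; the helper names are our own:

```python
# Desired leadership traits for the team, on 0-100 Hofstede-style scales:
# low power distance (approachable leader), high individualism (personal
# responsibility), high uncertainty avoidance (clear objectives and procedures).
desired = {"power_distance": 30, "individualism": 70, "uncertainty_avoidance": 65}

# Approximate Hofstede index values for candidate cultures (verify against
# Hofstede's published data before relying on them).
candidate_cultures = {
    "Germany": {"power_distance": 35, "individualism": 67, "uncertainty_avoidance": 65},
    "Mexico":  {"power_distance": 81, "individualism": 30, "uncertainty_avoidance": 82},
}

def mismatch(profile: dict, target: dict) -> int:
    """Total absolute distance between a culture's profile and the target traits."""
    return sum(abs(profile[d] - target[d]) for d in target)

# The candidate culture whose profile lies closest to the desired traits:
best = min(candidate_cultures, key=lambda c: mismatch(candidate_cultures[c], desired))
```

Under these illustrative scores, `best` resolves to Germany, mirroring the selection made in the example above.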
The fact that the German culture is a real, historically grounded human
culture (rather than a new synthetic culture) means that the AA will display
self-consistent cultural behaviors that will be recognizable and predictable to
the team’s human members, to the extent that individual team members have
15
Hofstede, “Germany” (2014).
been previously exposed to German culture. Moreover, the fact that German
culture was selected means that particular traits found in this culture (such as
low power distance and high individualism) will subtly reinforce the desired
workplace atmosphere and work practices that will help both individual team
members and the entire team to excel.
As another example, imagine a virtual human team that consists of finan-
cial officers at a manufacturer’s plants around the world who participate in a
global virtual team led by an AA based at the company’s headquarters. The
team’s purpose is to coordinate the sharing of financial data and best practices
among the plants. It has been decided that the team leader’s behavior should
reinforce the idea that ultimate financial decision-making authority rests with
corporate HQ and that the role of individual plants’ financial managers is to
efficiently carry out HQ’s directives (i.e., the relationship reflects high power
distance), that employees’ highest concern should be the welfare of the whole
company rather than the success of individual employees or plants (i.e., high
collectivism), and that the financial managers’ role is to ensure the predicta-
bility of their plants’ finances rather than take entrepreneurial risks (i.e., high
uncertainty avoidance). More so than some other nations, Mexico is a nation
whose culture reflects these traits.
16
Thus it might be decided that the team’s
AA leader will appear in the virtual guise of, say, a 50-year-old male Mexican
financial officer whose cultural behaviors would support the kinds of relation-
ships and work practices desired in team members.
POTENTIAL ADVANTAGES AND DISADVANTAGES
It is simpler to design an AA that displays the traits of a single predeter-
mined culture than one that displays multiple cultures. Moreover, while some
team members might have less familiarity with the objective-invariant AA’s
chosen culture (and thus may potentially misunderstand the AA’s message),
the fact that all team members are presented with the same message allows
them to more easily discuss it among themselves.
This approach also has disadvantages, though. Imagine a team including
French, Canadian, and Chinese members. If the AA displays a Chinese accent
and speech patterns, utilizes Chinese metaphors and cultural allusions when
explaining assignments, and manifests Chinese cultural expectations for the
roles of members within the team, then non-Chinese members of the team
may be less likely to grasp the full meaning (or even existence) of information
conveyed through the team leader’s nonverbal communication. The non-Chi-
nese team members could also feel as though the Chinese team members are
unfairly benefitting from in-group favoritism, even if the AA has been de-
signed in a way that is intended to minimize such an occurrence.
16
Hofstede, “Mexico” (2014).
Objective-Situational Type
DESCRIPTION AND EXAMPLE
When managing a human virtual team, an AA of the objective-situational
type possesses a single suite of multiple cultural ‘personas’ from which it
chooses one particular guise to display at a particular moment to all team
members.
An AA leader’s ability to change its cultural guise allows the AA to take
advantage of the fact that in some circumstances, team members will respond
better to a leader who shares their own culture – but in other circumstances,
team members will respond better to a leader whose culture differs from their
own. For example, it has been found that a human being experiences greater
psychological arousal when interacting with a virtual agent whose cultural
behavior is different from the person’s own.
17
Increased psychological arousal
can be beneficial when carrying out critical tasks requiring a high degree of
concentration.
Thus, if a virtual team consists of Italian employees, it might be most con-
ducive to effective communication and a productive work environment if the
AA leading the team typically appears as a fellow Italian when interacting
with team members in their everyday work. However, if the AA is required to
give team members instructions for a particularly important task that needs
to be remembered and carried out precisely, it may be more effective if the
‘Italian AA’ steps out of the office for a moment, and the AA instead tempo-
rarily adopts the guise of a Zambian or Australian or Chilean manager whose
cultural dissimilarity would help enhance the Italian team members’ atten-
tiveness as the AA conveys critical instructions.
POTENTIAL ADVANTAGES AND DISADVANTAGES
In comparison to an objective-invariant AA, an objective-situational AA
has an additional tool to use in maximizing its team members’ effectiveness,
insofar as it can exploit the benefits of being able to choose between displaying
a familiar or foreign culture in a given situation. However, this effect can only
be used to its full potential if the group’s cultural makeup is such that there is
at least one culture that is ‘familiar’ to all team members and one ‘foreign’ to
all of them.
In comparison to an objective-invariant AA, one drawback of an objective-
situational AA is that it is more complex to design, because it cannot simply
have a single set of cultural behaviors ‘hardwired’ into it. The AA must pos-
sess a full knowledge of multiple cultures, must have some mechanism for
changing its displayed culture, and – perhaps most challengingly – must know
17
Obaid et al., “Cultural Behaviors of Virtual Agents in an Augmented Reality Environment”
(2012).
when a situation calls for it to change its displayed behavior from that of one
culture to another.
Personalized-Invariant Type
DESCRIPTION AND EXAMPLE
A personalized-invariant AA represents a more radical rethinking of the
way in which a virtual team leader interacts with team members. For each
human member of the team, the AA chooses the language, culture (most likely
the employee’s own native culture), gender, and apparent age that will allow
it to interact most effectively with that individual, and the AA displays that
culture in all dealings with that particular employee.
An example would be an AA leader that interacts with each team member
in such a way that the AA looks, sounds, and acts as though it were that per-
son’s ‘older sibling.’ To a 20-year-old Venezuelan team member, the AA would
display the cultural characteristics of a 25-year-old Venezuelan, while to a 50-
year-old Finn, the AA would look, sound, and act as though it were a 55-year-
old Finn. When communicating instructions to an American team member,
the AA might make some point by using an allusion to baseball, whereas when
conveying the same message to a team member from another culture, the AA
might use an allusion to cricket or football, instead.
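As a minimal sketch of how such a personalized-invariant mapping might be computed (the profile fields, the five-year age gap, and all names are illustrative assumptions, not a specification):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MemberProfile:
    name: str
    age: int
    culture: str   # e.g., "Venezuelan", "Finnish"
    language: str  # the member's native language

@dataclass(frozen=True)
class Persona:
    age: int
    culture: str
    language: str

def older_sibling_persona(member: MemberProfile, age_gap: int = 5) -> Persona:
    """Derive the single fixed guise shown to this member: same culture and
    language as the member, but slightly older (the 'older sibling')."""
    return Persona(member.age + age_gap, member.culture, member.language)

# Personalization: each member sees a different persona.
# Invariance: the persona for a given member never changes.
personas = {m.name: older_sibling_persona(m) for m in [
    MemberProfile("Ana", 20, "Venezuelan", "Spanish"),
    MemberProfile("Mikko", 50, "Finnish", "Finnish"),
]}
# personas["Ana"].age == 25; personas["Mikko"].age == 55
```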
Of particular interest is the fact that it is theoretically possible (through the
use of mediating technology) for the AA to display different cultures to differ-
ent team members simultaneously, even when the AA and the team members
are all interacting with one another in a live virtual group meeting.
POTENTIAL ADVANTAGES AND DISADVANTAGES
A personalized-invariant AA would be easiest to implement in a virtual
team whose communication processes utilize a star network topology or hub-
and-spoke design in which team members communicate primarily with and
through the team leader rather than directly with one another. If team mem-
bers do not directly interact with one another to ‘compare notes’ about their
experience of the AA team leader, it is less likely that confusion will arise due
to the fact that the leader presents itself to team members under different cul-
tural guises. This hub-and-spoke arrangement also exploits the fact that the
information processing needs of a virtual team can usually be most effectively
met by a centralized system,
18
with the AA perhaps doubling as the team’s
data storage and processing system.
However, by utilizing sufficiently advanced mediating technologies, an AA
could potentially display different cultural traits to different team members at
18
Jensen et al., “The Effect of Virtuality on the Functioning of Centralized versus Decentralized
Structures – an Information Processing Perspective” (2010).
the same time, even while they were all interacting simultaneously in a virtual
group meeting. Clearly, an AA of this sort would be highly complex. While
the advantages of an intelligent agent that can look and sound different to
different individuals simultaneously have long been explored in science fic-
tion,
19
due to its inherent complexity such an AA has not yet been seriously
considered as a feasible business technology. In principle, though, this is
simply a further development of established technologies such as the simulta-
neous live multilingual interpretation of a speaker’s remarks used by organi-
zations like the UN.
This personalized-invariant approach has the potential to significantly in-
crease an AA’s effectiveness in communicating with individual team members
and to eliminate miscommunications arising from a lack of cultural familiar-
ity. This is especially useful for teams whose members do not share a common
language or culture. By communicating with each team member using his or
her own native language and cultural knowledge, the AA can reduce the
heightened stress and obstacles to performance that arise when cross-cultural
virtual team members lack fluency in a shared language.
20
Personalized-Situational Type
DESCRIPTION AND EXAMPLE
For each team member, a personalized-situational AA possesses a unique
suite of at least two possible cultural personas to use in interacting with that
employee: one persona reflects the employee’s own native culture, and the
other reflects a culture that is foreign to the employee. The AA chooses which
cultural persona to present to a particular team member at a particular point
in time based on the demands of the current situation.
Building on our previous personalized-invariant example, an example of a
personalized-situational AA would be an AA leader which, under normal cir-
cumstances, appears to each team member as though it were that person’s
‘older sibling’ (to cultivate a sense of camaraderie and reduce possibilities for
miscommunication) but in moments of urgency or criticality appears as a
more senior authority figure from a culture less familiar to that person.
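In its simplest form, the situational choice described above reduces to a per-member rule keyed to the criticality of the moment. The sketch below assumes (purely for illustration) a binary routine-versus-urgent trigger; a production system would need a far richer situational model:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PersonaPair:
    native: str   # persona reflecting the member's own culture
    foreign: str  # persona from a culture unfamiliar to the member

def select_persona(pair: PersonaPair, urgent: bool) -> str:
    """Routine interactions use the familiar 'native' guise (rapport, fewer
    miscommunications); urgent or critical ones use the 'foreign' guise to
    heighten psychological arousal and attentiveness (cf. Obaid et al. 2012)."""
    return pair.foreign if urgent else pair.native

member = PersonaPair(native="Italian peer", foreign="Chilean senior manager")
select_persona(member, urgent=False)  # -> "Italian peer"
select_persona(member, urgent=True)   # -> "Chilean senior manager"
```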
POTENTIAL ADVANTAGES AND DISADVANTAGES
A personalized-situational AA can easily communicate with and coordi-
nate the work of a team whose members do not share a common language and
who possess very different cultures, as the AA can communicate with each
team member using his or her own native language and cultural behaviors, in
19
For example, see Daniels, “The Man Trap” (1966).
20
Nurmi, “Unique Stressors of Cross-Cultural Collaboration through ICTs in Virtual Teams”
(2009).
most situations. However, the personalized-situational AA can also maximize
effectiveness by communicating with team members using behaviors from a
foreign culture, in circumstances where heightened psychological arousal
among team members would be beneficial.
This is the most complicated type of AA to design and operate. As with the
personalized-invariant AA, it is more challenging to implement for a team that
holds virtual group meetings involving all members than for a team whose
AA leader implements a hub-and-spoke network topology in which the AA
leader is responsible for transmitting information between team members.
Avenues for Empirical Research Regarding this Model
Empirical research is needed to verify the validity and usefulness of this
proposed two-dimensional model. One study that can be undertaken now is
to test the impact on motivation, comprehension, and performance when par-
ticular kinds of tasks common to virtual teams are presented to a human
worker by AA ‘managers’ displaying cultures familiar and foreign to the hu-
man being. This will enhance our ability to identify real-world situations faced
by virtual teams in which the ability to exploit foreign vs. familiar cultural
displays by an AA leader would have the most beneficial impact.
Other research can take place after further advances are made in AA tech-
nology. Once AAs have been engineered that can present a single message to
multiple participants in a virtual group meeting using different languages and
cultural behaviors simultaneously, it will be feasible to empirically test the
hypothesized advantages and challenges of personalized AA types and to con-
cretely measure the cost and complexity of personalized AA types relative to
objective types.
Discussion and Conclusion
A review of current artificial agent technologies and the direction of antic-
ipated progress in the field suggests that sufficiently sophisticated AAs will
have the potential to successfully manage virtual teams of human workers.
The most capable and effective AA managers will be those whose cultural and
linguistic flexibility allows them to lead human workers that communicate
using a diverse array of languages and cultural displays. Utilizing the two-
dimensional model proposed in this paper, we believe that an AA manager of
the personalized-situational type represents the ultimate objective toward
which research and development in this field should (and will) be advancing,
as it offers the most powerful and effective design for an AA manager.
However, the inherent complexity involved with engineering a personal-
ized-situational AA means that AAs of the objective-situational and personal-
ized-invariant types will likely be developed first, as steppingstones along the
way. By considering now the implications of these technologies for managing
virtual teams, businesses can most effectively position their current e-leader-
ship strategies within the context of this impending long-term technological
change.
References
Cummings, J.N., J.A. Espinosa, and C.K. Pickering. “Spatial and Temporal Boundaries
in Global Teams.” In Virtuality and Virtualization, pp. 85-98. International Federa-
tion for Information Processing Proceedings no. 236, 2007.
Dai, Y., C. Suero Montero, T. Kakkonen, M. Nasiri, E. Sutinen, M. Kim, and T.
Savolainen. “TrustAider – Enhancing Trust in E-Leadership.” In Business
Information Systems, pp. 26-37. Lecture Notes in Business Information Processing
157. Springer Berlin Heidelberg, 2013.
Daniels, Marc. “The Man Trap.” Star Trek: The Original Series, season 1, episode 1.
Paramount, September 8, 1966.
El-Tayeh, A., N. Gil, and J. Freeman. “A Methodology to Evaluate the Usability of
Digital Socialization in ‘virtual’ Engineering Design.” Research in Engineering De-
sign vol. 19, no. 1 (2008): 29-45.
Friedenberg, Jay. Artificial Psychology: The Quest for What It Means to Be Human.
Philadelphia: Psychology Press, 2008.
Hofstede, G. “Germany.” The Hofstede Centre. http://geert-hofstede.com/ger-
many.html. Accessed June 7, 2014.
Hofstede, G. “Mexico.” The Hofstede Centre. http://geert-hofstede.com/mexico.html.
Accessed June 7, 2014.
Jensen, K.W., D. Døjbak Håkonsson, R.M. Burton, and B. Obel. “The Effect of Virtual-
ity on the Functioning of Centralized versus Decentralized Structures – an Infor-
mation Processing Perspective.” Computational and Mathematical Organization
Theory vol. 16, no. 2 (2010): 144-70.
Koehne, B., M.J. Bietz, and D. Redmiles. “Identity Design in Virtual Worlds.” In End-
User Development, pp. 56-71. Lecture Notes in Computer Science no. 7897.
Springer Berlin Heidelberg, 2013.
Kriksciuniene, D., and S. Strigunaite. “Multi-Level Fuzzy Rules-Based Analysis of Vir-
tual Team Performance.” In Building the E-World Ecosystem, pp. 305-18. IFIP Ad-
vances in Information and Communication Technology 353. Springer Berlin Hei-
delberg, 2011.
Mascarenhas, S., R. Prada, A. Paiva, and G.J. Hofstede. “Social Importance Dynamics:
A Model for Culturally-Adaptive Agents.” In Intelligent Virtual Agents, pp. 325-38.
Lecture Notes in Computer Science no. 8108. Springer Berlin Heidelberg, 2013.
Nunes, M., and H. O’Neill. “Assessing the Performance of Virtual Teams with
Intelligent Agents.” In Virtual and Networked Organizations, Emergent Technologies and
Tools, pp. 62-69. Communications in Computer and Information Science 248.
Springer Berlin Heidelberg, 2012.
Nurmi, N. “Unique Stressors of Cross-Cultural Collaboration through ICTs in Virtual
Teams.” In Ergonomics and Health Aspects of Work with Computers, pp. 78-87.
Lecture Notes in Computer Science no. 5624. Springer Berlin Heidelberg, 2009.
Obaid, M., I. Damian, F. Kistler, B. Endrass, J. Wagner, and E. André. “Cultural Be-
haviors of Virtual Agents in an Augmented Reality Environment.” In Intelligent
Virtual Agents, pp. 412-18. Lecture Notes in Computer Science no. 7502. Springer
Berlin Heidelberg, 2012.
Ober, S. Contemporary Business Communication. Stamford, CT: Cengage Learning,
2007.
Payr, S., and R. Trappl. “Agents across Cultures.” In Intelligent Virtual Agents, pp.
320-24. Lecture Notes in Computer Science no. 2792. Springer Berlin Heidelberg,
2003.
Perez-Marin, D., and I. Pascual-Nieto. Conversational Agents and Natural Language
Interaction: Techniques and Effective Practices. Hershey, PA: IGI Global, 2011.
Rehm, M., E. André, and Y. Nakano. “Some Pitfalls for Developing Enculturated
Conversational Agents.” In Human-Computer Interaction: Ambient, Ubiquitous
and Intelligent Interaction, pp. 340-48. Lecture Notes in Computer Science 5612.
Springer Berlin Heidelberg, 2009.
Rehm, M., Y. Nakano, E. André, T. Nishida, N. Bee, B. Endrass, M. Wissner, A.A. Lipi,
and H. Huang. “From Observation to Simulation: Generating Culture-Specific
Behavior for Interactive Systems.” AI & SOCIETY vol. 24, no. 3 (2009): 267-80.
Chapter Eleven
A Fractal Measure for Comparing the Work
Effort of Human and Artificial Agents
Performing Management Functions
1
Abstract. Thanks to the growing sophistication of artificial agent
technologies, businesses will increasingly face decisions of whether
to have a human employee or artificial agent perform a particular
function. This makes it desirable to have a common temporal
measure for comparing the work effort that human beings and
artificial agents can apply to a role. Existing temporal measures of
work effort are formulated to apply either to human employees
(e.g., FTE and billable hours) or computer-based systems (e.g.,
mean time to failure and availability) but not both. In this paper we
propose a new temporal measure of work effort based on fractal
dimension that applies equally to the work of human beings and
artificial agents performing management functions. We then
consider four potential cases to demonstrate the measure’s
diagnostic value in assessing strengths (e.g., flexibility) and risks
(e.g., switch costs) reflected by the temporal work dynamics of
particular managers.
The Need for a Common Temporal Measure of Work Effort
The increasing power and sophistication of artificial agent technology is
allowing businesses to employ artificial agents in a growing number of roles.
[1] This text was originally published in Position Papers of the 2014 Federated Conference on Computer Science and Information Systems, edited by Maria Ganzha, Leszek Maciaszek, and Marcin Paprzycki, pp. 219-226. Annals of Computer Science and Information Systems vol. 3. Warsaw: Polskie Towarzystwo Informatyczne, 2014. A further elaboration of the material was published as “A Tool for Designing and Evaluating the Temporal Work Patterns of Human and Artificial Agents,” Informatyka Ekonomiczna / Business Informatics 3(33) (2014): 61-76.
Artificial agents are no longer restricted simply to performing logistical functions such as resource scheduling, but are now capable of more complex interpersonal workplace behavior such as using social intelligence to effectively manage the limitations, abilities, and expectations of human employees,[2] recognizing and manifesting culture-specific behaviors in interactions with human colleagues,[3] and assessing the performance of human members of virtual teams.[4] It is thus gradually becoming more feasible to design artificial agents capable of performing the four key functions carried out by human managers, which are planning, organizing, leading, and controlling.[5]
As a result of such recent and anticipated future advances, businesses will increasingly be faced with concrete decisions about whether, for example, the manager of a new corporate call center should be an experienced human manager or the latest artificial agent system. Such decisions will be shaped by a large number of strategic, financial, technological, political, legal, ethical, and operational factors. One particular element to be taken into account is that of temporal work effort: i.e., how much time would a human manager actually be able to dedicate to carrying out the necessary work functions, given the fact that physiological, cultural, legal, and ethical constraints limit the number of hours per week that a human being is capable of working? Similarly, how much time would an artificial agent be able to dedicate to carrying out the necessary work functions, given the fact that scheduled maintenance or unscheduled outages can limit the uptime of computer-based systems? Knowing how much time per day (or week, or other relevant time interval) a manager will be available to carry out his or her functions of planning, organizing, leading, and controlling becomes especially relevant in an interconnected age when global businesses operate around the clock, and managers are expected to be available to respond to inquiries and make decisions at almost any time of the night or day.
In the case of human professionals, temporal measures such as ‘full-time equivalent’ (FTE)[6] and ‘billable hours’ are often used to quantify one’s work effort. Computer-based systems, meanwhile, often use temporal measures such as ‘availability’ and ‘reliability.’ In the following sections, we will analyze such existing measures and then develop a new fractal-dimension-based temporal measure for work effort that has at least two notable advantages: it is applicable to the work effort of both human and artificial agent managers, and it provides valuable diagnostic insights into the strengths and limitations of an individual manager’s temporal work dynamics that are not provided by existing measures.

[2] Williams, “Robot Social Intelligence” (2012).
[3] Rehm et al., “From observation to simulation: generating culture-specific behavior for interactive systems” (2009).
[4] Nunes & O’Neill, “Assessing the performance of virtual teams with intelligent agents” (2012).
[5] Daft, Management (2011), pp. 7-8.
[6] “Full-Time Equivalent (FTE),” European Commission – Eurostat.
Measures of Work Effort for Computer-based Systems
Availability and Reliability
A computer’s reliability is often quantified as the mean time to failure (MTTF), the average length of time that a system will remain continuously in operation before experiencing its next failure.[7] The mean time to repair (MTTR) is the average length of time needed to detect and repair the failure and return the system to operation. A computer’s steady-state availability A is the likelihood that the computer is operating at a particular moment, and it is related to MTTF and MTTR in the equation:[8]

A = MTTF / (MTTF + MTTR)

A standard requirement for commercial computer systems is 99.99% availability over the course of a year.[9]
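The relationship among MTTF, MTTR, and availability can be sketched in a few lines of code; the failure and repair figures below are hypothetical, chosen to land near the 99.99% threshold:

```python
def availability(mttf_hours, mttr_hours):
    """Steady-state availability A = MTTF / (MTTF + MTTR)."""
    return mttf_hours / (mttf_hours + mttr_hours)

def annual_downtime_minutes(a):
    """Expected downtime per operating year implied by availability a."""
    return (1 - a) * 365 * 24 * 60

# A system that fails on average once every 5,000 hours of operation
# and takes half an hour to detect and repair:
a = availability(5000, 0.5)
print(round(a, 4))                           # 0.9999
print(round(annual_downtime_minutes(a), 1))  # roughly 52.6 minutes per year
```

As the second figure shows, even 99.99% availability still implies nearly an hour of downtime over an operating year.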
Availability has traditionally been understood in a binary manner: a system is either ‘up’ or ‘down.’ Rossebø et al. argue that a more sophisticated measure is needed that takes qualitative aspects into account and suggest recognizing a range of intermediate qualitative states between simply ‘up’ and ‘down.’[10] As we explain below, the measure proposed in this paper takes a different approach: its unique diagnostic value comes not from adding a qualitative component but from considering more carefully the fineness and resolution of the time-scales on which measurements are being made.
Time-scales for Measuring Computer Performance
A computer performs actions across a vast range of time-scales. As Gunther notes, if a typical computer’s CPU cycle were ‘scaled up’ so that it lasted one second, then using that same scale, a DRAM access would take about one minute, a single disk seek would require roughly 1.35 months, and a tape access would last more than a century.[11] He explains that when measuring performance, “Only those changes that occur on a timescale similar to the quantity we are trying to predict will have the most impact on its value. All other (i.e., faster) changes in the system can usually be ignored.… In modeling the performance of a database system where the response time is measured in seconds, it would be counterproductive to include all the times for execution of every CPU instruction.”

[7] Grottke et al., “Ten fallacies of availability and reliability analysis” (2008).
[8] Grottke et al. (2008).
[9] Gunther, “Time – the zeroth performance metric” (2005).
[10] Rossebø et al., “A conceptual model for service availability” (2006).
[11] Gunther (2005).
In a business context, artificial agents performing certain logistical or data-analysis tasks can operate at speeds constrained only by the laws of physics and availability of needed resources. However, an artificial agent manager whose role involves planning, organizing, leading, and controlling the activity of human colleagues should have its work effort measured within a corresponding time-scale. Thus for our present purposes there is no need to consider phenomena such as metastability that have major implications for computer design and functionality but are only directly relevant at the smallest temporal scale.[12]
Viewed from the microscopic end of the temporal spectrum, the regulator of all activity within a computer system is the ‘clock tick’ or ‘fundamental interval of time’ created by an interrupt sent from the system’s hardware clock to the operating system’s kernel; in a Unix system, this tick interval is often set at 10 ms,[13] during which time roughly 3.2 × 10⁵ CPU cycles might occur. For an artificial agent system operating on a serial processor architecture, there is no need to adopt a temporal measure capable of resolving each individual CPU cycle, as that would not provide information that is directly relevant to the tasks in which the agent’s work performance will be evaluated and which take place over a much longer time-frame. For example, an artificial agent manager might interact with human colleagues by generating text or images displayed on a screen. Assuming a screen refresh rate of 60 Hz, this yields a single work interval (or frame) of roughly 17 ms. Writing output data to disk would require a minimum work interval of roughly 3.50 ms for a disk seek.[14] If the artificial agent is generating speech or other audio to be heard by human beings, a standard sampling rate of 48,000 Hz would yield a single work interval of roughly 0.02 ms.
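These interval figures follow directly from the corresponding rates; a quick sketch of the conversion:

```python
def interval_ms(rate_hz):
    """Duration in milliseconds of a single cycle at the given rate."""
    return 1000.0 / rate_hz

print(round(interval_ms(60), 1))      # one 60 Hz screen frame: ~16.7 ms
print(round(interval_ms(48_000), 3))  # one 48 kHz audio sample: ~0.021 ms
print(round(interval_ms(100), 1))     # one 100 Hz kernel clock tick: 10.0 ms
```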
At the macroscopic end of the time-scale, it is not unknown for servers to run for several years without rebooting or a moment of downtime.[15] If we view an artificial agent manager as a form of enterprise software, we might expect its lifespan to average around nine years and to be no shorter than two years.[16] Thus while a coarser or finer temporal resolution is possible, our proposed temporal measure for work effort should prove sufficient for artificial agent systems as long as it can encompass time-scales ranging from 0.02 ms up to several years.

[12] Gunther (2005).
[13] Gunther (2005).
[14] Gunther (2005).
[15] “Cool solutions: uptime workhorses: still crazy after all these years” (2006).
[16] Tamai & Torimitsu, “Software lifetime and its evolution process over generations” (1992).
Measures of Work Effort for Human Employees
The Significance of a Year as a Temporal Unit
We can now consider the case of a human manager. In principle, the longest possible macroscopic time-scale of work effort that one can utilize for a human employee is a biological lifespan. In practice, though, the relevant time-scale is obviously much shorter. In the United States, a typical managerial employee only remains with his or her current employer for about 5.5 years before moving to a new organization.[17] The ‘year’ has significant historical and conceptual value as a fundamental measure of human work activity. Just as enterprise system availability is often cited in terms of uptime per operating year, productivity figures for human workers are typically based on an annual time-frame.[18]
Having taken the year as our initial frame of reference, how do we quantify the portion of a given year that a human employee actually spends on his or her work? For this purpose, the largest relevant subunit is that of a single week, as professional workers regularly assess a job’s fringe benefits according to how many weeks of vacation they receive each year, and government agencies and researchers often track this data. The number of weeks worked per year varies significantly across nations and cultures.[19]
The Significance of an Hour as a Temporal Unit
Even if we know that two employees both ‘work’ the same number of weeks per year, this fact does not yet tell us much about their relative levels of work effort, as it is possible for the employees to differ vastly in how many hours they work each week. Here, too, there is significant variation across nations and cultures and between specific jobs.[20] For example, an American law firm will likely expect attorneys to work more than 50 hours per week,[21] while employees of high-tech Silicon Valley firms are routinely expected to work over 100 hours per week when project deadlines are approaching.[22]

[17] “Employee tenure summary,” United States Department of Labor, Bureau of Labor Statistics (2012).
[18] “Annual Hours Worked,” Organisation for Economic Co-operation and Development.
[19] Golden, “A brief history of long work time and the contemporary sources of overwork” (2009).
[20] Golden (2009).
[21] “The Truth about the Billable Hour,” Yale Law School.
[22] Shih, “Project Time in Silicon Valley” (2004).
How Much Work in an Hour of Work?
The hour, though, is certainly not the smallest quantifiable interval of employee work effort. Two employees may consider themselves to have just spent an hour ‘working,’ but the number of minutes of work actually performed by each can differ greatly. In some professions, it is common to track work effort in sub-hour intervals. For example, attorneys with American law firms typically track their work time in six-minute intervals and sometimes record and bill clients for work that took as little as one minute. For every hour that an attorney spends ‘at work,’ an average of roughly 45 minutes will count toward billable hours.[23]
In other professions, employers have given up any attempt at precisely measuring how much time an employee is putting into their work, as the advent of new communications technologies has caused ‘work time’ and ‘personal time’ to meld into an indistinguishable blur.[24] The rise of multitasking and ‘continuous partial attention’ drives human workers to constantly monitor emails, texts, and instant messages, even while in the middle of meetings or conversations.[25] For knowledge workers, this continual checking of email can consume up to 25% of their workday.[26] While much of this nonstop communication activity is work-related, the existence of workplace phenomena such as shirking, social loafing, and job neglect means that a significant number of these electronic interruptions do not relate to work at all but are purely personal. In particular, younger employees of the Millennial generation are less fond of email and tend to prefer text messaging, instant messaging,[27] and other forms of micro-communication that produce shorter but more frequent non-work interruptions to their work activities. Because professional employees can alternate between work-related and personal actions at such a rapid rate (once every few seconds, if not faster), it is now “very hard to tell when people are working and when people are not working,” as a Silicon Valley executive reported in Shih’s study.[28]
In an effort to counteract this constant stream of distractions, some Extreme Programming (XP) teams employ the Pomodoro Technique, a time-boxing strategy in which physical timers are used to enforce a steady pace consisting of 25 minutes of focused work followed by a brief break.[29]

[23] “The Truth about the Billable Hour.”
[24] Shih (2004).
[25] Sellberg and Susi, “Technostress in the office: a distributed cognition perspective on human–technology interaction” (2014).
[26] Gupta et al., “You’ve got email! Does it really matter to process emails now or later?” (2011).
[27] Hershatter & Epstein, “Millennials and the world of work: an organization and management perspective” (2010).
[28] Shih (2004).
Identifying a Minimum Time Unit of Work by Human Managers
Within a given period of ‘work,’ there may be alternating periods of work and non-work that are measured in seconds, not minutes. However, in attempting to identify the minimum unit of work of which humans are capable, it is valuable to consider time-scales even much smaller than a second. For example, scholars have estimated that the human brain is capable of between 10¹⁴ and 10¹⁶ calculations per second,[30] or roughly 6.6 × 10¹⁶ FLOPS,[31] although the massively distributed parallel processing architecture of the brain[32] means that many calculations are taking place simultaneously, and the duration of a single calculation cannot be determined by simply dividing one second by, say, 10¹⁵. In attempting to estimate the duration of a single ‘calculation’ performed by the brain, scholars have alternately cited the fact that an individual neuron can fire as often as 1,000 times a second,[33] that “synapses carry out floating point operations … at a temporal resolution approaching about 1000 Hz,”[34] that a neuron is capable of firing roughly once every 5 ms,[35] or that the brain operates at a rate of speed of “around 100 cycles per second.”[36] These estimates yield a range of 1-10 ms for the brain’s smallest temporal unit of work activity.
It is helpful, though, to refer once more to Gunther’s position on the measurement of computer performance: we can essentially ignore activity taking place within a system on a time-scale shorter than that of our work-relevant inputs and outputs, as it is “more likely to be part of the background noise rather than the main theme.”[37] In the case of a human being considered qua employee, the firing of a single synapse does not directly constitute ‘work.’ The work of planning, organizing, leading, and controlling for which human managers are employed typically involves more complicated inputs and outputs such as engaging in conversation or reading and creating documents. The smallest temporal unit of work would be the smallest unit relevant in the performance of such tasks.

[29] Gobbo & Vaccari, “The Pomodoro Technique for sustainable pace in extreme programming teams” (2008).
[30] Kurzweil, The Singularity Is Near: When Humans Transcend Biology (2006); Friedenberg, Artificial Psychology: The Quest for What It Means to Be Human (2008).
[31] Llarena, “Here comes the robotic brain!” (2010).
[32] Friedenberg (2008).
[33] Friedenberg (2008).
[34] McClelland, “Is a machine realization of truly human-like intelligence achievable?” (2009).
[35] Kurzweil, The Age of Spiritual Machines: When Computers Exceed Human Intelligence (2000).
[36] See Abbott & Dayan, Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems (2001), and its discussion in Llarena (2010).
[37] Gunther (2005).
That unit appears to be an interval of roughly 50 ms. Studies have shown that if one alternates too quickly between two tasks that require the same cognitive resources, one’s performance on both tasks will be negatively impacted,[38] as shifting from one mental task to another incurs a ‘switch cost’ of both a temporal delay and an increased error rate,[39] which lowers productivity.[40] In particular, the human brain needs around 120 ms to fully allocate its attention to a new stimulus.[41] Marchetti cites diverse studies supporting the claim that the minimum ‘integration time’ needed for the brain to meld disparate sensory input into a conscious perception of a single event or experience is roughly 50-250 ms, with a median of about 100 ms.[42] These findings make it unlikely that a human manager would be capable of performing individual instances of work that need to be measured using a time-frame shorter than 50 ms. If one attempted to alternate between tasks faster than once every 50 ms, one’s brain would not even have time to focus attention on a new task before abandoning it for yet another task.
Durations of Particular Work Inputs and Outputs
This minimum interval of roughly 50 ms is supported by the fact that the kinds of inputs and outputs that human managers typically utilize when performing work-related functions do not have durations shorter than this interval. For example, Hamilton notes that human beings can think at a rate of 400-800 words per minute, while we typically speak at 100-175 words per minute (with each spoken word comprising an average of 4-5 phonemes[43]). Optimal listening comprehension occurs when a speaker speaks at 275-300 words per minute, which gives a listener’s mind less time to become distracted or daydream between each of the speaker’s words.[44] Adult native speakers of English typically read 200-250 words per minute.[45] Regarding work output, the fastest sustainable typing rate is roughly 150 words per minute,[46] with each word comprising an average of 5-6 characters; the fastest known shorthand writing speed is roughly 350 words per minute;[47] and the fastest known human speaker is able to clearly articulate more than 650 words per minute.[48] When these rates are converted into milliseconds, they yield the intervals seen in Table I.

[38] Brown & Merchant, “Processing resources in timing and sequencing tasks” (2007).
[39] Wager et al., “Individual differences in multiple types of shifting attention” (2006).
[40] Schippers & Hogenes, “Energy management of people in organizations: a review and research agenda” (2011).
[41] Tse et al., “Attention and the subjective expansion of time” (2004).
[42] Marchetti, “Observation levels and units of time: a critical analysis of the main assumption of the theory of the artificial” (2000).
[43] Levelt, “Models of word production” (1999).
[44] Hamilton, Essentials of Public Speaking (2014), p. 60.
[45] Traxler, Introduction to Psycholinguistics: Understanding Language Science (2011), chapter 10.
[46] “World’s fastest typer,” Chicago Tribune.
Table I. Average time needed by the human brain to perform work-related input, processing, and output functions.

  Activity                                                       Average Time
  Fully allocating attention to a new stimulus                   120 ms
  Consciously perceiving a single coherent experience or event   50-250 ms
  Hearing one spoken phoneme (4.4 phonemes per word)             45-50 ms
  Hearing one spoken word                                        200-220 ms
  Reading one printed word                                       240-300 ms
  Thinking one word                                              75-150 ms
  Speaking one phoneme (4.4 phonemes per word)                   21-140 ms
  Speaking one word                                              92-600 ms
  Typing one character (5.5 characters per word)                 ≤ 73 ms
  Typing one word                                                ≤ 400 ms
  Writing one word in shorthand                                  ≤ 170 ms
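The table’s millisecond figures are simple rate conversions from the words-per-minute values cited above; a sketch of the arithmetic:

```python
def ms_per_word(words_per_minute):
    """Milliseconds available for one word at a given rate."""
    return 60_000 / words_per_minute

def ms_per_phoneme(words_per_minute, phonemes_per_word=4.4):
    """Milliseconds per phoneme, assuming an average phonemes-per-word count."""
    return ms_per_word(words_per_minute) / phonemes_per_word

print(round(ms_per_word(100)))     # slowest typical speech: 600 ms per word
print(round(ms_per_word(650)))     # fastest known speaker: ~92 ms per word
print(round(ms_per_phoneme(300)))  # hearing speech at 300 wpm: ~45 ms per phoneme
```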
The Fractal Self-similarity of Human Work Cycles
As we have seen, for artificial agent managers, the time-scales relevant to their work effort range from several years down to about 0.02 ms, while for human managers they range from several years down to around 50 ms. Within this range, there are multiple relevant time-scales and activity cycles of different lengths that demonstrate an interesting degree of self-similarity: within a given year of work, a typical human manager will spend many consecutive weeks working, interrupted periodically by non-work weeks of vacation. Within a given week of work, he or she will spend spans of several consecutive hours working, followed by non-work hours when he or she is asleep or out of the office. Within a given hour of work, his or her spans of minutes spent working will be followed by non-work intervals when the manager is daydreaming or writing a personal email. The roughly self-similar nature of this temporal dynamic opens the door to understanding a human manager’s work activity as a fractal time series.

[47] “New World’s Record for Shorthand Speed” (1922).
[48] “Fastest talker,” Guinness World Records.
The fractal nature of our typical human work dynamics is not at all surprising: as Longo and Montévil note, fractal-like dynamics are “ubiquitous in biology, … in particular when we consider processes associated with physiological regulation.”[49] Lloyd notes that when an organism’s biological processes operate on multiple time-frames displaying fractal temporal coherence, it creates a scale-free system with “robust yet flexible integrated performance” in which the oscillatory dynamics with long memory allow the organism to predict and respond to long-term environmental conditions such as tidal, seasonal, and annual cycles, while the short-term cycles coordinate internal processes such as organ functioning and cellular division.[50]
Calculating the Fractal Dimension of Work Effort
Significance of the Fractal Dimension
One of the most important and meaningful attributes of a fractal time series is that it possesses a fractal dimension that one can calculate and which captures valuable information about the series’ temporal dynamics. The calculation of the fractal dimension of biological phenomena has varied practical applications. For example, analysis of the fractal dimension of EEG data can be used to quantify the level of concentration during mental tasks,[51] and fractal analysis has demonstrated that healthy hearts display greater rhythmic complexity than diseased hearts.[52]
The fractal dimension of empirically observed natural phenomena can be described by the equation D = 2 – H, where H is the Hurst exponent of the time series as graphed in two-dimensional Cartesian space. In this approach, an x-coordinate is the time at which a value was measured, and the y-coordinate is the value measured at that time.[53] The case 0 < H < ½ represents a dynamic that is variously described as antipersistent, irregular, or trend-reversing: if the value in one moment is greater than the mean, the value in the next consecutive moment is likely to be less than the mean. The case H = ½ represents a random-walk process such as Brownian motion, in which the value in the next consecutive moment is equally likely to move toward or away from the mean. The case ½ < H < 1 is described as persistent or quasi-regular: the value at the next consecutive moment in time is likely to be the same as the value in the previous moment.[54] In this case, we can say that the dynamic has long memory.

[49] Longo & Montévil, “Scaling and scale symmetries in biological systems” (2014).
[50] Lloyd, “Biological time is fractal: early events reverberate over a life time” (2008).
[51] Sourina et al., “Fractal-based brain state recognition from EEG in human computer interaction” (2013).
[52] Longo & Montévil, “Scaling and scale symmetries in biological systems” (2014).
[53] Mandelbrot, The Fractal Geometry of Nature (1983), pp. 353-54.
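The D = 2 – H relationship and the three regimes of H can be captured in a short helper; this is a sketch, and the category labels simply follow the description above:

```python
def fractal_dimension_2d(h):
    """Fractal dimension of a time series graphed in 2-D space: D = 2 - H."""
    assert 0 < h < 1, "Hurst exponent must lie in (0, 1)"
    return 2 - h

def classify(h):
    """Label the temporal dynamic implied by Hurst exponent h."""
    if h < 0.5:
        return "antipersistent (trend-reversing)"
    if h == 0.5:
        return "random walk"
    return "persistent (long memory)"

print(fractal_dimension_2d(0.25), classify(0.25))  # 1.75 antipersistent (trend-reversing)
print(fractal_dimension_2d(0.5), classify(0.5))    # 1.5 random walk
print(fractal_dimension_2d(0.75), classify(0.75))  # 1.25 persistent (long memory)
```

Note the inverse relationship: a lower Hurst exponent (a more irregular dynamic) yields a higher fractal dimension.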
Work Effort as a Time Series of Binary Values
Graphing a time-series in two-dimensional space is useful for natural phenomena such as earthquakes that occur at different times with different intensities.[55]
However, in the case of developing a temporal measure for quantifying the work effort of human and artificial managers, we suggest that a different approach is warranted. Graphing an agent’s work effort in two-dimensional space would be useful if the work effort displayed by a human or artificial agent manager at a particular instant of time were able to range across a continuous spectrum of values. However, in this case we have only a binary set of possible values: at any given instant, an agent is either focusing its attention on its work, or it is not. Marchetti draws on research from several areas of psychology to show that the human mind is incapable of dividing its attention between two different scenes, attitudes, or ‘observational levels’ at the same instant in time. (We would suggest that the same will likely be true for any artificial agent whose cognitive capacities are modeled closely on those of the human brain’s neural network, as well as for any artificial agent governed by a computer program in the form of executable code.) As we saw above, the brain’s attention mechanism is capable of alternating attention between two different thoughts or scenes with great rapidity (as in cases of so-called ‘multitasking’); however, in any given instant of time, our attention is allocated to at most one of those thoughts or scenes. This means that work effort cannot be quantified by saying, for example, that “At moment t, 70% of the agent’s attention was dedicated to its work.” Instead, one would say that “For all of the indivisible instances of attention that took place during time interval [a, b], in 70% of those instances the agent’s attention was focused on its work.”
Mandelbrot notes that if the fractal dimension of a time series graphed in two-dimensional space is represented by the equation D = 2 – H, then the zero set (or any other level set) of the graphed time series would have fractal dimension:[56]

D = 1 – H.

[54] Mandelbrot (1983), pp. 353-54; Valverde et al., “Looking for memory in the behavior of granular materials by means of the Hurst analysis” (2005).
[55] Telesca et al., “On the methods to identify clustering properties in sequences of seismic time-occurrences” (2002).
[56] Mandelbrot (1983), pp. 353-54.
We can use this equation to relate the fractal dimension and Hurst exponent for work effort when we understand work effort as graphed on a one-dimensional line segment. The length of the entire segment represents the entire time available (such as a year, week, or hour) during which an agent can potentially be performing work. Those instants of actual work form the set that is graphed on the line segment, while instants of non-work do not belong to the set. With this binary approach, we can envision the depiction of an agent’s work effort across time as a series of instances of work and non-work graphed on a line segment that resembles a generalized Cantor set in which the moments of work are those points contained in the set and moments of non-work are portions of a deleted interval. Because this is a graph of an empirically observed natural phenomenon rather than a purely mathematical object, it would have a minimum fineness and resolution: if our minimum unit of time is 10 ms and we graph a line segment representing one hour, it would comprise 3.6 × 10⁵ such units of work or non-work.
In this context, the Hurst exponent takes on a different (and perhaps even
counterintuitive) meaning. For a two-dimensional graph of a time series with
H ≈ 0, successive y-values alternate antipersistently around the mean, and the
graphed line fills up a relatively large share of the two-dimensional space. For
a one-dimensional graph of a binary time series, one might visualize the set
as though it contains a single point that is able to slide back and forth along
the x-axis to occupy many different x-values simultaneously, thus forming the
set. For a set with high persistence (H ≈ 1), the point may be locked to a single
x-value, reflecting a process with long memory. For a set with low persistence
(H ≈ 0), the point ‘forgets’ where it is and is free to move up and down the line
segment, occupying many different x-values. This conceptualization reflects
the fact that the two-dimensional graph of an antipersistent time series will
cross the horizontal line determined by the mean y-value at many different
places, whereas the graph of a persistent process might only cross it once, and
the graph of a random-walk process can intersect it either one or many times.
Formulating our Fractal Measure
Advantages of the Box-Counting Method
Different methods exist for calculating fractal dimension. A number of scholars prefer the Minkowski-Bouligand or box-counting dimension over alternatives such as the area-perimeter or power spectrum methods for estimating the fractal dimension of natural phenomena as diverse as seismic activity, electrical activity in the brain, and physical surface features at the nanometer scale.[57] Longo and Montévil argue that while it lacks some of the mathematical import found in other definitions of fractal dimension such as the Hausdorff dimension, the box-counting dimension has an advantage in that it can easily be applied to empirically observed phenomena.[58]

[57] Telesca et al. (2002); Sourina et al. (2013); Zhang et al., “Fractal structure and fractal dimension determination at nanometer scale” (1999).
In order to develop our comparative fractal measure of work effort for hu-
man and artificial agent managers, we have thus employed the box-counting
method to estimate the temporal dynamics’ fractal dimension. The box-counting
dimension D of set F can be calculated as:

\[ D = \lim_{\delta \to 0} \frac{\log N_\delta(F)}{-\log \delta}. \]

Here N_δ(F) is the smallest number of sets of diameter at most δ that cover
the set F.[59]
When using the box-counting method to estimate the fractal dimension of
natural phenomena, this can be done by calculating the average value of D
that results when one empirically determines N_δ(F) for multiple values of δ.[60]
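The procedure just described can be sketched compactly. The code below is our own minimal illustration (the function names are hypothetical): it covers a one-dimensional set of time points with boxes of width δ, counts the occupied boxes to obtain N_δ(F), and averages the resulting estimates of D over several box widths, with δ normalized by the span so that a fully occupied interval yields D = 1.

```python
import math

def occupied_boxes(points, delta):
    """N_delta(F): how many boxes of width delta contain at least one point of F."""
    return len({int(p // delta) for p in points})

def box_counting_dimension(points, span, deltas):
    """Average log N_delta(F) / log(span / delta) over several box widths."""
    estimates = [
        math.log(occupied_boxes(points, d)) / math.log(span / d)
        for d in deltas
    ]
    return sum(estimates) / len(estimates)

# A five-year span in which every week contains work fills the interval: D = 1.
d_full = box_counting_dimension(list(range(260)), span=260, deltas=[4, 2, 1])
# A single isolated burst of work has dimension 0.
d_point = box_counting_dimension([42], span=260, deltas=[4, 2, 1])
```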
Calculation and Notation of our Fractal Measure
When we applied this approach to calculate the box-counting fractal di-
mension D for the work effort of particular hypothetical human and artificial
agent managers, it yielded insights that could be useful for understanding,
comparing, and enhancing the temporal work dynamics of such agents.
To accomplish this, we considered an agent’s typical work effort as viewed
across three different time-scales or levels: 1) The set F1 includes those
weeks worked within a span S1 of five years (or 260 weeks), for which the
covering sets used for the box-counting estimation were δa = 4 weeks, δb = 2
weeks, and δc = 1 week. 2) The set F2 includes those hours worked within a
span S2 of one week (or 168 hours), for which the covering sets used for the
box-counting estimation were δa = 4 hours, δb = 2 hours, and δc = 1 hour. 3)
The set F3 includes those minutes worked within a span S3 of one hour, for
which the covering sets used for the box-counting estimation were δa = 1 mi-
nute, δb = 30 seconds, and δc = 15 seconds. Using the box-counting method,
we calculated D1, D2, and D3 for the time-scales F1, F2, and F3, respectively, and
averaged those values to produce a mean value of D = (D1 + D2 + D3)/3 for a
particular agent. We then calculated the estimated value for the Hurst exponent
for that agent’s temporal dynamic with the equation H = 1 - D.
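As a sketch of the bookkeeping this involves (our own illustration; the function names are hypothetical), the per-scale estimates are averaged into a mean D, the Hurst estimate follows as H = 1 - D, and availability is the product, across the nested time-scales, of the fraction of each span spent working.

```python
def mean_dimension(d1, d2, d3):
    """Average the per-scale box-counting estimates into a single D."""
    return (d1 + d2 + d3) / 3

def hurst_exponent(d):
    """Estimated Hurst exponent for a one-dimensional temporal record."""
    return 1.0 - d

def availability(scale_fractions):
    """Probability that a randomly chosen instant falls during work:
    the product of (time worked / span length) at each nested scale."""
    a = 1.0
    for worked, span in scale_fractions:
        a *= worked / span
    return a

# Example with the figures used later for Human Manager A:
# all 260 weeks, 90 of 168 hours, 55 of 60 minutes.
a_example = availability([(260, 260), (90, 168), (55, 60)])   # about 0.491
h_example = hurst_exponent(0.962)                             # 0.038
```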
Drawing on the data considered in previous sections for the typical temporal
performance of human professionals and artificial agents (envisioned as
hardware and software systems), we present four specific hypothetical cases
and the values of D and H calculated for each.

[58] Longo & Montévil (2014).
[59] Falconer, Fractal Geometry: Mathematical Foundations and Applications (2004), pp. 41-44.
[60] Wahl et al., Exploring Fractals on the Macintosh (1994), pp. 75-108.

398 • Posthuman Management
Applying our Measure to Particular Cases
Temporal Dynamics of Human Manager A
Consider a hypothetical Human Manager A whose work effort approaches
the maximum of which contemporary human beings are capable. This man-
ager does not take any weeks of vacation during the five years worked in his
position (S1 = 260 weeks, |F1| = 260 weeks). He concentrates exclusively on
his career, working an average of 90 hours per week (S2 = 168 hours,
|F2| = 90 hours). During the work day, he avoids all possible distractions and,
relying on an approximation of the Pomodoro Technique, spends only 5 minutes
of each ‘work hour’ not performing work-related functions (S3 = 60 minutes,
|F3| = 55 minutes). We graphed each of these situations on a line segment
that we then considered at three different temporal resolutions. Within the
graph of the time series, a moment of work is indicated with a colored vertical
slice, and a moment of non-work is indicated with an unshaded interval. A
graph of the temporal work dynamics of Human Manager A is seen in Fig. 1
below. For an agent with these characteristics, we have calculated D = 0.962,
H = 0.038, and availability (understood as the likelihood that any randomly-
selected instant of time will fall during a moment of work rather than non-
work) as A = 49.1%.
Fig. 1: Human Manager A’s periods of work and non-work.
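Human Manager A’s profile can be worked through end-to-end with the box-counting recipe given earlier in the chapter. The sketch below is our own: since the chapter does not specify exactly where within each span the work falls, we assume simple contiguous layouts (all 260 weeks worked; the first 90 hour-slots of the week; the first 55 minutes of each hour), so the resulting estimate lands near, but not exactly on, the reported D = 0.962.

```python
import math

def box_dim(points, span, deltas):
    """Box-counting estimate of D, averaged over several box widths."""
    estimates = [
        math.log(len({int(p // d) for p in points})) / math.log(span / d)
        for d in deltas
    ]
    return sum(estimates) / len(estimates)

# Scale 1: all 260 weeks of the five-year span contain work.
d1 = box_dim(list(range(260)), span=260, deltas=[4, 2, 1])
# Scale 2: 90 worked hours per 168-hour week (assumed contiguous hour-slots).
d2 = box_dim(list(range(90)), span=168, deltas=[4, 2, 1])
# Scale 3: the first 55 minutes of each hour, measured in seconds.
d3 = box_dim(list(range(55 * 60)), span=3600, deltas=[60, 30, 15])

d_mean = (d1 + d2 + d3) / 3   # near, though not exactly on, the reported 0.962
h = 1.0 - d_mean
```

The gap between this estimate and the chapter’s figure comes entirely from the assumed placement of the work periods, which the chapter leaves unspecified.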
Temporal Dynamics of Human Manager B
Hypothetical Human Manager B represents the opposite end of the spec-
trum: his time commitment approaches the lowest amount possible for some-
one who is filling a management role with an organization. We suppose that
Human Manager B spends only half of the weeks in the year working (S1 =
260 weeks, |F1| = 130 weeks). Even during those weeks when he is working,
the manager dedicates only 10 hours of effort to this particular position
(S2 = 168 hours, |F2| = 10 hours). Moreover, during each hour of ‘work,’ the
manager spends only a third of the time focused directly on work-related
tasks, with the rest of the time representing distractions or non-work-related
activities (S3 = 60 minutes, |F3| = 20 minutes). A graph of the temporal
work dynamics of Human Manager B is seen in Fig. 2 below. For an agent with
these characteristics, we have calculated D = 0.532, H = 0.468, and A = 1.0%.
Fig. 2: Human Manager B’s periods of work and non-work.
Temporal Dynamics of Artificial Agent Manager A
Next consider a hypothetical Artificial Agent Manager A in the form of a
software program running on a computer with a typical serial processor ar-
chitecture. We suppose that during a given five-year operating period, there
may be brief service outages for scheduled maintenance or updates but that
there are no extended outages (S1 = 260 weeks, |F1| = 260 weeks). Each
week, there is a scheduled maintenance window of one hour, when software
updates are applied and the system is rebooted (S2 = 168 hours, |F2| = 167
hours). The software program and hardware substrate for Artificial Agent
Manager A have no non-work-related functions and are not capable of being
‘distracted’ in the way that a human manager is; thus, during a typical hour
of work, Artificial Agent Manager A does not dedicate any minutes to
non-work-related functions (S3 = 60 minutes, |F3| = 60 minutes). A graph
of the temporal work dynamics of Artificial Agent Manager A is seen in Fig.
3 below. For an agent with these characteristics, we have calculated D = 0.999,
H = 0.001, and A = 99.4%.
Fig. 3: Artificial Agent Manager A’s periods of work and non-work.
Temporal Dynamics of Artificial Agent Manager B
Finally, consider the hypothesized future scenario of Artificial Agent Manager B,
an artificial general intelligence with a distributed neural network
architecture that is modeled on the human brain and displays human-like
motivations, emotions, and learning capacity.[61] While Artificial Agent Manager B
enjoys its job, every two years it must spend a week away from work for a
period of psychological assessment, maintenance, and relaxation, to reduce
the likelihood of professional burnout (S1 = 260 weeks, |F1| = 258 weeks).
Moreover, during each week of work, its neural network architecture requires
it to spend two hours daily in a ‘sleep’ mode in which any new external stimuli
are shut out, in order to facilitate the assimilation of the day’s experiences into
long-term memory. In order to maintain its capacity for creativity, satisfy its
intellectual curiosity, and avoid the development of cyberpsychoses, it must
also spend two hours daily exploring spheres of experience unconnected to its
work-related tasks (S2 = 168 hours, |F2| = 126 hours). Because Artificial
Agent Manager B reflects the full constellation of human-like cognitive and
social behaviors, it spends five minutes of each hour on functions other than
work, such as cyberloafing, following news stories, and communicating with
friends (S3 = 60 minutes, |F3| = 55 minutes).
A graph of the temporal work dynamics of Artificial Agent Manager B is
seen in Fig. 4 below. For an agent with these characteristics, we have calcu-
lated D = 0.945, H = 0.055, and A = 68.2%.
Fig. 4: Artificial Agent Manager B’s periods of work and non-work.
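Before turning to the comparison table, the four availability figures quoted in the case descriptions can be cross-checked directly from the span data given there. The helper below is our own illustration:

```python
# (time worked, span) for each of the three nested time-scales,
# taken from the four case descriptions above.
agents = {
    "Artificial Agent Manager A": [(260, 260), (167, 168), (60, 60)],
    "Human Manager A":            [(260, 260), (90, 168), (55, 60)],
    "Artificial Agent Manager B": [(258, 260), (126, 168), (55, 60)],
    "Human Manager B":            [(130, 260), (10, 168), (20, 60)],
}

def availability(fractions):
    """Product of the worked fraction at each nested time-scale."""
    a = 1.0
    for worked, span in fractions:
        a *= worked / span
    return a

# Availability as a percentage, rounded to one decimal place.
pct = {name: round(availability(f) * 100, 1) for name, f in agents.items()}
# Reproduces the quoted figures: 99.4, 49.1, 68.2, and 1.0 percent.
```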
Analysis and Discussion
Comparison and Analysis
Table II below gives the values of D, H, and A for all four agents, ranked
from the highest value of D to the lowest.
[61] Friedenberg (2008).
Table II. Agents’ work effort as characterized by fractal dimension, Hurst
exponent, and Availability.

Agent                          D      H      A
Artificial Agent Manager A     0.999  0.001  99.4%
Human Manager A                0.962  0.038  49.1%
Artificial Agent Manager B     0.945  0.055  68.2%
Human Manager B