Futures 132 (2021) 102780
Received 3 April 2021; Received in revised form 28 May 2021; Accepted 9 June 2021; Available online 12 June 2021. https://doi.org/10.1016/j.futures.2021.102780
0016-3287/© 2021 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY license
(http://creativecommons.org/licenses/by/4.0/).
Axiological futurism: The systematic study of the future of values
John Danaher
School of Law, NUI Galway, University Road, Galway, Ireland. E-mail address: john.danaher@nuigalway.ie
ARTICLE INFO
Keywords:
Axiology
Futurism
Articial intelligence
Moral revolution
Technology risk
Philosophical methodology
ABSTRACT
Human values seem to vary across time and space. What implications does this have for the future
of human value? Will our human and (perhaps) post-human offspring have very different values
from our own? Can we study the future of human values in an insightful and systematic way? This
article makes three contributions to the debate about the future of human values. First, it argues
that the systematic study of future values is both necessary in and of itself and an important
complement to other future-oriented inquiries. Second, it sets out a methodology and a set of
methods for undertaking this study. Third, it gives a practical illustration of what this ‘axiological
futurism' might look like by developing a model of the axiological possibility space that humans
are likely to navigate over the coming decades.
1. Introduction
Axiological change is a constant feature of human history. When we look back to the moral values of our ancestors, we cannot help
but notice that they differed from our own. Previous generations have harboured moral beliefs that would count as prejudiced and
bigoted by modern standards; and we are quite likely to harbour moral beliefs that they would find abhorrent. As we go further back in
time, the changes become even more pronounced (Appiah, 2010; Pleasants, 2018; Pinker, 2011). Axiological variation is also
something we see today when we look around the world and take note of the different cultures and societies that care about and
prioritise different values (Flanagan, 2017). What consequences does this axiological change and variation have for the future? Should
we plan for and anticipate axiological change? Can we study the future axiological possibilities of human civilisation in a systematic
and insightful way?
This article tries to answer these questions in three stages. First, it makes the case for a systematic inquiry into the future of human
values — termed ‘axiological futurism' — and argues that this inquiry is both desirable in its own right and complementary to other futurological inquiries. Second, it outlines a methodology for conducting this inquiry into the future of human values. And third, it presents a sketch of what this inquiry might look like by proposing a tentative model of the ‘axiological possibility space' that we are
likely to navigate over the coming decades.
In other words, this article explains why axiological futurism is needed; how we might go about doing it; and what it might look like
if we did. As we shall see, axiological futurism, by its nature, requires an interdisciplinary mindset and mode of inquiry. To be an
axiological futurist one must knit together insights from philosophy, psychology, biology, science and technology studies and more.
2. Making the case for axiological futurism
Broadly construed, axiological futurism is the inquiry into how human values could change in the future. Axiological futurism can
be undertaken from a normative or descriptive/predictive perspective. In other words, we can inquire into how human values should
change in the future (the normative inquiry) or we can inquire into how human values will (or are likely to) change in the future (the
descriptive/predictive inquiry).
Axiological futurism is both desirable in and of itself, and complementary to other futurological inquiries. As noted in the intro-
duction, we know that the values people subscribe to have changed across time and space. This means they are likely to change again in
the future. This is true even if you think that there is a timeless and unchanging set of values (i.e. an eternal and universal set of moral
truths) that is not susceptible to change. Why so? Because even if you accept this you would still have to acknowledge that people have
changed in both their awareness of and attitude towards those timeless and unchanging values over time. Sometimes they are
committed to the same abstract value (say, justice or friendship) but they develop different concrete conceptualisations or sub-
conceptualisations of those values over time (progressive taxation; Platonic friendship).[1] Perhaps some of these changes take place
because we are getting closer (or further away) from the eternal moral truth; perhaps some are necessitated by changes in society and
context. Either way, our values, or at least our attitudes toward and particular conceptualisations of our values, are always changing
and if we want to understand the future we have to factor this change into our accounts.
To illustrate, consider some examples of historical axiological change. One clearcut example is the changing attitude toward the
moral status of different groups of people. For a very long time, most societies adopted the view that some adult human beings (e.g.
slaves and women) were morally inferior to others (adult, property-owning men). This view may always have been contested to some
extent (Pleasants, 2018, 571; Aristotle Politics 1253b20-23), but it was the received moral wisdom for the majority and was reflected in
their daily beliefs and practices. This changed over the course of the 19th and 20th centuries, and although the old moral attitudes
linger in certain contexts and places, the shift has been quite dramatic (Appiah, 2010). Something similar is true for attitudes toward
practices like gruesome torture and wanton animal cruelty (Pinker, 2011).
There are other clearcut examples of moral variation if we look across cultures. Owen Flanagan points this out by comparing
Buddhist moral communities and Western liberal moral communities (Flanagan, 2017). In doing so, he highlights how members of
those respective communities have very different attitudes towards the value of the individual self and the emotion of righteous anger.
Buddhist communities usually deny or downplay the value of both; Western liberal communities embrace them.
Given the facts of value change, it is prudent to anticipate and plan for future changes. The current moral paradigm is unlikely to
remain constant over the long term and it would be nice if we knew how it is likely to change. This is true even if we approach axiological futurism from a normative perspective as opposed to a purely descriptive one. Normatively speaking, there is no guarantee that our current moral paradigm is the correct one and so we might like to see where future moral progress lies and try to get ahead of the curve (Williams, 2015). Indeed, this kind of future-oriented moral reasoning already features in some normative decision-making. For example, in certain constitutional law cases in the US — which oftentimes engage abstract moral values like justice, fairness and equality (Leiter, 2015) — judges have reasoned their way to particular conclusions out of a desire to be on the ‘right side of history'
(McClain, 2018). Conversely, even if you are convinced that the current moral paradigm is the correct one, you should still care about
the ways in which it might change in the future, if only because you want to protect against those changes.
Axiological futurism is also complementary to most other futurological inquiries. Most futuristic studies are value driven, even if
only implicitly. People want to know what the future holds because they value certain things and they want to know how those things
will fare over time. If we had no value-attachments, we probably wouldn't care so much about the future (if nothing matters then it doesn't matter in the future either). Value attachments are common in the debates about existential risks (Bostrom, 2013; Torres, 2017). Take, for example, the inquiry into climate change. Much of the debate is driven by an attachment to a certain set of values. Some people worry about climate change because it threatens the capitalistic conveniences and consumer lifestyles they currently value. Others are more concerned about central or basic values, such as the value of ongoing human life, and worry that this is put at risk by climate change. Something similar is true in the case of AI-risk. People who worry about the impact of superintelligent AI on human civilisation are deeply concerned about the preservation of human flourishing. They worry that the space of possible artificial minds is vast, and that the sub-set of those minds that will be ‘friendly' to human flourishing is quite narrow (Armstrong, 2014;
Bostrom, 2014; Yudkowsky, 2011).
These risk-oriented futurological inquiries are either implicitly or explicitly value driven: they are motivated by an attachment to
certain human values and by the worry that socio-technical changes will threaten those values. It is interesting then that these
futurological inquiries often assume a relatively fixed or static conception of what the future of human values might be (indeed oftentimes a quite anthropocentric and culturally specific set of values). In other words, they assume that there will be great technological change in the future, but not necessarily great value change. Or, even if they do anticipate some value change, it is relatively minimal or narrow in nature. There is, consequently, a danger that these inquiries suffer from an impoverished axiological imagination: they don't see the full range of possibly valuable futures. There are some exceptions to this (notably Bostrom, 2005; and, in part, Baum et al., 2019) but even those exceptions would benefit from a more thorough and systematic inquiry into the space of possibly valuable futures (Van De Poel, 2018). Such an inquiry might encourage greater optimism about the future by showing that the space of valuable futures is not quite so narrow or fragile as some suppose, or alternatively encourage realistic pessimism by
showing how narrow and fragile it is.
[1] I am indebted to an anonymous reviewer for drawing this distinction to my attention.
As an addendum to this last point, it is also worth noting that the future is sometimes held to be a source of value. This is reflected in several ethical arguments. For example, Scheffler's Death and Afterlife (2012) argues that ensuring that there is a human future (which he calls a ‘collective afterlife') is a valuable thing.[2] If there was no human future, then our lives would be axiologically impoverished.
Arguments in population ethics and environmental ethics play upon this assumption too, as do arguments that insist that having ‘an open future' or a ‘future like ours' is valuable at an individual level (Marquis, 1989; Millum, 2014). A normative approach to axio-
logical futurism can draw support from these kinds of arguments. If having a future is a valuable thing then knowing something about
the values that get sustained in the future matters too. Indeed, there is an important complementarity to the two sources of value. For
example, people that insist that having an open future or a human future is valuable would probably not insist that it is valuable at all
costs. An open future in which all the options consist of immense suffering and harm would not be valuable just because it entails
having a future. Having a future may, instead, function like an ‘axiological catalyst': the things we value may matter more because they are sustained into the future and matter less to the extent that they are not (Danaher, 2018). This is not a critical point, and not all axiological futurists need to agree with it, but if they do it further underscores the importance of axiological futurism as a field of
inquiry.
One could object to axiological futurism on the grounds that it is impossible to say anything meaningful or predictive about the
axiological future. Any attempt to do so will be hopelessly speculative and will more than likely get things wrong. This is, of course, a
criticism that could be thrown at all futurological inquiries. Futurists are notorious for getting the future wrong and looking somewhat
foolish as a result (Smil, 2019; Van der Duin, 2016, chapter 1). This doesn't mean the criticism is unwarranted, it just means that it is
not unique to axiological futurism. The best response to this criticism is to argue that the point of axiological futurism is not to precisely
predict the future of value. The point of axiological futurism is to map out the broad space of possible axiological trajectories that we
could take in the future; to anticipate and imagine the different scenarios; and to help us to plan for those possibilities. The goal is not to
give a precise axiological forecast; it is to engage in axiological scenario planning (cf. Baum et al., 2019 who do something similar from
a non-axiological perspective). I hope to show, rather than tell, how this might be done in the next two sections of this article, drawing
lessons from past moral revolutions for guidance (Appiah, 2010; Pleasants, 2018).
One could also object to axiological futurism on the grounds that it is, in some sense, conceptually impossible. We are all trapped
inside a particular moral paradigm.[3] These paradigms shape how we perceive and understand moral value. We cannot get outside these paradigms and imagine other axiological possibilities. There is much discussion in psychology and philosophy of the problem of ‘imaginative resistance' (Gendler & Liao, 2016; Tuna, 2020). Roughly, the idea is that people have trouble imagining alternative
realities in which the rules and norms that apply in their present reality no longer do. Imaginative resistance appears to be a particular
problem when it comes to imagining alternative moral realities. For instance, people struggle with artistic works that require them to
endorse a moral attitude or belief that is contrary to their own. That said, the nature of this phenomenon is disputed, as are its precise
contours (Tuna, 2020). It may make axiological futurism more challenging, but it does not make it impossible, as the great moral
diversity of human history and art suggests. Furthermore, this criticism is more of a concern for the normative version of axiological
futurism than the descriptive version. It may be true that our moral perceptions and emotions are so tied to a particular paradigm that
we cannot feel any moral attachment to a different one, but we can at least try to describe and understand what it might be like to
inhabit a different paradigm (Pleasants, 2018). Historians and anthropologists do this all the time — they become tourists to different worldviews, both historical and cross-cultural. As Thomas Kuhn once argued, a contemporary scientific historian might not believe in geocentrism or the existence of phlogiston, but they can at least try to figure out what it might have been like to believe in those
theories during the relevant historical era (Kuhn, 1962). The axiological futurist can do the same: they can become tourists to new
axiological paradigms. It may even be possible for axiological futurists to genuinely feel a moral attachment to new paradigms by
taking their attachment to current values to their logical extremes, e.g. by imagining what it might be like to care about all sentient life
equally, or to care for robots in the same way that they care for human beings.
One could also take issue with axiological futurism on the grounds that it is nothing new. People have been doing it for decades,
albeit without the fancy title. For example, people who argue that we are transitioning into a ‘post-privacy', ‘transparent' society as a result of technological change are doing axiological futurism (Brin, 1998; Peppet, 2015). Similarly, someone like Yuval Noah Harari, in his ‘future history' books, is doing axiological futurism when he imagines a future ‘religion' of ‘dataism' in which data is valued above
all else and individualism and humanism are forgotten (Harari, 2016). More recently, Ibo Van De Poel (2018) and Kudina and Verbeek
(2019) have called for the design of technologies to be sensitive to the possibility of value change. I do not deny this nor claim that the
project envisaged in this article is wholly original. Of course, people have been imagining the future of value for quite some time. What
is distinctive about axiological futurism is that it calls for a systematic and explicit examination of the future of value. Rather than
focusing on one specific way in which technology might change our values, or on one pet theory of future value, it proposes a full-scale,
systematic exploration of future axiological possibility spaces.
Finally, one could object to axiological futurism on the grounds that it will be self-fulfilling or self-defeating. This might be a
particular problem if axiological futurism is pursued from a normative perspective. Imagined future axiologies can be enticing or
intimidating. For example, those who like the idea of a post-privacy society can use the idea to argue for changes to current social and
legal norms; those who hate it can lobby against any such changes. The result is that the imagined axiology either comes into being
because people want it to or never gets off the ground because people don't. But this is, of course, a problem for all futurological inquiries. Since human activity is one of the things that will determine what kind of future we have, there is always the danger that an imagined future compels action in a particular direction (Popper, 1957). This seems unavoidable to some extent and yet still not a reason to avoid inquiry into the future. Indeed, one potential benefit of axiological futurism is that it could encourage a less knee-jerk and emotional response to the future. By expanding our axiological horizons we might see less reason to jump to conclusions about how desirable or undesirable the future might be.
[2] Scheffler supports this argument on the basis that we want the things that we currently value to continue into the future. Sean Shiffrin, in commenting on Scheffler, suggests that we also want the general practice of valuing things to continue into the future (Shiffrin in Scheffler, 2012, 145ff). Shiffrin's interpretation of the argument may lend even more support to the project of axiological futurism. I am indebted to two anonymous reviewers for drawing this point to my attention.
[3] As MM pointed out to me, this could also be alleged to be a problem with any moral philosophical inquiry that tries to get outside a current moral paradigm. But since such moral inquiry takes place all the time, and hence is presumably not impossible, we might have a reductio of the critique.
3. The methodology of axiological futurism
How could we actually go about doing axiological futurism? What's the methodology? In this section I will sketch an answer to that question. I emphasise, at the outset, that this is just a sketch — something that other people can refine and improve upon.
It helps if we start with a more precise definition of axiological futurism. I defined it informally in the preceding section. A more formal definition is now required:
3.1. Axiological futurism
The systematic and explicit inquiry into the axiological possibility space for future human (and post-human) civilisations, with a
specic focus on how civilisation might shift or change location within that axiological possibility space.
We'll unpack the elements of this definition as we go along. From a methodological perspective, two crucial things emerge from it: (a) that axiological futurists must provide some theory of the ‘axiological possibility space' and (b) that axiological futurists must
identify the methods that can be used to explore that possibility space and the potential trajectories within it.
(a) What is axiological possibility space?
Let's start with the theoretical aspect of the methodology: the idea of axiological possibility space. Our goal, as axiological futurists, is to explore this space, to figure out the ways in which it might vary and change in the future, and to identify some possible trajectories
that human civilisation might take through this possibility space. To fully understand this idea we need to get into some of the basics of
moral theory and axiology. This will help us to determine what the boundaries of the possibility space might be.
Moral theories are usually made up of two main elements: (i) a theory of what is good/bad (i.e. an axiology) and (ii) a theory of what
is right/wrong (i.e. a deontology). Moral theories are then usually directed at two kinds of entities: (iii) moral patients/subjects (i.e. those who can be benefitted/harmed by what is good/bad) and (iv) moral agents (i.e. those who can perform actions that are right/
wrong). Many times these different elements coincide in a single moral theory. For example, most adult human beings are viewed as
moral patients and moral agents and hence are deemed eligible subjects for both an axiology (i.e. there are things that are good for
them) and a deontology (i.e. there are rules about what they ought to do). Nevertheless, sometimes the elements can pull apart. For
example, most young children are thought to count as moral patients, but not moral agents. They can be benetted and harmed, but
they cannot perform morally right or wrong actions. It may also be possible, under certain moral theories, for things to be good or bad
simpliciter (in and of themselves) without them being good or bad for some moral subject. Certain theories of environmental ethics, for
example, claim that features of the natural world are intrinsically valuable without being valuable for someone. That said, for the most
part, an axiology goes hand in hand with a theory about who has moral patiency and a deontology goes hand in hand with a theory
about who counts as a moral agent.
If our goal was to explore the entirety of moral possibility space, then we would have to concern ourselves with all four of these
elements (axiology; deontology; patients; and agents). But I am suggesting that we concern ourselves primarily with the axiological
elements. Why is this? Because I make an argument — which can be challenged — that axiology ultimately drives and determines deontology. In other words, I maintain that you need to know what is good/bad (and who can be harmed/benefitted) before you can figure out what to do about it. If you know that animals can be harmed and benefitted, then you know that our actions toward them have a moral dimension. But if you don't know that, or if you don't accept that, you won't think of your actions toward them as having a moral dimension, at least not directly (they may have a moral dimension because of their consequences for other moral patients such as your fellow humans or yourself). Your axiological beliefs about animals ultimately shape your deontological beliefs. This doesn't mean
they shape them in a straightforward or simple way, but they do constrain how you can think about the morality of the actions you
perform towards animals. This is why I think we should focus primarily on axiology. In doing this, we may well generate some
conclusions or hypotheses about future deontologies — it would be surprising if we did not, given the relationship between axiology and deontology — but this is not the primary object of inquiry. This point may be controversial among some moral philosophers. Following the Kantian tradition, they may argue that maxims of the will dictate our deontology, which in turn shapes the content of our axiological beliefs (particularly the value of autonomy). This is not the place to resolve this moral chicken-and-egg debate (which comes first: deontology or axiology?). It would take more than one article to do that. I simply offer the preceding argument as grounds for thinking that axiological futurism is the more important field of inquiry. This does not, however, preclude the possibility of
pursuing other forms of moral futurism.
An ‘axiology' will consist of four main things: (i) some list or specification of what is good/bad; (ii) some identification or specification of who counts as an object of moral concern (i.e. who the moral patients are); (iii) some specification of the relationships between the various elements within the axiology (what's most important/least important? what is intrinsically valuable and what is instrumentally valuable?); and (iv) some specification of the appropriate attitude or approach we should take toward those values (maximisation; protection and so on).[4] Axiological possibility space, then, is the full set of possible axiologies, i.e. all the different possible combinations of goods, subjects, relationships and attitudes. Presumably, axiological possibility space is vast — much larger than anyone can really imagine. But equally, many of the ‘possible' axiologies within this space are not that plausible or interesting: e.g. a world in which the subjective pleasure we experience while scratching our knees is the only recognised good may be possible (in some thin sense of the word ‘possible') but is not very plausible and should not concern us greatly.
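To make this combinatorial picture concrete, the following sketch (in Python) represents an axiology as one combination of the four elements just listed and enumerates a deliberately tiny toy space. The class name and the particular lists of goods, subjects, rankings and attitudes are illustrative assumptions introduced for this example, not categories defended in the article.

```python
from dataclasses import dataclass
from itertools import combinations, product

# The four elements of an axiology described above: goods/bads, moral
# subjects, a ranking relation, and an attitude toward value. All concrete
# entries below are illustrative toy values, not the author's.
GOODS = ["pleasure", "knowledge", "friendship", "health"]
SUBJECTS = ["humans", "sentient animals", "artificial persons"]
RANKINGS = ["egalitarian", "hierarchical", "cyclical"]
ATTITUDES = ["maximise", "honour"]

@dataclass(frozen=True)
class Axiology:
    goods: frozenset        # which goods are recognised
    subjects: frozenset     # who counts as a moral patient
    ranking: str            # how goods and subjects are ordered
    attitude: str           # how the recognised goods should be treated

def toy_possibility_space():
    """Enumerate every combination of the four elements (non-empty sets of
    goods and subjects, one ranking, one attitude)."""
    good_sets = [frozenset(c) for r in range(1, len(GOODS) + 1)
                 for c in combinations(GOODS, r)]
    subject_sets = [frozenset(c) for r in range(1, len(SUBJECTS) + 1)
                    for c in combinations(SUBJECTS, r)]
    return [Axiology(g, s, rk, at)
            for g, s, rk, at in product(good_sets, subject_sets,
                                        RANKINGS, ATTITUDES)]

if __name__ == "__main__":
    space = toy_possibility_space()
    # Even four goods, three subjects, three rankings and two attitudes
    # yield 15 * 7 * 3 * 2 = 630 distinct axiologies.
    print(f"Toy axiological possibility space contains {len(space)} points")
```

The point is only that the space grows multiplicatively: even four candidate goods, three candidate subjects, three rankings and two attitudes yield several hundred distinct axiologies, and every additional element multiplies the count again.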
Still, the vastness of axiological possibility space poses a challenge for the axiological futurist. We need some constraints on the
boundaries of axiological possibility space to make the project feasible. Fortunately, we can constrain the axiological possibility space
to some degree by taking advantage of the work that has already been done in defining axiologies, and by considering some of the ongoing debates within axiological theory. Doing so, we can quickly gain some sense of the kinds of things that could be included in any possible list of goods/bads. They would include (as ‘goods') things like: subjective pleasure, desire satisfaction, knowledge, friendship, beauty, education, health, money, family, food and so on. We also know the kinds of entities that could count as moral subjects. They would include: humans, cognitively complex mammals, all ‘persons' (human, animal or artificial), all sentient beings, all living entities, and possibly some non-living entities of great beauty or ecological significance. Finally, we also know the different
possible relationships that could exist between these goods/bads and the moral subjects: all could be treated equally, there could be a
clear hierarchy of goods and moral subjects, there could be a cyclical or rotating ranking of goods and moral subjects, some goods could
be valued intrinsically and some instrumentally, or there could be multiple different combinations of these relationships. So we know,
roughly, the broad constraints on axiological possibility space. It is still vast, and it will be a challenge to explore it, but this is what makes
axiological futurism an important and intellectually fascinating endeavour.
Of course, it is not enough to identify the boundaries of axiological possibility space. We also have to think about how changes can
come about within that possibility space. This is key to the ‘futuristic' aspect of axiological futurism. Axiological futurism is not
conceived as a purely abstract, intellectual exercise in which we map out all the plausible, possible axiologies that could be taken on by
human (and post-human) civilisations. We also want to know something about the mechanics of axiological change. How do changes
come about? Are some changes inevitable or irreversible? How are things likely to change in the future, particularly in response to
technological change? This doesn't require falling into the trap of precise prediction; but it does mean thinking carefully and sys-
tematically about how axiologies can vary over time and space.
This might seem like a daunting task, but we know that the mechanics of axiological change are relatively simple. There are three
things that can happen to change an axiology: (i) there can be some expansion or contraction of the circle of moral concern (i.e. the set
of beings who count as moral subjects); (ii) there can be some addition to or subtraction from the set of goods/bads; and (iii) there can
be some change in how we prioritise or rank goods/bads and/or moral subjects (cf Van De Poel, 2018). We see clear evidence for all
three kinds of change in human history. The end of slavery and the enfranchisement of women can be interpreted as either an
expansion of the circle of moral concern or a change in how we rank moral subjects (towards greater equality). Contrariwise, the rise of
fascism and nationalism can be interpreted as a contraction of the moral circle or a demotion in the moral ranking of certain subjects.
Likewise, the loosening grip of religion over the moral lives of industrial society has often brought with it changes to the list of
goods/bads (e.g. premarital sex or uncontracepted sex is now no longer seen as ‘bad'). Examples could be multiplied.
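These three mechanisms can be expressed, in a rough and ready way, as operations on a simple representation of an axiology. The sketch below is a minimal illustration under assumed, hypothetical content (the dictionary keys, the example values and the stylised ‘abolition and enfranchisement' trajectory are mine, not the article's); it shows how a sequence of the three operations moves a society from one point in axiological possibility space to another.

```python
# A minimal, illustrative representation of an axiology and the three kinds
# of change described above: (i) adding/removing values, (ii) expanding or
# contracting the circle of moral concern, (iii) re-ranking values/subjects.
# The example content is hypothetical, not drawn from the article.

axiology = {
    "values": {"order", "honour", "prosperity"},
    "moral_circle": {"property-owning men"},
    "ranking": {"property-owning men": 1},  # 1 = highest standing
}

def add_value(ax, value):
    ax["values"].add(value)

def expand_circle(ax, subject, rank):
    ax["moral_circle"].add(subject)
    ax["ranking"][subject] = rank

def rerank(ax, new_ranking):
    assert set(new_ranking) == ax["moral_circle"]
    ax["ranking"] = dict(new_ranking)

# A stylised trajectory: expansion of the circle, a new value, then a
# re-ranking towards equal standing (ties share a rank).
expand_circle(axiology, "women", rank=2)
expand_circle(axiology, "formerly enslaved people", rank=2)
add_value(axiology, "equality")
rerank(axiology, {"property-owning men": 1, "women": 1,
                  "formerly enslaved people": 1})
print(axiology)
```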
There is considerable debate in moral philosophy as to whether the changes we see over history are, broadly, progressive, or
whether certain kinds of axiological change can be reliably identified as ‘progressive' (Moody-Adams, 1999; Stokes, 2017). One
apparent lesson from history is that expansions in the circle of moral concern are usually considered progressive, and are viewed
positively in the light of history, but there is no guarantee that this trend will continue. For example, some people emphatically reject
the idea that expanding the circle of moral concern to include artificial entities would be progressive (Bryson, 2018), while others are more open to the idea (Gellers, 2020; Gunkel, 2018). There is no need to enter into these debates if axiological futurism is pursued from a descriptive/predictive stance since the goal is not to get axiological changes right but rather to understand how and why they happen. There may, however, be a need to get into these debates if axiological futurism is pursued from a normative perspective. In that case, we want to be able to say whether or not the direction of axiological change is positive or negative.
This is to talk about what must happen to change an axiology. What actually drives those changes? Ultimately, all moral change is
cashed out at the individual and institutional levels: people change in their axiological beliefs, practices and attitudes; and institutions
espouse and promote those changes. But how do individuals and institutions change? Broadly speaking, there are two main drivers of
change: intellectual drivers and material drivers. These correspond to the drivers of change that are widely discussed in other so-
ciological and ideological debates (e.g. Marxism vs Hegelianism).
Intellectual drivers of change arise from the application of fresh ideas, theories and reasons to an axiology. Sometimes they arise
from within an axiology. Applied moral philosophy is of this kind. Applied ethicists spend much of their time identifying problem cases
in moral theory and explaining why moral beliefs, attitudes and practices must change in response to these cases (Campbell, 2014). But
there are also non-moral intellectual drivers of change. For example, non-moral and non-rational methods of persuasion or
example-setting are sometimes key to moral reform (Appiah, 2010; Moody-Adams, 1999; Pleasants, 2018). People may also change
because an attractive person espouses or exemplifies change; or because they are made to fear staying the same; or for some other non-moral reason (cf. Fernández-Armesto, 2019 on the influence of ideas on values over time).
[4] This last element may bridge the threshold into the deontological branch of morality. Pettit (1993), for instance, has argued that consequentialist and deontological ethics are both action oriented (i.e. about what is right and what we ought to do) but that they vary in terms of the attitude we should take toward things of value. Consequentialists think we should promote (maximise?) that which is valuable; deontologists think we should honour it. This may be too simplistic a gloss on the distinction between these schools of thought, but it captures something of their essence.
Material drivers of change are, obviously, different. They are changes in the material conditions of existence which bubble up into
changes in axiological beliefs and practices. New technologies, for example, often make new actions possible (Currier, 2015). This can
have a signicant impact on our moral beliefs and attitudes. As Verbeek argues, through his theory of techno-moral mediation,
technologies are often infused with the moral attitudes and biases of their creators and through their form and function can reveal new
moral possibilities to their users (Verbeek, 2011, 2013). They can do this by opening up or closing off possible actions (pragmatic
mediation) or by changing the metaphors or concepts we use to understand the world around us (hermeneutical mediation). This can
change how we prioritise and rank goods/bads and moral subjects. For example, it is probably no accident that slavery was legally
abolished after the industrial revolution got going: advances in mechanisation obviated some of the need for slave labour that became
important after the shift to agriculture. Ian Morris has developed an extensive historical theory of why this happened, focusing in
particular on how changes in technologies of energy capture changed values (Morris, 2015). On a smaller scale, the technology ethicist
Shannon Vallor has argued that certain technologies bring with them their own set of ‘values-in-use', which can have a dramatic impact on our overall axiology (Vallor, 2016). For example, she argues that advances in the global reach of technology, particularly communications technology, mean that our circle of moral concern must now be global and not local (Vallor, 2016, chapter 2). In the
next section I will outline another approach to understanding how technology might drive axiological change.
Nevertheless, we should be sceptical as to whether axiological change is ever completely intellectual or material. It is more likely
that there is a complex feedback loop between both drivers of change. Philosophers since Hume have long held that what we value (or
desire) is the ultimate cause of behaviour. Reason, alone, cannot motivate, nor can material reality. What we desire affects how we
interpret and act in the real world. That said, how we act in the real world may, in turn, affect how we prioritise and understand our
values, which will have a further impact on our behaviour. In other words, intellectual factors might drive the creation of new
technologies, which in turn affect how we behave and what we perceive as valuable; or changes in technology might inspire our
imaginations to consider new axiological possibilities. We do not need to be doctrinaire materialists or idealists to be axiological
futurists. We can be a bit of both.
In sum, the methodological goal of axiological futurism is to inquire into the axiological possibility space for future human and
post-human civilisations. The job of the axiological futurist is to sketch different possible axiologies, and anticipate the future tra-
jectories we might take through the axiological possibility space (Table 1).

Table 1
Understanding Axiological Possibility Space.

Elements of an Axiology:
(i) Set of values (goods/bads), i.e. what do people care about and promote
(ii) Moral subjects, i.e. who or what is worthy of moral consideration
(iii) Relations between values and subjects, i.e. who or what is most important? Do the things and subjects that are valued matter intrinsically or instrumentally?

Axiological Change:
(i) Adding to or subtracting from the list of values
(ii) Expansion or contraction of the circle of moral concern (i.e. the set of moral patients/subjects)
(iii) Reprioritisations or re-rankings of the values and subjects

Drivers of Change:
(i) Intellectual drivers of change, i.e. changes to how people think about their axiologies: inconsistencies and contradictions within the axiology; rational reasoning, teasing out the implications or consequences of axiological belief; non-moral, non-rational persuasion
(ii) Material drivers of change, i.e. changes to the material circumstances of life: axiologies that are internal to particular technologies (e.g. Shannon Vallor, Technology and the Virtues); axiologies that are driven by or necessitated by external technological change (e.g. Ian Morris, Foragers, Farmers and Fossil Fuels)
(b) What methods can we employ to explore the axiological space?
Now that we have a clearer sense of the task of axiological futurism, we can consider the methods that might assist in performing
this task. The methods are going to be a grab-bag. Axiological futurism is an inherently speculative and imaginative exercise. We
cannot experimentally control, manage and predict the future. Karl Popper's classic arguments against predictive social science apply
well to axiological futurism (Popper, 1957): human history is, ultimately, a single unique event, you cannot easily account for the
human factor in the evolution of societies (particularly since humans discover, react and respond to predictions made about their
futures), and while you may be able to eliminate some possible futures from consideration you can never really narrow it down to one,
predictable, future trajectory for human civilisation. Consequently, axiological futurism cannot be a ‘science' in the strict sense. It is an
exercise in informed speculation. Anything that helps to inform that speculation is a viable method of doing axiological futurism.
What follows are some suggested methods of inquiry. Many of these methods are already being employed by researchers in phi-
losophy, psychology and social sciences. They are also common in forecasting studies (Van der Duin, 2016). They are just not being
employed specically in the service of axiological futurism. What I suggest here is that they can be repurposed and reconceived for that
end. This is a tentative list. The hope is that it will be added to in the future.[5]
[5] For a complementary list of methods, focused specifically on AI futurism, see Shahar (2019).
3.2. Logical space methods
The rst task of the axiological futurist is to map out the contours of axiological possibility space. One obvious way to do this is to
map out the logical space of variation for a given value or set of values. The resulting logical space will help us to identify the different
ways in which a value might be specied and how it might relate to other values. This is something that is already done by moral
philosophers with respect to individual values and pairs of values (e.g. List & Valentini, 2016; Roemer, 1998). For example, a lot of
work has been done on the logical contours of values such as ‘equality' and ‘freedom'. Philosophers have identified dimensions or parameters along which different conceptions of these values can vary. A theory of equality, for example, might vary along two dimensions: equality of opportunity and equality of outcome. Given these two parameters, a researcher can construct a simple 2 × 2
logical space for the value of equality, classifying different possible axiologies depending on whether they score high or low on those
two dimensions. More complex variations are also possible.
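As a concrete rendering of that 2 × 2 construction, the sketch below builds the logical space for equality from the two dimensions just mentioned. The quadrant labels are my own rough glosses, added only to show how different conceptions of the value can be classified within the space; they are not taken from the article or the cited literature.

```python
from itertools import product

# Two dimensions of variance for the value of equality, as in the example
# above: equality of opportunity and equality of outcome, each scored
# coarsely as "high" or "low".
DIMENSIONS = {
    "equality_of_opportunity": ["high", "low"],
    "equality_of_outcome": ["high", "low"],
}

def logical_space(dimensions):
    """Return every cell in the logical space as a dict of scores."""
    names = list(dimensions)
    return [dict(zip(names, combo))
            for combo in product(*(dimensions[n] for n in names))]

# Rough, illustrative labels for the four quadrants (my glosses only).
LABELS = {
    ("high", "high"): "strongly egalitarian axiologies",
    ("high", "low"): "meritocratic / fair-opportunity axiologies",
    ("low", "high"): "paternalistic or outcome-levelling axiologies",
    ("low", "low"): "inegalitarian axiologies",
}

for cell in logical_space(DIMENSIONS):
    key = (cell["equality_of_opportunity"], cell["equality_of_outcome"])
    print(cell, "->", LABELS[key])
```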
Constructing logical spaces is usually just a matter of carefully reading the theoretical literature and spotting the patterns and
variations among the different theories associated with different values. List and Valentini (2016) adopt this approach when trying to
understand the value of political freedom. They note that theories of political freedom tend to be concerned with interferences with
individual behaviour, but then vary depending on the kinds of interferences with which they are concerned. Some theories are
concerned only with actual interferences with individual behaviour (‘freedom as non-interference' theories); some theories are
concerned with possible interferences with individual behaviour (‘freedom as non-domination' theories). Similarly, some theories are only concerned with immoral (unjustified) interferences, whereas some theories are concerned with all possible interferences, be they moral or immoral. This suggests to List and Valentini that theories of freedom vary along modal and moral dimensions and they then use this to construct a 2 × 2 logical space of freedom.
Both of these examples involve logical spaces for individual values. It is also possible to use this method to construct logical spaces
to represent the different ways of valuing moral subjects and, crucially, for mapping the possible relationships between different
values. For example, you could imagine a simple axiology in which there are three main values: equality, freedom and well-being. Each
of these values represents a dimension of variance for a possible society. We can then dene a three-dimensional axiological space
within which possible societies can be classified and organised. Some societies may value all three highly and try to maximise all three;
some will value well-being over freedom and freedom over equality (and so on). It may also be the case that certain axiologies that
seem to be possible within this space are not in fact possible. For example, it may not be possible to maximise freedom and equality (or
equality and well-being) at the same time. Some tradeoffs and compromises may be (logically/physically/technologically) necessary
(e.g. Kleinberg, Mullainathan, & Raghavan, 2016 on the impossibility of reconciling different conceptions of fairness). Historical,
cross-cultural and psychological inquiries could be an important guide in this respect as they will give us a sense of what has been and
is currently possible for human societies when it comes to different combinations of values. For example, moral foundations theory,
which is a psychological theory suggesting that there are five (maybe six) basic dimensions of value in human moral psychology, might set an important limit on what is possible when it comes to human axiology (Flanagan, 2017; Graham et al., 2013; Haidt, 2012). Similarly, the theory of ‘morality as cooperation', as developed by Oliver Scott Curry and his colleagues, suggests that there are seven
basic cooperative games that humans play and that all moral systems (i) propose different solutions to those games and/or (ii) combine
different solutions to different games (Curry, 2016; Curry, Alfano, Brandt, & Pelican, 2020). Whatever the case may be, thinking about
the relationships between values in terms of logical spaces allows for a more systematic and thoughtful inquiry into axiological
possibility space.
There are, however, limits to what individual humans can do when it comes to constructing and exploring logical spaces. Humans
can handle two to three dimensions of variance with relative ease. Beyond that, it gets much trickier to intuitively conceive of a logical
space of possibility. Formal and computer-assisted methods of mapping logical spaces may consequently be necessary to make the vast
space of axiological possibility more tractable and manageable.
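A computer-assisted version of the same exercise scales past the two or three dimensions that intuition can handle. The sketch below enumerates a multi-dimensional logical space and prunes combinations that violate a posited tradeoff (here, that freedom and equality cannot both be maximised). The dimensions, the coarse scoring scheme and the single constraint are all illustrative assumptions rather than claims from the article.

```python
from itertools import product

# Illustrative dimensions of variance for a possible society's axiology.
# Scores are coarse ("low", "medium", "high"); both the dimensions and the
# constraint below are assumptions made for the sake of the example.
DIMENSIONS = {
    "equality": ["low", "medium", "high"],
    "freedom": ["low", "medium", "high"],
    "well-being": ["low", "medium", "high"],
    "privacy": ["low", "medium", "high"],
}

def incompatible(point):
    """Posited tradeoff: freedom and equality cannot both be maximised
    (cf. the impossibility results mentioned in the text)."""
    return point["freedom"] == "high" and point["equality"] == "high"

names = list(DIMENSIONS)
space = [dict(zip(names, combo))
         for combo in product(*(DIMENSIONS[n] for n in names))]
feasible = [p for p in space if not incompatible(p)]

print(f"{len(space)} logically possible combinations")      # 3**4 = 81
print(f"{len(feasible)} remain after pruning the tradeoff")  # 81 - 9 = 72
```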
3.3. Causal relationship methods
Mapping the logical space is a first step. Ultimately, what we want to know are the causal relationships between potential drivers of
change in axiology and actual changes in axiology. Figuring out those causal relationships is crucial if we are to make the space of
possible future axiologies comprehensible. Working out these causal relationships will be tricky. Again, we cannot run civilisation-
wide experiments on possible future axiologies, particularly if axiologies are partly determined by forms of knowledge and technol-
ogy that are yet to be discovered and invented. To make headway on this, we have to rely on historical studies of axiological change,
cross-cultural studies of axiological variance and psychological and small-scale experimental studies of change and variance. Each kind
of study gives us a different insight into the possible causal relationships at play.
Historical studies, particularly if they are grand in scope and scale, are helpful because they give us a sense of how axiologies have
changed in response to (and in conjunction with) other social and technological changes. Lecky's The History of European Morals (1955) is a classic example of this style of inquiry, being one of the first studies to consider how material factors drove changes in European axiologies. But intellectual studies of axiological change should not be neglected. They show how evolving conceptions and ideologies can drive changes in axiological belief systems. Schneewind's study on The Invention of Autonomy (1998) is a good example of this style of inquiry. It provides a detailed map of the intellectual debates that gave rise to the modern liberal axiology, with its focus on the autonomous individual as the ultimate locus of value. Similarly, Kwame Anthony Appiah's study of three historical moral
revolutions makes the case for thinking that changing conceptions of honor played a key role in moral change (Appiah, 2010). Deeper
historical studies are useful too. Michael Tomasello's examination of the ‘natural history' of morality, for example, gives a sense of the evolutionary steps that had to be taken for human moral systems to arise (Tomasello, 2016). Likewise, Ian Morris's aforementioned
study of how changes in the technology of energy capture drove changes in axiologies of violence, equality and fairness gives a sense of
the major socio-technical forces that might be at play in axiological change (Morris, 2015). This is just a small sample of the historical
inquiries that can assist axiological futurism. Examples could be multiplied and all are somewhat useful in helping us to tease out the
potential causal mechanisms behind axiological change and thereby extrapolate from the past to the future.
Cross-cultural studies of axiological variance are helpful because they give us a chance to learn from ‘natural experiments' in
axiological possibility space. Different axiological communities arise in different geographical locations, and in different socio-
technical contexts. Comparing and contrasting the axiological variance across communities can give us a sense of both (a) the
causal factors that might be responsible for this variance and (b) how broad the axiological possibility space really is (and thereby help
us to overcome the parochialism and short-sightedness that often comes with being locked in one axiological worldview). Flanagan's
book The Geography of Morals (2017) is a good manifesto for this kind of inquiry, demonstrating how anthropological and psycho-
logical research can support this cross cultural analysis, and providing some detailed normative evaluations of the axiological vari-
ations between Western and Buddhist societies. There are, of course, many other ethnographic and anthropological studies that can
assist in studying axiological variance. Cross-cultural comparison can be particularly fruitful from a futuristic point of view if some
communities are further along in their socio-technical development than others. As William Gibson once famously observed, the future
is already here, it is just unevenly distributed. Axiological futurists can take advantage of that uneven distribution to further their aims.
Indeed, there are some examples of this kind of inquiry already taking place. For example, Robertson's Robo Sapiens Japanicus (2017)
gives insights into the axiological beliefs and practices of Japanese society with respect to robots. This is interesting because Japan is a
society in which robots are generally more accepted and more widely used than in most Western societies. Similarly, Virginia
Eubanks's Automating Inequality (2017) takes explicit inspiration from Gibson and argues that poorer communities are subject to much greater algorithmic surveillance and governance than wealthier communities, and so gives us some insight into how axiological
beliefs and practices might change if and when algorithmic governance technologies become more widely distributed.
Finally, psychological and other experimental studies are helpful because they can provide insight into the causal mechanisms that
might underlie larger scale axiological change. Studies in moral psychology on the foundations of moral belief and practice (Curry,
2016; Curry et al., 2020; Graham et al., 2013; Greene, 2013; Haidt, 2012) and the mechanisms of moral change are obviously of great
relevance. They help the axiological futurist identify possible upper limits on the manipulability of axiological systems in response to
intellectual and material drivers of change. Studies that focus in particular on how moral beliefs and practices might change in
response to new technologies are also of particular relevance to the axiological futurist. The ‘Moral Machine' experiment, run by researchers based at MIT, could be an example of the genre, although not conceived by its authors in these terms. This experiment was a large-scale examination of how axiological (and deontological) belief systems might respond to autonomous driving technology (Awad et al., 2018), specifically how people would reason about dilemmas involving sacrificing different groups of people. The experiment
helped to reveal different biases in the ranking of moral subjects across different cultures and in doing so gives us some sense of the
contours of axiological possibility space with respect to one particular use of technology.
As with the mapping of axiological possibility space, some assistance from formal computer-assisted modelling could be a useful
complement to these experimental approaches. For example, computer models of repeated games (like the Stag Hunt or the Prisoner's
Dilemma) can give some insight into the causal factors that might be responsible for changing social norms over time (Alexander,
2008; Bicchieri, 2016; Skyrms, 1997).
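To give a flavour of what such models involve, here is a minimal imitation-dynamics sketch of a repeated Stag Hunt. The payoff numbers, population size and update rule are illustrative assumptions rather than a reproduction of the cited models; the point is simply that whether the cautious ‘hare' norm or the cooperative ‘stag' norm comes to dominate depends on how many early adopters the cooperative norm starts with.

```python
import random

# Stag Hunt payoffs (row player's payoff): cooperating ("stag") pays off
# only if the partner also cooperates; "hare" is the safe, low-value norm.
# All numbers are illustrative.
PAYOFF = {
    ("stag", "stag"): 4, ("stag", "hare"): 0,
    ("hare", "stag"): 2, ("hare", "hare"): 2,
}

def play_round(strategies):
    """Randomly pair agents and record each agent's payoff for this round."""
    order = list(range(len(strategies)))
    random.shuffle(order)
    payoffs = [0] * len(strategies)
    for a, b in zip(order[::2], order[1::2]):
        payoffs[a] = PAYOFF[(strategies[a], strategies[b])]
        payoffs[b] = PAYOFF[(strategies[b], strategies[a])]
    return payoffs

def imitate(strategies, payoffs):
    """Each agent copies a randomly chosen role model who did better."""
    new = list(strategies)
    for i in range(len(strategies)):
        j = random.randrange(len(strategies))
        if payoffs[j] > payoffs[i]:
            new[i] = strategies[j]
    return new

def simulate(n_agents=100, initial_stag=0.4, rounds=200, seed=1):
    random.seed(seed)
    strategies = ["stag" if random.random() < initial_stag else "hare"
                  for _ in range(n_agents)]
    for _ in range(rounds):
        strategies = imitate(strategies, play_round(strategies))
    return strategies.count("stag") / n_agents

if __name__ == "__main__":
    for share in (0.2, 0.4, 0.6):
        print(f"initial stag share {share:.1f} -> "
              f"final share {simulate(initial_stag=share):.2f}")
```

Richer versions of this kind of model (with network structure, noise or punishment) are what the cited literature uses to probe which causal factors can tip a population from one norm to another.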
There will always be limits to the informativeness of these experimental approaches. They will usually involve small groups of
experimental subjects or simplified model environments. Most studies will only model a handful of axiological changes and causal
factors. Even the MIT Moral Machine experiment — which was impressive for the fact that it had millions of experimental participants — was limited insofar as it only focused on one type of technological change and one set of moral beliefs (specifically moral responses to so-called ‘trolley dilemmas'). Limitations of this sort are inevitable and they necessitate caution when it comes to extrapolating from these experiments to society-wide axiological changes. Still, this shouldn't negate the great importance of these studies to the axio-
logical futurist.
3.4. Collective intelligence methods
There is one final class of methods that is worth discussing. As we have seen so far, axiological futurism is an exercise in informed
speculation in which the theorist tries to (a) map the contours of axiological possibility space, (b) determine the causal relationships
between drivers of change and resulting changes of location within that axiological space, and (c) use this to speculate about the
possible future trajectories that human and post-human civilisations will take through axiological possibility space. The axiological
futurist draws upon different disciplines and methods to assist in these three tasks, including anthropology, history, psychology and
moral philosophy.
As noted earlier, there are already some people conducting inquiries that could be classed as a type of ‘axiological futurism'. What is
notable, to date, is that most of these inquiries are the product of individual authors who do not conceive of their projects in the terms
outlined in this article. This kind of solo-authored inquiry has advantages — it's relatively easy to do and, if pursued to the hilt and sketched in full imaginative depth, it can be quite visceral and effective (e.g. Hanson, 2016). I mentioned some academic examples earlier on but it is true in the case of fiction too. For example, many science fiction novels and stories have a strong axiological futurist
aspect to them. They often depict dystopian futures in which humanity has taken a wrong turn in axiological possibility space. They
warn us against doing the same. Good examples of this would include George Orwell's 1984 and Dave Eggers' The Circle, both of which sketch out possible futures in which there is near perfect government and corporate surveillance. Both novels give a sense of what the resultant social axiologies might be (a fixation on transparency and conformity), and neither paints a flattering picture. Utopian science fiction can do the same thing, albeit with the opposite purpose in mind.
The problem with solo-authored work of this sort is that it is often narrow-minded and biased. Individual authors have their own
axes to grind. They focus on one or two technological or social changes and consider one or two consequences for our axiologies. They
don't think about multiple changes in parallel nor the possibility of multiple different future axiologies. For example, they might focus solely on changes to surveillance technology, and imagine what might happen if that technology gets really good, while at the same time ignoring similar changes to other technologies such as genetic engineering, cyborgification, robotics, space exploration and so on.
Better work will try to consider multiple streams of change, but individuals are always limited in what they can do.
One way of overcoming these individual limitations is to use methods that allow groups of people to collaborate effectively on the
axiological futurist project. We can call such methods ‘collective intelligence' methods (Hogan, Hall, & Harney, 2017; Hogan, Johnston, & Broome, 2015; Malone, 2018; Mulgan, 2017). It may seem a little odd to single these out as a distinctive subset of methods. After all, the hope is not that groups will pursue different methods but, rather, pursue the methods outlined above more efficiently and
effectively. It may also seem a little redundant: surely all research projects are ultimately pursued by groups, even if only indirectly? No
person is an island unto themselves: even quintessential lone geniuses like Einstein and Darwin had collaborators to help with their
projects.
This is all true, but it is worth addressing collective intelligence methods separately for three main reasons. First, the idea of ‘collective intelligence' features centrally in the proposed map of the future axiological possibility space that is outlined below in Section 4. Foreshadowing it here will make it easier to understand what is discussed there. Second, as should be clear, axiological
futurism necessarily draws upon multiple disciplines. If we had some formal method for usefully collecting and harnessing multiple
insights from these respective disciplines we could greatly enhance the scope and depth of axiological futurism. Third, people have
already begun to argue that we should think about collective intelligence methods as their own distinct method (Hogan et al., 2017;
Malone, 2018; Mulgan, 2017). It is not enough to simply get a group of people with diverse backgrounds and different areas of
expertise together in a room and hope that they will produce insights that are greater than the sum of their parts. Good group work is
hard to do (Straus, Parker, & Bruce, 2009). Groups often fail to produce better insights than individuals. They often develop their own
biases and collective group think; particular individuals can dominate discussions and deliberations, thereby substituting their own
agenda for that of the group; people can also get ‘blocked’ or be overly timid in groups, resulting in them producing fewer, not more,
insights than they might achieve on their own steam. So although group work has the potential to overcome the limitations of
individualism, it can only do this if it is done in a systematic and thoughtful way.
Fortunately, people have already started to do this and have developed formal methods for enabling groups to work together
effectively. I confess to having a vested interest in this idea. In previous work, along with my colleagues, I used formal collective
intelligence methods to get an interdisciplinary group to think about how technological transformation might change the future of
social governance, and to consider the research questions that need to be answered as a result of this (Danaher et al., 2017). We did this
by organising group work into three main phases of activity: (i) an idea generation phase, in which we encouraged individuals within
the group to generate as many different ideas as possible in response to a particular research question; (ii) a deliberation and discussion
phase, in which members of the group added to and critically evaluated one another’s ideas; and (iii) a convergence/consolidation
phase, in which we got the group to coordinate on producing a particular output (in our case, a draft agenda of research questions keyed to relevant methods for answering those questions). Breaking group work down into these phases might sound like common sense, but it is striking how infrequently it is done. Furthermore, when done explicitly and thoughtfully, it is possible to plan specific group work activities, such as idea-writing and structured dialogues, that make the maximum use of each phase. Doing this helped our group to produce an output that would have been impossible if we had worked on our own. There is reason to hope that similar collective intelligence methods could be a boon to the axiological futurist project.
This concludes the discussion of how we might go about doing axiological futurism. I have summarised the key ideas from this section in Table 2 below.
Table 2
Methodology and Methods for Axiological Futurism.
Logical Space Methods (methods dedicated to working out the contours of axiological possibility space):
• Establish dimensions of variance for particular values and map the resulting logical space.
• Establish dimensions of variance for multiple values and map their relations to one another.
• Use cross-cultural and historical analysis to figure out the variation in particular values.
• Use psychological studies on moral psychology (e.g. moral foundations theory) to determine upper limits on the flexibility of moral standards.
Causal Relationship Methods (methods dedicated to working out the causal drivers of change, intellectual and material, within axiological possibility space):
• Use historical studies that focus on value change over time.
• Develop cross-cultural studies of natural experiments in value change.
• Use psychological and other experimental studies to examine relationships between causal variables and value change.
• Use computer modelling/game theoretical studies of shifting value equilibria.
Collective Intelligence Methods (methods dedicated to getting interdisciplinary groups to collaborate effectively on mapping the possibility space and working out the causal drivers of change):
• Identify and assemble group members.
• Adopt ‘divergent thinking’ methods to enable the group to generate diverse insights or thoughts (e.g. responding to a trigger statement).
• Adopt processing and collaborative methods to enable the group to comment on and develop one another’s ideas (e.g. idea-writing, group deliberation/dialogue).
• Adopt a ‘convergent thinking’ method to get the group to coordinate on a shared output.
Pitfalls/Things to Avoid:
• Narrow-framing, i.e. focusing on only one technological driver of change or one value/set of values.
• Cultural and individual bias, i.e. being too wedded to one set of values (particularly problematic for descriptive axiological futurism).
• Group think (when collaborating with others).
• Mono-disciplinarity.
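One entry in Table 2, the use of computer modelling and game-theoretic studies of shifting value equilibria, can be made more concrete with a toy example. The following Python sketch is purely illustrative: it applies textbook replicator dynamics (the kind of tool used in evolutionary accounts of morality such as Skyrms, 1997 and Alexander, 2008) to three hypothetical value systems, whose labels and payoff numbers are invented for demonstration rather than drawn from any actual data.

```python
# Illustrative sketch only: replicator dynamics over three hypothetical value systems.
# Labels, payoffs and parameters are placeholders chosen for demonstration purposes.

VALUE_SYSTEMS = ["individualist", "collectivist", "machine-deferential"]

# PAYOFF[i][j]: payoff to a carrier of value system i when interacting with a carrier of j.
# A technological shock could be modelled as a change to this matrix.
PAYOFF = [
    [3.0, 1.0, 2.0],
    [2.0, 3.0, 1.0],
    [2.5, 2.5, 3.5],
]

def replicator_step(freqs, payoff, dt=0.1):
    """One discrete step of the replicator dynamic: value systems earning
    above-average payoffs grow in frequency, below-average ones shrink."""
    fitness = [sum(payoff[i][j] * freqs[j] for j in range(len(freqs)))
               for i in range(len(freqs))]
    avg_fitness = sum(f * x for f, x in zip(fitness, freqs))
    new = [x + dt * x * (f - avg_fitness) for x, f in zip(freqs, fitness)]
    total = sum(new)
    return [x / total for x in new]  # renormalise so frequencies sum to 1

def simulate(freqs, payoff, steps=200):
    for _ in range(steps):
        freqs = replicator_step(freqs, payoff)
    return freqs

if __name__ == "__main__":
    start = [0.45, 0.45, 0.10]  # hypothetical initial mix of value systems
    for name, share in zip(VALUE_SYSTEMS, simulate(start, PAYOFF)):
        print(f"{name}: {share:.2f}")
```

The point of running such a model is not to predict which value system will actually prevail, but to explore, under explicitly stated assumptions, how changes in the payoff environment (for instance, a hypothetical automation shock that alters the payoff matrix) could push a population of value-holders towards a different equilibrium.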
4. The three intelligences: a model of the axiological possibility space
Now that we see why axiological futurism is valuable and how we might go about doing it, let’s turn to what it might look like if we
did. In this section, I present a map of (a portion of) the axiological possibility space that humans are likely to navigate in the coming
decades. Included within this map will be a model of the causal relationships between technological change and axiological change.
The implicit claim made by this model is that if we promote and encourage certain technological changes, then we will also promote
and encourage certain axiological changes, and vice versa (assuming there is a feedback loop between the intellectual and material
drivers of change). The map and the model are the result of my own informed speculation, with all the caveats that entails.
The model is inspired by Ian Morris’s aforementioned theory of value change (Morris, 2015). According to this theory, changes in the
technology of energy capture affect societal value systems. In foraging societies, the technology of energy capture is extremely basic:
foragers rely on human muscle and brain power to extract energy from an environment that is largely beyond their control. Humans
form small bands that move about from place to place. Some people within these bands (usually women) specialise in foraging and
others (usually men) specialise in hunting. As a result foraging societies tend to be quite egalitarian. They have a limited and precarious
capacity to extract food and other resources from their environments and so they have to share when the going is good. They are also
tolerant of using some violence to solve social disputes and to compete with rival groups for territory and resources. They display some
gender inequality in social roles, but they tend to be less restrictive of female sexuality than farming societies. Consequently, they can
be said to value inter-group loyalty, (relative) social equality, and bravery in combat. These are the foundations of their value systems.
Farming societies are quite different. They capture significantly more energy than foraging societies by controlling their environments,
by intervening in the evolutionary development of plants and animals, and by fencing off land and dividing it up into estates that can be
handed down over the generations. Prior to mechanisation, farming societies relied heavily on manual labour (often slavery) to be
effective. This led to the moralisation and justification of social stratification and wealth inequality, but less overall violence. Farming societies couldn’t survive if people constantly used violence to settle disputes. There was a focus on orderly dispute resolution, though
the institutions of governance could be quite violent. There was much greater gender inequality in farming societies because (a)
women were required to take on specic roles in the home, and (b) the desire to transfer property through family lines placed a special
value on female sexual purity. This affected their foundational values around gender and wealth equality. Finally, fossil fuel societies
capture enormous amounts of energy through the combustion and exploitation of fossil fuels (and later electricity, nuclear power, and
renewable energy sources). This enabled greater social complexity, urbanisation, mechanisation, electrification and digitisation. It
became possible to sustain very large populations in relatively small spaces, and to facilitate more specialisation and mobility in
society. As a result, fossil fuel societies tend to be more egalitarian than farming societies, particularly when it comes to political and
gender equality, though less so when it comes to wealth inequality. They also tend to be very intolerant of violence, particularly within
a dened group/state.
The model I develop here takes two key ideas from Morris’s theory. The first is the notion of an ‘ideal type’ of social order. Human
society is complex. We frequently use simplifying labels to make sense of it all. We assign people to general identity groups (Irish,
English, Catholic, Muslim, Black, White etc) even though we know that the experiences of any two individuals plucked from those
identity groups are likely to differ. We also classify societies under general labels (Capitalist, Democratic, Monarchical, Socialist etc)
even though we know that they have their individual quirks and variations. Max Weber argued that we need to make use of such ‘ideal
types’ in social theory in order to bring order to the complexity (Weber, 1949), while being fully cognisant of the fact that the ideal
types do not necessarily correspond to social reality. Morris makes use of ideal types in his analysis of the differences between foraging,
farming and fossil fuel societies. He knows that there is no actual historical society that corresponds to his model of a foraging society.
But that’s not the point of the model. The point is to abstract from the value systems we observe in actual foraging societies and use
them to construct a hypothetical, idealised model of a foraging society’s value system. It’s like a Platonic form: a smoothed out, non-material ‘idea’ of something we observe in the real world, but without the Platonic assumption that the form is more real than what we find in the world.
This brings me to the second idea. The key motivation for the model I will now develop is that one of the main determinants of our
movement through future axiological possibility space is not the technology of energy capture that we rely upon but, rather, the form
of intelligence that is prioritised and mobilised in society. I here define ‘intelligence’ as the capacity to solve problems across different
environments (Malone, 2018; Mulgan, 2017). Intelligence is a basic resource and capacity of human beings and human civilisations
(Henrich, 2015; Naam, 2013; Ridley, 2010; Tainter, 1988; Turchin, 2007). It’s what we rely upon for our survival and it’s what makes other forms of technological change possible. For example, the technology of energy capture that features in Morris’s model is, I would
argue, itself dependent on intelligence.
I submit that there are three basic forms that intelligence can take: (i) individual, i.e. the problem-solving capacity of individual
human beings, (ii) collective, i.e. the problem-solving capacity of groups of humans working and coordinating together, and (iii)
articial, i.e. the problem-solving capacity of machines. For each kind of intelligence there is a corresponding ideal type of axiology, i.
J. Danaher
Futures 132 (2021) 102780
11
e. a system of values that protects, encourages and reinforces that particular mode of intelligence. Since these are ideal types, not actual
realities, it makes most sense to think about the axiologies we see in the real world as the product of tradeoffs or compromises between
these different modes of intelligence. Much of human history has clearly involved a tradeoff between individual and collective in-
telligence. It’s only more recently that ‘artificial’ forms of intelligence have been added to the mix. What was once a tug-of-war between the individual and the collective has now become a three-way ‘contest’ between the individual, the collective and the artificial.
My contention is that the axiological possibility space that we navigate over the coming decades will be dened by these three ideal
types of axiology associated with individual, collective and articial intelligence.
That’s the model in a nutshell. It might seem a little abstract and opaque at this point. Let’s clarify by translating it into a picture. In Fig. 1, I’ve drawn a triangle. Each vertex of the triangle is occupied by one of the ideal types of society: (i) the society that prioritises individual intelligence, (ii) the society that prioritises collective intelligence, and (iii) the society that prioritises artificial intelligence.
The claim being made is that societies can be classied according to their location within this triangle. For example, a society located
midway along the line joining the individual intelligence society to the collective intelligence society would prioritise technologies that
enhance both individual and collective forms of intelligence, and would have an axiology that mixed the values associated with both. A
society located at the midpoint of the triangle as a whole, would include elements of all three of the ideal types. And so on.
The value of this picture depends on what we understand by its contents. What follows is a brief sketch of each ideal type:
4.1. Individual intelligence society
Individual intelligence is the intelligence associated with individual human beings, i.e. their capacity to use mental models and
tools to solve problems and achieve goals in the world around them. In its idealised form, individual intelligence is distinct from
collective and articial intelligence. In other words, the idealised form of individual intelligence is self-reliant and self-determining. It
is promoted by any and all technologies that promote individual problem-solving capacity and self-reliance. This includes most ‘tools’ and could also include technologies of individual enhancement (e.g. cognitive enhancers, cyborgification, and genetic engineering).
The associated ideal type of axiology will consequently place an emphasis on intelligent individuals as the most important moral
subjects and will try to protect their interests, identify their responsibilities, and reward them for their intelligence. It will ensure that
the individual is protected from interference (i.e. that they are free and autonomous); that he/she can benefit from the fruits of their
labour; that their capacities are developed to their full potential; and that they are responsible for their own fate. In essence, it will be a
strongly liberal axiological order.
4.2. Collective intelligence society
Collective intelligence is the intelligence associated with groups of human beings, and arises from their ability to coordinate and
cooperate in order to solve problems and achieve goals. Examples might include a group of hunters coordinating an attack on a deer or
bison, or a group of scientists working in a lab trying to develop a medicinal drug. Collective intelligence thrives on technologies that
enable group communication and coordination, e.g. networking and information communication technologies. The idealised form of
collective intelligence sees the individual as just a cog in a collective mind. The associated ideal type of axiology is one that emphasises
the group as the most important moral subject, and values things like group cohesion, collective welfare, common ownership, and
possibly equality of power and wealth (though equality is, arguably, more of an individualistic value, and so cohesion might be the
overriding value). In essence, it will be a strongly communistic/socialistic and possibly nationalistic axiological order (cf Danaher &
Petersen, 2020 for a sketch of a ‘hivemind’ society that may embody some of the values of a collective intelligence society).
I pause here to repeat the message from earlier: these are ideal types of axiological order. There never was a primordial liberal state
of nature in which individual intelligence flourished. On the contrary, it is more likely that humans have always been social creatures
and that the celebration of individual intelligence came much later on in human development (Schneewind, 1998; Siedentop, 2011).
Nevertheless, I also suspect that there has always been a compromise and back-and-forth between the two poles.
4.3. Articial intelligence society
Articial intelligence is the kind of intelligence associated with computer-programmed machines. It is inherently technological. It
mixes and copies elements from individual and collective intelligence (since humans created it and their data often fuels it), but it is
also based on some of its own tricks. It functions in forms and at speeds that are distinct from human intelligence. It is used initially as a
tool (or set of tools) for human benet: a way of lightening or sharing our cognitive burden. It can, however, function autonomously
and without human input. It is even possible that, one day, AIs will pursue goals and purposes that are not conducive to our well-being
(Bostrom, 2014). The idealised form of AI is one that is independent from human intelligence, i.e. does not depend on human intel-
ligence to assist in its problem solving abilities. The associated ideal type of axiology is, consequently, one in which human intelligence
is devalued; in which machines do all the important cognitive work; and in which we are treated as (at best) moral patients (bene-
ficiaries of their successes). Think about the future of automated leisure and idleness that is depicted in a movie like WALL-E or, perhaps, in Iain M. Banks’s Culture novels. Instead of focusing on individual self-reliance and group cohesion, the artificially intelligent axiology
will be one that prioritises human pleasure, recreation, game-playing, idleness, and machine-mediated abundance (of material re-
sources and phenomenological experiences) (Danaher, 2019).
The sketch of this last ideal type of axiology is, admittedly, deeply anthropocentric: it assumes that humans will still be the primary
moral subjects and beneficiaries of the artificially intelligent social order. You could challenge this and argue that a truly artificially intelligent order would be one in which machines are treated as the primary moral subjects (Gunkel, 2018). That’s a possibility that
should be entertained. For now, I stick with the idea of humans as the primary moral subjects because I think that is more technically
and politically feasible, at least in the short to medium term. I also think that this idea gels well with the model I’ve developed. It paints
an interesting picture of the arc of human history: Human society once thrived on a combination of individual and collective intel-
ligence. Using this combination of intelligences we built a modern, industrially complex society. Eventually the combination of these
intelligences allowed us to create a technology that rendered our intelligence obsolescent and managed our social order on our behalf.
This adds a new element to the axiological possibility space that we will navigate over the coming decades (Fig. 1).
There are problems with this model. It is overly simplistic; it assumes that there is only one technological determinant of our
movement through axiological possibility space; and it seems to ignore or overlook moral issues that currently animate our political
and social lives (e.g. identity politics). Still, by focusing on the abstract property of intelligence as the major driver of axiological
change, the model provides a starting point from which a more complex sketch of the axiological possibility space can be developed. I
want to close by suggesting some ways in which this model could be (and, if it has any merit, should be) developed:
• Other potential dimensions of variance and/or ideal types of social order should be offered and evaluated.
• A more detailed sketch of the foundational values associated with the different ideal types should be provided.
• The links between the identified foundational values and different social governance systems should be mapped in more detail.
• An understanding of how other technological developments might fit into this ‘triangular’ model is needed.
• A normative defence of the different extremes, as well as the importance of balancing between the extremes, is needed so that we have some sense of what is at stake as we navigate through the possibility space. This would be essential if we are to pursue axiological futurism from a normative stance.
In short, we need to make full use of the methods outlined in the previous section in order to explore the possibility space as best we
can. In this respect (somewhat ironically), collective intelligence methods could be particularly valuable. Perhaps there could be a series of mock ‘constitutional conventions’ for the future, in which such groups actually draft and debate the different possible ideal
type axiologies?
5. Conclusion
In conclusion, axiological futurism is the systematic and explicit inquiry into the axiological possibility space for future human (and
post-human) civilisations. Axiological futurism is necessary because, given the history of axiological change and variation, it is very
unlikely that our current axiological systems will remain static and unchanging in the future. Axiological futurism is also important
because it is complementary to other futurological inquiries. While it might initially seem that axiological futurism cannot be a
systematic inquiry, this is not the case. Axiological futurism is an exercise in informed speculation. The job of the axiological futurist is
to map the axiological possibility space and consider how civilisations might shift and change location within that possibility space in
the future. The goal is not precise prediction but, rather, scenario planning. In doing this, the axiological futurist can call upon a
number of disciplines for assistance, including philosophy, history, anthropology, and psychology. I have tried to show how this might
be done by presenting a model of the future axiological possibility space that focuses on the role of intelligence in shaping our
foundational values. I hope that others join the cause and develop axiological futurism into a distinctive branch of research.
Acknowledgment
Many thanks to Matthijs Maas, Michael Hogan and Sven Nyholm for feedback on earlier versions of this paper. My thanks also to
two anonymous reviewers for helpful comments.
References
Alexander, J. M. (2008). The structural evolution of morality. Cambridge: Cambridge University Press.
Appiah, K. A. (2010). The honor code: How moral revolutions happen. New York: W W Norton & Co.
Armstrong, S. (2014). Smarter than us: The rise of machine intelligence. Machine Intelligence Research Institute.
Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., et al. (2018). The moral machine experiment. Nature, 563, 59–64.
Baum, S., Armstrong, S., Ekenstedt, T., Häggström, O., Hanson, R., Kuhlemann, K., Maas, M. M., Miller, J., Salmela, M., Sandberg, A., Sotala, K., Torres, P., Turchin, A., & Yampolskiy, R. (2019). Long-term trajectories of human civilization. Foresight, 21(1), 53–83. https://doi.org/10.1108/FS-04-2018-0037.
Bicchieri, C. (2016). Norms in the wild: How to diagnose, measure, and change social norms. Oxford, UK: OUP.
Bostrom, N. (2005). Transhumanist values. Review of Contemporary Philosophy, 4. May 2005.
Bostrom, N. (2013). Existential risk prevention as global priority. Global Policy, 4(1), 15–30.
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford: OUP.
Brin, D. (1998). The transparent society. New York: Basic Books.
Bryson, J. (2018). Patiency is not a virtue: The design of intelligent systems and systems of ethics. Ethics and Information Technology, 20(1), 15–26.
Campbell, R. (2014). Reflective equilibrium and consistency reasoning. Australasian Journal of Philosophy, 92(3), 433–453.
Currier, R. (2015). Unbound: How eight technologies made us human and brought our world to the brink. New York: Arcade Publishing.
Curry, O. S. (2016). Morality as cooperation: A problem-centred approach. In T. Shackelford, & R. Hansen (Eds.), The evolution of morality. Evolutionary psychology.
Cham: Springer. https://doi.org/10.1007/978-3-319-19671-8_2.
Curry, O. S., Alfano, M., Brandt, M., & Pelican, C. (2020). Moral molecules: Morality as a combinatorial system. Preprint, available at https://doi.org/10.31219/osf.io/xnstk.
Danaher, J. (2019). Automation and Utopia. Cambridge, MA: Harvard University Press.
Danaher, J. (2018). Moral enhancement and freedom: A critique of the little Alex problem. In Hauskeller, & Coyne (Eds.), Moral enhancement: Critical perspectives.
Cambridge, UK: Cambridge University Press.
Danaher, J., & Petersen, S. (2020). In defence of the hivemind society. Neuroethics. https://doi.org/10.1007/s12152-020-09451-7.
Danaher, J., et al. (2017). Algorithmic governance: Developing a research agenda through the power of collective intelligence. Big Data & Society. https://doi.org/
10.1177/2053951717726554.
Eubanks, V. (2017). Automating inequality. New York: St Martin’s Press.
Fernández-Armesto, F. (2019). Out of our minds: What we think and how we came to think it. London: OneWorld.
Flanagan, O. (2017). The geography of morality. Oxford: OUP.
Gellers, J. (2020). Rights for robots: Artificial intelligence, animal and environmental law. New York: Routledge.
Gendler, T. S., & Liao, S.-y. (2016). The problem of imaginative resistance. In J. Gibson, & N. Carroll (Eds.), The Routledge companion to philosophy of literature (pp. 405–418). Routledge.
Graham, J., Haidt, J., Koleva, S., Motyl, M., Iyer, R., Wojcik, S. P., et al. (2013). Moral foundations theory: The pragmatic validity of moral pluralism. Advances in
Experimental Social Psychology, 47, 55–130.
Greene, J. (2013). Moral tribes. London: Penguin.
Gunkel, D. (2018). Robot rights. Cambridge, MA: MIT Press.
Haidt, J. (2012). The righteous mind. London: Penguin.
Hanson, R. (2016). The age of Em. Oxford: OUP.
Harari, Y. N. (2016). Homo deus. London: Harvill Secker.
Henrich, J. (2015). The secret of our success. Princeton, NJ: Princeton University Press, 2015.
Hogan, M. J., Johnston, H., & Broome, B. (2015). Consulting with citizens in the design of wellbeing measures and policies: Lessons from a systems science
application. Social Indicators Research, 123, 857–887.
Hogan, M. J., Hall, T., & Harney, O. M. (2017). Collective intelligence design and a new politics of system change. Civitas Educationis, 6(1), 51–78.
Kleinberg, J., Mullainathan, S., & Raghavan, M. (2016). Inherent trade-offs in the fair determination of risk scores. ArXiv:1609.05807 [Cs, Stat], September 19, 2016
http://arxiv.org/abs/1609.05807.
Kudina, O., & Verbeek, P.-P. (2019). Ethics from within: Google glass, the Collingridge dilemma, and the mediated value of privacy. Science, Technology, & Human
Values, 44(2), 291–314. https://doi.org/10.1177/0162243918793711.
Kuhn, T. (1962). The structure of scientic revolutions. Chicago, IL: University of Chicago Press.
Lecky, W. (1955). The history of European morals (originally published 1869). New York: George Braziller.
Leiter, B. (2015). Constitutional law, moral judgment and the supreme court as Super-Legislature. Hastings Law Journal, 66, 1601–1617.
List, C., & Valentini, L. (2016). Freedom as independence. Ethics, 126(4), 1043–1074.
Malone, T. (2018). Superminds: The surprising power of people and computers thinking together. London: Oneworld.
Marquis, D. (1989). Why abortion is immoral. Journal of Philosophy, 86, 183–202.
McClain, L. (2018). Prejudice, moral progress, and being ‘on the right side of history’: Reflections on Loving v. Virginia at fifty. Fordham Law Review, 86, 2701.
Millum, J. (2014). The foundation of the child’s right to an open future. Journal of Social Philosophy, 45(4), 522–538. https://doi.org/10.1111/josp.12076.
Moody-Adams, M. M. (1999). The idea of moral progress. Metaphilosophy, 30(3), 168–185.
Morris, I. (2015). Foragers, farmers and fossil fuels. Princeton NJ: Princeton University Press.
Mulgan, T. (2017). Big mind: How collective intelligence can change our world. Princeton, NJ: Princeton University Press.
Naam, R. (2013). The infinite resource: The power of ideas on a finite planet. Hanover, NH: University Press of New England.
Peppet, S. (2015). Unraveling privacy: The personal prospectus and the threat of a full-disclosure future. Northwestern University Law Review, 105, 1153.
Pettit, P. (1993). Consequentialism. In P. Singer (Ed.), The Blackwell companion to ethics. Oxford: Blackwell Publishers.
Pinker, S. (2011). The better angels of our nature. London: Penguin.
Pleasants, N. (2018). The structure of moral revolutions. Social Theory and Practice, 44(4), 567–592.
Popper, K. (1957). The poverty of historicism. London: Routledge.
Ridley, M. (2010). The rational optimist. London: HarperCollins Publishers.
Robertson, J. (2017). Robo sapiens Japanicus. Berkeley, CA: University of California Press.
Roemer, J. (1998). Theories of distributive justice. Cambridge, MA: Harvard University Press.
Schefer, S. (2012). Death and the afterlife. Oxford: Oxford University Press.
Schneewind, J. B. (1998). The invention of autonomy. Cambridge, UK: Cambridge University Press.
Shahar, A. (2019). Exploring artificial intelligence futures. Journal of AI Humanities, 2, 169–194.
Siedentop, L. (2011). Inventing the individual. London: Penguin.
Skyrms, B. (1997). Evolution of the social contract. Cambridge, UK: Cambridge University Press.
Smil, V. (2019). Growth: From microorganisms to megacities. Cambridge, MA: MIT Press.
Stokes, P. (2017). Towards a new epistemology of moral progress. European Journal of Philosophy, 25(4), 1824–1843.
Straus, S., Parker, A., Bruce, J., & Dembosky, J. (2009). Group matters: A review of the effects of group interaction processes and outcomes in analytic teams. RAND Working
Paper. Available at: http://www.rand.org/content/dam/rand/pubs/working_papers/2009/RAND_WR580.pdf.
Tainter, J. (1988). The collapse of complex societies. Cambridge, UK: Cambridge University Press.
Tomasello, M. (2016). A natural history of morality. Cambridge, MA: Harvard University Press.
Torres, P. (2017). Morality, foresight and human flourishing: An introduction to existential risk. Durham, NC: Pitchstone Publishing.
Tuna, E. H. (2020). Imaginative resistance. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Summer 2020 Edition). URL =<https://plato.stanford.edu/
archives/sum2020/entries/imaginative-resistance/>.
Turchin, P. (2007). War and peace and war: The rise and fall of empires. New York: Plume.
Vallor, S. (2016). Technology and the virtues. Oxford: OUP.
Van De Poel, I. (2018). Design for value change. Ethics and Information Technology. https://doi.org/10.1007/s10676-018-9461-9.
Van der Duin, P. A. (Ed.). (2016). Foresight in organizations: Methods and tools. New York: Routledge.
Verbeek, P. P. (2013). Some misunderstandings about the moral significance of technology. In P. Kroes, & P. P. Verbeek (Eds.), The moral status of technical artifacts.
Dordrecht: Springer.
Verbeek, P. P. (2011). Moralizing technology: Understanding and designing the morality of things. Chicago, IL: University of Chicago Press.
Weber, M. (1949). In E. Shils, & H. Finch (Eds.), Methodology of the social sciences. Glencoe, IL: Free Press.
Williams, E. (2015). The possibility of an ongoing moral catastrophe. Ethical Theory and Moral Practice, 18(5), 971–982.
Yudkowsky, E. (2011). Complex value systems are required to realize valuable futures. Machine Intelligence Research Institute. Available at https://intelligence.org/files/ComplexValues.pdf.