Chapter 1
Singularity Hypotheses: An Overview
Introduction to: Singularity Hypotheses: A Scientific and Philosophical Assessment
Amnon H. Eden, Eric Steinhart, David Pearce and James H. Moor
A. H. Eden, School of Computer Science and Electronic Engineering, University of Essex, Colchester CO4 3SQ, UK. e-mail: eden@essex.ac.uk
E. Steinhart, Department of Philosophy, William Paterson University, 300 Pompton Road, Wayne, NJ 07470, USA. e-mail: esteinhart1@nyc.rr.com
D. Pearce, Knightsbridge Online, 7 Lower Rock Gardens, Brighton, UK. e-mail: dave@hedweb.com
J. H. Moor, Department of Philosophy, Dartmouth College, 6035 Thornton, Hanover, NH 03755, USA. e-mail: james.h.moor@dartmouth.edu
Questions
In a widely read but controversial article, Bill Joy claimed that the most powerful
21st-century technologies threaten to make humans an endangered species
(Joy 2000). Indeed, a growing number of scientists, philosophers and forecasters
insist that the accelerating progress in disruptive technologies such as artificial
intelligence, robotics, genetic engineering, and nanotechnology may lead to what
they refer to as the technological singularity: an event or phase that will radically
change human civilization, and perhaps even human nature itself, before the
middle of the 21st century (Paul and Cox 1996; Broderick 2001; Garreau 2005;
Kurzweil 2005).
Singularity hypotheses refer to one of two distinct and very different
scenarios. The first (Vinge 1993; Bostrom, to appear) postulates the emergence of
artificial superintelligent agents—software-based synthetic minds—as the ‘singular’
outcome of accelerating progress in computing technology. This singularity results
from an ‘intelligence explosion’ (Good 1965): a process in which software-based
intelligent minds enter a ‘runaway reaction’ of self-improvement cycles, with each
new and more intelligent generation appearing faster than its predecessor. Part I of
this volume is dedicated to essays which argue that progress in artificial intelligence
and machine learning may indeed increase machine intelligence beyond that of any
human being. As Alan Turing (1951) observed, 'at some stage therefore we should
have to expect the machines to take control, in the way that is mentioned in Samuel
Butler's Erewhon': the consequences of such greater-than-human intelligence
will be profound, and conceivably dire for humanity as we know it. Essays in Part II
of this volume are concerned with this scenario.
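The arithmetic behind this 'runaway reaction' can be made explicit. The toy model below is our own illustration, not an argument from the literature: the gain factor g and the initial cycle time t0 are arbitrary assumptions, chosen only to show why ever-faster self-improvement cycles can, in principle, pack unboundedly many generations into a finite span.

# Toy model of Good's intelligence explosion (illustrative only; the
# parameters g and t0 are assumptions, not empirical estimates).
# Each generation is a factor g more intelligent than the last and,
# being smarter, needs proportionally less time to design its successor.
g = 1.5       # intelligence gain per self-improvement cycle (assumed)
t0 = 10.0     # years required for the first cycle (assumed)

intelligence, elapsed = 1.0, 0.0
for cycle in range(1, 21):
    elapsed += t0 / intelligence   # smarter minds design faster
    intelligence *= g
    print(f"cycle {cycle:2d}: intelligence x{intelligence:10.1f} at t = {elapsed:5.2f} yr")

# The elapsed time converges to the geometric sum t0 * g / (g - 1),
# i.e. 30 years here: infinitely many cycles fit into a finite span,
# which is the model's analogue of a 'singular' outcome.

Whether real research dynamics resemble anything like this recurrence is, of course, precisely what is at issue in the essays of Parts I and IV.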
A radically different scenario is explored by transhumanists who expect pro-
gress in enhancement technologies, most notably the amplification of human
cognitive capabilities, to lead to the emergence of a posthuman race. Posthumans
will overcome all existing human limitations, both physical and mental, and
conquer aging, death and disease (Kurzweil 2005). The nature of such a singu-
larity, a ‘biointelligence explosion’, is analyzed in essays in Part III of this volume.
Some authors (Pearce, this volume) argue that transhumans and posthumans will
retain a fundamental biological core. Other authors argue that fully functioning,
autonomous whole-brain emulations or ‘uploads’ (Chalmers 2010; Koene this
volume; Brey this volume) may soon be constructed by ‘reverse-engineering’ the
brain of any human. If fully functional or even conscious, uploads may usher in an
era where the notion of personhood needs to be radically revised (Hanson 1994).
Advocates of the technological singularity have developed a powerful inductive
Argument from Acceleration in favour of their hypothesis. The argument is based
on the extrapolation of trend curves in computing technology and econometrics
(Moore 1965; Moravec 1988, Chap. 2; Moravec 2000, Chap. 3; Kurzweil 2005,
Chaps. 1 and 2). In essence, the argument runs like this: (1) The study of the
history of technology reveals that technological progress has long been acceler-
ating. (2) There are good reasons to think that this acceleration will continue for at
least several more decades. (3) If it does continue, our technological achievements
will become so great that our bodies, minds, societies, and economies will be
radically transformed. (4) Therefore, it is likely that this disruptive transformation
will occur. Kurzweil (2005, p. 136) sets the date at mid-century, around the year
2045. The change will be so revolutionary that it will constitute a ‘rupture in the
fabric of human history’ (Kurzweil 2005, p. 9).
Critics of the technological singularity dismiss these claims as speculative and
empirically unsound, if not pseudo-scientific (Horgan 2008). Some attacks focus
on the premises of the Argument from Acceleration (Plebe and Perconti this
volume), mostly premise (2). For example, Modis (2003; this volume) claims that after
periods of change that appear to be accelerating, technological progress always
levels off. Other futurists have long argued that we are heading instead towards a
global economic and ecological collapse. This negative scenario was famously
developed using computer modelling of the future in The Limits to Growth
(Meadows et al. 1972, 2004).
Many critics mock the singularity as the 'rapture of the nerds' and take (3) to be
yet another apocalyptic fantasy, a technocratic variation on the usual theme of
doom-and-gloom fuelled by mysticism, science fiction and even greed. Some conclude that
the singularity is a religious notion, not a scientific one (Horgan 2008; Proudfoot
this volume; Bringsjord et al. this volume). Other critics (Chaisson this volume)
accept acceleration as an underlying law of nature but claim that, in perspective,
the significance of the claimed changes is overblown. That is, what is commonly
described as the technological singularity may well materialize, with profound
consequences for the human race. But on a cosmic scale, such a mid-century
transition is no more significant than whatever may follow.
Existential risk or cultist fantasy? Are any of the accounts of the technological
singularity credible? In other words, is the technological singularity an open
problem in science?
We believe that before any interpretation of the singularity hypothesis can be
taken on board by the scientific community, rigorous tools of scientific enquiry
must be employed to reformulate it as a coherent and falsifiable conjecture. To
this end, we challenged economists, computer scientists, biologists, mathemati-
cians, philosophers and futurists to articulate their concepts of the singularity. The
questions we posed were as follows:
1. What is the [technological] singularity hypothesis? What exactly is being
claimed?
2. What is the empirical content of this conjecture? Can it be refuted or cor-
roborated, and if so, how?
3. What exactly is the nature of a singularity: Is it a discontinuity on a par with
a phase transition or a process on a par with Toffler's 'wave'? Is the term
singularity appropriate?
4. What evidence, taken for example from the history of technology and economic
theories, suggests the advent of some form of singularity by 2050?
5. What, if anything, can be said to be accelerating? What evidence can reliably
be said to support its existence? Which metrics support the idea that ‘progress’
is indeed accelerating?
6. What are the most likely milestones (‘major paradigm shifts’) in the count-
down to a singularity?
7. Is the so-called Moore's Law on a par with the laws of thermodynamics? How
about the Law of Accelerating Returns? What exactly is the nature of the
change they purport to measure?
8. What are the necessary and sufficient conditions for an intelligence explosion
(a runaway effect)? What is the actual likelihood of such an event?
9. What evidence supports the claim that machine intelligence has been rising?
Can this evidence be extrapolated reliably?
10. What are the necessary and sufficient conditions for machine intelligence to be
considered to be on a par with that of humans? What would it take for the
‘general educated opinion [to] have altered so much that one will be able to
speak of machines thinking without expecting to be contradicted’ (Turing
1950, p. 442)?
11. What does it mean to claim that biological evolution will be replaced by
technological evolution? What exactly can be the expected effects of
augmentation and enhancement, in particular on our cognitive abilities? To
what extent can we expect our transhuman and posthuman descendants to be
different from us?
12. What evidence supports the claim that humankind's intelligence quotient has
been rising (the 'Flynn effect')? How does this evidence relate to a more general
claim about a rise in the 'intelligence' of carbon-based life? Can this evidence be
extrapolated reliably?
13. What are the necessary and sufficient conditions for a functioning whole brain
emulation (WBE) of a human? At which level exactly must the brain be
emulated? What will be the conscious experience of a WBE? To what extent
can it be said to be human?
14. What may be the consequences of a singularity? What may be its effect on
society, e.g. in ethics, politics, economics, warfare, medicine, culture, arts, the
humanities, and religion?
15. Is it meaningful to refer to multiple singularities? If so, what can be learned
from past such events? Is it meaningful to claim a narrow interpretation of
singularity in some specific domain of activity, e.g. a singularity in chess
playing, in face recognition, in car driving, etc.?
This volume contains the contributions received in response to this challenge.
Towards a Definition
Accounts of a technological singularity—henceforth the singularity—appear to
disagree on its causes and possible consequences, on timescale, and even on its
nature: the emergence of machine intelligence or of posthumans? An event or a
period? Is the technological singularity unique or have there been others? The
absence of a consensus on basic questions casts doubt on whether the notion of
singularity is at all coherent.
The term in its contemporary sense traces back to von Neumann, who is quoted
as saying that ‘the ever-accelerating progress of technology and changes in the
mode of human life gives the appearance of approaching some essential sin-
gularity in the history of the race beyond which human affairs, as we know them,
could not continue’ (in Ulam 1958). Indeed, the twin notions of acceleration and
discontinuity are common to all accounts of the technological singularity, as
distinguished from a space–time singularity and a singularity in a mathematical
function.
Acceleration refers to a rate of growth in some quantity such as computations per
second per fixed dollar (Kurzweil 2005), economic measures of growth rate (Hanson
1994; Miller this volume) or total output of goods and services (Toffler 1970), and
energy rate density (Chaisson this volume). Others describe quantitative measures of
physical, biological, social, cultural, and technological processes of evolution:
milestones or ‘paradigm shifts’ whose timing demonstrates an accelerating pace of
change. For example, Sagan’s Cosmic Calendar (1977, Chap. 1) names milestones in
biological evolution such as the emergence of eukaryotes, vertebrates, amphibians,
mammals, primates, hominidae, and Homo sapiens, which show an accelerating
trend. Following Good (1965) and Bostrom (to appear), Muehlhauser and Salamon
(this volume), Arel (this volume), and Schmidhuber (this volume) describe devel-
opments in machine learning which seek to demonstrate that progressively more
‘intelligent’ problems have been solved during the past few decades, and how such
technologies may further improve, possibly even in a recursive process of self-
modification. Some authors attempt to show that many of the above accounts of
acceleration are in fact manifestations of an underlying law of nature (Adams 1904;
Kurzweil 2005; Chaisson this volume): quantitatively or qualitatively measured,
acceleration is commonly visualized as an upwards-curved mathematical graph
which, if projected into the future, is said to be leading to a discontinuity.
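The 'accelerating pace' reading of such milestone lists is easy to reproduce. The figures below are rough textbook dates that we supply for illustration only; nothing hangs on their precision.

# Gaps between evolutionary milestones, in the spirit of Sagan's
# Cosmic Calendar (approximate years before present; illustrative).
milestones = {
    "eukaryotes":   2.0e9,
    "vertebrates":  5.2e8,
    "amphibians":   3.7e8,
    "mammals":      2.0e8,
    "primates":     6.5e7,
    "hominidae":    7.0e6,
    "Homo sapiens": 3.0e5,
}

names = list(milestones)
for earlier, later in zip(names, names[1:]):
    gap = milestones[earlier] - milestones[later]
    print(f"{earlier:>12s} -> {later:<12s} gap: {gap:9.2e} years")
# Read as a whole, the gaps contract from ~1.5e9 years to ~7e6 years:
# this overall contraction is the 'upwards-curved graph' that, projected
# forward, is said to lead to a discontinuity.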
Described either as an event that may take a few hours (e.g., a ‘hard takeoff’,
Loosemore and Goertzel, this volume) or a period of years (e.g., Toffler 1970), the
technological singularity is taken to mark a discontinuity or a turning-point in
human history. The choice of the word 'singularity' appears to be motivated less by
the eponymous mathematical concept (Hirshfeld 2011) and more by the onto-
logical and epistemological discontinuities idiosyncratic to black holes. Seen as a
central metaphor, a gravitational singularity is a (theoretical) point at the centre of
a black hole at which quantities that are otherwise meaningful (e.g., density and
spacetime curvature) become infinite, or rather meaningless. The discontinuity
expressed by the black hole metaphor is thus used to convey how the quantitative
measure of intelligence, at least as it is measured by traditional IQ tests (such as
Wechsler and Stanford-Binet), may become a meaningless notion for capturing the
intellectual capabilities of superintelligent minds. Alternatively, we may say a
graph measuring average intelligence beyond the singularity in terms of IQ score
may display some form of radical discontinuity if superintelligence emerges.
Furthermore, singularitarians note that gravitational singularities are said to be
surrounded by an event horizon: a boundary in spacetime beyond which events
cannot be observed from outside, and a horizon beyond which gravitational pull
becomes so strong that nothing can escape, not even light (hence 'black')—a point of
no return. Kurzweil (2005) and others (e.g., Pearce this volume) contend that,
since the minds of superintelligent intellects may be difficult or impossible for
humans to comprehend (Fox and Yampolskiy this volume), a technological sin-
gularity marks an epistemological barrier beyond which events cannot be predicted
or understood—an ‘event horizon’ in human affairs. The gravitational singularity
metaphor thus reinforces the view that the change will be radical and that its
outcome cannot be foreseen.
The combination of acceleration and discontinuity is at once common and
unique to the singularity literature in general and to the essays in this volume in
particular. We shall therefore proceed on the premise that acceleration and dis-
continuity jointly offer necessary and sufficient conditions for us to take a man-
uscript to be concerned with a hypothesis of a technological singularity.
Historical Background
Many philosophers have portrayed the cosmic process as an ascending curve of
positivity (Lovejoy 1936, Chap. 9). Over time, the quantities of intelligence, power
or value are always increasing. These progressive philosophies have sometimes been
religious and sometimes secular. Secular versions of progress have sometimes been
political and sometimes technological. Technological versions have sometimes
invoked broad technical progress and have sometimes focused on more specific
outcomes such as the possible recursive self-improvement of artificial intelligence.
For some philosophers of progress, the rate of increase remains relatively
constant; for others, the rate of increase is also increasing—progress accelerates.
Within such philosophies, the singularity is often the point at which positivity
becomes maximal. It may be an ideal limit point (an omega point) either at infinity
or at the vertical asymptote of an accelerating trajectory. Or sometimes, the sin-
gularity is the critical point at which the slope of an accelerating curve passes
beyond unity.
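The distinction between these two limit points can be made exact. In our own notation (a minimal formalization, not one used by the authors), exponential growth reaches its omega point only at infinity, whereas superexponential ('hyperbolic') growth hits a vertical asymptote at a finite time:

% Exponential growth: unbounded, but divergent only as t -> infinity.
\[
  \dot{x} = kx \;\Longrightarrow\; x(t) = x_0 e^{kt}
\]
% Hyperbolic growth: the solution blows up at the finite time t* = 1/(k x_0),
% a vertical asymptote, the picture behind a singularity at a definite date.
\[
  \dot{x} = kx^{2} \;\Longrightarrow\;
  x(t) = \frac{x_0}{1 - k x_0 t} \longrightarrow \infty
  \quad \text{as } t \to t^{*} = \tfrac{1}{k x_0}
\]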
Although thought about the singularity may appear to be very new, in fact such
ideas have a long philosophical history. To help increase awareness of the deep
roots of singularitarian thought within traditional philosophy, it may be useful to
look at some of its historical antecedents.
Perhaps the earliest articulation of the idea that history is making progress
toward some omega point of superhuman intelligence is found in The Phenome-
nology of Spirit, written by Hegel (1807). Hegel describes the ascent of human
culture to an ideal limit point of absolute knowing. Of course, Hegel’s thought is
not technological. Yet it is probably the first presentation, however abstract, of
singularitarian ideas. For the modern Hegelian, the singularity looks much like the
final self-realization of Spirit in absolute knowing (Zimmerman 2008).
Around 1870, the British writer Samuel Butler used Darwinian ideas to develop a
theory of the evolution of technology. In his essay ‘Darwin among the Machines’
and in his utopian novel Erewhon: Or, Over the Range (Butler 1872), Butler argues
that machines would soon evolve into artificial life-forms far superior to human
beings. Threatened by superhuman technology, the Erewhonians are notable for
rejecting all advanced technology. Also writing in the late 1800s, the American
philosopher Charles Sanders Peirce developed an evolutionary cosmology
(see Hausman 1993). Peirce portrays the universe as evolving from an initial chaos
to a final singularity of pure mind. Its evolution is accelerating as this tendency to
regularity acts upon itself. Although Pierce’s notion of progress was not based on
technology, his work is probably the earliest to discuss the notion of accelerating
progress itself. Of course, Peirce was also a first-rate logician; and as such, he was
among the first to believe that minds were computational machines.
Around 1900, the American writer Henry Adams (a descendant of President John
Quincy Adams) was probably the first writer
to describe a technological singularity. Adams was almost certainly the first person
to write about history as a self-accelerating technological process. His essay ‘The
Law of Acceleration’ (Adams 1904) may well be the first work to propose an
actual formula for the acceleration of technological change. Adams suggests
measuring technological progress by the amount of coal consumed by society. His
law of acceleration prefigures Kurzweil’s law of accelerating returns. His later
essay ‘The Rule of Phase’’ (Adams 1909) portrays history as accelerating through
several epochs—including the Instinctual, Religious, Mechanical, Electrical, and
Ethereal Phases. This essay contains what is probably the first illustration of
history as a curve approaching a vertical asymptote. Adams provides a mathe-
matical formula for computing the duration of each technological phase, and the
amount of energy that will be consumed during that phase. His epochs prefigure
Kurzweil's evolutionary epochs. Adams uses his formulae to argue that the
singularity will be reached by about the year 2025, a forecast remarkably close to
those of modern singularitarians.
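Adams's formula is usually reconstructed as a square-root rule: each phase of thought lasts roughly the square root of the duration of its predecessor. The sketch below follows that common reading of the 1909 essay and is offered only as an illustration.

import math

# Rule of Phase, as commonly reconstructed (illustrative figures from
# the 1909 essay): a 300-year Mechanical Phase ending in 1900, each
# later phase lasting the square root of the one before it.
mechanical = 300.0
electric = math.sqrt(mechanical)     # ~17.3 years
ethereal = math.sqrt(electric)       # ~4.2 years
limit = 1900 + electric + ethereal   # ~1921

print(f"Electric Phase ~{electric:.1f} yr, Ethereal Phase ~{ethereal:.1f} yr")
print(f"thought reaches the limit of its possibilities around {limit:.0f}")

# Adams noted that dating the Mechanical Phase earlier stretches the
# endpoint to about 2025, the forecast cited above.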
Much writing on the singularity owes a great debt to Teilhard de Chardin (1955;
see Steinhart 2008). Teilhard is among the first writers seriously to explore the
future of human evolution. He advocates both biological enhancement technolo-
gies and artificial intelligence. He discusses the emergence of a global computa-
tion-communication system (and is said by some to have been the first to have
envisioned the Internet). He proposes the development of a global society and
describes the acceleration of progress towards a technological singularity (which
he termed ‘‘the critical point’’). He discusses the spread of human intelligence into
the universe and its amplification into a cosmic-intelligence. Much of the more
religiously-expressed thought of Kurzweil (e.g. his definition of ‘God’ as the
omega point of evolution) ultimately comes from Teilhard.
Many of the ideas presented in recent literature on the singularity are fore-
shadowed in a prescient essay by George Harry Stine. Stine was a rocket engineer
and part-time science fiction writer. His essay 'Science Fiction is too Conservative'
was published in May 1961 in Analog, a widely read science-fiction magazine.
Like Adams, Stine uses trend curves to argue that a momentous
and disruptive event is going to happen in the early 21st century.
In 1970, Alvin and Heidi Toffler observed both acceleration and discontinuity
in their influential work, Future Shock. About acceleration, the Tofflers observed
that ‘the total output of goods and services in advanced societies doubles every
15 years, and that the doubling times are shrinking’ (Toffler 1970, p. 25). They
demonstrate accelerating change in every aspect of modern life: in transportation,
size of population centres, family structure, diversity of lifestyles, etc., and most
importantly, in the transition from factories as ‘means of production’ to knowledge
as the most fundamental source of wealth (Toffler 1980). The Tofflers conclude
that the transition to a knowledge-based society 'is, in all likelihood, bigger, deeper,
and more important than the industrial revolution. Nothing less than the second
great divide in human history, the shift from barbarism to civilization’ (Toffler
1970, p. 11).
During the 1980s, unprecedented advances in computing technology led to
renewed interest in the notion that technology is progressing towards some kind of
tipping-point or discontinuity. Moravec’s Mind Children (1988) revived research
into the nature of technological acceleration. Many more books followed, all
arguing for extraordinary future developments in robotics, artificial intelligence,
nanotechnology, and biotechnology. Kurzweil (1999) developed his law of
accelerating returns in The Age of Spiritual Machines. Broderick (2001) brought
these ideas together to argue for a future climax of technological progress that he
termed the spike. All these ideas were brought into public consciousness with the
publication of Kurzweil’s (2005) The Singularity is Near and its accompanying
movie. As the best-known defence of the singularity, Kurzweil’s work inspired
dozens of responses. One major assessment of singularitarian ideas was delivered
by the Special Report: The Singularity in IEEE Spectrum (June 2008). More recently,
notable work on the singularity has been done by the philosopher David Chalmers
(2010), whose analysis inspired an extended discussion in the Journal of
Consciousness Studies (vol. 19, nos. 1–2). The rapid growth in singularity research seems set
to continue and perhaps accelerate.
Essays in this Volume
The essays developed by our authors divide naturally into several groups. Essays
in Part I hold that a singularity of machine superintelligence is probable. Luke
Muehlhauser and Anna Salamon of the Singularity Institute for Artificial Intelli-
gence argue that an intelligence explosion is likely and examine some of its
consequences. They make recommendations designed to ensure that the emerging
superintelligence will be beneficial, rather than detrimental, to humanity. Itamar
Arel, a computer scientist, argues that artificial general intelligence may become
an extremely powerful and disruptive force. He describes how humans might
shape the emergence of superhuman intellects so that our relations with such
intellects are more cooperative than competitive. Juergen Schmidhuber, also a
computer scientist, presents substantial evidence that improvements in artificial
intelligence are rapidly progressing towards human levels. Schmidhuber is opti-
mistic that, if future trends continue, we will face an intelligence explosion within
the next few decades. The last essay in this part is by Richard Loosemore and Ben
Goertzel who examine various objections to an intelligence explosion and con-
clude that they are not persuasive.
Essays in Part II are concerned with the values of agents that may result from a
singularity of artificial intellects. Luke Muehlhauser and Louie Helm ask what it
would mean for artificial intellects to be friendly to humans, conclude that human
values are complex and difficult to specify, and discuss techniques we might use to
ensure the friendliness of artificial superintelligent agents. Joshua Fox and Roman
Yampolskiy consider the psychologies of artificial intellects. They argue that
human-like mentalities occupy only a very small part of the space of possible
minds. If Fox and Yampolskiy are right, then it is likely that such minds, especially
if superintelligent, will scarcely be recognizable to us at all. The values and goals
of such minds will be alien, and perhaps incomprehensible in human terms. This
strangeness creates challenges, some of which are discussed in James Miller’s
essay. Miller examines the economic issues associated with a singularity of arti-
ficial superintelligence. He shows that although the singularity of artificial
superintelligence may be brought about by economic competition, one paradoxical
consequence might be the destruction of the value of money. More worryingly,
Miller suggests that a business capable of creating an artificial
superintelligence would face a unique set of economic incentives likely to push it
to make its creation deliberately unfriendly. To counter such worries, Steve Omohundro
examines how market forces may shape the behaviour of artificial intellects. Omohundro proposes a
variety of strategies to ensure that any artificial intellects will have human-friendly
values and goals. Eliezer Yudkowsky concludes this part by considering the ways
that artificial superintelligent intellects may radically differ from humans and the
urgent need for us to take those differences into account.
Whereas essays in Parts I and II are concerned with the intelligence explosion
scenario—a singularity deriving from the evolution of intelligence in silicon—the
essays in Part III are concerned with the evolution that humans may undergo via
enhancement, amplification, and modification, and with the scenario in which a
race of superintelligent posthumans emerges. David Pearce conceives of humans
as ‘recursively self-improving organic robots’ poised to re-engineer their own
genetic code and bootstrap their way to full-spectrum superintelligence. Hyper-
social and supersentient, the successors of archaic humanity may phase out the
biology of suffering throughout the living world. Randal Koene examines how the
principles of evolution apply to brain emulations. He argues that intelligence
entails autonomy, so that future 'substrate-independent minds' (SIMs) may hold
values that humans find alien. Koene nonetheless hopes that, since SIMs will
originate from our own brains, human values will play significant roles in superintel-
ligent, 'disembodied' minds. Dennis Bray examines the biochemical mechanisms
of the brain. He concludes that building fully functional emulations by reverse-
engineering human brains may entail much more than modelling neurons and
synapses. However, there are other ways to gain inspiration from the evolution of
biological intelligence. We may be able to harness brain physiology and natural
selection to evolve new types of intelligence, and perhaps superhuman intelli-
gence. David Roden worries that the biological moral heritage of humanity may
disappear entirely after the emergence of superintelligent intellects, whether arti-
ficial or of biological origin. Such agents may emerge with utterly novel features
and behaviour that cannot be predicted from their evolutionary histories.
The essays in Part IV of the volume are skeptical about the singularity, each
focusing on a particular aspect such as the intelligence explosion or the prospects
of acceleration continuing over the next few decades. A report developed by the
Association for the Advancement of Artificial Intelligence (AAAI) considers the future
development of artificial intelligence (AI). While optimistic about specific advances, the report
is highly skeptical about grand predictions of an intelligence explosion, of a
'coming singularity', and about any loss of human control. Alessio Plebe and
Pietro Perconti argue that the trend analysis as singularitarians present it is faulty:
the pace of change is not accelerating but in fact slowing down,
and may even be starting to decline. Futurist Theodore Modis is deeply skeptical about any
type of singularity. He focuses his skepticism on Kurzweil's work, arguing that
analysis of past trends does not support long-term future acceleration. For Modis,
technological change takes the form of S-curves (logistic functions), which means
that its trajectory is consistent with exponential acceleration for only a very short
time (see the sketch following this paragraph). Modis expects computing and related
technologies to slow down and level
off. While technological advances will continue to be disruptive, there will be no
singularity. Other authors go further and argue that most literature on the singu-
larity is not genuinely scientific but theological. Focusing on Kurzweil’s work,
Diane Proudfoot’s essay develops the notion that singularitarianism is a kind of
millenarian ideology (Bozeman 1997; Geraci 2010; Steinhart 2012) or ‘the reli-
gion of technology’ (Noble 1999). Selmer Bringsjord, Alexander Bringsjord, and
Paul Bello compare belief in the singularity to fideism in traditional Christianity,
which denies the relevance of evidence or reason.
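Modis's S-curve objection can be seen in a few lines of arithmetic. The following sketch uses parameters of our own choosing, fitted to nothing; it shows only why early data cannot distinguish a logistic curve from the exponential it shadows.

import math

# Logistic (S-curve) versus exponential growth with illustrative,
# assumed parameters: carrying capacity K, growth rate r, start x0.
K, r, x0 = 1000.0, 0.5, 1.0

def exponential(t):
    return x0 * math.exp(r * t)

def logistic(t):
    return K / (1 + (K / x0 - 1) * math.exp(-r * t))

for t in (0, 4, 8, 12, 16, 20):
    print(f"t={t:2d}: exponential={exponential(t):10.1f} logistic={logistic(t):7.1f}")

# Up to t ~ 8 the two curves agree to within a few per cent; past the
# inflection point the logistic levels off at K while the exponential
# runs away. Extrapolating early data therefore cannot discriminate
# between a coming 'singularity' and mere saturation.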
The last essay in Part IV offers an ambitious theory of acceleration that attempts
to unify cosmic evolution with biological, cultural and technological evolution.
Eric Chaisson argues that complexity can be shown consistently to increase from
the Big Bang to the present, and that the same forces that drive the rise of com-
plexity in Nature generally also underlie technological progress. To support this
sweeping argument, Chaisson defines the physical quantity of energy rate density
and shows how it unifies the view of accelerating growth across physical,
biological, cultural, and technological evolution. But while Chaisson accepts the
first element of the technological singularity, acceleration, he rejects the second,
discontinuity—hence the singularity: ‘there is no reason to claim that the next
evolutionary leap forward beyond sentient beings and their amazing gadgets will
be any more important than the past emergence of increasingly intricate complex
systems.’ Chaisson reminds us that our little planet is not the only place in the
universe where evolution is happening. Our machines may achieve superhuman
intelligence. But perhaps a technological singularity will happen first elsewhere in
the cosmos. Maybe it has already done so.
Conclusions
History shows time and again that the predictions made by futurists (and econo-
mists, sociologists, politicians, etc.) have been confounded by the behaviour of
self-reflexive agents. Some forecasts are self-fulfilling, others self-stultifying.
Where, if at all, do predictions of a technological singularity fit into this typology?
How are the lay public and political elites likely to respond if singularitarian ideas gain
widespread currency? Will the 21st century mark the end of the human era? And if
so, will biological humanity’s successors be our descendants? It is our hope and
belief that this volume will help to move these questions beyond the sometimes
wild speculations of the blogosphere and promote the growth of singularity studies
as a rigorous scholarly discipline.
References
Adams, H. (1909). The rule of phase applied to history. In H. Adams & B. Adams (Eds.) (1920)
The degradation of the democratic dogma. (pp. 267–311). New York: Macmillan.
Adams, H. (1904). A law of acceleration. In H. Adams (1919) The education of Henry Adams.
New York: Houghton Mifflin, Chap. 34.
Bostrom, N. (to appear). Intelligence explosion.
Bozeman, J. (1997). Technological millenarianism in the United States. In T. Robbins & S.
Palmer (Eds.) (1997) Millennium, messiahs, and mayhem: contemporary apocalyptic
movements (pp. 139–158). New York: Routledge.
Broderick, D. (2001). The spike: how our lives are being transformed by rapidly advancing
technologies. New York: Tom Doherty Associates.
Butler, S. (1872/1981). Erewhon: or, over the range. H. P. Breuer & D. F. Howard (Eds.).
Newark: University of Delaware Press.
Chalmers, D. (2010). The singularity: a philosophical analysis. Journal of Consciousness Studies
17, 7–65.
Garreau, J. (2005). Radical evolution. New York: Doubleday.
Geraci, R. (2010). Apocalyptic AI: visions of heaven in robotics, artificial intelligence, and virtual
reality. New York: Oxford University Press.
Good, I. (1965). Speculations concerning the first ultraintelligent machine. In Alt, F., Rubinoff,
M. (Eds.) Advances in Computers Vol. 6. New York: Academic Press.
Hanson, R. (1994). If uploads come first: crack of a future dawn. Extropy 6 (1), 10–15.
Hausman, C. (1993). Charles S. Peirce’s evolutionary philosophy. New York: Cambridge
University Press.
Hegel, G. W. F. (1807/1977). Phenomenology of spirit. Trans. A. V. Miller. New York: Oxford
University Press.
Hirshfeld, Y. (2011). A note on mathematical singularity and technological singularity. The
singularity hypothesis, blog entry, 5 Feb. Available at
http://singularityhypothesis.blogspot.co.uk/2011/02/note-on-mathematical-singularity-and.html.
Horgan, J. (2008). The consciousness conundrum. IEEE Spectrum 45(6), 36–41.
Joy, B. (2000). Why the future doesn’t need us. http://www.wired.com/wired/archive/8.04/joy.
html.
Kurzweil, R. (1999). The age of spiritual machines. New York: Penguin.
Kurzweil, R. (2005). The singularity is near: when humans transcend biology. New York: Viking.
Lovejoy, A. (1936). The great chain of being. Cambridge: Harvard University Press.
Meadows, D. H., Meadows, D. L, Randers, J., & Behrens, W. (1972). The limits to growth: a
report for the club of Rome project on the predicament of mankind. New York: Universe
Books.
Meadows, D. H., Randers, J., & Meadows, D. L (2004). The limits to growth: the thirty year
update. White River Junction, VT: Chelsea Green Books.
Modis, T. (2003). The limits of complexity and change. The Futurist (May–June), 26–32.
Moore, G. E. (1965). Cramming more components onto integrated circuits. Electronics 38(8),
114–117.
Moravec, H. (1988). Mind children: the future of robot and human intelligence. Cambridge:
Harvard University Press.
Moravec, H. (2000). Robot: mere machine to transcendent mind. New York: Oxford University
Press.
Noble, D. F. (1999). The religion of technology: the divinity of man and the spirit of invention.
New York: Penguin.
Paul, G. S., & Cox, E. D. (1996). Beyond humanity: cyberevolution and future minds. Rockland,
MA: Charles River Media.
Sagan, C. (1977). The dragons of Eden. New York: Random House.
Steinhart, E. (2008). Teilhard de Chardin and transhumanism. Journal of Evolution and
Technology 20, 1–22. Online at jetpress.org/v20/steinhart.htm.
Steinhart, E. (2012). Digital theology: is the resurrection virtual? In M. Luck (Ed.) (2012) A
philosophical exploration of new and alternative religious movements. Farnham, UK:
Ashgate.
Stine, G. H. (1961). Science fiction is too conservative. Analog Science Fact and Fiction LXVII
(3), 83–99.
Teilhard de Chardin, P. (1955/2002). The phenomenon of man. Trans. B. Wall. New York:
Harper Collins. Originally written 1938–1940.
Toffler, A. (1970). Future shock. New York: Random House.
Toffler, A. (1980). The third wave. New York: Bantam.
Turing, A. M. (1950). Computing machinery and intelligence. Mind 59 (236), 433–460.
Turing, A. M. (1951). Intelligent machinery, a heretical theory. The '51 Society, BBC programme.
Ulam, S. (1958). Tribute to John von Neumann. Bulletin of the American Mathematical Society
64(3, part 2), 1–49.
Vinge, V. (1993). The coming technological singularity: how to survive in the post-human era.
In Proc. Vision 21: interdisciplinary science and engineering in the era of cyberspace,
(pp. 11–22). NASA: Lewis Research Center.
Zimmerman, M. (2008). The singularity: a crucial phase in divine self-actualization? Cosmos and
History: The Journal of Natural and Social Philosophy 4(1–2), 347–370.
12 A. H. Eden et al.

Chapters (19)

Bill Joy in a widely read but controversial article claimed that the most powerful 21st century technologies are threatening to make humans an endangered species. Indeed, a growing number of scientists, philosophers and forecasters insist that the accelerating progress in disruptive technologies such as artificial intelligence, robotics, genetic engineering, and nanotechnology may lead to what they refer to as the technological singularity: an event or phase that will radically change human civilization, and perhaps even human nature itself, before the middle of the 21st century.
In this chapter we review the evidence for and against three claims: that (1) there is a substantial chance we will create human-level AI before 2100, that (2) if human-level AI is created, there is a good chance vastly superhuman AI will follow via an “intelligence explosion,” and that (3) an uncontrolled intelligence explosion could destroy everything we value, but a controlled intelligence explosion would benefit humanity enormously if we can achieve it. We conclude with recommendations for increasing the odds of a controlled intelligence explosion relative to an uncontrolled intelligence explosion.
Once introduced, Artificial General Intelligence (AGI) will undoubtedly become humanity’s most transformative technological force. However, the nature of such a force is unclear with many contemplating scenarios in which this novel form of intelligence will find humans an inevitable adversary. In this chapter, we argue that if one is to consider reinforcement learning principles as foundations for AGI, then an adversarial relationship with humans is in fact inevitable. We further conjecture that deep learning architectures for perception in concern with reinforcement learning for decision making pave a possible path for future AGI technology and raise the primary ethical and societal questions to be addressed if humanity is to evade catastrophic clashing with these AGI beings.
The abstract should summarize the contents of the paper and should contain at least 70 and at most 150 words. It should be set in 9-point font size and should be inset 1.0 cm from the right and left margins. There should be two blank (10-point) lines before and after the abstract. This document is in the required format.
Many researchers have argued that a self-improving artificial intelligence (AI) could be-come so vastly more powerful than humans that we would not be able to stop it from achieving its goals. If so, and if the AI's goals differ from ours, then this could be dis-astrous for humans. One proposed solution is to program the AI's goal system to want what we want before the AI self-improves beyond our capacity to control it. Unfortu-nately, it is difficult to specify what we want. After clarifying what we mean by "intel-ligence," we offer a series of "intuition pumps" from the field of moral philosophy for our conclusion that human values are complex and difficult to specify. We then survey the evidence from the psychology of motivation, moral psychology, and neuroeconomics that supports our position. We conclude by recommending ideal preference theories of value as a promising approach for developing a machine ethics suitable for navigating an intelligence explosion or "technological singularity."
When the first artificial general intelligences are built, they may improve themselves to far-above-human levels. Speculations about such future entities are already affected by anthropomorphic bias, which leads to erroneous analogies with human minds. In this chapter, we apply a goal-oriented understanding of intelligence to show that humanity occupies only a tiny portion of the design space of possible minds. This space is much larger than what we are familiar with from the human example; and the mental architectures and goals of future superintelligences need not have most of the properties of human minds. A new approach to cognitive science and philosophy of mind, one not centered on the human example, is needed to help us understand the challenges which we will face when a power greater than us emerges.
A business that created an artificial general intelligence (AGI) could earn trillions for its investors, but might also bring about a “technological Singularity” that destroys the value of money. Such a business would face a unique set of economic incentives that would likely push it to behave in a socially sub-optimal way by, for example, deliberately making its software incompatible with a friendly AGI framework.
Today’s technology is mostly preprogrammed but the next generation will make many decisions autonomously. This shift is likely to impact every aspect of our lives and will create many new benefits and challenges. A simple thought experiment about a chess robot illustrates that autonomous systems with simplistic goals can behave in anti-social ways. We summarize the modern theory of rational systems and discuss the effects of bounded computational power. We show that rational systems are subject to a variety of “drives” including self-protection, resource acquisition, replication, goal preservation, efficiency, and self-improvement. We describe techniques for counteracting problematic drives. We then describe the “Safe-AI Scaffolding” development strategy and conclude with longer term strategies for ensuring that intelligent technology contributes to the greater human good.
By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it. Of course this problem is not limited to the field of AI. Jacques Monod wrote: “A curious aspect of the theory of evolution is that everybody thinks he understands it”. Nonetheless the problem seems to be unusually acute in Artificial Intelligence.
This essay explores how recursively self-improving organic robots will modify their own genetic source code and bootstrap our way to full-spectrum superintelligence. Starting with individual genes, then clusters of genes, and eventually hundreds of genes and alternative splice variants, tomorrow’s biohackers will exploit “narrow” AI to debug human source code in a positive feedback loop of mutual enhancement. Genetically enriched humans can potentially abolish aging and disease; recalibrate the hedonic treadmill to enjoy gradients of lifelong bliss, and phase out the biology of suffering throughout the living world.
More important than debates about the nature of a possible singularity is that we successfully navigate the balance of opportunities and risks that our species is faced with. In this context, we present the objective to upload to substrate-independent minds (SIM). We emphasize our leverage along this route, which distinguishes it from proposals that are mired in debates about optimal solutions that are unclear and unfeasible. We present a theorem of cosmic dominance for intelligence species based on principles of universal Darwinism, or simply, on the observation that selection takes place everywhere at every scale. We show that SIM embraces and works with these facts of the physical world. And we consider the existential risks of a singularity, particularly where we may be surpassed by artificial intelligence (AI). It is unrealistic to assume the means of global cooperation needed to the create a putative "friendly" super-intelligent AI. Besides, no one knows how to implement such a thing. The very reasons that motivate us to build AI lead to machines that learn and adapt. An artificial general intelligence (AGI) that is plastic and at the same time implements an unchangeable "friendly" utility function is an oxymoron. By contrast, we note that we are living in a real world example of a Balance of Intelligence between members of a dominant intelligent species. We outline a concrete route to SIM through a set of projects on whole brain emulation (WBE). The projects can be completed in the next few decades. So, when we compare this with plans to "cure aging" in human biology, SIM is clearly as feasible in the foreseeable future – or more so. In fact, we explain that even in the near term life extension will require mind augmentation. Rationality is a wonderful tool that helps us find effective paths to our goals, but the goals arise from a combination of evolved drives and interests developed through experience. The route to a new Balance of Intelligence by SIM has this additional benefit, that it does acknowledges our emancipation and does not run counter to our desire to participate in advances and influence future directions.
Many biologists, especially those who study the biochemistry or cell biology of neural tissue are sceptical about claims to build a human brain on a computer. They know from first hand how complicated living tissue is and how much there is that we still do not know. Most importantly a biologist recognizes that a real brain acquires its functions and capabilities through a long period of development. During this time molecules, connections, and large scale features of anatomy are modified and refined according to the person’s environment. No present-day simulation approaches anything like the complexity of a real brain, or provides the opportunity for this to be reshaped over a long period of development. This is not to deny that machines can achieve wonders: they can perform almost any physical or mental task that we set them—faster and with greater accuracy than we can ourselves. However, in practice present day intelligent machines still fall behind biological brains in a variety of tasks, such as those requiring flexible interactions with the surrounding world and the performance of multiple tasks concurrently. No one yet has any idea how to introduce sentience or self-awareness into a machine. Overcoming these deficits may require novel forms of hardware that mimic more closely the cellular machinery found in the brain as well as developmental procedures that resemble the process of natural selection.
In this essay I claim that Vinge’s idea of a technologically led intelligence explosion is philosophically important because it requires us to consider the prospect of a posthuman condition succeeding the human one. What is the “humanity” to which the posthuman is “post”? Does the possibility of a posthumanity presuppose that there is a ‘human essence’, or is there some other way of conceiving the human-posthuman difference? I argue that the difference should be conceived as an emergent disconnection between individuals, not in terms of the presence or lack of essential properties.
The AAAI 2008-09 Presidential Panel on Long-Term AI Futures was organized by the president of the Association for the Advancement of Artificial Intelligence (AAAI) to bring together a group of thoughtful computer scientists to explore and reflect about societal aspects of advances in machine intelligence (computational procedures for automated sensing, learning, reasoning, and decision making). The panelists are leading AI researchers, well known for their significant contributions to AI theory and practice. Although the final report of the panel has not yet been issued, we provide background and high-level summarization of several findings in this interim report.
The concept of a Singularity as described in Ray Kurzweil’s book cannot happen for a number of reasons. One reason is that all natural growth processes that follow exponential patterns eventually reveal themselves to be following S-curves thus excluding runaway situations. The remaining growth potential from Kurzweil’s “knee”, which could be approximated as the moment when an S-curve pattern begins deviating from the corresponding exponential, is a factor of only one order of magnitude greater than the growth already achieved. A second reason is that there is already evidence of a slowdown in some important trends. The growth pattern of the U.S. GDP is no longer exponential. Had Kurzweil been more rigorous in his fitting procedures, he would have recognized it. Moore’s law and the Microsoft Windows operating systems are both approaching end-of-life limits. The Internet rush has also ended—for the time being—as the number of users stopped growing; in the western world because of saturation and in the underdeveloped countries because infrastructures, education, and the standard of living there are not yet up to speed. A third reason is that society is capable of auto-regulating runaway trends as was the case with deadly car accidents, the AIDS threat, and rampant overpopulation. This control goes beyond government decisions and conscious intervention. Environmentalists who fought nuclear energy in the 1980s, may have been reacting only to nuclear energy’s excessive rate of growth, not nuclear energy per se, which is making a comeback now. What may happen instead of a Singularity is that the rate of change soon begins slowing down. The exponential pattern of change witnessed up to now dictates more milestone events during year 2025 than witnessed throughout the entire 20th century! But such events are already overdue today. If, on the other hand, the change growth pattern has indeed been following an S-curve, then the rate of change is about to enter a declining trajectory; the baby boom generation will have witnessed more change during their lives than anyone else before or after them.
The so-called singularity hypothesis embraces the most ambitious goal of Artificial Intelligence: the possibility of constructing human-like intelligent systems. The intriguing addition is that once this goal is achieved, it would not be too difficult to surpass human intelligence. While we believe that none of the philosophical objections against strong AI are really compelling, we are skeptical about a singularity scenario associated with the achievement of human-like systems. Several reflections on the recent history of neuroscience and AI, in fact, seem to suggest that the trend is going in the opposite direction.
According to the early futurist Julian Huxley, human life as we know it is ‘a wretched makeshift, rooted in ignorance’. With modern science, however, ‘the present limitations and miserable frustrations of our existence could be in large measure surmounted’ and human life could be ‘transcended by a state of existence based on the illumination of knowledge’ (1957b, p. 16).
We deploy a framework for classifying the bases for belief in a category of events marked by being at once weighty, unseen, and temporally removed (wutr, for short). While the primary source of wutr events in Occidental philosophy is the list of miracle claims of credal Christianity, we apply the framework to belief in The Singularity, surely—whether or not religious in nature—a wutr event. We conclude from this application, and the failure of fit with both rationalist and empiricist argument schemas in support of this belief, not that The Singularity won’t come to pass, but rather that regardless of what the future holds, believers in the “machine intelligence explosion” are simply fideists. While it’s true that fideists have been taken seriously in the realm of religion (e.g. Kierkegaard in the case of some quarters of Christendom), even in that domain the likes of orthodox believers like Descartes, Pascal, Leibniz, and Paley find fideism to be little more than wishful, irrational thinking—and at any rate it’s rather doubtful that fideists should be taken seriously in the realm of science and engineering.
Nature’s myriad complex systems—whether physical, biological or cultural—are mere islands of organization within increasingly disordered seas of surrounding chaos. Energy is a principal driver of the rising complexity of all such systems within the expanding, ever-changing Universe; indeed energy is as central to life, society, and machines as it is to stars and galaxies. Energy flow concentration—in contrast to information content and negentropy production—is a useful quantitative metric to gauge relative degree of complexity among widely diverse systems in the one and only Universe known. In particular, energy rate densities for human brains, society collectively, and our technical devices have now become numerically comparable as the most complex systems on Earth. Accelerating change is supported by a wealth of data, yet the approaching technological singularity of 21st century cultural evolution is neither more nor less significant than many other earlier singularities as physical and biological evolution proceeded along an undirectional and unpredictable path of more inclusive cosmic evolution, from big bang to humankind. Evolution, broadly construed, has become a powerful unifying concept in all of science, providing a comprehensive worldview for the new millennium—yet there is no reason to claim that the next evolutionary leap forward beyond sentient beings and their amazing gadgets will be any more important than the past emergence of increasingly intricate complex systems. Nor is new science (beyond non-equilibrium thermodynamics) necessarily needed to describe cosmic evolution’s interdisciplinary milestones at a deep and empirical level. Humans, our tools, and their impending messy interaction possibly mask a Platonic simplicity that undergirds the emergence and growth of complexity among the many varied systems in the material Universe, including galaxies, stars, planets, life, society, and machines.
... In other case, the focus from this perspective is on three aspects: 1) growing revenue; 2) growing the customer base; 3) scaling the firm to serve a large and usually global market (Sullivan, 2016). As previously highlighted, the ExOs concept refers to the exponential growth of technology, "borrowing" evidence gathered from singularity research (Kurzweil, 2001;Kurzweil, 2006;Vinge, 1993;Eden, et al., 2013). Such research has gained particular attraction, as the focus on the acceleration of technological innovation allows an easy validation. ...
Conference Paper
Exponential Organizations (ExOs) are firms able to continuously disrupt their reference markets through an extremely ambitious purpose, unconventional ways of organizing, and an adaptive culture-all of which are catalysed through a proper usage of digital technologies. Despite ExO concept is gaining momentum among practitioners, we have scant evidence on how "going exponential". This work is based on a comprehensive and systematic literature review with a twofold aim: (1) understanding and reviewing the theoretical lenses to better interpret the topic; (2) rigorously placing it in the scientific literature, understanding a potential future research agenda for further deepening it. This research relies on an inductive approach and a bibliometric analysis carried out through VOSviewer to map ExO research with co-occurrences and bibliographic coupling analysis. As the term ExO is not yet systematically used into the scientific literature, we included several similar concepts that are related to this peculiar kind of organizations. Our findings allow demystifying the ExO concept and considering it as a way of thinking that allow fostering the development of dynamic capabilities, which are conducive to the generation of competitive advantages in highly turbulent contexts. Theoretical and empirical contributions are discussed together with a potential research agenda.
... The term is used to refer to a point in time after which the development of AI technology becomes irreversible and uncontrollable, 1 with unknown consequences for the future of humanity. These developments are seen by Kurzweil as an inevitable consequence of the achievement of AGI, and he too believes that we are approaching ever closer to the point where AGI will in fact be achieved (Eden and Moor 2012). Proponents of the Singularity idea believe that, once the Singularity is reached, AGI machines themselves will develop their own will and begin to act autonomously, potentially detaching themselves from their human creators in ways that will threaten human civilisation (Weinstein and ...
... It is still a hypothesis, and it is still unclear whether this is possible in principle, but progress in this direction is much faster than previously thought, and its pace may increase significantly in the future. That is why there is a danger of the uncontrolled use of increasingly powerful and sophisticated AI. 5 It has been pointed out that the rapid development of artificial intelligence technology is the main challenge for humanity (Eden et al. 2012). Since this was first discussed, communication technologies, data analysis and surveillance technologies have advanced very significantly, even radically. ...
Article
Full-text available
The article is devoted to the history of the development of ICT and AI, their current and expected future achievements, and the problems (which have already arisen but will become even more acute in the future) associated with the development of these technologies and their widespread application in society. It shows the close connection between the development of AI and cognitive science, and the penetration of ICT and AI into various spheres, particularly health care, including the very intimate areas related to the creation of digital copies of the deceased and posthumous contact with them. A significant part of the article is devoted to the analysis of the concept of ‘artificial intelligence’, including the definition of generative AI. The authors analyse recent achievements in the field of artificial intelligence, describe the basic models, in particular Large Language Models (LLMs), and offer forecasts of the development of AI and the dangers that await us in the coming decades. The authors identify the forces behind the aspiration to create AI that increasingly approaches the capabilities of so-called general/universal AI, and also suggest desirable measures to limit and channel the development of artificial intelligence. It is emphasized that the threats and dangers of the development of ICT and AI are particularly aggravated by the monopolization of their development by the state, intelligence services, major corporations and those often referred to as globalists. The article provides forecasts of the development of computers, ICT and AI in the coming decades, and also shows the changes in society that will be associated with them. The study consists of two articles. The first, published in the previous issue of the journal, provided a brief historical overview and characterized the current situation in the field of ICT and AI. It also analyzed the concepts of artificial intelligence, including generative AI, and the changes in the understanding of AI related to the emergence of so-called large language models and the new types of AI programs built on them (ChatGPT and similar models). That article discussed the serious problems and dangers associated with the rapid and uncontrolled development of artificial intelligence. This second article describes and comments on current assessments of breakthroughs in the field of AI, analyzes various predictions, and provides the authors' own assessments and predictions of future developments. Particular attention is paid to the problems and dangers associated with the rapid and uncontrolled development of AI: the fact that advances in this field become a powerful means of control over the population, of imposing ideologies, priorities and lifestyles, and of influencing the results of elections, as well as a tool to undermine security and for geopolitical struggle.
... These, Dator suggests, are usually conceived of as being either technological or spiritual/consciousness-based in nature. The case of The Singularity could be considered an intriguing "hybrid" form of these idealized types, wherein consciousness transfers itself onto a technological substrate (e.g., Broderick 1997; Eden et al. 2012; Kurzweil 1999, 2006; Smart 2003). Let us consider the technology subtype first. ...
Article
Full-text available
This study aims to evaluate quantitatively (albeit in arbitrary units) the evolution of complexity of the human system since the domestication of fire. This is made possible by studying the timing of the 14 most important milestones—breaks in historical perspective—in the evolution of humans. AI is considered here as the latest such milestone, with importance comparable to that of the Internet. The complexity is modeled to have evolved along a bell-shaped curve, reaching a maximum around our times, and soon entering a declining trajectory. According to this curve, the next evolutionary milestone of comparable importance is expected around 2050–2052 and should add less complexity than AI but more than the milestone grouping together nuclear energy, DNA, and the transistor. The peak of the complexity curve coincides squarely with the life span of the baby boomers. The peak in the rate of growth of the world population precedes the complexity peak by 25 years, which is about the time it takes a young person to become able to add complexity to the human system in a significant way. It is in society's interest to flatten the bell-shaped complexity curve, to whatever extent this is possible, in order to enjoy complexity longer.
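The modelling approach the abstract describes, milestones spaced along a bell-shaped complexity curve, can be sketched as follows (a minimal illustration, not the author's code; the peak year, curve width, and step size are hypothetical placeholders). If complexity is added at a Gaussian rate and each milestone contributes a comparable slice of the total, milestones fall at equal increments of the cumulative curve: crowding near the peak, then spreading out again.

# Sketch: milestones at equal increments of a Gaussian cumulative curve.
# All parameters are hypothetical, chosen only to illustrate the mechanism.
import math

T_PEAK, SIGMA = 2000.0, 300.0  # assumed peak year and width of the rate curve

def cumulative(t):
    """Fraction of total complexity accumulated by year t (Gaussian CDF)."""
    return 0.5 * (1.0 + math.erf((t - T_PEAK) / (SIGMA * math.sqrt(2.0))))

def next_milestone(t_now, n_total=14):
    """Year by which the next 1/n_total slice of total complexity is added."""
    target = cumulative(t_now) + 1.0 / n_total
    if target >= 1.0:
        return None  # the model's total complexity budget is exhausted
    t = t_now
    while cumulative(t) < target:  # crude forward search in 1-year steps
        t += 1.0
    return t

print(next_milestone(2020.0))  # past the peak, spacing between milestones grows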
Preprint
An artificial superintelligence (ASI) is artificial intelligence that is significantly more intelligent than humans in all respects. While ASI does not currently exist, some scholars propose that it could be created sometime in the future, and furthermore that its creation could cause a severe global catastrophe, possibly even resulting in human extinction. Given the high stakes, it is important to analyze ASI risk and factor the risk into decisions related to ASI research and development. This paper presents a graphical model of major pathways to ASI catastrophe, focusing on ASI created via recursive self-improvement. The model uses the established risk and decision analysis modeling paradigms of fault trees and influence diagrams in order to depict combinations of events and conditions that could lead to AI catastrophe, as well as intervention options that could decrease risks. The events and conditions include select aspects of the ASI itself as well as the human process of ASI research, development, and management. Model structure is derived from published literature on ASI risk. The model offers a foundation for rigorous quantitative evaluation and decision making on the long-term risk of ASI catastrophe.
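The fault-tree style of risk modelling the abstract refers to can be illustrated with a toy calculation (a sketch only; the events, gate structure, and probabilities below are hypothetical placeholders, not taken from the paper). In a fault tree, basic events combine through AND/OR gates into a top event, here an ASI catastrophe:

# Toy fault tree: the top event requires that an ASI is built AND that the
# safeguards fail. Independence of events is assumed throughout; every
# probability below is a hypothetical placeholder for illustration.

def or_gate(probs):
    """P(at least one event occurs), assuming independence."""
    p_none = 1.0
    for p in probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

def and_gate(probs):
    """P(all events occur), assuming independence."""
    p_all = 1.0
    for p in probs:
        p_all *= p
    return p_all

p_asi_built = or_gate([0.10, 0.05])       # via self-improvement OR other routes
p_controls_fail = and_gate([0.50, 0.40])  # containment fails AND goals misaligned

p_catastrophe = and_gate([p_asi_built, p_controls_fail])
print(f"P(top event) = {p_catastrophe:.4f}")

Interventions in such a model correspond to lowering the probability of a basic event or adding a further AND-ed safeguard, which is what makes the structure useful for comparing risk-reduction options.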
Article
This article reviews the legacy of Hakob Pogosovich Nazaretyan (1947–2019), a distinguished Russian scholar, with a focus on the ideologization of societies and meaning formation as the major challenges of our time. The concept of historical singularity, a phenomenon of planetary significance that shapes the emerging scenarios of today's world landscape, is defined. The article highlights the growing importance, both for the international community and for Russians, of maintaining a balance between the technological and humanistic aspects of universal culture in order to effectively address the escalating global crisis. Based on an analysis of H.P. Nazaretyan's key works, the role of ideology in this process is examined: its productive anti-entropic function in the past and its counterproductive role in the present and future.
Book
Full-text available
The parallel history of the evolution of human intelligence and artificial intelligence is a fascinating journey, highlighting the distinct but interconnected paths of biological evolution and technological innovation. This history can be seen as a series of interconnected developments, with each advance in human intelligence paving the way for the next leap in artificial intelligence. Human intelligence and artificial intelligence have long been intertwined, evolving along parallel trajectories throughout history. As humans have sought to understand and reproduce intelligence, AI has emerged as a field dedicated to creating systems capable of tasks that traditionally require human intellect. This book examines the evolutionary roots of intelligence, explores the emergence of artificial intelligence, traces the parallel development of the two and the profound impact they have had on each other, and envisions future landscapes in which human and artificial intelligence converge. Let us explore this history, comparing key milestones and developments in both realms.
Chapter
The following chapters present and elaborate on the leading themes about emergence in contemporary philosophy. Due to the conceptual subtlety and multifaceted nature of emergence as a topic of discussion, philosophical precision is required so that it is easier to understand what emergence is and when it occurs. One central question surrounding the topic is how to characterize emergent phenomena more accurately. Emergent phenomena frequently are taken to be irreducible, to be unpredictable or unexplainable, to require novel concepts, and to be holistic. Some accounts of emergence favor only one idea to the exclusion of all the others, while others simultaneously embrace many. These are merely indications of the leading ideas about emergence and do not, in any way, constitute an exhaustive list.
Book
In Good and Real, Gary Drescher examines a series of provocative paradoxes about consciousness, choice, ethics, quantum mechanics, and other topics, in an effort to reconcile a purely mechanical view of the universe with key aspects of our subjective impressions of our own existence. Many scientists suspect that the universe can ultimately be described by a simple (perhaps even deterministic) formalism; all that is real unfolds mechanically according to that formalism. But how, then, is it possible for us to be conscious, or to make genuine choices? And how can there be an ethical dimension to such choices? Drescher sketches computational models of consciousness, choice, and subjunctive reasoning—what would happen if this or that were to occur?—to show how such phenomena are compatible with a mechanical, even deterministic universe. Analyses of Newcomb's Problem (a paradox about choice) and the Prisoner's Dilemma (a paradox about self-interest vs. altruism, arguably reducible to Newcomb's Problem) help bring the problems and proposed solutions into focus. Regarding quantum mechanics, Drescher builds on Everett's relative-state formulation—presenting a simplified formalism accessible to laypersons—to argue that, contrary to some popular impressions, quantum mechanics is compatible with an objective, deterministic physical reality, and that there is no special connection between quantum phenomena and consciousness. In each of several disparate but intertwined topics ranging from physics to ethics, Drescher argues that a missing technical linchpin can make the quest for objectivity seem impossible, until the elusive technical fix is at hand.
Chapter
Human beings are a marvel of evolved complexity. Such systems can be difficult to enhance. When we manipulate complex evolved systems, which are poorly understood, our interventions often fail or backfire. It can appear as if there is a “wisdom of nature” which we ignore at our peril. Sometimes the belief in nature’s wisdom—and corresponding doubts about the prudence of tampering with nature, especially human nature—manifests as diffusely moral objections against enhancement. Such objections may be expressed as intuitions about the superiority of the natural or the troublesomeness of hubris or as an evaluative bias in favor of the status quo. This chapter explores the extent to which such prudence-derived anti-enhancement sentiments are justified. We develop a heuristic, inspired by the field of evolutionary medicine, for identifying promising human enhancement interventions. The heuristic incorporates the grains of truth contained in “nature knows best” attitudes while providing criteria for the special cases where we have reason to believe that it is feasible for us to improve on nature.
Article
To make progress on the problem of consciousness, we have to confront it directly. In this paper, I first isolate the truly hard part of the problem, separating it from more tractable parts and giving an account of why it is so difficult to explain. I critique some recent work that uses reductive methods to address consciousness, and argue that these methods inevitably fail to come to grips with the hardest part of the problem. Once this failure is recognized, the door to further progress is opened. In the second half of the paper, I argue that if we move to a new kind of nonreductive explanation, a naturalistic account of consciousness can be given. I put forward my own candidate for such an account: a nonreductive theory based on principles of structural coherence and organizational invariance and a double-aspect view of information.