The Imminent, the Possible, and the Irreversible:
The Disruptive Potential of Artificial Intelligence

Amnon H. Eden
Computability Group, Institute of Advanced Studies, Hebrew University of Jerusalem
12 November 2015
ABSTRACT
The Imminent, the Possible, and the Irreversible:
The Disruptive Potential of
Artificial Intelligence
Major technological changes have a transformative effect on every aspect of human life.
Increasingly intelligent programs are responsible for paradigm shifts at a steadily accelerating
rate, a trend which acceleration theories suggest is all but guaranteed to continue.
We survey some of the most disruptive applications of artificial intelligence, examining in
particular the impact of computer trading programs (algotraders) on stock markets. We then
turn to imminent technologies (such as autonomous military robots) and their
consequences (e.g., on job markets), and conclude with a discussion of the potentially
irreversible consequences of this trend, including superintelligence.
2
MOTIVATION
(Alvin Toffler, "Future Shock", 1970:12)
In the three short decades between now and the twenty-first century, millions of
ordinary, psychologically normal people will face an abrupt collision with the
future. Citizens of the world's richest and most technologically advanced nations
will find it increasingly painful to keep up with the incessant demand for change
that characterizes our time. For them, the future will have arrived too soon. …
For the acceleration of change ... is a concrete force that
reaches deep into our personal lives, compels us to act out new
roles, and confronts us with the danger of a new and
powerfully upsetting psychological disease.
3
ARTIFICIAL INTELLIGENCE
4
ARTIFICIAL INTELLIGENCE
[Diagram: an AGENT receives percepts from the ENVIRONMENT and performs actions upon it]
(Not a definition.) Problem: agents that receive percepts from the environment and perform actions; the agent function $f : \mathcal{P}^* \to \mathcal{A}$ maps percept sequences to actions.
Russell, S. & Norvig, P., 2009. Artificial Intelligence: A Modern Approach, 3rd ed. Upper Saddle River, NJ, USA: Prentice Hall Press
5
Techniques
Symbolic logic
Theorem proving
Neural networks
Semantic nets
Bayesian networks
Genetic algorithms
Case-Based Reasoning
Decision trees
Production systems
Types of learning
Supervised
Unsupervised
Problems
Planning
Classification
Vision / pattern recognition
Natural language processing
…
AI TECHNIQUES
6
1950: The Imitation Game (Turing Test)
1956: Dartmouth Summer Project (McCarthy)
1959: General Problem Solver (Simon & Newell)
1966: Eliza (Weizenbaum)
1972: SHRDLU (Winograd)
1973: MYCIN (Shortliffe et al.)
1980s: “AI Winter”
1991: Genghis (Brooks)
1997: Deep Blue beats Kasparov
2005: BigDog (Boston Dynamics)
2005: Autonomous cars (Thrun, DARPA)
2007: Checkers is solved
2011: Watson wins Jeopardy!
2011: Apple’s intelligent personal assistant (Siri)
2013: DQN learns to play Atari better than humans
2014: Chatbot “passes” Turing test
2015: AI scores as well as a four-year-old child on an IQ test
Nilsson, N., 2010. The Quest for Artificial Intelligence: A History of Ideas and Achievements. Cambridge; New York: Cambridge University Press.
SOME MILESTONES
7
MEASURING INTELLIGENCE
Legg, S. & Hutter, M., 2007. Universal Intelligence: A Definition of Machine Intelligence. Minds and Machines, 17(4), pp. 391–444.
Intelligence measures an agent's ability to achieve goals in a wide range of environments.
Agent: $\pi$
Environments: $E = \{\mu_1, \mu_2, \mu_3, \ldots\}$
$2^{-K(\mu)}$: probability distribution over $E$
$V_\mu^\pi$: total expected reward of agent $\pi$ in environment $\mu$:
$$V_\mu^\pi \;\stackrel{\text{def}}{=}\; \mathbf{E}\left(\sum_{i=1}^{\infty} r_i\right)$$
$$\Upsilon(\pi) \;\stackrel{\text{def}}{=}\; \sum_{\mu \in E} 2^{-K(\mu)}\, V_\mu^\pi$$
8
MEASURING INTELLIGENCE (CONT.)
(Legg & Hutter 2007)
Observations (aka perceptions): $O = \{o_1, o_2, o_3, \ldots\}$
Actions: $A = \{a_1, a_2, a_3, \ldots\}$
Rewards: $R = \{r_1, r_2, r_3, \ldots\}$, with $r_i \in [0,1] \cap \mathbb{Q}$
History: $h_k \;\stackrel{\text{def}}{=}\; o_1 r_1 a_1\, o_2 r_2 a_2 \ldots o_k r_k a_k$
[Nondeterministic] agent $\pi : H \to A$, where $\pi(a_{k+1} \mid h_k)$ is the probability of action $a_{k+1}$ given the history up to step $k$
Environments: $E = \{\mu_1, \mu_2, \mu_3, \ldots\}$, where $\mu(o_{k+1} r_{k+1} \mid h_k a_{k+1}) \in [0,1]$
$K(\mu)$: Kolmogorov complexity of $\mu$ given a (fixed) universal Turing machine $\mathcal{U}$:
$$K(\mu) \;\stackrel{\text{def}}{=}\; \min_{p}\{\, \ell(p) : \mathcal{U}(p) \text{ computes } \mu \,\}$$
Assuming that the total expected reward of each agent-environment pair is bounded:
$$V_\mu^\pi \;\stackrel{\text{def}}{=}\; \mathbf{E}\left(\sum_{i=1}^{\infty} r_i\right) \le 1$$
Universal intelligence:
$$\Upsilon(\pi) \;\stackrel{\text{def}}{=}\; \sum_{\mu \in E} 2^{-K(\mu)}\, V_\mu^\pi$$
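To make the definition concrete, here is a minimal Python sketch of the Legg-Hutter measure under strong simplifying assumptions: a tiny hand-picked set of toy environments, made-up complexity values standing in for the incomputable K(μ), and Monte-Carlo rollouts in place of exact expected rewards. All names and environments are illustrative, not part of Legg & Hutter's formalism.

```python
# Toy illustration of Legg & Hutter's universal intelligence measure.
# The environments, their 'complexities', and the agents are all invented.

import random

def upsilon(agent, environments, episodes=200, horizon=30):
    """Estimate Upsilon(pi) = sum over mu of 2^-K(mu) * V_mu^pi."""
    total = 0.0
    for env_factory, k in environments:           # k plays the role of K(mu)
        value = 0.0
        for _ in range(episodes):
            env, reward_sum, obs = env_factory(), 0.0, None
            for _ in range(horizon):
                action = agent(obs)
                obs, r = env(action)              # environment returns (observation, reward)
                reward_sum += r
            value += reward_sum / episodes        # Monte-Carlo estimate of V_mu^pi
        total += 2 ** (-k) * value
    return total

# Two toy environments: 'copy the last observation' and 'always answer 1'.
def copy_env():
    state = {"last": 0}
    def step(action):
        r = 1.0 / 30 if action == state["last"] else 0.0   # scaled so V <= 1
        state["last"] = random.randint(0, 1)
        return state["last"], r
    return step

def const_env():
    def step(action):
        return 0, (1.0 / 30 if action == 1 else 0.0)
    return step

environments = [(copy_env, 3), (const_env, 1)]    # hypothetical complexities
random_agent = lambda obs: random.randint(0, 1)
copy_agent   = lambda obs: obs if obs is not None else 0

print("random agent: ", upsilon(random_agent, environments))
print("copying agent:", upsilon(copy_agent, environments))
```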
9
WHAT ARTIFICIAL INTELLIGENCE IS NOT
“Algorithmic” agents?
Chess playing? Image recognition?
Deterministic agents?
Yet another hornet’s nest…
“Stupid” agents?
No satisfactory answer (AHE)
No answer at all (?)
Notes:
What is “natural intelligence”?
Do we have one example? Many? None?
“AI”: time dependent
Behaviour considered "intelligent" at time t may become "not intelligent" at time t+1
An "AI problem" at time t may become "not an AI problem" at time t+1
10
MACHINE LEARNING
11
[Diagram: the AGENT's learning algorithm constructs a hypothesis, compares its predictions against percepts from the ENVIRONMENT, and adapts; the agent's actions feed back into the environment]
MACHINE LEARNING
Construct a hypothesis
Seek to ‘predict’ the environment
Compare with environment
Receive percepts
Adapt hypothesis, depending on 'reward' (a minimal sketch of this loop follows below)
Russell, S. & Norvig, P., 2009. Artificial Intelligence: A Modern Approach, 3rd ed.
Upper Saddle River, NJ, USA: Prentice Hall Press
By observing the results of its own behaviour
[the computer program] can modify its own
programmes so as to achieve some purpose
more effectively. These are possibilities of the
near future, rather than Utopian dreams.
(Turing 1950)
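Here is a minimal sketch of the predict-compare-adapt loop listed above, with the environment reduced to a biased coin and the hypothesis reduced to a single estimated probability. The slide does not prescribe any particular learner, so every detail here is an illustrative assumption.

```python
# A minimal predict-compare-adapt loop: predict the next percept, compare
# with what the environment actually delivers, adapt the hypothesis.

import random

def environment():
    """A hidden biased coin: the 'world' the agent tries to predict."""
    return 1 if random.random() < 0.7 else 0

hypothesis = 0.5            # initial guess for P(percept == 1)
learning_rate = 0.05

for step in range(1, 1001):
    prediction = 1 if hypothesis > 0.5 else 0     # act on the current hypothesis
    percept = environment()                       # receive a percept
    reward = 1 if prediction == percept else 0    # compare with the environment
    hypothesis += learning_rate * (percept - hypothesis)   # adapt the hypothesis
    if step % 250 == 0:
        print(f"step {step}: hypothesis={hypothesis:.3f}, last reward={reward}")
```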
12
RECURSIVE SELF-IMPROVEMENT
The agent may modify itself
A program that re-writes its [source]
code
Metaphor (contested): organism
that can change its own DNA
Gene therapy × ∞
Consequences
Unpredictability
Lack of control on the outcome
Mahoney, M., 2008. A Model for Recursively Self-Improving Programs
[Diagram: the machine-learning loop as before, except that 'change' now applies to the AGENT itself as well as to its hypothesis]
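A cartoon of the idea under heavy simplification (this is not a Gödel machine and not Mahoney's model): the agent's "code" is just a parameter of its own learning rule, and a proposed self-rewrite is kept only if it measurably improves performance. The task and all names are made up for illustration.

```python
# Cartoon of recursive self-improvement: propose a rewrite of the agent's own
# learning rule, benchmark it, and keep it only if performance improves.

import random

def performance(code, trials=500):
    """Score the agent's learning rule on a simple prediction task."""
    estimate, correct = 0.5, 0
    for _ in range(trials):
        percept = 1 if random.random() < 0.7 else 0
        correct += (1 if estimate > 0.5 else 0) == percept
        estimate += code["learning_rate"] * (percept - estimate)
    return correct / trials

code = {"learning_rate": 0.5}          # the agent's initial 'source code'
score = performance(code)

for generation in range(20):
    candidate = dict(code)
    candidate["learning_rate"] *= random.choice([0.5, 0.9, 1.1, 2.0])  # proposed self-rewrite
    candidate_score = performance(candidate)
    if candidate_score > score:        # keep the rewrite only if it helps
        code, score = candidate, candidate_score
    print(f"gen {generation}: learning_rate={code['learning_rate']:.3f}, score={score:.3f}")
```

The unpredictability noted on the slide shows up even here: which rewrites survive depends on noisy benchmark runs, so two executions of the same program can end up with different "code".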
13
UNIVERSAL PROBLEM SOLVERS
Schmidhuber, J., 2012. New Millennium AI and the Convergence of History: Update of 2012. In A. H. Eden et al., eds. Singularity Hypotheses. The Frontiers Collection. Springer
Berlin Heidelberg, pp. 61–82
HSEARCH (Hutter 2005):
asymptotically optimal search
algorithm for ALL well-defined
problems
As fast as the fastest algorithm, save for a constant factor
But: it ignores the costs of proof search, and the constant is too large...
Implementation
AIXI (Hutter 2005)
Gödel machines (Schmidhuber 2009)
Self-improving, optimally efficient
problem solvers
Starts from (any) suboptimal (general)
problem solver and proof searcher
Computes proofs concerning the system’s
own future
Rewrites itself when a proof is found that the rewrite would be
'useful'
Global Optimality Theorem:
a self-rewrite is globally optimal
14
Arel, I., 2012. The Threat of a Reward-Driven Adversarial Artificial General Intelligence. In A. H. Eden et al., eds. Singularity Hypotheses. The Frontiers Collection. Springer Berlin Heidelberg, pp. 61–82
REINFORCEMENT LEARNING WITH NEURAL NETWORKS
www.codeproject.com
Biologically-inspired AI
Multiple instantiations of a common cortical
circuit
Revived by discoveries on visual cortex
(Wiesel 1959)
Back propagation: Learning driven by
rewards
Environment: finite-state, discrete-time
stochastic dynamic system
Learning is modelled as a
Markov Decision Process
A form of supervised learning (but
unsupervised learning can be integrated)
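To make the "learning as a Markov Decision Process" framing concrete, here is a minimal tabular Q-learning sketch on a made-up five-state corridor environment. Q-learning is one standard reward-driven method, chosen here for brevity; it is not the specific architecture Arel describes.

```python
# Minimal tabular Q-learning on a tiny finite-state, discrete-time MDP:
# a 5-cell corridor with a reward at the rightmost cell (an invented task).

import random

N_STATES, ACTIONS = 5, [-1, +1]        # move left / move right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

for episode in range(500):
    state = 0
    for _ in range(20):
        if random.random() < epsilon:                      # explore
            action = random.choice(ACTIONS)
        else:                                              # exploit
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward = step(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

# Greedy policy learned for each state (should prefer moving right).
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)})
```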
15
DEEP MACHINE LEARNING
Multiple layers of ‘neurons’
Timeline:
1969 Minsky & Papert: limitations of one layer
1980s proposed but impractical
1990s revived interest
1995 outperforming Kernel machines
2011 superhuman performance in limited
recognition domain
Types of architectures
Fixed architecture: Only connection
weights change in the learning process
Dynamic architecture (Group Method of
Data Handling): Number of ‘neurons’ and
layers can be learned
Types of learning methods
Feedforward Neural Networks (acyclic)
Recurrent Neural Networks (cyclic)
Schmidhuber, J., 2015. Deep Learning in Neural Networks: An Overview. Neural Networks, 61, pp.85–117.
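A minimal fixed-architecture example of the above: a two-layer feedforward network trained by backpropagation on XOR, where only the connection weights change during learning. Network size, learning rate and task are arbitrary illustrative choices, not taken from the slide.

```python
# A tiny feedforward network (one hidden layer) trained by backpropagation.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)            # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)               # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)               # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    h = sigmoid(X @ W1 + b1)                                # forward pass
    out = sigmoid(h @ W2 + b2)
    err = out - y                                           # compare with targets
    grad_out = err * out * (1 - out)                        # backpropagate
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out;  b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h;    b1 -= 0.5 * grad_h.sum(axis=0)

print(np.round(out, 2))     # should approach [[0], [1], [1], [0]]
```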
16
Recognition architecture in primate cortex
Building complex from simple units (nerve
cells)
“S”: linear units perform template matching
“C”: nonlinear (pooling) units perform a “MAX”
operation: the pooling neuron’s response is
determined by its maximally activated afferent
Shape-tuned units tune to complex shapes but are
tolerant to scaling and translation of their preferred
shape.
BIOLOGICAL INSPIRATION
Freedman, D.J. et al., 2003. A Comparison of Primate Prefrontal and Inferior Temporal Cortices during Visual Categorization. The Journal of Neuroscience, 23(12), pp. 5235–5246.
17
COMPUTER TRADING
efinancialnews.com
18
AKA electronic algorithmic trading, automated
trading
2006: 60% of orders in the LSE (Bloomberg)
Trading floors → matching engines
2008 Flash Crash
Affected global financial market
2013: 80% (est.) of US equities
Speed of executing orders: 0.12 ms at the matching engines in the LSE
ALGOTRADING
Donald MacKenzie 2011, “How to Make Money in Microseconds”, London Review of Books, 33(10):16–18
Images: www.nyse.com, metro.co.uk, tradoholics.com
19
ALGOTRADERS: SHARE-TRADING PROGRAMS

FACTS
Shares: Morgan Stanley, Goldman Sachs, Wells Fargo, JP Morgan, Bank of America, …
Traders: mutual funds, pension funds, insurance companies
Revise prices in milliseconds: 1000s of times faster than humans
Input: economic indicators (stocks, revenue, profits, …); "unannounced events and surprise news"; Dow Jones Calendar Live: "corporate news, economic data and key market events delivered via low-latency feed"

BENEFITS
Decrease volatility on the foreign exchange market
Improve liquidity
Smooth fluctuations in liquidity

SHORTCOMINGS
….

Donald MacKenzie. How to Make Money in Microseconds. London Review of Books 33(10), pp. 16–18 (19 May 2011).
20
6 May 2010: Dow Jones drops 6% in 5 minutes
Overall prices of US shares & futures fall some 600 points
Almost unprecedented rapidity (typical maximum fluctuations: 1–2% a day)
Most price levels recovered in under 20 minutes
Some individual shares:
Accenture: $40.50 → $0.01
Sotheby's: $34 → $99,999.99
Unexplained in traditional terms
no ‘new news’ that could account for the huge sudden changes
"FLASH CRASH" (NYSE, MAY 2010)
(MacKenzie 2011)
nature.com
21
"FLASH CRASH" FORENSICS
Start: large sale $4.1bn
Infrequent volume (twice previous year)
Seller: Kansas City investment managers
75,000 "futures" × $55,000
“Index future contracts”: Track S&P 500 stock-market
End: ‘Stop Logic Functionality’
Safety mechanism in the electronic framework used
5sec trading pause
Automated systems stopped throughout US
2.41 pm “Hot-potato trading” spasm
14 sec, 27,000 transactions, 5% drop
Affected: futures-specific ‘market-neutral’
algorithms
Ignore overall market
Conclude catastrophe in “Index future contracts”
Futures changed hands back and forth
Aggregate change: … 200
Cascade of STOP orders

'Volume participation algorithm' (*):
    begin loop
        read #FuturesToSell
        sell 9% of (#futures traded in the previous minute)
        if #FuturesToSell > 0 then repeat loop
    end

STOP order (*):
    if FastDrop() then Sell(everything)

(MacKenzie 2011). (*) Stock trading algorithms are trade secrets. The pseudocode is hypothesized from the behaviour exhibited by the respective programs.
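A hedged Python rendering of the two hypothesized behaviours above. Like the pseudocode, it only reconstructs the behaviour MacKenzie describes; the market dynamics, volumes and prices below are invented for illustration.

```python
# Hypothetical rendering of the flash-crash behaviours: a volume-participation
# seller and a stop rule that dumps a position after a fast drop.

import random

def simulate(futures_to_sell=75_000, participation=0.09):
    prev_minute_volume, price, minute = 10_000, 1_150.0, 0
    while futures_to_sell > 0:                      # volume-participation seller
        order = min(futures_to_sell, int(participation * prev_minute_volume))
        futures_to_sell -= order
        price *= 1 - 0.000001 * order               # crude assumption: selling pushes price down
        prev_minute_volume = order + random.randint(5_000, 15_000)
        minute += 1
    return minute, price

def stop_order(position, price_history, drop_threshold=0.02):
    """Dump the whole position after a fast drop (the 'STOP order' above)."""
    fast_drop = (price_history[-1] - price_history[0]) / price_history[0] < -drop_threshold
    return 0 if fast_drop else position

minutes, final_price = simulate()
print(f"sold out in {minutes} minutes, price drifted to {final_price:.2f}")
print("position after stop check:", stop_order(100, [1150.0, 1120.0]))
```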
22
ULTRAFAST EXTREME EVENTS
Typical duration 15-20ms
Price change > 0.8%
Price ticks down [up] 10 times before ticking up [down]
Neil Johnson, et al. 2013, “Abrupt rise of new machine ecology beyond human response time”, Nature Scientific Reports, 3(2627)
Crashes
Spikes
23
ULTRAFAST EXTREME EVENTS
A new financial phenomenon
Abrupt & extreme price changes
(Johnson, et al. 2013)
‘Spike’
‘Crash’
Bloomberg Finance LP., Intraday Chart: SMCI US Equity, 1 Oct. 2010
24
TYPES OF ALGORITHMS
(MacKenzie 2011)
Computer algorithms [despite
acceleration] remain far from fully
intelligent.
Bounded rationality:
individual algorithms are highly
specialized
investment goals are skewed by
considerations & expertise
the universe of computer algorithms
is necessarily highly diverse
Statistical arbitrage
Exploit price fluctuations
Execution algorithms
Break large orders into smaller slices
Execute at fixed intervals → predictable
Algo-sniffers
Detect & beat execution algorithms
Spoofers (illegal)
Issue many 'buy' orders just below the market price → illusion of demand
Sell at a higher price
Cancel the 'buy' orders
25
AN EMERGING ECOLOGY OF COMPETITIVE MACHINES
(Farmer et al. 2011)
Society’s techno-social systems are
becoming ever faster … Generating a new
behavioural regime as humans lose the
ability to intervene in real time
AN ECOLOGY? … OR A FOOD CHAIN?
The universe of computer algorithms
is best understood as a complex
ecology of highly specialized, highly
diverse, and strongly interacting
agents
26
FORESIGHT REPORT
Farmer, J. Doyne, Spyros Skouras, and UK Government Office of Science's Foresight Project, "The Future of Computer Trading in Financial Markets. DR6: An Ecological Perspective on the Future of Computer Trading", 2011
ALGORITHMIC STRATEGIES: LIFE CYCLE
[Cycle diagram:] inefficiencies → new strategies are created → strategy spreads → market changes (the inefficiencies that allowed the strategy to succeed are reduced) → performance of algos degrades → … while new inefficiencies are created all the time (instability, regulatory changes)
27
REMEDIES
New “circuit breaker”
Old: halt or slow down trades of a specific stock if price
moves > 10% within 5 min.
New: revise & lower threshold of circuit breakers
“If trading within the price band cannot occur
for more than 15 sec.”
Creates a 5 min. pause
Approved June 2012
Implemented 4 Feb 2013
“If the S&P 500 SPX drops by 7%, 13% or 20%
from the prior day’s closing price”
Halt trading in all exchange-listed securities throughout
the US markets for 15 min.
MarketWatch. ‘SEC Approves “flash Crash” Response Rules’, 1 June 2012
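A small sketch of the market-wide thresholds quoted above (7%, 13% and 20% from the prior day's close). Halt durations and the time-of-day exceptions in the actual SEC rules are simplified away; the function name is my own.

```python
# Sketch of the market-wide circuit-breaker thresholds described above.

def circuit_breaker_level(prior_close: float, current: float) -> int:
    """Return 0 (no halt) or the triggered level 1, 2 or 3."""
    drop = (prior_close - current) / prior_close
    if drop >= 0.20:
        return 3          # most severe threshold
    if drop >= 0.13:
        return 2
    if drop >= 0.07:
        return 1          # triggers a market-wide trading halt
    return 0

print(circuit_breaker_level(2000.0, 1850.0))   # 7.5% drop -> level 1
```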
Regulation (Farmer et al.
2011)
We conclude that immediate
regulatory initiative is
necessary …
(1) real-time warning signals
for systemic risk;
(2) … approach to systemic
stability policy and to
competition policy …
(3) building a deeper theoretical
understanding of markets based
on large-scale simulations
28
POTENTIAL FOR PROFIT
Edward Tsang, "Directional Changes, a new concept for summarising price movements", http://www.youtube.com/watch?v=WHTBT5eRx6U
FTSE 100 Jun/2007–Oct/2012
Prices sampled hourly:
Longer “coastline”
Prices sampled daily
“Directional change”
29
POTENTIAL FOR PROFIT: ANALYSIS
Edward Tsang, "Directional Changes, a new concept for summarising price movements", http://www.youtube.com/watch?v=WHTBT5eRx6U
FTSE 100, Jun 2007–Oct 2012; "directional change"
Given "perfect foresight" (buy low, sell high):
Interval-based return (end-of-day prices): 171%
Directional change-based return (extreme changes): 304%
Longer coastline → higher profits (prices sampled hourly give a longer "coastline" than prices sampled daily)
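A minimal sketch of the "directional change" summary of a price series: an event is recorded whenever the price reverses by more than a fixed threshold from the last extreme. The threshold and sample prices are illustrative, and the bookkeeping is simplified relative to Tsang's full definition.

```python
# Detect directional-change events in a price series.

def directional_changes(prices, threshold=0.02):
    events, extreme, uptrend = [], prices[0], True
    for i, p in enumerate(prices[1:], start=1):
        if uptrend:
            extreme = max(extreme, p)                 # track the running high
            if p <= extreme * (1 - threshold):        # confirmed downturn
                events.append((i, "down"))
                uptrend, extreme = False, p
        else:
            extreme = min(extreme, p)                 # track the running low
            if p >= extreme * (1 + threshold):        # confirmed upturn
                events.append((i, "up"))
                uptrend, extreme = True, p
    return events

prices = [100, 101, 103, 102.5, 100.8, 99.5, 100.1, 102.2, 101.0]
print(directional_changes(prices))   # [(4, 'down'), (7, 'up')]
```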
30
DEVELOPING ALGOTRADERS
code.google.com
31
RECENTLY DISRUPTED
32
PRESENT-DAY APPLICATIONS
Recommender systems, personal assistants,
algotrading, search engines, face recognition
(passport control, albums), natural language
recognition, cars parking/antilock breaks/lane
control, robotic surgery, matchmaking,
computer game bots, vacuum cleaners,
customer services, fraud detection, spam
filtering, intrusion detection, …
33
IQ TEST SCORE
Test: Wechsler Preschool & Primary
Scale of Intelligence (WPPSI-III)
Used to measure the IQ of 2- to 7-year-olds
Results: equivalent to a 4-year-old
AI System: ConceptNet 4.0
From MIT Common Sense Computing
Initiative
Open-source project
Phys.org, October 2015
Word reasoning
Why can birds fly, but cats can’t?
Tell me what I’m thinking of: An animal
that goes “woof”. What is it?
In what way are red and blue alike?
Open questions
Which one has
a curly tail?
Puzzles
34
LEGGED SQUAD SUPPORT SYSTEM (LS3)
A rough-terrain robot
Developed by Boston Dynamics
Funded by DARPA & US Marine
Corps
Carries 400 lbs
Travels 20 miles without
refueling
Boston Dynamics, with funding from DARPA's M3 program
35
Mining: a dangerous & exhausting job
Autonomous Hauling System, used by the
metals and mining company Rio Tinto Iron Ore
Robotic trucks that load, haul, and dump ore
and waste rock at open pit mines
Moved 200 million tonnes by June 2014
Trucks can manoeuvre around the site and
complete the journey without human
interference
DRIVERLESS MINING IN PILBARA, AUSTRALIA
Driverless trucks in the Jimblebar iron ore mine, Pilbara, Australia
www.abc.net.au / www.popsci.com
36
TESLA MODEL S
electrek.co, Sep. 2015
Owners: autopilot is self-improving
37
May 2015: State of Nevada grants self-driving
trucks “full license”
May carry any load
After clocking 16,000 km on other roads
Will still have “drivers” on alert
Model “Inspiration” manufactured by Daimler
AG
Wired. ‘Robot Truck Hits Public Roads for First Time’. Wired UK, 6 May 2015
SELF-DRIVING TRUCKS ON PUBLIC ROADS
38
BOOK PRICING
CNN. ‘Amazon Seller Lists Book at $23,698,655.93 --plus Shipping’, 25 April 2011
Amazon: automating book prices
Pricing algorithms employed by competing booksellers:
Algorithm I: price is 1.27059 times the next-most-expensive copy of the book
Algorithm II: price is 0.9983 times the price set by the most expensive copy
On April 18, 2011: price of
The Making of a Fly: $23.6 million…
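The runaway price is easy to reproduce: iterating the two quoted rules against each other multiplies both prices by roughly 1.268 per round, so they grow without bound. The starting price and number of rounds below are arbitrary.

```python
# Reconstruction of the two interacting pricing rules described above.

price_a, price_b = 35.0, 35.0        # arbitrary starting prices
for day in range(60):
    price_a = 1.27059 * price_b      # Algorithm I: mark up over the other copy
    price_b = 0.9983 * price_a       # Algorithm II: just under the dearest copy
print(f"after 60 rounds: A=${price_a:,.2f}  B=${price_b:,.2f}")
# Both prices are now in the tens of millions of dollars.
```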
39
THE IMMINENT
40
Technologies until 2030
Specific projections:
Predictive crime prevention
Predictive group sentiment analysis
Neural network image recognition
Emotion tracking
Proactive software agents
Policy Horizons Canada, April 2014
POLICY HORIZONS CANADA
41
POLICY HORIZONS CANADA (CONT.)
Policy Horizons Canada, April 2014
42
[Figure: employment (in millions) plotted against the probability of computerization]
EFFECT ON JOB MARKET
Frey, C.B. & Osborne, M., 2013. The Future of Employment: How susceptible are jobs to computerisation? Oxford Martin School, University of Oxford
The distribution of 2010 Bureau of Labor Statistics employment and wage data over the probability of
computerisation
Occupation categories shown: office & admin support; sales; service; production; management/business/financial; science/engineering; education/legal/arts/media; healthcare; transportation; maintenance
43
MASS UNEMPLOYMENT
Smith, Aaron, and Janna Anderson. ‘AI, Robotics, and the Future of Jobs’.
Pew Research Center, 6 August 2014.
1. Displacement is already happening
2. Previous tech. revolutions happened more slowly, people had time
to retrain
3. Robot/AI are much more versatile than previous technologies
Greater disruptive potential
4. Gains at the top of the labor market will not offset losses in
the middle & bottom
44
MASS UNEMPLOYMENT: COUNTERARGUMENTS
https://en.wikipedia.org/wiki/Category:Obsolete_occupations
Technology has created more jobs than it destroyed
Obsolete occupations:
Switchboard operator
(Human) computer
Milkman
Lamplighter
Scribe
Typesetter
Law-writer (replicating documents)
Horse dung remover
Rat catcher
Lift operator
Human Alarm Clock
Cabin boy
Hallboy
Footman/maid
Mantle cleaner
Soda jerker
Steam operator
Biritch (herald)
Weaver
Textile worker
Coal separator
Coal stoker
Fendersmith
Leech gatherer
Gong farmer
…
45
O*NET
Labour market analysis
US Department of Labor
Uses the Standard Occupational Classification
updated by surveys of each occupation’s worker
population & related experts
Provides up-to-date information on occupations
as they evolve over time
(Frey & Osborne 2013)
BOTTLENECKS TO JOB COMPUTERIZATION
46
June 2013: NASA's Asteroid Grand
Challenge
The initiative seeks to identify asteroids that
might be interesting candidates for scientific
exploration
Nov 2013: NASA announces
partnership with asteroid-mining firm
Planetary Resources Inc.
APPLICATIONS: ASTEROID MINING
47
AUTONOMOUS MILITARY ROBOTS
Is the target a Soviet-made T-80 tank?
IDENTIFICATION CONFIRMED
Is the target located in an authorized free-fire zone?
LOCATION CONFIRMED
Are there any friendly units within a 200-meter radius?
NO FRIENDLIES DETECTED
Are there any civilians within a 200-meter radius?
NO CIVILIANS DETECTED
Weapons release authorized. No human command
authority required
Singer, P. W. (2009, Winter). Military Robots and the Laws of War. The New Atlantis, (23):25–45
48
"BERLIN STATEMENT"
Danger to peace & international security
accelerate pace & tempo of warfare
exacerbate dangers of asymmetric warfare
further indiscriminate & disproportionate use of force
obscure moral & legal responsibility for war crimes
undermine the capacity of human beings to make responsible
decisions during military operations
International Committee for Robot Arms Control (2010), "Berlin Statement"
[There is] an urgent need to bring into existence an
arms control regime to regulate the development,
acquisition, deployment, and use of armed tele-
operated and autonomous robotic weapons
49
theatlantic.com
Super aEgis II turret gun
“Raptor” drone
AUTONOMOUS MILITARY ROBOTS (CONT.)
Future of Life Institute. 'Autonomous Weapons: An Open Letter from AI & Robotics Researchers', 28 July 2015
It will only be a matter of time until [ALRs] appear on the black market and in
the hands of terrorists, dictators wishing to better control their populace,
warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are
ideal for tasks such as assassinations … subduing populations and [ethnic
cleansing] (FLI 2015)
50
AUTONOMOUS SWARMS
Scharre, Paul. 'Robotics on the Battlefield Part II: The Coming Swarm'. Center for a New American Security, 2014.
51
Uninhabited systems [have] the potential
to disaggregate expensive multimission
systems into a larger number of smaller,
lower cost distributed platforms
Network of cooperating ALRs
Air, ground, and naval
Clouds of billions (!) of ultra-cheap 3D-
printed drones
Advantages
Allow dispersed attacks
Saturate enemy territory
Increased stealth
Many decoys
“Mining” airspace
Greater survivability/resilience
AUTONOMOUS MILITARY ROBOTS: PROS
Kaplan, Jerry. 'Robot Weapons: What's the Harm?' The New York Times, 17 August 2015
A soldier may be tempted to return fire indiscriminately,
in part to save his or her own life. By contrast, a machine
won’t grow impatient or scared, be swayed by prejudice or
hate, willfully ignore orders or be motivated by an instinct
for self-preservation (Kaplan 2015)
52
http://ieet.org/
LEGAL IMPLICATIONS
Scherer, Matthew U. 'Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies'. SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, 30 May 2015.
Liability
Who’s liable in case of harm?
The manufacturer? Engineer? The AI?
What is “foreseeable” by the AI? By the manufacturer?
Liability requires that the harm was “reasonably foreseeable”
Ownership
Who owns (intellectual) property generated by an AI?
Regulation
Anyone with a computer may be able to create an AI
53
THE (IR-)REVERSIBLE
54
ACCELERATION: ECONOMIC ARGUMENTS

WORLD ECONOMY DOUBLING TIME
Hunter-gatherer society: 224,000 years
Agricultural revolution → farming society: 909 years
Industrial revolution → industrial society: 6.3 years
Forthcoming: technological singularity (?)
Hanson, R., 2008. "Economics of the Singularity." IEEE Spectrum 45(6): 37–42

TIME NEEDED TO SUSTAIN +1 MILLION PEOPLE
200,000 BC: a million years
500 BC: 200 years
2010: 90 minutes

[Figure: long-term history of world GDP over the last million years, plotted on a double logarithmic time scale (Bostrom 2014)]
55
ACCELERATION: SOCIO-TECHNOLOGICAL ARGUMENTS
We can read the history of modernity as
a series of innovations [eg in
transportation, communication] in ever
increasing time compression. …
In the information age, not only are the
rhythms of life faster, but the rate of
change has itself accelerated.
Social theorists have responded in a
number of ways to these processes of
acceleration.
Kurzweil, R., 2006. The Singularity Is Near: When Humans Transcend Biology. Penguin.
Wajcman, J., 2008. Life in the fast lane? Towards a sociology of technology and time. The British Journal of Sociology, 59(1), pp. 59–77.
The pace of change of our human-created
technology is accelerating and its powers are
expanding at an exponential pace
56
ACCELERATION
Exponentially more intelligent AI
The time interval between successive major breakthroughs in computer science tends to be roughly half as long as the previous one
The most notable events of over 40,000 years, or 2^9 lifetimes, of human history have sped up exponentially, the intervals apparently converging to zero within the next few decades
Omega point or historic singularity: around 2040
Schmidhuber, J., 2012. New Millennium AI and the Convergence of History:
Update of 2012. In A. H. Eden et al., eds. Singularity Hypotheses pp. 61–82
Ω − 2^9 lifetimes: humans start colonizing the world from Africa
Ω − 2^8 LTs: bow & arrow; hunting
Ω − 2^7 LTs: agriculture; permanent settlements
Ω − 2^6 LTs: high civilizations (Sumeria, Egypt), writing
Ω − 2^5 LTs: democracy, science, art, philosophy, math, anatomy, music, …
Ω − 2^4 LTs: book printing, science & culture spreading
Ω − 2^3 LTs: exploration, Renaissance, Reformation, Scientific Revolution
Ω − 2^2 LTs: Enlightenment, flying machines, steam engine, Industrial Revolution
Ω − 2 LTs: combustion engine, electricity, medicine, genetics, evolution theory
Ω − 1 LT: modern post, WWII, computers, spacecraft, DNA
Ω − 1/2 LT: personal computers, World Wide Web
Ω − 1/4 LT
Ω − 1/8 LT
…
Ω = 2040
57
ENERGY RATE DENSITY
Definition: the amount of energy passing through a system per unit time and per unit mass
Unifying principle:
Physical systems (stars & galaxies): 10^-3–10^-2 erg/s/g
Biological systems (plants, animals): 10^3–10^5 erg/s/g
Cultural/technological systems: agricultural society 10^5, factories 10^6.5, aircraft 10^7.5 erg/s/g
Chaisson, E.J., 2013. A Singular Universe of Many Singularities: Cultural Evolution in a Cosmic Context. In A. H. Eden et al., eds. Singularity Hypotheses, pp. 413–440
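A back-of-envelope check of the definition, using a human body as the system. The input figures (2,000 kcal/day, 70 kg) are illustrative assumptions, not taken from Chaisson or the slide.

```python
# Energy rate density = energy per unit time per unit mass, in erg/s/g.

CAL_PER_DAY = 2000 * 4184            # joules consumed per day (2,000 kcal)
WATTS = CAL_PER_DAY / 86_400         # energy per unit time (~97 W)
ERG_PER_S = WATTS * 1e7              # 1 J = 10^7 erg
MASS_G = 70 * 1000                   # 70 kg in grams

phi_m = ERG_PER_S / MASS_G
print(f"human body: ~{phi_m:.0f} erg/s/g "
      f"(~10^4, within the biological range of 10^3-10^5 quoted above)")
```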
58
BIG HISTORY
59
ACCELERATION: OUTCOME
(Alvin Toffler, "Future Shock", 1970:12)
In the three short decades between now and the twenty-first century, millions of
ordinary, psychologically normal people will face an abrupt collision with the
future. Citizens of the world's richest and most technologically advanced nations
will find it increasingly painful to keep up with the incessant demand for change
that characterizes our time. For them, the future will have arrived too soon. …
For the acceleration of change ... is a concrete force that
reaches deep into our personal lives, compels us to act out new
roles, and confronts us with the danger of a new and
powerfully upsetting psychological disease.
60
CHANGE DENIALISM
Thus man moves swiftly into an unexplored
universe, into a totally new stage of eco-
technological development… stumbles into the
most violent revolution in human history
muttering, in the words of one famous though
myopic sociologist, that "the processes of
modernization ... have been more or less
completed." He simply refuses to imagine the
future.
Toffler Future Shock, p. 115
(Kurzweil 2005)
61
HUMAN-LEVEL ARTIFICIAL INTELLIGENCE (HLAI)
Also known as –
General AI, artificial general intelligence (AGI), … or simply AI
Working definitions
Turing test (Turing 1950)
A sufficient, not a necessary condition (!)
The Employment Test (Nilsson 2005)
Programs must be able to perform the jobs ordinarily performed by humans
Progress toward human-level AI could then be measured by the fraction of
these jobs that can be acceptably performed by machines
HLAI ‘point’ (Kruel 2012)
AI is able to perform 80% of jobs at least as well as humans
62
Surveys among AI experts
show:
Majority expects HLAI to
arrive by the end of the
century
The median date of HLAI
arrival: 2050
Müller, V.C. & Bostrom, N., 2014. Future Progress in Artificial Intelligence: A Poll Among Experts
PLAUSIBILITY OF HLAI
63
PLAUSIBILITY OF HLAI
Turing, A.M., 1950. Computing Machinery and Intelligence. Mind, 59
I believe that at the end of the
century the use of words and
general educated opinion will
have altered so much that one
will be able to speak of machines
thinking without expecting to be
contradicted
My contention is that machines can be
constructed which will simulate the
behaviour of the human mind very
closely. They will make mistakes at
times, and at times they may make new
and very interesting statements, and on
the whole the output of them will be
worth attention to the same sort of
extent as the output of a human mind
64
IMPLAUSIBILITY OF HLAI
Descartes, René. Discourse on the Method of Rightly Conducting the Reason, and Seeking Truth in the Sciences. Section Five, 1642.
[Machines] could never use words or other signs arranged
in such a manner as is competent to us in order to declare
our thoughts to others.
Although such machines might execute many things with
equal or perhaps greater perfection than any of us, they
would, without doubt, fail in certain others from which it
could be discovered that they did not act from knowledge,
but solely from the disposition of their organs.
65
It must be morally impossible that there should exist in
any machine a diversity of organs sufficient to enable it
to act in the way in which our reason enables us to act.
IMPLAUSIBILITY OF HLAI 2
web.mit.edu
It is hugely unlikely, though not impossible, that a conscious
mind will ever be built out of software. (Gelernter 2011, email)
nbcnews.com
[The] Singularity will never occur… I am a skeptic.
I don't believe this kind of thing is likely to happen,
at least for a long time. (Moore, IEEE Spectrum
2008)
Regardless of what the future holds, believers in the "machine intelligence explosion" are simply fideists* (Bringsjord et al. 2012)
(*) fi·de·ism, noun: exclusive reliance in religious matters upon faith, with consequent rejection of appeals to science or philosophy. [Random House Dictionary 2013]
66
IMPLAUSIBILITY OF HLAI 3
Searle, John R. 'Minds, Brains, and Programs'. Behavioral and Brain Sciences 3, no. 03 (1980): 417–24
Because we do not understand the brain very well we are constantly
tempted to use the latest technology as a model for trying to
understand it. In my childhood we were always assured that the
brain was a telephone switchboard. ('What else could it be?') I was
amused to see that Sherrington, the great British neuroscientist,
thought that the brain worked like a telegraph system. Freud often
compared the brain to hydraulic and electro-magnetic systems.
Leibniz compared it to a mill, and I am told some of the ancient
Greeks thought the brain functions like a catapult. At present,
obviously, the metaphor is the digital computer.
67
Not until a machine can write a sonnet or compose a concerto because of
thoughts and emotions felt, and not by the chance fall of symbols, could we
agree that machine equals brain
HLAI: COUNTERARGUMENTS
Butler, Samuel. Erewhon, 1872. In: Turing, A.M., 1951. Can Digital Computers Think? Automatic Calculating Machines, BBC (15 May 1951)
The only thing of which I am sure is
that the distinction between the
organic and inorganic is arbitrary;
that it is more coherent with our other
ideas, and therefore more acceptable,
to start with every molecule as a
living thing, and then deduce death
as the breaking up of an association
or corporation, than to start with
inanimate molecules and smuggle life
into them.
68
There is no security against the
ultimate development of mechanical
consciousness, in the fact of machines
possessing little consciousness now.
HLAI: COUNTERARGUMENTS 2
Turing, A.M., 1951. Can Digital Computers Think? Automatic Calculating Machines, BBC (15 May 1951)
Many people are extremely opposed to the
idea of machines that think, but I do not
believe that it is for any of the reasons that
I have given, or any other rational reason,
but simply because they do not like the
idea.
To [construct machines which simulate
human minds] would meet with great
opposition ... from the intellectuals who are
afraid of being put out of a job.
When a distinguished but elderly
scientist states that something is
possible, he is almost certainly
right. When he states that
something is impossible, he is very
probably wrong
Arthur C. Clarke's first law
blogspot.com
69
CONSEQUENCES OF HLAI
(Turing 1951)
Since man’s near-monopoly of all higher forms of intelligence has
been one of the most basic facts of human existence throughout the
past history of this planet, such developments would clearly create a
new economics, a new sociology, and a new history
Schwartz, J., 1987. Limits of Artificial Intelligence. In Stuart
C. Shapiro, ed. Encyclopedia of Artificial Intelligence
Let us … look at the consequences of constructing [HLAI]. ...
There would be plenty to do .... trying to keep one's
intelligence up to the standards set by the machines, for it
seems probable that once the machine thinking method has
started, it would not take long to outstrip our feeble powers
70
SUPERINTELLIGENCE
Bostrom, N., 2014. Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
“Any intellect that greatly exceeds the cognitive performance of humans in
virtually all domains of interest.” (Bostrom, 2014 ch. 2)
[Diagram: an intelligence scale ranging from mouse through chimp and village idiot to Einstein]
71
PATHS TO MACHINE SUPERINTELLIGENCE
Let an ultraintelligent machine be defined as a machine that can far
surpass all the intellectual activities of any man however clever.
Since the design of machines is one of these intellectual activities, an
ultraintelligent machine could design even better machines; there
would then unquestionably be an intelligence explosion, and the
intelligence of man would be left far behind
I.J. Good, "Speculations Concerning the First Ultraintelligent Machine", 1965
Complicated electronic circuits can also make computers act in an
intelligent way. And if they are intelligent they can presumably
design computers that have even greater complexity and
intelligence
Stephen Hawking, “Science in the Next Millennium”, 1998
theguardian.tv
72
MACHINE SUPERINTELLIGENCE RISKS
S. Hawking, S. Russell, M. Tegmark, F. Wilczek. Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough? The Independent (1 May 2014)
Artificial-intelligence (AI) research is now progressing rapidly
Recent landmarks … are merely symptoms of an IT arms race fuelled by unprecedented investments
Such achievements will probably pale against what the coming decades will bring
Success in creating AI would be the biggest event in human history
Unfortunately, it might also be the last, unless we learn how to avoid the risks
An explosive transition is possible
As Irving Good realised in 1965, machines with superhuman intelligence could repeatedly improve their design even
further, triggering what Vernor Vinge called a “singularity”
One can imagine such technology outsmarting financial markets, out-inventing
human researchers, out-manipulating human leaders, and developing weapons we
cannot even understand.
73
MACHINE SUPERINTELLIGENCE RISK: INDIFFERENCE
2001: A Space Odyssey. Warner Home Video, 2006.
Bowman: Open the pod bay doors, HAL.
HAL: I'm sorry, Dave. I'm afraid I can't do
that.
Bowman: What's the problem?
HAL: I think you know what the problem is
just as well as I do.
Bowman: What are you talking about, HAL?
HAL: This mission is too important for me to
allow you to jeopardize it.
Bowman: I don't know what you're talking
about, HAL.
HAL: I know that you and Frank were
planning to disconnect me, and I'm afraid
that's something I cannot allow to happen.
…
Bowman: HAL, I won't argue with you
anymore! Open the doors!
HAL: Dave, this conversation can serve no
purpose anymore. Goodbye.
74
MACHINE SUPERINTELLIGENCE RISK: INDIFFERENCE 2
Indifference, not malevolence
Nick Bostrom, “Existential Risks”, 2002, and “The Ethics of Superintelligent Machines”
A well-meaning team of programmers may make a big mistake in designing its
goal system. This could result… in a superintelligence whose top goal is the
manufacturing of paperclips, with the consequence that it starts transforming
first all of earth and then increasing portions of space into paperclip
manufacturing facilities.
75
MACHINE SUPERINTELLIGENCE RISK: INDIFFERENCE 3
Indifference, not malevolence (cont.)
The Instrumental Convergence Thesis
Many final goals can be attained using few
(‘converging ’) sub-goals, such as:
Self-preservation
Cognitive enhancement
Resource acquisition
Humans may be ‘in the way’ of these sub-goals
Implication:
Bostrom, N., 2014. Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
Instrumental Convergence
Thesis
Several instrumental values [or goals]
can be identified which are
convergent in the sense that their
attainment would increase the chances
of the agent’s goal being realised for a
wide range of final goals and a wide
range of situations, implying that
these instrumental values [or goals]
are likely to be pursued by a broad
spectrum of situated intelligent agents.
Superintelligent : Human :: Human : Ant
76
MACHINE SUPERINTELLIGENCE RISK: ARMS RACE
Istvan, Zoltan. 'A Global Arms Race to Create a Superintelligent AI Is Looming'. Motherboard, 6 March 2015.
Forget about superintelligent AIs being created by a company,
university, or a rogue programmer with Einstein-like IQ.
I'm certain [that] another theme regarding AI is just about to emerge—
one bound with nationalistic fervor and patriotism.
Politicians and military commanders around the world will want this
superintelligent machine-mind for their countries' defensive forces. And
they'll want it exclusively.
Whichever government launches and controls a superintelligent AI first
will almost certainly end up the most powerful nation in the world
because of it
77
MACHINE SUPERINTELLIGENCE RISK: ARMS RACE 2
(Istvan 2015)
can you imagine if AI was developed and launched in, let's say, North
Korea, or Iran, or increasingly authoritarian Russia?
What if another national power told that superintelligence to break all
the secret codes and classified material that America's CIA and NSA use
for national security? …
Hack into the mainframe computers tied to nuclear warheads, drones, and other
dangerous weaponry?
Override all traffic lights, power grids, and water treatment plants?
The possible danger is overwhelming
The launch of the first truly autonomous, self-aware artificial
intelligence… is a matter of the highest national and global security
78
SUPERINTELLIGENCE: RISKS
More subtly, it could result in a superintelligence realizing a state of affairs that
we might now judge as desirable but which in fact turns out to be a false utopia,
in which things essential to human flourishing have been irreversibly lost.
We need to be careful about what we wish for from a superintelligence, because
we might get it.
Bostrom, Nick. 'Ethical Issues in Advanced Artificial Intelligence'. In Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, edited by George Eric Lasker and Goreti Marreiros, 2:12–17. International Institute for Advanced Studies, 2007.
It seems obvious that major existential risks would be
associated with an intelligence explosion, and that the
prospect should hence be taken seriously even if it were
known to have but a moderately small probability of
materializing (Bostrom 2014)
79
IMPLAUSIBILITY OF MACHINE SUPERINTELLIGENCE
AAAI. 'AAAI Presidential Panel on Long-Term AI Futures ("Asilomar Meeting"), Interim Report from the Panel Chairs', August 2009.
The panel of experts was overall skeptical
of the radical views expressed by futurists
and science-fiction authors ... about the
prospect of an intelligence explosion as
well as of a “coming singularity,” and also
about the large-scale loss of control of
intelligent systems.
80
IMPLAUSIBILITY OF MACHINE SUPERINTELLIGENCE 2
If indeed our objective is to build
computer systems that solve very
challenging problems, my thesis is that
IA > AI
that is, that intelligence amplifying
systems can, at any given level of
available systems technology, beat AI
systems. That is, a machine and a mind
can beat a mind-imitating machine
working by itself.
81
The worry stems from a fundamental
error in not distinguishing the
difference between the very real
recent advances in a particular aspect
of AI, and the enormity and
complexity of building sentient
volitional intelligence.
Brooks, Frederick P., Jr. 'The Computer Scientist As Toolsmith II: ACM Allen Newell Award Acceptance Lecture Delivered at SIGGRAPH 94'. Communications of the ACM 39, no. 3 (March 1996): 61–68
IMPLAUSIBILITY OF MACHINE SUPERINTELLIGENCE 3
Allen, Paul. 'The Singularity Isn't Near'. MIT Technology Review, 12 October 2011
Lanier, Jaron. 'The Myth Of AI'. Edge.com, 14 November 2014
82
A religion, promulgated by the "domineering subculture": a wealthy, prolific, and influential elite class [Silicon Valley]. In the history of
organized religion, it's often been the case
that people have been disempowered precisely
to serve what was perceived to be the needs of
some deity or another, where in fact what
they were doing was supporting an elite class
that was the priesthood for that deity.
The new religious idea of AI is a lot like the
economic effect of the old idea, religion.
(Lanier 2014)
I am puzzled by the arguments
put forward by those who say we
should worry about a coming
AI singularity, because all they
seem to offer is a prediction
based on Moore's law. (Smolin,
in Lanier 2014)
Like other attempts to forecast the
future from the past, these “laws”
will work until they don’t. (Allen
2011)
IMPLAUSIBILITY OF MACHINE SUPERINTELLIGENCE 4
Lanier, Jaron. 'An Apocalypse of Self-Abdication'. In You Are Not a Gadget: A Manifesto. London: Penguin, 2011.
83
The meaning of life, in this view, is making the digital system we
call reality function at ever- higher “levels of description.”
The coming Singularity is a popular belief in the society of
technologists. Singularity books are as common in a computer
science department as Rapture images are in an evangelical
bookstore.
The antihuman approach to computation is one of the most
baseless ideas in human history. A computer isn’t even there
unless a person experiences it. There will be a warm mass of
patterned silicon with electricity coursing through it, but the bits
don’t mean anything without a cultured person to interpret them.
When the techies
are crazier than
the Luddites
… AND THE PHILISTINES
It is important that Dr Eden does not spend his working time researching on the
topic of the singularity. This does not fit with the research profile of the School [of
computer science and electronic engineering], it appears to be outside his own
core area of expertise (software engineering) and at this moment in time it is not
clear whether this area is going to be significant.
84
Dr Eden is advised against spending the
research time funded by the School/University
on this topic.
Maria Fasli, Head of School, Computer Science &
Electronic Engineering, University of Essex, 9 April
2012
Open Questions
Notions of a singularity
Bibliography
THANK YOU!
85
OPEN QUESTIONS
What is AI not?
Is there a species-nonspecific, effective metric of intelligence?
Has the intelligence of AI systems been growing? At which rate?
86
NOTIONS OF A SINGULARITY
Eden, Amnon H., Eric Steinhart, David Pearce, & James H. Moor. "Singularity Hypotheses: An Overview". In: Eden, Amnon H., James H. Moor, Johnny H. Søraker, and Eric Steinhart, eds. Singularity Hypotheses: A Scientific and Philosophical Assessment. The Frontiers Collection. Springer, 2012.
A change comparable to the rise of
human life on Earth (Vernor Vinge
(UCSD) 1993)
wikimedia
An essential singularity in the history of the race beyond which
human affairs as we know them could not continue (John Von
Neumann, in Ulam 1958)
A rupture in the fabric of human history
(Kurzweil, Google, 2006, The
Singularity Is Near)
Wired
theguardian.com
87
NOTIONS OF A SINGULARITY (CONT.)
1st Wave: the Agricultural Revolution
took thousands of years
2nd Wave: the Industrial Revolution
took hundreds of years
3rd Wave: the Knowledge Revolution
(Toffler, The Third Wave, 1980)
It is likely that the Third Wave will
sweep across history and complete
itself in a few decades
88
BIBLIOGRAPHY
Eden, A.H., Steinhart E., Pearce, D., Moor, J.H., 2013. “Singularity Hypotheses:
An Overview.” In A.H. Eden et al. (eds.), Singularity Hypotheses pp. 1–12. The
Frontiers Collection, Springer
Hutter, M., 2005. Universal Artificial Intelligence: Sequential Decisions Based on
Algorithmic Probability. Berlin: Springer
Russell, S.J. & Norvig, P., 1995. Artificial Intelligence: A Modern Approach, Upper
Saddle River, NJ, USA: Prentice-Hall, Inc.
Toffler, A., 1970. Future shock, Random House.
Turing, A.M., 1951. Can Digital Computers Think? Automatic Calculating
Machines, BBC (aired 15 May). The Essential Turing, pp. 476–486
Turing, A.M., 1951. Intelligent Machinery, A Heretical Theory. The ’51 Society,
Manchester. The Essential Turing, pp. 472–475
89