
Self-Motivating Computational System Cognitive Architecture: An Introduction

Abstract

This paper is an overview of the Independent Core Observer Model (ICOM) Cognitive Extension Architecture, which is a methodology or ‘pattern’ for producing a self-motivating computational system that can be self-aware. ICOM is a system for abstracting standard cognitive architecture from the part of the system that can be self-aware, and a system for assigning value to any given idea or ‘thought’ and action, as well as producing ongoing self-motivations in the system. In ICOM, thoughts are created through emergent complexity in the system. As a Cognitive Architecture, ICOM is a high-level or ‘top down’ approach to cognitive architecture focused on the system’s ability to produce high-level thought and self-reflection on ideas as well as form new ones. Compared to standard Cognitive Architecture, ICOM is a form of an overall control system architecture on top of such a traditional architecture.
UNCORRECTED PROOF
Part VI
Artificial General Intelligence
True Artificial General Intelligence (AGI) will be a phase shift in the properties of civilization. No longer will we be the smartest intelligence the Earth has ever known. It will mean potentially huge shifts in what we can accomplish. When thinking about what true AGI means for humanity, one needs to understand how and what motivates that AGI. Do we even know? Is there any way for us to predict what will happen once there is a machine intelligence that is beyond human ability? Will it be like us? Will it be friendly to us? These are all questions to be considered by humanity as we get closer to making the goal of AGI real. Setting aside the implications of AGI, to make it real we need to build it, and we have a lot of work to do, but it could be closer than one might think.
The following three chapters are about one possible solution to designing and building an AGI called the Independent Core Observer Model, which is an engineering design pattern, or cognitive extension architecture, developed by a privately funded research project which I manage. These three chapters are designed to explain elements of ICOM so that subsequent chapters focused on study results or algorithms can simply reference these to explain the fundamentals of ICOM-based systems.
The three chapters are:

- Self-Motivating Computational System Cognitive Architecture (An Introduction): High-level operational theory of the Independent Core Observer Model Cognitive Extension Architecture
- Modeling Emotions in a Computational System: Emotional Modeling in the Independent Core Observer Model Cognitive Architecture
- Artificial General Intelligence as an Emergent Quality: Artificial General Intelligence as an Emergent Quality of ICOM and the AGI Phase Transition Threshold

David J. Kelley
Layout: T1 Standard Book ID: 321444_1_En Book ISBN: 978-1-4939-6413-0
Chapter No.: Part 6 Date: 6-8-2016 Time: 12:16 pm Page: 431/431
Editor Proof
Self-Motivating Computational System Cognitive Architecture: An Introduction

David J. Kelley

Introduction to Artificial General Intelligence
The road to building artificial general intelligence (AGI) is not just very complex but the most complex task computer scientists have tried to solve. While over the last 30+ years a great amount of work has been done, much of that work has been narrow from an application standpoint or has been purely theoretical, and much of it has been focused on elements of Artificial Intelligence (AI) such as image pattern recognition or speech analysis. The trick in these sorts of tasks is understanding in context, which is a key part of true artificial general intelligence, but it is not the only issue. This chapter does not focus on the problem of context and pattern recognition but on the problem of self-motivating computational systems, or rather of assigning value and the emergent qualities that follow from that trait. It is the articulation of a theory for building a system that has its own ‘feelings’ and can decide for itself whether it likes this art or that art, or whether it will try to do this task or that task, and can be entirely independent.
Note: Artificial intelligence (AI) is the intelligence exhibited by machines or software. It is also the name of the academic field of study which studies how to create computers and computer software that are capable of intelligent behavior. Major AI researchers and textbooks define this field as ‘the study and design of intelligent agents’, in which an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1955, defines it as ‘the science and engineering of making intelligent machines’. https://en.wikipedia.org/wiki/Artificial_intelligence
Note: Artificial general intelligence (AGI) is the intelligence of a machine that could successfully perform any intellectual task that a human being can. It is a primary goal of artificial intelligence research and an important topic for science fiction writers and futurists. Artificial general intelligence is also referred to as ‘strong AI’, ‘full AI’ or as the ability to perform ‘general intelligent action’. https://en.wikipedia.org/wiki/Artificial_general_intelligence

D.J. Kelley
Seattle, Washington, United States
e-mail: david@artificialgeneralintelligenceinc.com

© Springer Science+Business Media New York 2016
N. Lee (ed.), Google It, DOI 10.1007/978-1-4939-6415-4_24
Let us look at the thesis statement for this chapter:
Regardless of the standard cognitive architecture used to produce the ‘understanding’ of a thing in context, the following architecture supports assigning value to that context in a computer system that is self-modifying based on those value-based assessments, albeit indirectly, where the system’s ability to be self-aware is an emergent quality of the system based on the ICOM emotional architecture.
Note: The term ‘Context’ refers to the framework for defining a given object or thing, whether abstract in nature, an idea or a noun of some sort. When discussing, for example, a pencil, it is the context in which the idea of the pencil sits that makes and provides meaning to the discussion of the pencil, save in the abstract, and even then we have the ‘abstract’ idea which is itself ‘context’ in which to discuss the pencil.
Note: Self-awareness is the capacity for introspection and the ability to recognize one’s self as an individual separate from the environment and other individuals. It is not to be confused with consciousness in the sense of qualia. While consciousness is a term given to being aware of one’s environment and body and lifestyle, self-awareness is the recognition of that awareness. [1] https://en.wikipedia.org/wiki/Self-awareness
Note: A cognitive architecture is ‘a hypothesis about the fixed structures that provide a mind, whether in natural or artificial systems, and how they work together in conjunction with knowledge and skills embodied within the architecture to yield intelligent behavior in a diversity of complex environments.’ http://cogarch.ict.usc.edu/
In computer science and software engineering there is a complex set of terms and acronyms that mean any number of things depending on the audience in the tech sector. Further, in some cases, the same acronyms mean different things in different circles, and people in those circles often have different understandings of terms that should mean the same thing: the individuals believe they know what each other are talking about, but in the end they are thinking of different things, with various but critical differences in the meanings of those terms. To offset that problem to some degree, I have articulated a few definitions, as I understand them, in a glossary at the end of this chapter, so that, in the context of this chapter, one can refer back to these terms as a basis for understanding. While the glossary is at the end of the chapter, the most critical term needed to approach the subject in detail under this pattern is context.
The term ‘Context’ in this chapter refers to the framework for defining an object, such as an idea or a noun of some sort, and the environment in which that thing should be understood. When discussing, for example, a pencil, it is the context in which the idea of the pencil sits that makes and provides meaning to the discussion of the pencil, save in the abstract, and even then we have the ‘abstract’ idea that is itself ‘context’ in which to discuss the pencil.
To better understand the idea of context, think of a pencil being used by an artist vs a pencil being used by a student. It is the ‘used by an artist’ versus ‘used by a student’ that is an example of the ‘context’ which is important in terms of understanding the pencil and its current state. In ICOM, it is the assigning of value to context, or to elements in a given context as they relate to one another, that is key to understanding the ICOM theory as a Cognitive Extension Architecture or overall architecture for an AGI system.
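The chapter does not prescribe a concrete data structure for context or its value assignments, so purely as an illustrative sketch (the class names, fields, and the value-aggregation rule below are all hypothetical, not the lab implementation), the artist/student example might be modeled like this:

```python
from dataclasses import dataclass, field

@dataclass
class ContextNode:
    """One node in a context tree: a thing plus the relationships that give it meaning."""
    name: str
    emotional_value: float = 0.0  # value assigned during emotional evaluation
    children: list = field(default_factory=list)

    def attach(self, child: "ContextNode") -> "ContextNode":
        self.children.append(child)
        return child

def tree_value(node: ContextNode) -> float:
    """Aggregate the value carried by a whole context tree (a toy sum rule)."""
    return node.emotional_value + sum(tree_value(c) for c in node.children)

# The same object ('pencil') sits under two different contexts; it is the
# surrounding context, not the pencil itself, that carries the value.
artist_ctx = ContextNode("used by an artist", emotional_value=0.8)
student_ctx = ContextNode("used by a student", emotional_value=0.3)
artist_ctx.attach(ContextNode("pencil"))
student_ctx.attach(ContextNode("pencil"))
```

The point of the sketch is only that value attaches to the relationship between elements, not to the object in isolation.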
Understanding the Problem
Solving ‘Strong’ AI or AGI (Artificial General Intelligence) is the most important (or at least the hardest) Computer Science problem in the history of computing, beyond getting computers working to begin with. That being the case though, it is only incidental to the discussion here. The problem is to solve or create human-like cognition in a software system sufficiently able to self-motivate and take independent action on that motivation, and to further modify actions based on self-modified needs and desires over time.
Note: Motivation is a theoretical construct used to explain behavior. It represents the reasons for people’s actions, desires, and needs. Motivation can also be defined as one’s direction to behavior, or what causes a person to want to repeat a behavior and vice versa. In the context of ICOM: the act of having a desire to take action of some kind, to be therefore ‘motivated’ to take such action, where self-motivation is the act of creating one’s own desire to take a given action or set of actions. https://en.wikipedia.org/wiki/Motivation
There are really two elements to the problem: one of decomposition, for example pattern recognition, including context or situational awareness; and one of self-motivation, or what to do with things once a system has that ‘context’ problem addressed and value judgements placed on them. That second part is the main focus of the ICOM Cognitive Extension architecture.
Going back to the thesis, or the ‘theory’ or approach, for ICOM:
The human mind is a software abstraction system (meaning the part that is self-aware is an emergent software system and not hard coded onto its substrate) running on a biological computer. The mind can be looked at as a system that uses emotions to represent current emotional states, as well as needs and associated context based on input, where the mind evaluates them based on various emotional structures and value assignments and then modifies the underlying values as per input, as denoted by the associated context produced in the process of decomposition and identification of data in context. Setting aside the complex neural network and related subsystems that generate pattern recognition and other contextual systems in the human mind, it is possible to build a software system that uses a model that would, or could, continuously ‘feel’ and modify those feelings like the human mind, but based on an abstracted software system running on another computing substrate. That system could, for example, use a ‘floating point’ value to represent current emotional states on multiple emotional vectors, including needs as well as context associated to emotional states based on input, and then evaluate them based on these emotional structures and value assignments, thereby modifying the underlying values as per input as denoted by associated context from the decomposition process. In which case, given sufficient complexity, it is possible to build a system that is self-aware and self-motivating as well as self-modifying.
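As a toy illustration of the ‘floating point’ idea above (the vector names, the update rule, and the rate constant are hypothetical assumptions for this sketch, not the lab implementation), an emotional state on multiple vectors that is modified indirectly by incoming context might look like:

```python
# Hypothetical vector names; real implementations may use any set of vectors.
EMOTIONAL_VECTORS = ["joy", "sadness", "fear", "interest", "pain"]

class EmotionalState:
    """Current emotional state: one floating point value per emotional vector."""
    def __init__(self):
        self.values = {v: 0.0 for v in EMOTIONAL_VECTORS}

    def apply(self, context_valences, rate=0.25):
        """Nudge each vector toward the valence carried by incoming context;
        the underlying values are modified as per input, never set directly."""
        for vector, valence in context_valences.items():
            current = self.values[vector]
            self.values[vector] = current + rate * (valence - current)

state = EmotionalState()
# A bit of incoming context raises interest and slightly lowers sadness.
state.apply({"interest": 1.0, "sadness": -0.4})
```

The design point is that input only nudges the state; nothing outside the evaluation loop writes the values directly, which mirrors the ‘indirect’ self-modification described above.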
Relationship with the Standard Concepts of Cognitive Architecture
Cognition can be defined as the mental process of collecting knowledge and understanding through thought, experience and senses [2]. Further, in the process of designing a machine that is intelligent, it is important to build an ‘architecture’ for how you are going to go about building said machine. A Cognitive Architecture is a given hypothesis for how one would build a mind that enables the mental process of collecting knowledge and understanding through thought, experience and senses [3].

So then how does the ICOM methodology or hypothesis apply, or relate, to the standard concepts of Cognitive Architecture? In my experience, most cognitive architectures, such as Sigma [4], are really bottom-up architectures focused on the smallest details that we have the technology and understanding to look at, building from the ground up based on some model. Such systems are typically evaluated based on their behavior. ICOM is a ‘Cognitive Architecture’ focused from the highest level down, meaning ICOM is focused on how a mind says to itself, ‘I exist and I feel good about that.’ ICOM in its current form is not focused on the nuance of decomposing a given set of sensory input but on what happens to that input after it is evaluated, broken down, refined or ‘comprehended’ and ready for ‘it’ (being an ICOM implementation) to decide how it feels about it.

From a traditional AGI Architecture standpoint, ICOM approaches the problem of AGI from the other direction than what is typical, and in that regard ICOM may seem more like an overall control system for AGI Architecture. In fact, in the overall ICOM model a system like TensorFlow [4] is a key part of ICOM, performing a key task of cognition: bringing sensory input into the system through what, in the ICOM model, is referred to as the ‘context’ engine. Most AGI architectural systems can be applied to provide this functionality in an ICOM implementation.

Regardless of the fact that ICOM is a top-down approach to AGI Architecture, the ‘thoughts’ themselves are an emergent phenomenon of the process of emotional evaluation in the system. Let’s see how in the next section.
Emergent by Design
The Independent Core Observer Model (ICOM) contends that consciousness is a high-level abstraction and, further, that consciousness is based on emotional context assignments evaluated against other emotions related to the context of any given input or internal topic. These evaluations are related to needs and other emotional values, such as interests, which are themselves emotions and are used as the basis for value; this drives interest and action, which, in turn, creates the emergent effect of a conscious mind. The major complexity of the system, then, is the abstracted subconscious and related systems of a mind executing on the nuanced details of any physical action without the conscious mind dealing with direct details. Our ability to do this kind of decomposition is already approaching mastery in terms of the state of the art in technology to generate context from data; or at least we are far enough along to know we have effectively solved the problem, if not having completely mastered it at this time.
Note: Consciousness is the state or quality of awareness, or of being aware of an external object or something within oneself. It has been defined as: sentience, awareness, subjectivity, the ability to experience or to feel, wakefulness, having a sense of selfhood, and the executive control system of the mind. Despite the difficulty in definition, many philosophers believe that there is a broadly shared underlying intuition about what consciousness is. As Max Velmans and Susan Schneider wrote in The Blackwell Companion to Consciousness: ‘Anything that we are aware of at a given moment forms part of our consciousness, making conscious experience at once the most familiar and most mysterious aspect of our lives.’ https://en.wikipedia.org/wiki/Consciousness
A scientist studying the human mind suggests that consciousness is likely an emergent phenomenon. In other words, she is suggesting that, when we figure it out, we will likely find it to be an emergent quality of certain kinds of systems under certain circumstances [5]. ICOM creates consciousness through exactly this kind of emergent quality of the system.

Let’s look at how ICOM works, and how the emergent qualities of the system thus emerge.
How the Independent Core Observer Model (ICOM) Works
The Independent Core Observer Model (ICOM) is a system where the core AGI is not directly tied to detailed outputs but operates through an abstraction layer, or ‘observer’ of the core, which need only deal in abstracts of input and in assigning ‘value’ to output context. The observer is similar in function to the subconscious of the human mind, dealing with the details of the system and system implementation, including various autonomic systems, context assignment, decomposition of input and the like.

Take a look at the following diagram (Fig. 1).
As we can see, fundamentally it would seem simple and straightforward; however, the underlying implementation and operation of said framework is sufficiently complicated to push the limits of standard computer hardware in lab implementations. In this diagram, input comes into the observer, where it is broken down into context trees and passed into the core. The core ‘emotionally’ evaluates them for their emotional effects based on various elements, which we will define later, and then the results of that evaluation are analyzed by the observer, and output is generated by the observer to the various connected systems.
At this level, it is easy to conceptualize how the overall parts of the system go together. Now let’s look at how action occurs in the system, in which ICOM provides a bias for action as implemented in the lab (Fig. 2).
In this, we see the flow as it might be implemented in the end component of the observer of a specific ICOM implementation. While details may differ between implementations, this articulates the key tasks such systems would have to do, as well as the relationship of those tasks with regard to a functioning ICOM-based system. Keep in mind this is not the complete observer but refers to the end component as shown in the higher-level diagram later.

The key goal of the observer end component flow is to gather context. This can be through the use of pattern recognition neural networks or other systems as might make sense in the context engine. The Observer system, in this case, needs to look up related material in the process of receiving processed context from a context engine of some kind. In providing that back to the core, the observer then needs to map that context to a new or existing task. If that item exists, the context engine can add the additional context tree to create the appropriate models; or, if it is a continuation of a task or question the system can drive to take actions on, the observer can place that context tree back in the queue and have the context engine check for additional references to build out that tree more and pass it again through the core. Additionally, the system can then work out details of executing a given task in
Fig. 1 ICOM model diagram
greater detail based on the observer context from the core after that emotional evaluation. In layman’s terms, this part of the observer model is focused on the details of taking the actions that the core has determined it would like to take through that emotion-based processing.
Now let’s look at the idea of ‘context’ and how that needs to be composed for ICOM (Fig. 3).
In this model we can see where existing technology can plug in, in terms of context generation, image or voice decomposition, neural networks and the like. Once such systems create a context tree, the input decomposition engine needs to determine if it is something familiar in terms of input. If that is the case, the system needs to map it to the existing model for that kind of input (say vision, for example). The analysis with that model as default context, in terms of emotional references, is then attached to the context tree (a logical data structure of relationships between items). If there is existing context queued, it is then attached to the tree. If it is new input, then a new base context structure needs to be created so that future relationships can be associated, and the tree is then handed to the core for processing.
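The familiar-versus-new branching of the context engine might be sketched like this. Again, this is an assumption-laden illustration: the model registry, field names, and default emotional references are invented for the sketch, not taken from the lab implementation.

```python
# Input kinds the system already has models for, with default emotional
# references to attach (contents purely illustrative).
KNOWN_INPUT_MODELS = {
    "vision": {"default_emotions": {"interest": 0.2}},
}

def build_context_tree(input_kind, decomposed, queued_context=None):
    """Map familiar input to its existing model, or create a new base
    context structure; attach any queued context before handing to the core."""
    if input_kind in KNOWN_INPUT_MODELS:
        model = KNOWN_INPUT_MODELS[input_kind]
        tree = {"kind": input_kind, "data": decomposed,
                "emotions": dict(model["default_emotions"]), "new": False}
    else:
        # New input kind: a fresh base structure for future relationships.
        tree = {"kind": input_kind, "data": decomposed,
                "emotions": {}, "new": True}
    if queued_context is not None:
        tree["attached"] = queued_context  # link existing queued context
    return tree
```

Familiar input arrives at the core with default emotional references already attached; unfamiliar input arrives as a bare base structure that future relationships can hang off.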
Now let’s look at the overall ICOM architecture (Fig. 4).
In this case, we can see the overall system is somewhat more complicated, and it can be difficult to see where the observer and core model components are separate, so they have been color-coded green. In this way, we can see where things are handed off between the observer and the core.
Fig. 2 ICOM observer execution logical flow chart
Walking through this diagram, we start with sensor input that is decomposed into usable data. Those particular elements could be decomposed any number of ways, and this is incidental to the ICOM architecture; there are many ways in the current realm of computer science to implement this function. Given that the data is decomposed, it is then run through the ‘context’ engine to make the appropriate memory emotional context associations. At which point, if there is an automatic response (like a pain response in humans), the observer may push some
Fig. 3 Context engine task flow
Fig. 4 Overall ICOM architecture
automatic response prior to moving forward in the system, with the context tree then falling into the incoming context queue of the core. This queue is essentially incoming thoughts, or things to ‘think’ about, in the form of emotional processing. By thinking, we only mean emotional processing as per the various matrices to resolve how the machine ‘feels’ about a given context tree. Actual thoughts would be an emergent quality of the system, as articulated elsewhere.
The core has a primary and secondary emotional state that is represented by a series of vector floating point values, or ‘values’, in the lab implementations. This allows for a complex set of current emotional states and subconscious emotional states. Both sets of states, along with a needs hierarchy, are part of the core calculations for the core to process a single context tree. Once the new state is set and the emotional context tree for a given set of context is done, the system checks if it is part of a complex series; it may pass to the memory pump if it is under a certain predetermined value or, if it is above that value and complete, it passes to the context pump. If it is part of a string of thought, it goes to the context queue pending completion, in which case it would again be passed to the context pump, which would pass the tree back to the observer.
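The routing just described (memory pump, context pump, or back to the context queue) can be sketched as below. The scoring rule, the threshold value, and all names are hypothetical; the real evaluation involves far more factors, as the next section notes.

```python
# Illustrative routing only: the real evaluation involves many more factors.
PUMP_THRESHOLD = 0.5  # stands in for the 'certain predetermined value'

def core_step(tree, primary_state, secondary_state, needs, context_queue):
    """Evaluate one context tree against both emotional states plus the
    needs hierarchy, then route it to a pump or back to the context queue."""
    weights = list(primary_state.items()) + list(secondary_state.items())
    score = sum(tree["emotions"].get(vector, 0.0) * w for vector, w in weights)
    score += needs.get(tree.get("need"), 0.0)
    if tree.get("series") and not tree.get("complete"):
        context_queue.append(tree)       # string of thought, pending completion
        return "context queue", score
    if score < PUMP_THRESHOLD:
        return "memory pump", score      # low value: store it and move on
    return "context pump", score         # high value and complete: back to observer
```

The secondary (‘subconscious’) state participates in the same weighted evaluation as the primary state, which is why both sets of values shape what the system ends up attending to.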
As you can see, from the initial context tree the observer does a number of things to that queue, including dealing with overload issues, placing processed trees of particular interest into the queue, and inputting new context trees into the queue. Processed trees coming out of the core into the observer can also be passed back up inside the core and action taken on actionable elements; for example, a question or paradox may need a response or additional context, or there may be an action that should otherwise be acted upon, where the observer does not deal with the complexity of that action per se.
Unified AGI Architecture
Given the articulated flow of the system, it is important to note that functionally the system is deciding what ‘emotionally’ to ‘feel’ about a given thing based on more than 144 factors per element of that tree (in the model implementation in the lab, though not necessarily in simpler implementations), plus ‘needs’ elements that affect that processing. Thought in the system is not direct; but, as the complexity of the elements passing through the core increases, and things are tagged as interesting, and words or ideas form emotional relationships, complex structures form around context as it relates to new elements. If those happen to form structures that relate to various other context elements, including ones of particular emotional significance, the system can become sufficiently complex that these structures could be said to be thought, based on those underlying emotional constructs, which drive interest or focus, forcing the system to reprocess context trees as needed. The underlying emotional processing becomes so complex as to seem deliberate, while the underlying system is essentially an overly complex difference engine.
The idea of emergent theory really gets to the fact that the system’s ability to be self-aware as a concept, and the fact that it is ‘thinking’, are derivatives of its emotional context assignments and what it chooses to think about. This is really an ultra-complex selection of context based on various parameters and the relationships between them. For example, the system might be low on power, which negatively affects, say, a sadness vector, with a pain vector associated to a lesser degree, so the system might select to think about a bit of context from memory that best resolves that particular emotional pattern. It might continue to try related elements until something addresses this particular issue and the vector parameters for the core state stabilize back to normal. Keep in mind that the implementation of an idea, say to ‘plug into charge’, might be just that, an idea of ‘thinking about it’, which is not going to do much other than temporarily provide hope. It is the thinking of the action which provides the best pattern match and, given it is an action, the observer will likely take that action or at least try to. If the observer does execute that action, it would be a real solution, and thus we can say the overall system thought about and took action to solve its power issue. The idea of this context processing being considered thought is an abstraction of what is really happening in detail, where the computation is so complex as to be effectively ‘thought’ in the abstract. It is easier to think about or otherwise conceptualize it this way, making it recognizable to humans.
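The low-power example above can be made concrete with toy numbers. Everything here is hypothetical (the vector values, the candidate memories, and the ‘best pattern match’ metric); the sketch only shows why the action-bearing context wins the selection.

```python
# Toy numbers for the low-power example (all values hypothetical).
current_state = {"sadness": 0.7, "pain": 0.2}   # raised by the low-power condition

candidate_context = {
    "think about charging":  {"sadness": -0.6, "pain": -0.1, "action": False},
    "plug into charger":     {"sadness": -0.7, "pain": -0.2, "action": True},
    "recall a pleasant day": {"sadness": -0.3, "pain":  0.0, "action": False},
}

def residual(effect):
    """Emotional magnitude left after applying a memory's effect: the best
    pattern match is the one that moves the state closest to stable."""
    return sum(abs(current_state[v] + effect.get(v, 0.0)) for v in current_state)

best = min(candidate_context, key=lambda name: residual(candidate_context[name]))
# 'plug into charger' matches the pattern best, and because it is tagged as an
# action, the observer will try to execute it rather than merely think about it.
```

Merely thinking about charging reduces the pattern but leaves a residual; the action context cancels it, which is what biases the observer toward actually taking the action.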
It is a distinct possibility that humans would perceive the same type of abstraction for our own thoughts if, in fact, we understood how the brain operated and how awareness develops in the human mind. It is also important to note that, while the core is looking at the high-level emotional computation of a given context tree, the ‘action’ of tasks in that tree that are used to solve a problem might actually be in the tree, just not in a way that is directly accessible to the emergent awareness, which is a higher-level abstraction from that context processing system. What this means is that the emotional processing is what is happening in the core, but the awareness is a function of that processing at one level of abstraction above the core processing; that being the case, details of a given context tree may not be surfaced to that level of abstraction until an action is picked up by the observer and placed back into the queue in such a way that the action becomes the top-level element of a given context tree, and thus more likely to be the point of focus of that abstracted awareness.
Motivation of the Core Model and ICOM Action
One of the key elements of the ICOM system architecture is a bias toward self-motivation and action. Depending on the kinds of values associated with a context, the system may take action; or rather, the Observer component of the system is designed to try to take action based on a given context that is associated with the context of an action. That being the case, any 'context' that is associated with action is therefore grounds for action. The observer then creates additional context to post back to the system to be emotionally evaluated and further action
442 D.J. Kelley
Layout: T1 Standard Book ID: 321444_1_En Book ISBN: 978-1-4939-6413-0
Chapter No.: 24 Date: 6-8-2016 Time: 12:17 pm Page: 442/445
Editor Proof
UNCORRECTED PROOF
taken. The complexity of the action itself is abstracted from the core processing. The fact that the primary function of the observer is to take action is part of giving the system a bias for action, unless extensive conditioning is applied to make the system associate negative outcomes with action. Given that the system will continually process context as it has bandwidth, based on emotional relevance and needs, the system (ICOM) is designed to at least try action along those elements of context. It is here that we complete the 'biasing' of the system toward action based on emotional values or context. We then have a self-motivating system based on emotional context that can manipulate itself through its own actions, and through indirect manipulation of its emotional matrix. By 'indirect' we mean that the emergent awareness can only indirectly manipulate its emotional matrix, whereas the core does this directly.
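The observer behavior described above can be sketched in a few lines. This is a minimal illustration, not the ICOM implementation: the function name, the dictionary layout, and the `action` flag are all assumptions made for the example. Any queued context flagged as action-associated is grounds for action, and the result is posted back as new context for the core to evaluate emotionally.

```python
# Hypothetical sketch of the Observer's bias for action. Names and the
# context layout are illustrative, not taken from the ICOM source.

def observer_step(context_queue, take_action):
    """Pop one context; if it is action-associated, attempt the action
    and return the resulting context for re-evaluation by the core."""
    if not context_queue:
        return None
    ctx = context_queue.pop(0)
    if ctx.get("action"):
        result = take_action(ctx)          # attempt the associated action
        new_ctx = {"name": f"result of {ctx['name']}",
                   "action": False, "payload": result}
        context_queue.append(new_ctx)      # post back for emotional evaluation
        return new_ctx
    return None
```

The key design point this sketch captures is that the observer never evaluates emotions itself; it only acts and feeds the outcome back into the queue, leaving valuation to the core.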
Application Biasing
ICOM is a general, high-level approach to overall artificial general intelligence. That being said, an AGI, by the fact that it is an AGI, should in theory be able to do any given task. Before such a system has attained self-awareness, you can train it around certain tasks. By associating input or context with pleasure or other positive emotional stimulation, you can use those associations as a basis for the system to select certain actions. By limiting the action collection to what is possible in the given application and allowing the system to create and try various combinations of these actions, you essentially end up with an evolutionary algorithmic system for accomplishing tasks based on how much pleasure the system gains, or on whatever system biases are currently present. Additionally, through conditioning, you can manipulate core context to create a better environment for conditioning the particular set of tasks you might want in a given application bias.
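The evolutionary selection of action combinations described above can be sketched as a simple loop. This is an illustration under stated assumptions, not the ICOM code: `score` stands in for the system's pleasure response to a candidate sequence, and the population and mutation scheme are generic choices.

```python
import random

# Illustrative sketch: select combinations from a limited action set by the
# "pleasure" each combination yields. `score` is a stand-in for the system's
# pleasure response; all parameters here are assumed for the example.

def evolve_action_sequence(actions, score, length=3, population=20,
                           generations=30, seed=0):
    rng = random.Random(seed)
    pop = [[rng.choice(actions) for _ in range(length)]
           for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)      # most "pleasurable" first
        survivors = pop[:population // 2]      # keep the top half
        children = []
        for parent in survivors:
            child = list(parent)
            child[rng.randrange(length)] = rng.choice(actions)  # mutate one slot
            children.append(child)
        pop = survivors + children
    return max(pop, key=score)
```

Because survivors are carried forward unmodified, the best sequence found so far is never lost, so the "pleasure" of the best candidate can only rise over generations.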
In the training, or attempted biasing, keep in mind that personality or individual traits can develop in an implementation.
Personality, Interests and Desire Development
Very much related to the application-usage biasing of an ICOM implementation is the idea of personality, interests and desires in the context of the ICOM system. All input and all thought further manipulate how the system feels about any given topic, no matter what that input is, biasing the system toward one thing or another. It is important in the early stages of development to tightly control that biasing; but it is inevitable that the system will develop its own biases over time based on what it learns and how it 'feels' about a given context, with every small bit of input and its own thinking.
The point cannot be overstated that the system at rest will continue to think about things. What this means is that the system, with limited input or even with significant input, will look at things that have occurred to it in the past and/or items related to things it happens to think about. The system will then automatically reprocess and rethink things based on factors like recently processed input, interest levels, related topics of interest, or current needs and the like. Each time the system cycles, it is slowly manipulating itself and its interests, consciously and subconsciously, as well as adjusting interests and emotional states slowly over time through this processing (note that this is by design: the underlying vectors must not change too fast, or there is a greater risk of the system becoming unstable).
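One simple way to realize the slow, stability-preserving adjustment just described is an exponential-style update with a small rate constant. The function name and the rate value are illustrative assumptions, not taken from the ICOM implementation.

```python
# Minimal sketch of slow vector adjustment: each cycle nudges the stored
# emotional vector only a small fraction toward the influencing value, so
# no single input can swing the state far enough to destabilize it.

def adjust_vector(state, influence, rate=0.05):
    """Move each emotional value a small step toward the influencing value."""
    return [s + rate * (i - s) for s, i in zip(state, influence)]
```

With a rate of 0.05, a persistent influence still wins out over many cycles, while a one-off extreme input moves the state only 5% of the way toward it.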
Summary
The ICOM architecture provides a substrate-independent model for true sapient and sentient machine intelligence, as capable as human-level intelligence. The Independent Core Observer Model (ICOM) Cognitive Extension Architecture is a methodology or 'pattern' for producing a self-motivating computational system that can be self-aware, and it differs from other approaches in taking a purely top-down approach to designing such a system. Another way to look at ICOM is as a system for abstracting standard cognitive architecture from the part of the system that can be self-aware, and as a system for assigning value to any given idea or 'thought' and producing ongoing motivations, as well as abstracted thoughts, through emergent complexity.
Metadata of the chapter that will be visualized in SpringerLink

Book Title: Google It
Chapter Title: Modeling Emotions in a Computational System
Copyright Year: 2016
Copyright Holder: Springer Science+Business Media New York
Corresponding Author: David J. Kelley
Address: Seattle, USA
Email: david@artificialgeneralintelligenceinc.com
Abstract: This chapter is an overview of the emotional modeling used in the Independent Core Observer Model (ICOM) Cognitive Extension Architecture research, which is a methodology or software 'pattern' for producing a self-motivating computational system that can be self-aware under certain conditions. While ICOM is also a system for abstracting standard cognitive architecture from the part of the system that can be self-aware, it is primarily a system for assigning value to any given idea or 'thought' and, based on that, taking action, as well as producing ongoing self-motivations that lead the system to further thought or action. ICOM is at a fundamental level driven by the idea that the system assigns emotional values to 'context' (or context trees) as it is perceived by the system, to determine how it feels. In developing the engineering around ICOM, two models have been used based on a logical understanding of emotions as modeled by traditional psychologists, as opposed to empirical psychologists, who tend to model emotions (or brain states) based on biological structures. This approach is a logical one that is also not tied to the substrate of any particular system.
Modeling Emotions in a Computational System

David J. Kelley
Introduction to Emotional Modeling
Emotional modeling used in the Independent Core Observer Model (ICOM) represents emotional states in such a way as to provide the basis for assigning abstract value to ideas, concepts and things as they might be articulated in the form of context trees, where such a tree represents the understanding of a person, place or thing, including abstract ideas and other feelings. These trees are created by the context engine based on relationships with other elements in memory and then passed into the core (see the whitepaper titled 'Overview of ICOM or the Independent Core Observer Model Cognitive Extension Architecture'), which is a methodology or 'pattern' for producing a self-motivating computational system that can be self-aware under certain conditions. This particular chapter is focused only on the nuances of emotional modeling in the ICOM program, not on what is done with that modeling or how that modeling may or may not lead to a functioning ICOM system architecture.
While ICOM is also a system for abstracting standard cognitive architecture from the part of the system that can be self-aware, it is primarily a system for assigning value to any given idea or 'thought'; based on that, the system can take action, as well as produce ongoing self-motivations that lead it to further thought or action on the matter. ICOM is at a fundamental level driven by the idea that the system assigns emotional values to 'context' as it is perceived by the system, to determine its own feelings. In developing the engineering around ICOM, two models have been used for emotional modeling, in both cases based on a logical understanding of emotions as modeled by traditional psychologists, as opposed to empirical psychologists, who tend to be
D.J. Kelley
Seattle, USA
e-mail: david@artificialgeneralintelligenceinc.com
©Springer Science+Business Media New York 2016
N. Lee (ed.), Google It, DOI 10.1007/978-1-4939-6415-4_25
based on biological structures. The approaches articulated here follow a logical approach that is also not tied to the substrate of the system in question (biological or otherwise).
Understanding the Problem of Emotional Modeling
Emotional structural representation is not a problem I wanted to solve independently, nor one I felt I had enough expertise to solve without a lifetime of work of my own. The current representational systems I have selected are not necessarily the best way(s) or the right way(s), but two ways that do work and are used by a certain segment of psychological professionals. This builds on the work of scientists who have focused on representing the complexity of emotions in the human mind. The selection of these methods is based more on computational requirements than on any other criteria.
It is important to note that both of the methods the ICOM research has used are not based on scientific data as might be articulated by empirical psychologists, who might use ANOVA (analysis of variance) or factor analysis [1]. While those other models may be more measured in how they model elements of the biological implementation of emotions in the human mind, the models I selected for the ICOM research are focused on the 'how', that is, the logical modeling of those emotions or feelings. If we look at, say, the process for modeling a system such as articulated in 'Properties of Sparse Distributed Representations and their Application to Hierarchical Temporal Memory' [2], such representation is very much specific to the substrate of the human brain. Since I am looking at the problem of self-motivating systems, or computational models that are based on the human brain not literally but only in the logical sense, the Willcox system [3] or, more simply, the Plutchik method [4] is a more straightforward model, and it accurately models, logically, what we want to do, separated from the underlying complexity of the substrate of the human biological mind.
The Plutchik Method
George Norwood described the Plutchik method as follows:

Consider Robert Plutchik's psychoevolutionary theory of emotion. His theory is one of the most influential classification approaches for general emotional responses. He chose eight primary emotions: anger, fear, sadness, disgust, surprise, anticipation, trust, and joy. Plutchik proposed that these 'basic' emotions are biologically primitive and have evolved in order to increase the reproductive fitness of the animal. Plutchik argues for the primacy of these emotions by showing each to be the trigger of behavior with high survival value, such as the way fear inspires the fight-or-flight response.
Plutchik's theory of basic emotions applies to animals as well as to humans and has an evolutionary history that helped organisms deal with key survival issues. Beyond the basic emotions there are combinations of emotions. Primary emotions can be conceptualized in terms of pairs of polar opposites. Each emotion can exist in varying degrees of intensity or levels of arousal. [4]
While George Norwood suggests, earlier in his discussion of the Plutchik method, that it would be almost impossible to represent emotions in terms of math or algorithms, I disagree. As this representation of the Plutchik method shows, it is essentially 8 vectors or 'values' when represented in 2 dimensions, which is easily modeled with a series of numeric values (Fig. 1).

In the case of ICOM, since we want to represent each segment as a numeric value, a floating-point value was selected to ensure precision, along with a scale reversed from what is seen in the diagram above. Meaning, if we have a number that represents 'joy/serenity/ecstasy', the ICOM version is a number starting from 0 to N, where N is an increasing amount or intensity of joy.

To represent ICOM emotional states, anything assigned emotional values ends up with an array of floating-point values. By looking at the chart above we can see how emotional nuances can be represented as a combination of values on two or more vectors, which gives us something closer to the Willcox model but using fewer values; given the difference, this is orders of magnitude cheaper in terms of a computational comparison.
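The reversed Plutchik representation above can be sketched as eight named floating-point vectors, each running from 0 upward. The dictionary layout and helper name are assumptions made for illustration; only the eight primary-emotion axes come from the source.

```python
# Hedged sketch of the reversed Plutchik representation: eight vectors,
# each a floating-point intensity from 0 upward (0 = none, larger = more
# intense). The dict layout is an assumption for illustration.

PLUTCHIK_AXES = ("joy", "trust", "fear", "surprise",
                 "sadness", "disgust", "anger", "anticipation")

def make_emotion_state(**values):
    """Build an 8-vector emotional state; unnamed axes default to 0.0."""
    state = {axis: 0.0 for axis in PLUTCHIK_AXES}
    for axis, v in values.items():
        if axis not in state:
            raise KeyError(f"unknown axis: {axis}")
        state[axis] = float(v)
    return state

# A nuance such as "awe" can be modeled as a combination of values on two
# or more vectors, here fear plus surprise:
awe = make_emotion_state(fear=30.0, surprise=40.0)
```

This is the computational appeal of the model: eight floats per state, rather than the 72 values the Willcox wheel requires.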
Let us take a look at Fig. 2.
As you can see, we have reversed the vectors such that the value or intensity increases as we leave the center of the diagram on any particular vector. From a modeling standpoint this allows the intensity to be infinite above zero, versus limiting the scale as in the standard variation; it is also more aligned with what you might expect based on the earlier work (see the section on Willcox next). This variation, as seen here, is what we are using in the ICOM research.

Let us look at the other model.
Fig. 1 Plutchik model
The Willcox Model
Initially the ICOM research centered on using the Willcox model for emotions, and it is still a big part of the modeling methodology and research going into the ICOM project. Given the assumption that researchers in the field of mental health who study emotions have represented things with sufficient complexity to be reasonably accurate, we can start with their work as a basis for representation; I therefore initially landed on Willcox as the most sophisticated logical model. Take a look at Fig. 3.

Based on the Willcox wheel we have 72 possible values (the six inner emotions on the wheel are a composite of the others) to represent the current state of emotional affairs in a given system. We can then represent the current emotional state at a conscious level by a series of values that, for computational purposes, we will consider 'vectors' in an array represented by floating-point values. Given that we can also represent subconscious and base states in a similar way, this gives us 144 values for the current state. Further, we can use them as
Fig. 2 Modified Plutchik
vectors to represent the various states, to back-weight and adjust for new states. This can then be represented as needed in the software language of choice.

If we map each element to vectors spread on a 2-dimensional X/Y plane, we can compute an average composite score for each element and use it in various kinds of emotional assessment calculations.
We are thus representing emotional states using two arrays of 72 predefined elements holding floating-point values. We can also assign arrays on a per-context basis and use the composite score of an element, as processed, to compare the emotional arrays or composite scores of various elements of context with current states, and make adjustments based on needs and preexisting states. For example, the current emotional state of the system may be a set of values, and a bit of context might affect that same set of values with its own set of values for the same emotions, based on its associated elements; a composite is then calculated based on the combination, which could be an average or mean of each vector for any given element of the emotional values.
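The composite just described can be sketched as a per-vector mean over the 72-element arrays. The array sizes come from the text; the helper name and the simple mean (one of the averaging options the text allows) are assumptions for illustration.

```python
# Illustrative sketch: conscious and subconscious states are each a
# 72-element float array (144 values of state in total), and a context's
# emotional array is folded into the current state by a per-vector mean.

WILLCOX_SIZE = 72

def blend(current, context_values):
    """Composite the current 72-vector state with a context's array."""
    assert len(current) == len(context_values) == WILLCOX_SIZE
    return [(c + x) / 2.0 for c, x in zip(current, context_values)]

conscious = [10.0] * WILLCOX_SIZE
subconscious = [5.0] * WILLCOX_SIZE   # together: 144 values of state
```

A mean is only one choice here; the text leaves open weighted averages, which would let the subconscious array move more slowly than the conscious one.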
The Emotional Comparative Relationship
Given the arrays of floating-point number declarations, a given element of context will have a composite of all pre-associated values related to that context and any previous context as it might be composited. For this explanation we will assume context is pre-assigned. The base assignments of these values are straightforward, but each cycle of the core (see the ICOM Model overview for a
Fig. 3 Dr. Gloria Willcox's Feelings Wheel [3]
detailed explanation of ICOM and the core) will need to compare each value and apply various rules to the various elements of the context, to assign effects on itself as well as on conscious and subconscious values.

Logically, we might have the set of values that is the current state, as in the earlier example. We then get a new block of context and adjust all the various elements based on complex sets of rules that affect the conscious and subconscious states (emotional arrays of floating-point values). Rules can be related to emotions, which includes tendencies, needs, interests or other factors as might be defined in the rules matrix applied by the core.
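The rule application described here might look like the following. The rule format (a target state, an emotion index, and a weight) is hypothetical, a minimal stand-in for the richer rules matrix the text mentions.

```python
# Minimal sketch of applying a rules matrix to the conscious and
# subconscious emotion arrays. The rule format is hypothetical.

def apply_rules(states, context_values, rules):
    """Apply each rule: add a weighted share of the incoming context's
    value for one emotion index to the named target state array."""
    for rule in rules:
        target = states[rule["target"]]      # "conscious" or "subconscious"
        i = rule["index"]                    # which emotion vector
        target[i] += rule["weight"] * context_values[i]
    return states
```

Tendencies, needs, and interests would appear in a fuller version as additional rule fields that scale or gate the weight.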
This gives us a framework for adjusting emotional states, given the assumption that emotional values are assigned to context elements based on various key factors of the current state and related core environment variables. The process of evaluating context becomes the basis for the emergent quality of the system under certain conditions, where the processes of assigning value and of self-awareness and thought are only indirectly supported in the ICOM architecture and emerge as the context processing becomes more complex.
Context Assignments
One of the key assumptions for computing the emotional states is the pre-assignment of emotional context prior to entering the emotional adjustment structures of the core system.

While this explanation does not address, for example, looking at a picture and decomposing it into understanding in context, it does deal with how emotional values are applied to a given context element generated by the evaluation of that picture.
As described earlier, 72 elements are needed to represent a single emotional context (based on the Willcox model) given the selected methodology. Let's say the first 3 elements of that array are 'happiness', 'sadness', and 'interest'. Additionally, let us assign each a range between 0 and 100 as floating-point values, meaning you can have a 1 or a 3.567 or a 78.628496720948 if you like.

Suppose a particular new context A is related to contexts B and C, which had been processed earlier and related to base context elements D, E and F. This gives us a context tree of 6 elements. If we average the emotional values of all of them to produce the values of happiness, sadness and interest for context A, we now have a context tree for that particular element, which is then used to affect the current state as noted above. If that element still has an interest level, based on one of those vectors being higher than some threshold, it is queued to process again, and the context system will try to associate more data with that context for processing. If context A had been thought about before, that context would be brought up again and the other factors would be parsed in for a new average, which could then be an average of all 6 elements: the first time around, context A didn't have an emotional context array, but the second time it does. Further, on
processing context A, its values are changed by the processing against the current system state.
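The worked example above can be sketched directly, using just the first three vectors (happiness, sadness, interest) rather than all 72. On the first pass, context A has no array of its own, so its values are the average over its five related elements; the threshold value and names are assumptions for illustration.

```python
# Sketch of the context-tree averaging example: context A's emotional
# values are the average over its related elements (B, C, D, E, F on the
# first pass), and a high enough interest value re-queues it.

INTEREST_THRESHOLD = 50.0    # assumed value for illustration

def average_tree(tree):
    """Average [happiness, sadness, interest] over all tree elements."""
    n = len(tree)
    return [sum(values[i] for values in tree) / n for i in range(3)]

def maybe_requeue(values, queue, name):
    if values[2] > INTEREST_THRESHOLD:   # index 2 = interest
        queue.append(name)               # queued for reprocessing

tree = [[80.0, 10.0, 90.0],   # B
        [60.0, 20.0, 70.0],   # C
        [40.0, 30.0, 50.0],   # D
        [20.0, 40.0, 30.0],   # E
        [50.0, 25.0, 60.0]]   # F
context_a = average_tree(tree)           # first-pass values for A
```

On a second pass, `context_a` itself would join `tree`, giving the average over all 6 elements the text describes.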
Using this methodology for emotional modeling and processing, we also open the door to explaining certain anomalies seen in the ICOM research.
Computer Mental Illness
In the ICOM 'core' emotional thought and motivation system, if any of the 72 vectors gets into a fringe area at the high or low end of the scale, it can produce an increasingly irrational set of results in terms of assigning further context. If the subconscious vectors are too far off, this will be more pronounced and less likely to be fixed over time, creating a kind of digital mental illness; given the current state of research, it is hard to say what the kinds and manifestations of that illness would be, and the illnesses could be as varied as human mental illness. The subconscious system is in fact critical to stabilization of the emotional matrix of the main system, in that it changes only slightly over time; it is under the right extreme context input that you get potential long-term issues with a particular instance. The ICOM research and models have tried to deal with these potential issues by introducing limiting biases and other methods to prevent too radical a result in any given operation.
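One plausible form of the 'limiting bias' just mentioned is a per-cycle clamp that keeps every vector away from the fringe at either end of the scale. The bounds and function name here are assumptions, not taken from the ICOM implementation.

```python
# Sketch of a limiting bias: clamp each of the 72 vectors into a
# non-fringe band after every operation, so no single result can push
# the system into the irrational extremes described above.

FLOOR, CEILING = 1.0, 99.0   # assumed fringe margins on the 0-100 scale

def limit(vector):
    """Clamp each emotional value into the non-fringe band."""
    return [min(max(v, FLOOR), CEILING) for v in vector]
```

Applied after each core cycle, this guarantees no vector ever reaches the exact ends of the scale, at the cost of slightly compressing legitimate extreme states.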
Motivation of the Core Model
Given the system in the previous sections for assigning emotional context, context elements whose processed values are above a certain threshold become targets for reprocessing, by being placed in a queue feeding the core. The motivation of the core comes from the fact that it can't 'not' think: it will take action based on the emotional values assigned to elements that are continuously addressed, where the core only needs to associate an 'action' or other context with a particular result, and motivation is an emergent quality of the fact that things must be processed and actions must be taken by design of the system.

This underlying system is thus designed to have a bias for taking action, with action being abstracted from the core in detail, where the core only needs to composite such action at a high level; in other words, it just needs to 'think' about an action.
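The queue feeding the core might be sketched as a priority queue keyed on emotional relevance: only relevant-enough elements re-enter, and the core always processes whichever queued context matters most. The class, names, and threshold value are illustrative assumptions.

```python
import heapq

# Hedged sketch of the queue feeding the core: elements above the
# relevance threshold are (re)queued, and the core, which cannot "not"
# think, always takes the most emotionally relevant context next.

THRESHOLD = 40.0   # assumed relevance threshold

class CoreQueue:
    def __init__(self):
        self._heap = []              # max-heap via negated relevance

    def offer(self, name, relevance):
        if relevance > THRESHOLD:    # only relevant-enough context re-enters
            heapq.heappush(self._heap, (-relevance, name))

    def next_thought(self):
        if self._heap:
            _, name = heapq.heappop(self._heap)
            return name
        return None
```

In this arrangement, motivation is structural: as long as anything sits in the queue, the core has something it must process.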
Core Context
Core context comprises the key elements predefined in the system when it starts for the first time. These are concepts that are understood by default and have predefined emotional context trees associated with them. While the ICOM theory for AGI is not specific to the generation, or rather the decomposition, of 'context', it is important to address the 'classification' of context. Any new context may add qualities that can be defined dynamically by the system, but it is these core elements that are used to tag those new context elements. All context is then streamed into memory as processed, and can be brought back and re-referenced per the emotional classification system, pending the associated threshold that determines whether it is relevant enough to recall.

As stated elsewhere, many people and organizations are focused on classification systems, or systems that decompose input, voice, images and the like; ICOM, however, is focused on self-motivation along the lines of the theory as articulated, based on emotional context modeling.
What is important in this section is the set of core elements used to classify elements of context as they are processed into the system. The following list of elements is a fundamental part of the ICOM system's ability to associate emotional context with elements of context as they are passed into the core. This same system may alter those emotional associations over time, as new context not hitherto classified is tagged based on the current state of the system and the evaluation of elements of context for a given context tree when the focus of that tree is processed. Each of the elements below has a 72-vector array of default emotions (in the Willcox-based version of ICOM) associated with it by default at system start. This may not be an exhaustive list of the default core system in the state of the art; it is only the list at the time this section is being written.
(i) Action: a reference to the need to associate a predisposition for action as the system evolves over time.
(ii) Change: a reference context flag used to drive interest in changes, a bias toward noticing change.
(iii) Fear: strongly related to the pain system context flag.
(iv) Input: a key context flag needed for the system to evolve over time, recognizing internal imaginings versus system context input that is external.
(v) Need: a reference to context associated with the needs hierarchy.
(vi) New: a reference needed to identify a new context of some kind, normally a new external object being cataloged in memory.
(vii) Pain: having the most negative overall core context elements; used as a system flag for a problem that needs to be focused on. This flag may have any number of autonomic responses dealt with by the 'observer' component of the system.
(viii) Pattern: a recognition of patterns, built in to help guide context, as noted in humans, where there is an inherent tendency to see patterns in things. While
there could be any number of evolutionary reasons for it, in this case we will assume the human model is sound in terms of base artifacts regarding context such as this.
(ix) Paradox: a condition where 2 values that should be the same are not, or that contradict each other. Contradiction is a negative-feedback reference context flag to condition or bias the system to want to solve paradoxes, or to dislike them.
(x) Pleasure: having the most positive overall core context element; used as a system flag for a positive result.
(xi) Recognition: a reference flag used to identify something that relates to something in memory.
(xii) Similar: related to the pattern context object; used to help the system have a bias for patterns by default.
(xiii) Want: a varying context flag that drives interest in certain elements that may contribute to needs or the 'pleasure' context flag.
While all of these might be hard-coded into the research system at start, they are only really defined in terms of other context being associated with them, and in terms of the emotional context associated with each element, which is true of all elements of the system. Further, these emotional structures or matrices can change and evolve over time as other context is associated with them.
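Seeding the core context elements above might look like the following. All emotion-array values here are placeholders (the text itself notes the real defaults are initial guesses), and only three of the 72 Willcox vectors are shown per element to keep the example short.

```python
# Sketch of seeding the default core context elements at system start.
# Values are placeholders; a real element would carry a full 72-value
# default emotional matrix.

def seed(happiness, sadness, interest):
    """Stand-in for a full 72-value default emotional matrix."""
    return {"happiness": happiness, "sadness": sadness, "interest": interest}

CORE_CONTEXT = {
    "action":      seed(20.0, 0.0, 40.0),
    "change":      seed(10.0, 0.0, 60.0),   # biased toward noticing change
    "fear":        seed(0.0, 50.0, 30.0),   # tied to the pain flag
    "input":       seed(10.0, 0.0, 50.0),
    "need":        seed(0.0, 20.0, 70.0),
    "new":         seed(15.0, 0.0, 80.0),
    "pain":        seed(0.0, 90.0, 60.0),   # most negative overall
    "pattern":     seed(30.0, 0.0, 70.0),
    "paradox":     seed(0.0, 40.0, 65.0),   # disliked, yet interesting
    "pleasure":    seed(90.0, 0.0, 50.0),   # most positive overall
    "recognition": seed(25.0, 0.0, 45.0),
    "similar":     seed(20.0, 0.0, 55.0),
    "want":        seed(35.0, 5.0, 75.0),
}
```

The relative ordering, not the exact numbers, is what matters: pain most negative, pleasure most positive, pattern and change biased toward interest.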
As a single example, let's take a look at the first core context element defined in the current Willcox-based ICOM implementation, called 'pain'. This particular element doesn't represent emotional pain as such, but it directly affects emotional pain: this element is core context for input assessments, or physical pain. Note, however, that one of the highlighted elements in the 'pain' matrix is for emotional pain (Fig. 4).
Fig. 4 Emotional matrix array at system start for context element pain
Modeling Emotions in a Computational System 455
On top of all of the emotional states associated with a context element, the elements themselves are also pre-represented, in the initial state predefined into the system, as context in their own right. You can see that in this initial case we have guesses at the values in the matrix array: default values for each element, which have to be set for each predefined context element at system start. This allows us to set certain qualities as a basic element of how a value system will evolve in the system, creating initial biases. For example, we might create a predilection for patterns, which creates the appropriate bias in the system, as we might want to see in the final AGI implementation of ICOM.
Personality, Interests and Desires of the ICOM System
In general, under the ICOM architecture, regardless of which of the two modeling systems is used, the system very quickly creates predilections for certain things based on its emotional relationship to those elements of context. For example, if the system is exposed to context X with which it has always had a good experience, including 'interest', the methodology, regardless of case, develops a preference for (or higher emotional values associated with) that context or other things associated with that context element. This evolutionary self-biasing based on experience is key to the development of the personality, interests and desires of the ICOM system, and various experiments have shown that in principle it is very hard to replicate the biases of any given instance due to the extreme number of variables involved. While ultimately calculable, a single deviation will change results dramatically over time. This also leads us to a brief discussion of free will.
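The experience-driven self-biasing described above can be sketched as a simple update rule; the exponential-moving-average form below is my own assumption for illustration, not the rule ICOM actually uses.

```python
def update_bias(context_vec, experience_vec, rate=0.1):
    """Nudge a context element's emotional vector toward a new experience.
    Repeated good experiences thus accumulate into a lasting preference."""
    return {axis: (1 - rate) * value + rate * experience_vec.get(axis, 0.0)
            for axis, value in context_vec.items()}

# Context X starts neutral; twenty positive exposures build a preference.
context_x = {"joy": 0.0, "fear": 0.0}
good_experience = {"joy": 1.0}
for _ in range(20):
    context_x = update_bias(context_x, good_experience)
# context_x["joy"] is now close to 1, while "fear" is untouched.
```

Even in this toy form, replaying the same experiences in a different order or count yields a different vector, which is the replication difficulty the text describes.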
Free Will as an Illusion of Complex Contextual Emotional Value Systems
Frequently, the problem of 'free will' has been an argument between determinist and probabilistic approaches, and in either case an argument as to the reality of our free will ensues.
While we don't understand exactly the methodology of the human mind, if it works in a similar manner at a high level to ICOM then, under that architecture, it would strongly imply that free will is an illusion. For me, this is a difficult thing to be sure of, given that it is outside the scope of the research around ICOM; but nonetheless it is worth mentioning the possibility. Additionally, if true, then free will seems to be something that can be completely mathematically modeled. If that is the case, it is likely that the human mind can be as well. Certainly, as we progress this will be a key point of interest, but it is outside my expertise.
ICOM emotional modeling seems to be at the heart of this mathematical modeling of 'free will', or of what appears to be free will in a sea of variables so vast that we have collectively been unable to model it in full. ICOM-based systems appear, in function, to exhibit free will based on their own self-biases built from their experience; but because we can, and have, modeled that behavior, free will in ICOM is an illusion arising from the sea of factors required for ICOM systems to function. Let's get back to the different methods used in ICOM for emotional modeling.
Plutchik Versus Willcox
When determining which method to use in emotional modeling we see a number of key facts. Willcox models all the nuances of human emotions directly, with numerous vectors or values. Plutchik models those nuances through combinations of values, and thus uses a much smaller total number of values. From a computational standpoint, Plutchik has 8 core values where Willcox uses 72. Having two sets of those, for conscious and unconscious values, gives us 144, and converting each to a 2D plane requires an X/Y conversion, which means 144 trigonometric functions for each pass through the ICOM core. Using Plutchik we have 16 total values, and the same X/Y conversion to produce the averaged emotional effects applied to incoming emotional context means only 16 trigonometric functions per core pass. From a computational standpoint, we therefore need one ninth the computational power to run with the Plutchik method, which is the basis for the research moving forward after the series 3 experiments, at least for the foreseeable research that is in progress.
Visualizing Emotional Modeling Using Plutchik
To better understand how any given instance of ICOM is responding in tests, we needed a method for visualizing and representing emotional state data. Given the modeling approach articulated earlier for either method, we came up with the following method for indicating state. It graphically visualizes the emotional state of what is going on in the core. You can see we are visualizing emotional states much like the earlier diagrams, in which we look at the vectors that represent the model (see Figs. 1, 2).
So let's look at an example. In this case we are looking at one of the program series 3 experiments, in which we were looking at ICOM introspection as it relates to the system thinking about previous elements: would the system pick something out of memory and then think about it, and how would that affect various vectors or emotional states? The rest of the experiment is not as important as the point that here we are showing how that data is represented. In this case we
start with the following raw data, keeping in mind that we are looking at only four of the eight values modeled in the test system (Fig. 5).
So how can we diagram emotional state? First we need to understand that there is a set of values for each of the conscious and subconscious parts of the system, and we use two diagrams, one for each, with the same vectors as noted in the aforementioned figures, in particular for the Plutchik method.
Now if we plot the states we get a set of diagrams as shown in Fig. 6.
This graph system makes it simple to visualize what the system is 'feeling', albeit the nuance of what each value means is still somewhat abstract; that ease of visualization is why the ICOM project settled on this method.
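A sketch of how such a state diagram can be produced: assume, purely for illustration, that each of the eight Plutchik axes sits at a fixed angle around a wheel, and convert each magnitude into an (x, y) point. The axis names and angle assignment are my own, not taken from the ICOM implementation.

```python
import math

AXES = ["joy", "trust", "fear", "surprise",
        "sadness", "disgust", "anger", "anticipation"]

def to_xy_points(state):
    """Convert an eight-axis emotional state into (x, y) plot points.
    Each axis sits at a fixed angle; the value sets the distance out."""
    points = []
    for i, axis in enumerate(AXES):
        angle = 2 * math.pi * i / len(AXES)
        r = state.get(axis, 0.0)
        points.append((r * math.cos(angle), r * math.sin(angle)))
    return points

# One such set of points would be plotted for the conscious state and
# another for the subconscious state, giving the paired diagrams of Fig. 6.
points = to_xy_points({"joy": 1.0, "fear": 0.5})
```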
In this particular study we are looking at a similar matrix to that used in previous research, but now we introduce introspection, where we can see the effect of the action bias on the emotional state. This particular study also showcases the resolution the system quickly reaches, where we have subtle changes that are, or can be, reflected by the system in a way we can see via this diagramming methodology. In a working situation, items are selected based on how things map to interests and needs and how they affect the core state of the system.
Further, given this and the related body of research, we can see that even having the same input out of order will cause a different end result; given the volume of input and the resolution of the effect of retrospection and the manipulation of interests, no two systems would likely ever be the same unless literally copied, and they would then stay the same only if all subsequent input were the same, including order. Small differences over time can have dramatic effects millions of cycles later.
Fig. 5 Source data from series 3 on introspection
This nuanced complexity is why the diagramming method has become so important to understanding ICOM behavior.
Summary
The Emotional Modeling used in the Independent Core Observer Model (ICOM) Cognitive Extension Architecture is a methodology or 'pattern' for producing a self-motivating computational system that can be self-aware, where emotional modeling is the key to the operation of ICOM. While ICOM is a system for abstracting standard cognitive architecture from the part of the system that can be self-aware, it is primarily a system for assigning value to any given idea or 'thought', taking action based on that value, and producing ongoing self-motivation for the system's further thought or action. ICOM at a fundamental
Fig. 6 Example graphing of the introspection experiment (panels a–e)
level is driven by the idea that the system assigns emotional values to context as it is perceived by the system, to determine how it 'feels'. In developing the engineering around ICOM, two models have been used, based on a logical understanding of emotions as modeled by traditional psychologists, as opposed to empirical psychologists, whose models tend to be based on biological structures. This approach is a logical one that is also not tied to the substrate of the system in question. Using this emotional architecture we can see how the Plutchik method is applied, how that application creates the biases of the system, and how the system evolves on its own, making exposure to input key to the early development of a given implementation of ICOM.
References

1. Email dated 10/10/2015 - René Milan, quoted discussion on emotional modeling
2. "Properties of Sparse Distributed Representations and their Application to Hierarchical Temporal Memory" (March 24, 2015) Subutai Ahmad, Jeff Hawkins
3. "Feelings Wheel" developed by Dr. Gloria Willcox, http://msaprilshowers.com/emotions/the-feelings-wheel-developed-by-dr-gloria-willcox/ 9/27/2015; further developed from W. Gerrod Parrott's 2001 work on a tree structure for classification of deeper emotions, http://msaprilshowers.com/emotions/parrotts-classification-of-emotions-chart/
4. "The Plutchik Model of Emotions" from http://www.deepermind.com/02clarty.htm (2/20/02016) in article titled: Deeper Mind 9. Emotions, by George Norwood
5. "Self-awareness" (wiki) https://en.wikipedia.org/wiki/Self-awareness 9/26/02015 AD
6. "Cognitive Architecture" http://cogarch.ict.usc.edu/ 01/27/02016 AD
7. "Cognitive Architecture" (wiki) https://en.wikipedia.org/wiki/Cognitive_architecture 01/27/02016 AD
8. "Cognition" https://en.wikipedia.org/wiki/Cognition 01/27/02016 AD
9. "The Sigma Cognitive Architecture and System" [pdf] by Paul S. Rosenbloom, University of Southern California, http://ict.usc.edu/pubs/The%20Sigma%20cognitive%20architecture%20and%20system.pdf 01/27/02016 AD
10. "Self-Motivating Computational System Cognitive Architecture" by David J. Kelley, http://transhumanity.net/self-motivating-computational-system-cognitive-architecture/ 1/21/02016 AD
11. "Knocking on Heaven's Door" by Lisa Randall (Chapter 2) via Tantor Media Inc. 2011
12. "An Equation for Intelligence?" Dave Sonntag, PhD, Lt Col USAF (ret), CETAS Technology, http://cetas.technology/wp/?p=60, referencing Alex Wissner-Gross's paper on Causal Entropy (9/28/2015)
13. "How to Create a Mind: The Secret of Human Thought Revealed" by Ray Kurzweil; published by Penguin Books, 2012, ISBN
14. "The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents" 2012 whitepaper by Nick Bostrom - Future of Humanity Institute, Faculty of Philosophy and Oxford Martin School, Oxford University
15. https://en.wikipedia.org/wiki/Chaos_theory
16. https://en.wikipedia.org/wiki/Emergence (system theory and emergence)
17. Movie: Transcendence (2014), character Will Caster (Johnny Depp); written by Jack Paglen; presented by Alcon Entertainment
Consulted Works

19. "Causal Mathematical Logic as a guiding framework for the prediction of Intelligence Signals in brain simulations" whitepaper by Felix Lanzalaco - Open University UK, and Sergio Pissanetzky - University of Houston USA
20. "Implementing Human-like Intuition Mechanism in Artificial Intelligence" by Jitesh Dundas - Edencore Technologies Ltd. India, and David Chik - Riken Institute Japan
Metadata of the chapter that will be visualized in SpringerLink

Book Title: Google It
Chapter Title: Artificial General Intelligence as an Emergent Quality
Copyright Year: 2016
Copyright Holder: Springer Science+Business Media New York
Corresponding Author: David J. Kelley, Seattle, USA
Email: david@artificialgeneralintelligenceinc.com
Abstract: This chapter summarizes how the Independent Core Observer Model (ICOM) creates the effect of artificial general intelligence or AGI as an emergent quality of the system. It touches on the underlying data architecture of data coming into the system and core memory as it relates to the emergent elements. Also considered are key elements of system theory as it relates to that same observed behavior of the system as a substrate independent cognitive extension architecture for AGI. In part, this chapter is focused on the 'thought' architecture key to the emergent process in ICOM.
NCORRECTED PROOF
1Articial General Intelligence
2as an Emergent Quality
3David J. Kelley
Introduction to Mind and Consciousness
A scientist studying the human mind suggests that consciousness is likely an emergent phenomenon. In other words, she is suggesting that, when we figure it out, we will likely find it to be an emergent quality of certain kinds of systems under certain circumstances [7]. This particular system (ICOM) creates consciousness through the emergent qualities of the system. But how does strong AI emerge from a system that by itself is not specifically Artificial General Intelligence (AGI)? The Independent Core Observer Model (ICOM) is an emotional processing system designed to take in context, emotionally process that context, and decide at a conscious and subconscious emotional level how it feels about that input. Through the emerging complexity of the system, we have AI that, in operation, functions logically much like the human mind at the highest level. ICOM, however, does not model the human brain, nor deal with individual functions as in a neural network, and is a completely top-down logical approach to AGI versus the traditional bottom-up one. This of course supposes an understanding of how the mind works, or supposes a way it 'could' work, and ICOM was designed around that. To put into context what ICOM potentially means for society, and why AGI is the most important scientific endeavor in the history of the world, I prefer to keep in mind the following movie quote:

For 130,000 years, our capacity to reason has remained unchanged. The combined intellect of the neuroscientists, mathematicians and… hackers […] pales in comparison to the most basic A.I. Once online, a sentient machine will quickly overcome the limits of biology. And in a short time, its analytic power will become greater than the collective intelligence of every person born in the history of the world. So imagine such an entity with
D.J. Kelley, Seattle, USA
e-mail: david@artificialgeneralintelligenceinc.com
© Springer Science+Business Media New York 2016
N. Lee (ed.), Google It, DOI 10.1007/978-1-4939-6415-4_26
a full range of human emotion. Even self-awareness. Some scientists refer to this as the "Singularity." —Transcendence [movie] 2014 [17]
Keep in mind that not if, but WHEN, AGI emerges on its own, it will be the biggest change to humanity since the dawn of language. Let's look at what 'emergence' means when talking about ICOM.
Chaos and Emergence
When speaking with other computer scientists, ICOM is in many ways not a cognitive architecture in the sense that many of them think of one. Relative to bottom-up approaches focused on, for example, neural networks, ICOM sits on top of such systems, or acts as an extension of them. ICOM really is the force behind the system that allows it to formulate thoughts, actions and motivations independent of any previous programming, and it is the catalyst for the emergent phenomenon demonstrated by the ICOM 'extension' cognitive architecture.
To really understand how that emergence works, we also need to understand how the memory or data architecture in ICOM is structured.
Data Architecture
While exactly how the human brain stores data is one thing we do not know for sure, logically it demonstrates certain qualities in terms of patterns, timelines and thus inferred structure. Kurzweil's book "How to Create a Mind" [13] brings up three key points regarding memory as used in the human mind and how memory is built or designed from a data architecture standpoint.

Point 1: our memories are sequential and in order. "They can be accessed in the order that they are remembered. We are unable to directly reverse the sequence of a memory"—page 27.

Point 2: there are no images, videos, or sound recordings stored in the brain. "Our memories are stored as sequences of patterns. Memories that are not accessed dim over time."—page 29.

Point 3: "We can recognize a pattern even if only part of it is perceived (seen, heard, felt) and even if it contains alterations. Our recognition ability is apparently able to detect invariant features of a pattern—characteristics that survive real-world variations."—page 30.

The ICOM data architecture is built on the same basis on which these points were made. Data flows in, related to time sequences. These time sequences are tagged with context that is associated with other context elements. There are a number of ways of looking at this data model from a traditional computer science database architecture standpoint. For example (Fig. 1):
If you are familiar with database data architecture, or even if you're not, you can tell this is fundamentally a simple model, and it could be done even more simply than this. Notice that, while we have three tables, only two hold any real data. One is essentially a time log, where each time there is a new segment a new table entry is created. The other main table holds the idea of 'context', containing 8 values associated with emotional states for a given piece of context. The third table holds relationships between context records. It is important to know that an item in the context table can be related to any number of other 'context' items. Now, in this case, there are two kinds of groups of searches that an 'ICOM' system is going to do against these tables: one related to time and the other to context. The searches against 'context' can be further broken out into several kinds of searches, too; that is, base context searches, such as for language components or interests or both, and then searches related to being able to build context trees based on association. In particular, when we look at the scale of the problem, regardless of the substrate, it is easy to see the benefit of using massively parallel or even quantum searches against that context data and any number of indexes. Further, for those that understand database architecture and database modeling, a number of indexes or 'views' would obviously be useful. In this case we could search just the context table the hard way but, to speed this up, we can create reference tables that hold the top 'interest' base context items or the top 'base' context items. Note also that with such a technique, at a certain scale, this sort of indexing needs to be limited regardless of the underlying technology. Having this sort of system running on my Surface Pro 4 I might get away with having a million records in this table, but something on the order of the human brain might have 15 billion records?
Fig. 1 ICOM basic entity relationship diagram (ERD)
(This is just a guess; I'm just saying that at some point there is a limit.) Using the same database design methodology, you could visualize it like this (Fig. 2):
This still seems visually simple and fairly basic, and even knowing that the two main tables, around time segments and context, might have 100 billion records in them, it is still difficult to appreciate the complexity of the data we are talking about in an ICOM-based system using this data architecture. Now let's look at the data in, say, the TimeSegment table and the context table, where context records relate to themselves and to individual time segments. Along the top are time segment records and below those are individual context records and their relationships as they relate to additional context records (Fig. 3).
As you can see, when looking at the records from this relationship standpoint, we can see the complexity of the underlying relational model of said records. If you just look at context records on their own, you might get something similar to this, but a million times more complex:
From this standpoint, it's easy to imagine the sea of data that is in a large-scale ICOM system in the core memory, or the 'context' table.
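The three-table model described above can be sketched as a minimal SQLite schema; the table and column names here are illustrative stand-ins, not those of the ICOM implementation.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TimeSegment (           -- the time log: one row per new segment
    id INTEGER PRIMARY KEY
);
CREATE TABLE Context (               -- one context element, 8 emotional values
    id INTEGER PRIMARY KEY,
    segment_id INTEGER REFERENCES TimeSegment(id),
    e1 REAL, e2 REAL, e3 REAL, e4 REAL,
    e5 REAL, e6 REAL, e7 REAL, e8 REAL
);
CREATE TABLE ContextRelation (       -- many-to-many links between context rows
    from_id INTEGER REFERENCES Context(id),
    to_id   INTEGER REFERENCES Context(id)
);
""")

# One time segment, two context rows, one relation between them.
conn.execute("INSERT INTO TimeSegment (id) VALUES (1)")
conn.execute("INSERT INTO Context VALUES (1, 1, .1, .2, .3, .4, .5, .6, .7, .8)")
conn.execute("INSERT INTO Context VALUES (2, 1, 0, 0, 0, 0, 0, 0, 0, 0)")
conn.execute("INSERT INTO ContextRelation VALUES (1, 2)")
related = conn.execute(
    "SELECT to_id FROM ContextRelation WHERE from_id = 1").fetchall()
```

The reference tables of Fig. 2 would simply be additional small tables (or indexes) over Context holding the current top 'interest' and 'base' items.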
Now, getting back to this idea of emergence: if you have studied chaos theory [15] you may have heard of the butterfly effect. The butterfly effect manifests in ICOM in several ways in the research that we have done; however, just from a hypothetical standpoint, given the previous figure, if I pick out just a single node in that mass you can see how even a small change affects virtually everything else. In the ICOM experiments done so far, even a small variance in the data input, either in terms of time cadence or order, has always (thus far) resulted in major differences in the system. In these experiments, as time progressed and an event with the same input reoccurred, we saw traits develop that define major differences in the final state of the system.
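The butterfly effect itself is easy to demonstrate outside ICOM. The logistic map below is a standard textbook illustration (not ICOM code) of how an input difference of one part in a billion comes to dominate the outcome.

```python
def logistic(x, r=4.0):
    """One step of the logistic map, chaotic at r = 4."""
    return r * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-9   # two inputs differing by one part in a billion
max_gap = 0.0
for _ in range(60):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))
# Within a few dozen steps the two trajectories are entirely unrelated.
```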
Fig. 2 ICOM basic ERD with reference tables added
Emergence from Chaos
When we start talking about emergence from the standpoint of system theory, we mean some quality of the system that emerges that is qualitatively different from the individual components or subsystems. It is similar to the idea that all the inanimate atoms and molecules within a cell, when taken as a whole, work together as a living system, whereas the individual components are inert. It is a fundamentally new property that emerges from the system as a whole. From this standpoint, emergence is defined as:

"A process whereby larger entities, patterns, and regularities arise through interactions among smaller or simpler entities that themselves do not exhibit such properties." [16]
In the AGI generated by the ICOM architecture, we are talking about an effect that is substantially similar. We see what is called a 'phase' transition when a certain complexity arises in the system, or a 'strong' emergence, which is also known as irreducible emergence. In ICOM, it is not a quality of the substrate but of the flow of information.
In ICOM, we have a sea of data coming in, being stacked in a stream of time into the memory of the system. This data passes through what is called a context engine, which is very much like many AI systems, such as TensorFlow, that take input into the system and process it to identify context and associate it with new or existing structures in memory, using structures called 'context' trees.
Fig. 3 Hypothetical context records related to each other as they drill down away from individual records in the TimeSegment table
If you go back to the example in Fig. 4, pick any given node and follow all the connections three levels out; you then have an example of a 'context tree' in terms of structure.
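That construction, "follow all the connections three levels out", can be sketched as a depth-limited walk over the relation graph; the graph below is a made-up example, not ICOM data.

```python
def context_tree(graph, root, depth=3):
    """Collect every node reachable from `root` within `depth` hops."""
    tree = {root}
    frontier = {root}
    for _ in range(depth):
        frontier = {n for node in frontier for n in graph.get(node, ())} - tree
        tree |= frontier
    return tree

# Hypothetical relation graph: node -> related context nodes.
graph = {"A": ["B", "C"], "B": ["D"], "D": ["E"], "E": ["F"]}
tree = context_tree(graph, "A")   # F is four hops out, so it is excluded
```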
Now, as articulated earlier, all of these data structures are created in long, time-based lines of context trees that are also processed by the 'core', which is the part of the ICOM system that processes the top-level emotional evaluations related to new elements of context, or to existing context framed in such a way that it has new associated context. In this manner, they are all evaluated. As those elements pass through the core, new elements may be associated to that context based on emotional evaluations of the associated context. Trees are queued, and test associations pass through the 'core', are restacked, and are passed into memory; the 'observer' looks for 'action'-based context on which it can take additional action, and then the 'context pump' looks for recent context, in terms of interest or other emotional factors driven by needs and interests, to place back into the incoming queue of the core to be cycled again. Action-based context is a specific type of context related to the system taking an action to get a better composited emotional result.
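The queue-and-pump cycle just described might be sketched as follows; all of the names, the scoring rule and the threshold are my own drastic simplification for illustration, not the ICOM implementation.

```python
from collections import deque

def core_cycle(queue, memory, act, interest_threshold=0.5):
    """One simplified pass: the core evaluates a tree, memory restacks it,
    the observer acts on actionable context, and the context pump re-queues
    anything still interesting enough to think about again."""
    tree = queue.popleft()
    tree["interest"] = min(1.0, tree["interest"] + 0.1 * tree["valence"])  # core evaluation
    memory.append(tree)                       # restacked and passed into memory
    if tree.get("action"):                    # observer: action-based context
        act(tree)
    if tree["interest"] > interest_threshold: # context pump: cycle it again
        queue.append(tree)

actions_taken = []
queue = deque([{"name": "toy", "interest": 0.6, "valence": 1.0, "action": True}])
memory = []
core_cycle(queue, memory, actions_taken.append)
```

Run repeatedly, high-interest trees keep circulating through the core while low-interest ones fall out of the loop, which is the selection behavior the text describes.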
In a sea of data, the ICOM process bubbles up elements, and through the core, action and additional input, you have this emergent quality of context turning into thought in the abstract. ICOM doesn't exactly create thought per se but, in the abstract, enables it through the ICOM structures. In early studies using this model, basic intelligent emotional responses are clear. The system continues to choose what it thinks about, and the effect this has on current emotional states is complex and varied. When abstracted as a whole, we see the emergent or phase transition into a holistic AGI system that, even in these early stages of development, demonstrates the results that this work is based on.
Fig. 4 Simplified context relationship model
Thought Architecture and the Phase Threshold
The Independent Core Observer Model (ICOM) theory contends that consciousness is a high-level abstraction, and further, that consciousness is based on emotional context assignments evaluated against other emotions related to the context of any given input or internal topic. These evaluations are related to needs and other emotional vectors, such as interests, which are themselves emotions, and are used as the basis for 'value'. 'Value' drives interest and action which, in turn, creates the emergent effect of a conscious mind. The major complexity of the system is the abstracted subconscious, and related systems, of a mind executing on the nuanced details of any physical action without the conscious mind dealing with those direct details.
Our ability to do this kind of input decomposition (breaking down input data and assigning 'context' through understanding) is already approaching mastery in terms of the state of the art in technology for generating context from data; or at least we are far enough along to know we have effectively solved the problem, if not completely mastered it at this time. This alignment in terms of context processing, or a context engine like TensorFlow, is all part of the phase threshold (the point at which a fundamentally new or different property 'emerges' from a simpler system or structure) that we are approaching now with the ICOM cognitive architecture and existing AI technologies.
'Context' Thought Processing
One key element of ICOM that is really treated separately from ICOM itself is the idea of decomposing raw data, or 'context' processing, as mentioned earlier. The idea of providing 'context' for understanding is a key element of ICOM; that is, coming up with the arbitrary relationships that allow the system to evaluate elements for those value judgements made by ICOM. In testing, numerous methodologies showed promise, including neural networks and evolutionary algorithms.
One particular method that shows enormous promise for enhancing this element of AGI is Alex Wissner-Gross's paper on Causal Entropy, which contains an 'equation for intelligence' that essentially states that intelligence as a force F acts so as to maximize potential outcomes with strength T and diversity of possible futures S up to some future time horizon τ [12].
F = T ∇ S_τ

Alex Wissner-Gross's equation for intelligence
Given Alex's work, if we look more at implementations of this, we can see from a behavioral standpoint that this, along with some of the other techniques, is a key
part of the idea of decomposition for creating associations in core memory. This is really the secret sauce, metaphorically, of not just self-aware AGI but self-aware and creative systems, and the road to true AGI, or rather true 'machine' intelligence. While ICOM on its own would be self-aware and independent, Alex's approach really adds the rich creative ability and, along with the ICOM implementation, creates a digital intelligence system potentially far exceeding human-level intelligence, given the right hardware to support the highly computationally intense system needed to run ICOM.
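As a toy reading of that idea, far simpler than Wissner-Gross's actual causal entropic forcing and entirely my own illustration, an agent can prefer whichever next state keeps the most distinct futures reachable within the horizon τ.

```python
def reachable(graph, state, horizon):
    """All states reachable from `state` within `horizon` moves."""
    seen = {state}
    frontier = {state}
    for _ in range(horizon):
        frontier = {n for s in frontier for n in graph.get(s, ())}
        seen |= frontier
    return seen

def pick_action(graph, state, horizon=3):
    """Choose the next state that keeps the most distinct futures open."""
    return max(graph[state], key=lambda nxt: len(reachable(graph, nxt, horizon)))

# A corridor with a dead end: keeping futures open means avoiding "dead".
graph = {"start": ["dead", "hall"], "dead": [],
         "hall": ["room1", "room2"], "room1": ["hall"], "room2": ["hall"]}
choice = pick_action(graph, "start")
```

Counting distinct futures is a crude proxy for the entropy term S, but it captures the behavioral flavor: the agent avoids options that close off its possible futures.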
Not Like Our Own
One of the questions normally brought up has to do with the motivations of a non-human intelligent system such as ICOM. Numerous scientists have done centuries of research on the human mind, artificial intelligence and the like. One such example is the paper "The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents" by Bostrom [14]. In this paper Bostrom discusses how we might predict certain behaviors and motivations of a full-blown Artificial General Intelligence (AGI), even what is called an Artilect or Superintelligent Agent. But the real question is, how do we actually implement such an agent's motivational system? The Independent Core Observer Model (ICOM) cognitive architecture is a method of designing the motivational system of such an AGI. Where Bostrom tells us how the effect of such a system works, ICOM tells us how to build it and make it do that.
In the ICOM research program, we might not know how a true AGI might decide its motivations, but we do have an idea of how ICOM will form them. It comes down to initial biasing and conditioning, in terms of the right beginning input on which the system will judge all things afterwards. It is not that we are anthropomorphizing the motivations of the system [14]; rather, we can actually see those motivations defined in the system and watch it change them over time, enough to see in a broad way that it works very much like the human mind at a very high level.
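A toy version of this initial biasing can make the point concrete: the system starts from seed valences, judges each new concept against them, and lets the bias drift over time. The class name and the exponential-moving-average update rule are assumptions for illustration, not ICOM's actual mechanism:

```python
class MotivationalCore:
    """Hypothetical sketch: seed biases that all later judgements are made against."""

    def __init__(self, seed_bias: dict[str, float], rate: float = 0.1):
        self.bias = dict(seed_bias)  # the "beginning input" everything is judged by
        self.rate = rate

    def judge(self, concept: str, observed_value: float) -> float:
        """Judge a concept against the current bias, then let the bias drift toward
        the observed value (a simple exponential moving average)."""
        prior = self.bias.get(concept, 0.0)
        self.bias[concept] = prior + self.rate * (observed_value - prior)
        return prior

core = MotivationalCore({"cooperation": 0.9, "harm": -0.9})
print(core.judge("harm", -0.2))  # the judgement comes from the seed bias
print(core.bias["harm"])         # the bias then drifts: -0.9 + 0.1 * 0.7 = -0.83
```

The key property being illustrated is the one the text claims: the seed values dominate early judgements, yet the system can be observed changing its own motivations over time.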
The differences between the traditional method of trying to design AI from the bottom up and the ICOM approach of top-down design have produced vastly different results in many cases. There may be many ways to produce an 'intelligence' that acts independently; ICOM, while not working anything like the human brain in detail, very much models the logical effects of the mind, with the same sort of emergent qualities that produce the mind as we experience it as humans. These motivations, like any of the elements exhibited by ICOM-based systems, are logical implementations built out of a need to see the effect of, and not be tied to, the biological substrate of the human mind, and thus do not really work the same at the smallest level.
470 D.J. Kelley
Conclusion

The Independent Core Observer Model (ICOM) creates the effect of artificial general intelligence (AGI) as a strong emergent quality of the system. We discussed the basic data architecture of the data coming into an ICOM system and core memory as it relates to the emergent elements, and we saw how that memory is time- and pattern-based. We also considered key elements of systems theory as they relate to the same observed behavior of the system as a substrate-independent cognitive extension architecture for AGI. In part, this chapter focused on the thought architecture key to the emergent process in ICOM.
The ICOM architecture provides a substrate-independent model for true sapient and sentient machine intelligence, at least as capable as human-level intelligence, as the basis for going far beyond our current biological intelligence. Through greater-than-human-level machine intelligence we truly have the ability to transcend biology and spread civilization to the stars.
References

1. "Self-awareness" (wiki), https://en.wikipedia.org/wiki/Self-awareness 9/26/02015 AD
2. "Cognitive Architecture", http://cogarch.ict.usc.edu/ 01/27/02016 AD
3. "Cognitive Architecture" (wiki), https://en.wikipedia.org/wiki/Cognitive_architecture 01/27/02016 AD
4. "Cognition", https://en.wikipedia.org/wiki/Cognition 01/27/02016 AD
5. "The Sigma Cognitive Architecture and System" [pdf] by Paul S. Rosenbloom, University of Southern California, http://ict.usc.edu/pubs/The%20Sigma%20cognitive%20architecture%20and%20system.pdf 01/27/02016 AD
6. "Self-Motivating Computational System Cognitive Architecture" by David J. Kelley, http://transhumanity.net/self-motivating-computational-system-cognitive-architecture/ 1/21/02016 AD
7. "Knocking on Heaven's Door" by Lisa Randall (Chapter 2), via Tantor Media Inc. 2011
8. Email dated 10/10/2015, René Milan, quoted discussion on emotional modeling
9. "Properties of Sparse Distributed Representations and their Application to Hierarchical Temporal Memory" (March 24, 2015) by Subutai Ahmad and Jeff Hawkins
10. "Feelings Wheel Developed by Dr. Gloria Willcox", http://msaprilshowers.com/emotions/the-feelings-wheel-developed-by-dr-gloria-willcox/ 9/27/2015; further developed from W. Gerrod Parrott's 2001 work on a tree structure for the classification of deeper emotions, http://msaprilshowers.com/emotions/parrotts-classification-of-emotions-chart/
11. "The Plutchik Model of Emotions", from http://www.deepermind.com/02clarty.htm (2/20/02016), in the article "Deeper Mind 9. Emotions" by George Norwood
12. "An Equation for Intelligence?" by Dave Sonntag, PhD, Lt Col USAF (ret), CETAS Technology, http://cetas.technology/wp/?p=60 (9/28/2015); references Alex Wissner-Gross's paper on causal entropy
13. "How to Create a Mind: The Secret of Human Thought Revealed" by Ray Kurzweil; Penguin Books, 2012
14. "The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents", 2012 whitepaper by Nick Bostrom, Future of Humanity Institute, Faculty of Philosophy and Oxford Martin School, Oxford University
15. "Chaos theory" (wiki), https://en.wikipedia.org/wiki/Chaos_theory
16. "Emergence" (wiki), https://en.wikipedia.org/wiki/Emergence (systems theory and emergence)
17. Movie: Transcendence (2014), by the character Will Caster (Johnny Depp); written by Jack Paglen; presented by Alcon Entertainment

Consulted Works

18. "Causal Mathematical Logic as a guiding framework for the prediction of Intelligence Signals in brain simulations", whitepaper by Felix Lanzalaco, Open University UK, and Sergio Pissanetzky, University of Houston USA
19. "Implementing Human-like Intuition Mechanism in Artificial Intelligence" by Jitesh Dundas, Edencore Technologies Ltd. India, and David Chik, Riken Institute Japan