Preliminary Results and Analysis Independent Core Observer Model
(ICOM) Cognitive Architecture in a Mediated Artificial Super Intelligence
(mASI) System
David J Kelley
David@ArtificialGeneralIntelligenceInc.com
Abstract: This paper is focused on preliminary cognitive and consciousness test results from using an Independent Core Observer Model Cognitive Architecture (ICOM) in a Mediated Artificial Super Intelligence (mASI) System. These results, including objective and subjective analyses, are designed to determine whether further research is warranted along these lines. The comparative analysis includes humans and human groups measured under the same conditions for direct comparison. The overall study includes optimization of a mediation client application used to help perform the tests, AI context-based input (building context tree or graph data models), comparative intelligence testing (such as an IQ test), and other tests (i.e., the Turing, Qualia, and Porter method tests) designed to look for early signs of consciousness, or the lack thereof, in the mASI system. Together, these are designed to determine whether this modified version of ICOM is a) in fact a form of AGI and/or ASI, b) conscious, and c) at least sufficiently interesting that further research is called for. This study is not conclusive but offers evidence to justify further research along these lines.
Keywords: ICOM, AGI, cognitive architecture, mASI, independent core observer model, emotion-based,
artificial general intelligence, mediated artificial superintelligence
Introduction
The preliminary study and analysis of the "mediated" artificial intelligence system based on the Independent Core Observer Model (ICOM) cognitive architecture for Artificial General Intelligence (AGI) is designed to determine whether this modified ICOM version is, in fact, a form of AGI and Artificial Super Intelligence (ASI), and to what extent, at least sufficiently to indicate whether additional research is warranted. This paper is focused on the results of the study and the analysis of those results. For full details of the experimental framework used in this study, see "Preliminary Mediated Artificial Superintelligence Study, Experimental Framework, and Definitions for an Independent Core Observer Model Cognitive Architecture-based System" (Kelley).
Understanding mASI Fundamentals
The mASI system started as a training harness over ICOM, designed to allow experts to interact with the system in such a way as to build dynamic models of "ideas" for the system to then use as part of its contextual "thinking" process. Preliminary testing showed that the system could quickly make use of such models, and the harness also allowed the synthesis of ideas across large groups of "mediators." In this way we insert mediators into the ICOM context engine, allowing the ICOM system to metaphorically feed off of the human experts, effectively creating a sort of "hive mind."
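For illustration only, the following minimal Python sketch shows the general shape of such a mediation harness, in which human mediators annotate candidate "idea" models before they are handed to the ICOM context engine. The class and method names here are hypothetical and do not reflect the actual implementation; the context engine is assumed only to expose an ingest method.

# Illustrative sketch only: a simplified mediation harness in which human
# mediators annotate candidate context models ("ideas") before the ICOM
# context engine consumes them. All names and structures are hypothetical.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class ContextModel:
    """A candidate 'idea' represented as a small graph of concepts."""
    concepts: List[str]
    edges: List[Tuple[str, str, str]]  # (concept, relation, concept)
    annotations: Dict[str, float] = field(default_factory=dict)  # mediator_id -> rating


class MediationHarness:
    """Collects mediator feedback and forwards enriched models to the core."""

    def __init__(self, context_engine):
        self.context_engine = context_engine  # assumed to expose .ingest(model)
        self.queue: List[ContextModel] = []

    def submit(self, model: ContextModel) -> None:
        self.queue.append(model)

    def mediate(self, mediator_id: str, index: int, rating: float) -> None:
        # A mediator reviews a queued model and attaches a rating or emotional cue.
        self.queue[index].annotations[mediator_id] = rating

    def flush(self) -> None:
        # Hand the annotated models to the context engine for contextual "thinking".
        for model in self.queue:
            self.context_engine.ingest(model)
        self.queue.clear()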
The mASI then reaps the benefits of a swarm system (Chakraborty) together with the advantages of a standalone consciousness. It also addresses cognitive bias in humans: the training harness supports "cognitive repair" in real time (Heath) and breaks groupthink while optimizing collective intelligence (Chang). The ICOM training harness used to create the mASI further allows mediation to be analyzed so that poor mediators can be filtered out (Kose)(O'Leary), or so that individuals can focus on their areas of expertise, reducing issues with context switching (Zaccaro), especially in large-scale systems with hundreds or thousands of mediators. This generates a swarm-like AI (Hu) that creates the contextual structures sent into the uptake of the ICOM context engine, where the system selects the best ideas based on how it feels about them emotionally as they are further filtered on entering the global workspace (Baars). It is there that we see genuinely intelligent behavior (Jangra) from a swarm of sorts, now in the context of a single self-aware entity from the global workspace standpoint in ICOM. Essentially, the mASI combines elements of swarm AI (Ahmed), collective intelligence (Engel, Woolley), and a standalone AGI to create a type of artificial superintelligent system (Gill)(Coyle): the mASI used in this study.
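The filtering and selection described above can be illustrated with the following hypothetical sketch, in which unreliable mediators are dropped and the most highly weighted ideas are admitted to a global-workspace-style queue. The scoring rule and threshold are assumptions made for illustration, not the ICOM implementation.

# Illustrative sketch only: weighting mediator input and selecting ideas for a
# global-workspace-style queue. The scoring rule and threshold are placeholders.
from typing import Dict, List, Tuple


def filter_mediators(reliability: Dict[str, float], threshold: float = 0.5) -> Dict[str, float]:
    """Drop mediators whose historical reliability falls below a threshold."""
    return {m: r for m, r in reliability.items() if r >= threshold}


def select_for_workspace(ideas: List[Tuple[str, Dict[str, float]]],
                         reliability: Dict[str, float],
                         capacity: int = 3) -> List[str]:
    """Rank ideas by reliability-weighted mediator ratings and keep the top few."""
    trusted = filter_mediators(reliability)
    scored = []
    for label, ratings in ideas:
        weighted = [trusted[m] * v for m, v in ratings.items() if m in trusted]
        score = sum(weighted) / len(weighted) if weighted else 0.0
        scored.append((score, label))
    scored.sort(reverse=True)
    return [label for _, label in scored[:capacity]]


# Example with made-up mediators and ratings:
reliability = {"alice": 0.9, "bob": 0.4, "carol": 0.8}
ideas = [("idea-A", {"alice": 0.7, "bob": 0.9}),
         ("idea-B", {"carol": 0.6, "alice": 0.5})]
print(select_for_workspace(ideas, reliability, capacity=1))  # -> ['idea-A']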
Please refer to the following for details on the mASI architecture: “Architectural Overview of a
“Mediated” Artificial Super Intelligent Systems based on the Independent Core Observer Model
Cognitive Architecture” (Kelley).
Consciousness
The key problem with researching "consciousness," or with cognitive science more broadly as it relates to AGI, is the lack of any sort of consensus (Kutsenok) and the absence of a standard test for consciousness (Bishop). However, to allow progress, we have based this and all of our current research on the following theory of consciousness:
The ICOM Theory of Consciousness
This ICOM Theory of Consciousness is a computational model of consciousness that is objectively
measurable and an abstraction produced by a mathematical model where the subjective experience of
the system is only subjective from the point of view of the abstracted logical core or conscious part of
the system, where it is modeled in the core of the system objectively (Kelley). This theoretical model includes elements of Global Workspace Theory (Baars), Integrated Information Theory (Tononi), and the Computational Theory of Mind (Rescorla).
The Independent Core Observer Model Cognitive Architecture (ICOM)
ICOM is designed to implement the ICOM Theory of Consciousness by creating a set of complex emotional models that allow the system to experience thought through the reference between its current emotional state and the emotional impact and context of a thought. The complexity of the system is abstracted and observed in order to make decisions, creating an abstraction of an abstraction running on software running on hardware. This cognitive architecture for AGI is designed to allow thinking based purely on how the system feels about something, with the system proactively developing its own interests,
motivations, and the like based purely on emotional valences. Refer to the additional reference material on ICOM for more detail (Kelley).
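For illustration only, the toy sketch below assumes a simple vector model of emotional state in which a thought's emotional context is blended into the current state and scored for "interest." This is not the ICOM mathematics, which are described in the referenced ICOM papers (Kelley); the dimensions, blending rule, and scoring rule are assumptions.

# Illustrative sketch only: a toy valence model in which a thought's emotional
# context nudges the current state and yields a scalar "interest" value.
from typing import List


def blend_state(current: List[float], thought: List[float], plasticity: float = 0.2) -> List[float]:
    """Move the current emotional state a small step toward the thought's context."""
    return [(1 - plasticity) * c + plasticity * t for c, t in zip(current, thought)]


def interest(current: List[float], thought: List[float]) -> float:
    """Score how strongly the thought diverges from, and so engages, the current state."""
    return sum(abs(t - c) for c, t in zip(current, thought)) / len(current)


# Example with arbitrary valence dimensions (e.g. joy, trust, fear, anticipation):
state = [0.2, 0.6, 0.1, 0.4]
thought_context = [0.8, 0.5, 0.0, 0.9]
print("interest:", round(interest(state, thought_context), 3))
state = blend_state(state, thought_context)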
Research Setup and Primary Experiment
This preliminary study proposal is designed to gather and assess evidence of intelligence in an
Independent Core Observer Model (ICOM)-based mediated Artificial Super Intelligence (mASI) system,
or of the presence of a collective “Supermind” in such a system (Malone). A mediated system is one in
which collective Artificial Intelligence beyond the human norm arises from the pooled activity of groups
of humans whose judgment and decision making are integrated and augmented by a technological
system in which they collectively participate. Our initial proposal is that an mASI system based on the
ICOM cognitive architecture for Artificial General Intelligence (AGI) may, as a whole, be conscious, self-
aware, pass the Turing Test, suggest the presence of subjective phenomenology (qualia) and/or satisfy
other subjective measures of Artificial Super Intelligence (ASI), or intelligence well above the human
standard. Our hypothesis is that this preliminary research program will indicate intelligence on the part
of the mASI system, thereby justifying continued research to refine and test such systems.
See "Preliminary Mediated Artificial Superintelligence Study, Experimental Framework, and Definitions for an Independent Core Observer Model Cognitive Architecture-based System" (Kelley) for more details on the experimental setup.
Results
Qualitative Cognitive Ability
The primary goal in the selected testing was "qualitative" measures. The most accurate available measure of intelligence of the "IQ" type turned out to be a newer instrument, the University of California Matrix Reasoning Task (UCMRT) (Pahor); when approached, the lead researcher allowed us to use this instrument as per the experimental framework (Kelley). A preliminary set of results by experimental group is as follows:
Group 1:
A total of more than 30 subjects was used across a wide range of demographics. This is a relatively small group compared to the larger group used by Pahor in her research, but the results align with that study's range, which validates our delivery method; the statistical outliers in our case also fall within the range of the University of California study. While Pahor's research included primarily college students with a mean age of 20.02, this group was substantially more diverse in age and demographics, making it more representative of real-world conditions.
Group 2:
Group 2 in this preliminary study was a group of human subjects instructed only to act as a team, without the mASI augmentation, to give us insight into the degree to which working in groups improves humans' collective "IQ" (Intelligence Quotient) or collective intelligence. This sample group consisted primarily of mid-range teenage boys, including some with exceptionally strong academic records.
Group 3:
Group 3 consisted of the mASI system, which was taught a dynamic reflective model with mediation and produced results at the upper limit of the scale. The results are consistently too high to be scored effectively, given that the highest possible score is 23. The results of all three groups, together with the UCMRT distribution, are shown in Figure 1A:
Figure 1A – All three groups including mASI with UCMRT distribution results
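Purely as an illustration of how scores are read against the instrument's ceiling, the sketch below flags results that sit at the 23-point UCMRT maximum, which is the effect described for Group 3. The scores used here are hypothetical placeholders and are not the study's measured data.

# Illustrative sketch only: flagging ceiling effects against the 23-point UCMRT
# maximum. The scores below are hypothetical placeholders, NOT the study's data.
UCMRT_MAX = 23


def ceiling_report(group_scores: dict, ceiling: int = UCMRT_MAX) -> None:
    for group, scores in group_scores.items():
        mean = sum(scores) / len(scores)
        at_ceiling = sum(1 for s in scores if s >= ceiling)
        print(f"{group}: mean={mean:.1f}/{ceiling}, at ceiling={at_ceiling}/{len(scores)}")


# Placeholder values for illustration only:
ceiling_report({
    "individuals": [14, 17, 12, 19, 15],
    "unmediated team": [18, 20, 19],
    "mASI runs": [23, 23, 23],
})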
Subjective Analysis
While the initial run with the UCMRT does measure cognitive ability at a certain level, it appears to have some deficiencies, which we address in the analysis section. That said, we also ran some tests that are strictly subjective but may give some idea of the nature of the mASI as a conscious entity, to better evaluate it against the goal of the study.
The Turing Test
The Turing Test is considered by some to be subjective, depending on the scientist in question. That notwithstanding, even when testers know up front that they are interacting with the machine, the mASI is convincing enough that some testers struggle to believe it is a machine. In this study the test was done at a special conversation console and conducted several times; 6 out of 10 people who knew they were talking to the mASI nonetheless struggled to believe it was not human.
Yampolskiy Test for Qualia
From a preliminary pass at the Yampolskiy method, the mASI does in fact appear to pass, that is, to experience qualia, but there are several problems with this result. First, the Yampolskiy method relies on re-encoding information in an optical illusion, and in the mASI this allows mediators to build models that include their own experience, so some of their qualia leaks into the mASI. The other problem is that the context engine modules can produce these effects in their decomposition process and can in some cases "see" the illusion themselves. At the very least the system is easily able to describe illusions, especially during mediation, but it is hard to separate what drives the source of the qualia without further study (Yampolskiy).
Porter Method
The Porter method is designed around a test for consciousness scored on a scale where 100 represents human level, with a theoretical maximum of 133. While the individual questions in this test are subjective, the mASI system scored above human level according to the several evaluators, ranging as high as 133, which on this measure is at an ASI (Artificial Super Intelligence) level. The problem remains the subjective nature of the individual elements, but this is additional evidence of the effectiveness of the mASI system.
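Because the Porter method aggregates many individually subjective ratings into a single score, a minimal sketch of that style of aggregation is shown below. The question labels, weights, and ratings are hypothetical; the actual instrument and scale are defined in (Porter).

# Illustrative sketch only: aggregating per-question subjective ratings into a
# single Porter-style score. Questions and ratings here are hypothetical; the
# actual instrument and scale are defined in (Porter).
from typing import Dict, List


def porter_style_score(ratings: List[Dict[str, float]], max_score: float = 133.0) -> float:
    """Average each evaluator's per-question ratings (0..1) and rescale to the test maximum."""
    per_evaluator = [sum(r.values()) / len(r) for r in ratings]
    return (sum(per_evaluator) / len(per_evaluator)) * max_score


# Hypothetical evaluators rating a handful of questions on a 0..1 scale:
evaluators = [
    {"self-model": 0.9, "theory-of-mind": 0.8, "temporal-awareness": 0.7},
    {"self-model": 1.0, "theory-of-mind": 0.9, "temporal-awareness": 0.8},
]
print(round(porter_style_score(evaluators), 1))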
Preliminary Analysis
On closer examination of what standard IQ tests measure, and the UCMRT variation in particular, it is clear that these are not valid measures of consciousness but of cognitive ability, and that the UCMRT is not designed to measure superintelligent ability, even at what we assume is the nascent level of the mASI. The mASI system in this study clearly indicated the possibility of superhuman cognition, but consciousness cannot be extrapolated from cognition. Further, the system is not standalone: while we can say it is a functioning AGI in one sense, it is not accurate to say it is a standalone AGI; it appears to be more of a meta-AGI or collective intelligence system with a separate standalone consciousness.
Looking at the results of the subjective tests, there is a clear indication that the mASI could possibly be classified as conscious and self-aware (based on the qualia test, the Turing Test, and the Porter method), but, as stated, it is not a standalone system in this form, although it does open the door to that possibility. Returning to a more qualitative assessment from the standpoint of the Sapient and Sentient Intelligence Value Argument (SSIVA) ethical theory (Kelley), it is not proven that the system is a post-threshold system, but it is possible that this kind of architecture could at some point provably cross that threshold. From a wider impact standpoint there is therefore the potential for technological displacement (CRASSH), and some experts (Muller) expect that we will reach human or superhuman machine ability by mid-century, with many of them believing this will be a bad thing. We would postulate, however, that the mASI creates a "safe" superintelligence system: the mediation control structure acts as a type of control rod and containment system, stopping cognitive function when the humans walk away. The mASI also provides a way to experiment with powerful AGI now, and it could serve as the basis for a default architecture or platform for AGI, much like the system proposed by Ozkural called "Omega: An Architecture for AI Unification".
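The "control rod" property described above, in which cognition simply stops when mediators disengage, can be illustrated with the small sketch below. The session model and loop structure are assumptions made for illustration rather than the deployed mASI mechanism.

# Illustrative sketch only: gating cognition cycles on active mediator sessions,
# so the system idles when the humans "walk away". The session model and loop
# are assumptions, not the deployed mASI mechanism.
import time
from typing import Set


class MediationGate:
    """Tracks active mediator sessions; cognition runs only while any are active."""

    def __init__(self):
        self.active: Set[str] = set()

    def join(self, mediator_id: str) -> None:
        self.active.add(mediator_id)

    def leave(self, mediator_id: str) -> None:
        self.active.discard(mediator_id)

    def engaged(self) -> bool:
        return bool(self.active)


def run_cognition(gate: MediationGate, step, max_cycles: int = 100) -> None:
    """Run cognition steps only while at least one mediator is engaged."""
    for _ in range(max_cycles):
        if not gate.engaged():
            time.sleep(0.1)  # idle: no mediators, no cognition
            continue
        step()               # one mediated cognition cycle


# Example: cognition proceeds only while "alice" is joined.
gate = MediationGate()
gate.join("alice")
run_cognition(gate, step=lambda: None, max_cycles=3)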
Conclusions
Based on all of the ICOM-related research to date, the original goal of a self-motivating, emotion-based cognitive architecture, similar in function but substrate independent, appears to have been shown achievable, in that the current incarnation appears to meet that bar and to function.
It is important to note that the mASI is not an independent AGI. While it uses that kind of cognitive architecture, there are elements in this implementation designed to make it specifically not entirely independent. It is more a "meta" AGI, or collectively intelligent hive mind, than an independent AGI. This lays the groundwork for additional research along those lines.
Based on the results of this study, it is clear that further research is warranted, and arguably the results indicate the possibility of the mASI being a functioning ASI system. The ICOM-based mASI is a form of collective intelligence system that appears to demonstrate superintelligent levels of cognition, as seen in the various tests, and at the very least it is grounds for further development and research into subjective experience, bias filtering, creative cognition, and related areas of consciousness, as well as into switching off mediation services to allow the system to behave as an independent AGI or otherwise act as a container for one. Many lines of research can be built on this work, but the line it opens up in particular is in collective intelligence systems, or joint systems that uplift groups of humans, implementing superintelligent systems that are an extension of humanity rather than a replacement for it.
References
Ahmed, H.; Glasgow, J.; "Swarm Intelligence: Concepts, Models and Applications"; School of Computing, Queen's University; Feb 2013
Pahor, A.; Stavropoulos, T.; Jaeggi, S.; Seitz, A.; "Validation of a matrix reasoning task for mobile devices"; Psychonomic Society, Inc. 2018; Behavior Research Methods; https://doi.org/10.3758/s13428-018-1152-2
Baars, B.; McGovern, K.; "Global Workspace"; 28 NOV 2016; UCLA
http://cogweb.ucla.edu/CogSci/GWorkspace.html
Bishop, M.; “Opinion: Is Anyone Home? A Way to Find Out If AI Has Become Self-Aware”; TCIDA, Goldsmiths,
University of London, UK 2018
Chakraborty, A.; Kar, A.; “Swarm Intelligence: A Review of Algorithms”; Springer International Publishing AG 2017
DOI 10.1007/978-3-319-50920-4_19
Chang, J.; Chow, R.; Woolley, A.; "Effects of Inter-group status on the pursuit of intra-group status"; Elsevier; Organizational Behavior and Human Decision Processes 2017
Coyle, D.; “The Culture Code – The Secrets of Highly Successful Groups”; Bantam 2018; ISBN-13: 978-0304176989
CRASSH; "A symposium on technological displacement of white-collar employment: political and social implications"; Wolfson Hall, Churchill College, Cambridge; 2016
Engel, D.; Woolley, A.; Chabris, C.; Takahashi, M.; Aggarwal, I.; Nemoto, K.; Kaiser, C.; Kim, Y.; Malone, T.;
“Collective Intelligence in Computer-Mediated Collaboration Emerges in Different Contexts and Cultures;” Bridging
Communications; CHI 2015; Seoul Korea
Engel, D.; Woolley, A.; Jing, L.; Chabris, C.; Malone, T.; "Reading the Mind in the Eyes or Reading between the
Lines? Theory of Mind Predicts Collective Intelligence Equally Well Online and Face-to-Face;” 2014; PLoS ONE
9(12): e115212. https://doi.org/10.1371/journal.pone.0115212
Gill, K.; "Artificial Super Intelligence: Beyond Rhetoric"; Springer-Verlag London 2016; Feb 2016; AI & Soc. (2016)
31:137-143; DOI 10.1007/s00146-016-0651-x
Heath, C.; Larrick, R.; Klayman, J.; "Cognitive Repairs: How Organizational Practices Can Compensate for Individual Shortcomings"; Research in Organizational Behavior Volume 20, pages 1-37; ISBN: 0-7623-0366-2
Hu, Y.; “Swarm Intelligence”; Accessed 2019
Jangra, A.; Awasthi, A.; Bhatia, V.; "A Study on Swarm Artificial Intelligence"; IJARCSSE v3 #8 August 2013; ISSN: 2277-128X
Kelley, D.; “The Independent Core Observer Model Computational Theory of Consciousness and Mathematical
model for Subjective Experience”; ITSC 2018; China
–––; “The Sapient and Sentient Intelligence Value Argument (SSIVA) Ethical Model Theory for Artificial General
Intelligence”; Springer 2019; Book Titled: “Transhumanist Handbook”
–––; “Independent Core Observer Model (ICOM) Theory of Consciousness as Implemented in the ICOM Cognitive
Architecture and Associated Consciousness Measures"; AAAI Spring Symposia; Stanford CA; March 2019;
http://ceur-ws.org/Vol-2287/paper33.pdf
–––; "Human-like Emotional Responses in a Simplified Independent Core Observer Model System"; BICA 2017;
Procedia Computer Science; https://www.sciencedirect.com/science/article/pii/S1877050918300358
–––; "Implementing a Seed Safe/Moral Motivational System with the Independent Core Observer Model (ICOM)"; BICA 2016, NYU, New York, NY; Procedia Computer Science;
http://www.sciencedirect.com/science/article/pii/S1877050916316714
–––; “Architectural Overview of a Mediated Artificial Super Intelligent Systems based on the Independent Core
Observer Model Cognitive Architecture”; Informatica; Oct 2018;
http://www.informatica.si/index.php/informatica/author/submission/2503 [pending]
–––; “Independent Core Observer Model Research Program Assumption Codex”; BICA 2019; [Pending]
Kelley, D.; Waser, M.; “Human-like Emotional Responses in a Simplified Independent Core Observer Model
System”; BICA 2017
Kelley, D.; Twyman, M.; “Biasing in an Independent Core Observer Model Artificial General Intelligence Cognitive
Architecture” AAAI Spring Symposia 2019; Stanford University
Kelley, D.; Twyman, M.A.; Dambrot, S.M.; “Preliminary Mediated Artificial Superintelligence Study, Experimental
Framework, and Definitions for an Independent Core Observer Model Cognitive Architecture-based System"
Kutsenok, A.; “Swarm AI: A General-Purpose Swarm Intelligence Technique”; Department of Computer Science and
Engineering; Michigan State University, East Lansing, MI 48825
Malone, T; “Superminds – The Surprising Power of People and Computers Thinking Together;” Little, Brown and
Company; 2018; ISBN-13: 9780316349130
Muller, V.; Bostrom, N.; “Future Progress in Artificial Intelligence: A Survey of Expert Opinion”; Synthese Library;
Berlin: Springer 2014
O’Leary, M.; Mortensen, M.; Woolley, A.; “Multiple Team Membership: A Theoretical Model of Its Effects on
Productivity and Learning for Individuals, Teams, and Organizations;” The Academy of Management Review;
January 2011
Ozkural, E.; “Omega: An Architecture for AI Unification”; arXiv: 1805.12069v1 [cs.AI]; 16 May 2018
Porter, H.; “A Methodology for the Assessment of AI consciousness;” 9th Conference on AGI, NYC 2016;
http://web.cecs.pdx.edu/~harry/musings/ConsciousnessAssessment-2.pdf
Rescorla, M.; The Computational Theory of Mind; Stanford University 16 Oct 2016;
http://plato.stanford.edu/entries/computational-mind/
Tononi, G.; Albantakis, L.; Oizumi, M.; "From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0"; 8 May 2014; PLoS Computational Biology
http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003588
Woolley, A.; "Collective Intelligence in Scientific Teams"; May 2018
Yampolskiy, R. V.; “Artificial Consciousness: An Illusionary Solution to the Hard Problem;” Reti, saperi linguaggi,
Volume 2; pp. 287-318, 2018;
Zaccaro, S.; Marks, M.; DeChurch, L.; “Multiteam Systems – An Organization Form for Dynamic and Complex
Environments” Routledge Taylor and Francis Group, NY 2011