THE INTELLIGENCE CORPUS, AN ANNOTATED CORPUS OF
DEFINITIONS OF INTELLIGENCE: ANNOTATION, GUIDELINES, AND
STUDENT RESEARCH PROJECTS
D. Monett1, L. Hoge2, L. Haase3, L. Schwarz4, M. Normann5, L. Scheibe6
1Berlin School of Economics and Law (GERMANY)
2Robert Koch Institute (GERMANY)
3Technical University of Applied Sciences Wildau (GERMANY)
4Hochschule Stralsund (GERMANY)
5NORDAKADEMIE Graduate School (GERMANY)
6DB Systel GmbH (GERMANY)
Abstract
Intelligent systems are transforming the way we interact with technology, with each other, and with
ourselves, and knowing at least what artificial intelligence (AI) means is becoming essential for
designing, developing, deploying, using, and even regulating intelligent artefacts. Although defining
intelligence has been one of the most controversial and studied challenges of both ancient and modern
human thinking, a lack of consensus on what intelligence is has remained almost constant over the
centuries. We argue that a better understanding of contemporary technologies, AI-based but not only,
starts with a grounded exposure to their conceptual pillars. These include fundamental concepts like the
concept of intelligence, in general, and of AI, in particular. Learners and decision makers at all levels
should face them, as well as be able to discuss their importance and limitations critically and in an
informed way. For doing that, they must be confronted with definitions of (artificial) intelligence and
understand their meaning well, for instance. If these contents are already part of study programs, the
better. In this paper we present how several definitions of intelligence were annotated, i.e. their
properties and characteristics systematically analyzed and commented, in order to construct a corpus
(i.e. a collection) of definitions of intelligence for further uses in AI and other fields. The work and the
concrete application domain presented here have not yet been considered in the extended work on
linguistic annotation (i.e. annotating definitions). Even though, both the annotation and the data merit
special attention, for they deal with the elusive, important concept of intelligence, i.e. with definitions of
both human and machine (or artificial) intelligence. Undergraduate Computer Science students carried
out the annotation process and other related research activities. They were involved in a more general
AI research project and included their findings and work as part of their undergraduate student research
projects in their last study year. We provide details about how the student research projects were
conceived, conducted, and mentored.
Keywords: AI literacy, annotation, artificial intelligence, corpus, intelligence, student research projects.
1 INTRODUCTION
A lack of consensus on defining intelligence has been a shaky stepping-stone not only for the artificial
intelligence (AI) community: interested scholars have not come up with a cross-domain accepted
definition of intelligence, neither in the ancient Eastern nor in the ancient and contemporary Western
conceptions of intelligence (see e.g. [1], [2], [3]), nor in the more recent perspectives from the last 70
years within the field of AI (see e.g. [4], [5], [6]).
There are several underlying reasons for disagreement on defining intelligence whose analysis would
be beyond the scope of this paper (we refer the interested reader to [7] and [8] for related discussions
on the lack of consensus). In Hunt and Jaeggi’s [7] words, “[i]t is not surprising that defining the subject
matter of intelligence research has been difficult, for in everyday discourse the word intelligence is used
in various ways.” Dickson [9] emphasizes that the definition of (artificial) intelligence “shifts with
technological advances and our expectations from computers. That’s why it’s pretty hard to determine
what is or isn’t AI.” And Chollet [10] states that “[t]o make progress towards the promise of [the AI] field,
we need precise, quantitative definitions and measures of intelligence, in particular human-like general
intelligence.” Furthermore, the pressing need for clearer, good definitions of intelligence has crossed
the academic river, reaching the industry, law, and public shores in unprecedented ways.
Delineating the boundaries of the discourse on intelligence may help in defining and understanding its
most discussed concept, as suggested in [11]. Furthermore, better insights into definitions and how to
define them have proven to be essential for a better understanding of concepts, intelligence and AI
included (see for example [12], [13] and [14] for more on properties of good definitions). Knowing those
concepts and related cognitive abilities (like defining, analyzing, understanding, discussing, and
comparing definitions of intelligence, among others) is expected of AI researchers and practitioners in
the first place. Yet, they are also central to extending or at least providing the basics of AI literacy to
other stakeholders of our society.
It is the main goal of this paper to present how a few hundred definitions of intelligence (of both
human intelligence and machine intelligence) were annotated by taking into account different properties
of good definitions. In doing so, we follow the guidelines for annotation case studies suggested in [15],
which also guide the structure of the paper and our methodology in what follows.
2 ANNOTATING DEFINITIONS OF INTELLIGENCE
The annotation case study that is the focus of this paper belongs to a rather uncommon domain in
linguistic annotation: definitions of human and machine (or artificial) intelligence are annotated according
to quality criteria for definitions. In other words, properties of good definitions are evaluated in order to
conclude whether a certain definition of intelligence fulfils these properties or not. To our knowledge,
this is the first time that such a problem is tackled in the sub-field of linguistic annotation. The next
sections provide the background and characteristics of this atypical annotation project.
2.1 The Annotators
The annotation of data, whatever its nature, can be a very challenging and time-consuming process. On
the one hand, it is a repetitive task fundamentally done by humans (i.e. annotators), mainly because the
state of the art in automatic data annotation is still biased, error-prone, and far from being entirely
satisfactory. On the other hand, data is labelled according to its characteristics, but, even when done by
humans, the annotation itself might require special insights into the problem domain. Furthermore, it
might need a certain level of agreement on how to interpret and annotate the data correctly, as well as
depend on advanced domain knowledge.
Software solutions are available for supporting annotators in their work (see e.g. an extensive review in
[16]), but not for all kinds of data and certainly not for all kinds of situations that require specialized
knowledge for annotating the data. This is the case when annotating definitions of intelligence according
to several quality criteria, where AI-related knowledge might be critical and, thus, a pre-requisite for
annotating.
In the case of our annotation project, the annotators were undergraduate Computer Science students in
their third year of studies, the majority of them also attending a parallel course on AI. Furthermore,
they were involved in related research tasks and completed corresponding student research projects
that were especially considered as part of their term evaluation. This way, they could directly incorporate
the knowledge and practice they acquired by annotating the data into their learning and study.
2.2 The Annotation Data
The annotation corpus consists of four collections of definitions of intelligence. Participants in a survey
on definitions of intelligence [17] were asked to provide their level of agreement with definitions of both
human and machine intelligence from the literature (for more on the survey, please consult the provided
reference). They were also asked to justify their selection, as well as to provide new definitions of
intelligence, if desired. A total of 567 responses from experts worldwide were received, containing
more than 4,000 comments or arguments in favor of or against the literature definitions that were
presented to them. Respondents also provided more than 300 new, suggested definitions of intelligence
(213 definitions of machine or artificial intelligence and 125 definitions of human intelligence). This is
how a mixed pool of what experts in other domains call “implicit theories” of intelligence (or people’s
conceptions of what intelligence is) and “explicit theories” of intelligence (i.e. theories proposed by
experts) was created (see [3] for more on implicit and explicit theories).
Tab. 1 shows the information contained in each collection. Together, the four collections constitute what
we call the Intelligence Corpus.
Table 1. The Intelligence Corpus.

Collection   Content                                                                      Definitions
A            New, suggested definitions of machine or artificial intelligence provided
             by participants in the survey on defining intelligence [17].                         213
B            New, suggested definitions of human intelligence provided by participants
             in the survey on defining intelligence [17].                                         125
C            Definitions of intelligence from the literature to agree upon in the initial
             edition of the survey on defining intelligence [17].                                  34
D            Definitions of intelligence from the collection presented in [18].                    71
The following examples give an idea of the kind of definitions that are part of the Intelligence Corpus:
“Machine Intelligence is concerned with building systems that can adapt and learn in
unstructured noisy domains.” (From collection A)
“[Human intelligence is] the ability to use information to accomplish goals.” (From collection
B)
“Intelligence measures an agent’s ability to achieve goals in a wide range of environments.”
(From collection C)
“[Intelligence is] the capacity to learn, reason, and understand.” (From collection D)
As can be seen, and compared to other case studies in linguistic annotation, the Intelligence Corpus
is very small. Moreover, it is very unlikely (indeed, not expected at all) that experts and non-experts alike
will produce considerably many new definitions of intelligence in the foreseeable future.
2.3 The Annotation Scheme
The annotation scheme referred to in this paper builds upon different works on properties of good
definitions, some of which were referenced in Section 1. It uses most of the properties or quality criteria
for definitions suggested in [14], which includes a compendium and thorough analysis of the literature
on definitions together with their most desirable properties.
The following examples give an idea of the kind of quality criteria that were considered when annotating
the aforementioned definitions:
- A good definition of intelligence defines the “what,” the thing to be defined. It defines
[machine | artificial | human] intelligence.
- A good definition of intelligence is affirmative.
- A good definition of intelligence is comprehensive, in that it omits no essential attribute of
the thing to be defined; it omits nothing which is a part of [machine | artificial | human]
intelligence.
- A good definition of intelligence is clear, in that it avoids metaphorical, ambiguous
language, and obscure terms. It is clearly written; it is perspicuous.
Notice that some quality criteria are intuitive and easy to understand (and, thus, to verify), whereas
others might be more complex and could require a deeper understanding (and, consequently, a more
thorough evaluation), as well as additional effort and time for assessing whether a certain definition
fulfils them or not.
From the 30 quality criteria for definitions introduced in [14], 21 were considered for annotating each
definition from the Intelligence Corpus.
2.4 The Physical Representation
The collections from Tab. 1 were available in the form of MS Excel tables, one definition of intelligence
per row. It was both a logical and straightforward step to extend them with new columns, each
representing a property or quality criterion. The new tables were then imported into Google Sheets and
prepared to make them available to the annotators, i.e. to the students, in a later step.
Because of the characteristics of the annotation scheme and the size of the Intelligence Corpus, it was
not necessary to use any other software or system for annotating. The concrete form and type of the
annotated data will become clearer in Section 2.5.1 below.
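For illustration, the following minimal sketch (ours, not part of the original workflow; the file name and
metadata columns are hypothetical) shows how one annotator's sheet, exported as CSV, could be
loaded in Python and its 1/empty annotation cells (see Section 2.5.1) turned into a boolean matrix:

```python
# Minimal sketch: load one annotator's exported sheet (hypothetical file and
# column names) and turn the 1/empty annotation cells into a boolean matrix.
import pandas as pd

# One row per definition, one column per quality criterion (21 in total).
df = pd.read_csv("annotator_1_definitions.csv")

id_cols = ["collection", "definition"]           # assumed metadata columns
criteria_cols = [c for c in df.columns if c not in id_cols]

# Empty cells are imported as NaN; a 1 marks a fulfilled criterion.
annotations = df[criteria_cols].notna()          # boolean DataFrame

# Example query: how many definitions fulfil each criterion?
print(annotations.sum().sort_values(ascending=False))
```

Keeping one row per definition and one column per criterion makes this kind of downstream
processing straightforward.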
2.5 The Annotation Process
The annotation process was done manually. On the one hand, a reliable and consistent automatic or
semi-automatic annotation of data for this very specific case study was not available (and we do not
think it will be in the foreseeable future): human language understanding continues to be an unsolved
problem in the field of AI. On the other hand, the advantage of having a small corpus did not merit the
investment in extra resources that might slow down the annotation process as a whole.
Six annotators were involved, three female and three male, all of them undergraduate students in their
third year of Computer Science studies, as introduced above. This allowed for at least a satisfactory
level of knowledge about the definition of concepts, in general, and of AI, in particular. Crowdsourcing
mechanisms for annotating were discarded: not only was the corpus small, but we also assumed that
the high-level subject matter might require additional, special training of the annotators; thus, at least
some exposure to related fields and topics was a requirement.
Three pairs of annotators were formed. Each pair annotated one third of the definitions from the corpus,
i.e. 147 or 148 definitions of intelligence in total for each pair of annotators (see Fig. 1). Each annotator
annotated her/his definitions independently.
Figure 1. Distribution of definitions per group of annotators.
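As a complement to Fig. 1, a minimal sketch (an illustrative reconstruction, not the authors' actual
procedure) of one way to obtain such an even split of the 443 definitions across three pairs:

```python
# Minimal sketch: distribute the 443 corpus definitions across three
# annotator pairs so that each pair gets 147 or 148 definitions.
definitions = [f"def_{i}" for i in range(443)]   # placeholder identifiers

pairs = {1: [], 2: [], 3: []}
for i, d in enumerate(definitions):
    pairs[i % 3 + 1].append(d)                   # round-robin assignment

for pair, defs in pairs.items():
    print(f"Pair {pair}: {len(defs)} definitions")
# Pair 1: 148, Pair 2: 148, Pair 3: 147
```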
The annotators were trained before the annotation process started. An initial meeting was held for this
purpose. The training consisted of a general introduction to research topics involving the definition of
the concept of intelligence, to the quality criteria for definitions, to related literature including the survey
introduced in Section 2.2 (see [17] for more), to the collections of definitions that should be annotated,
as well as to the annotation guidelines that are presented in Section 2.5.1 below. Furthermore, examples of
definitions and how to annotate them when considering the quality criteria for definitions were also
discussed.
Additionally, all annotators received the same information about the annotation process by email, as
well as the annotation guidelines and the URLs of the tables in Google Sheets containing both “their” to-
be-annotated definitions and the quality criteria to evaluate them. As introduced in Section 2.4,
the annotation tables contained as many rows as there were definitions of intelligence to be annotated
(at most 148 definitions per annotator), and as many additional columns as there were quality criteria to
be considered (a total of 21 quality criteria).
Feedback from annotators was collected at the end of the annotation process. The feedback included
the time the students spent annotating the definitions of intelligence, which strategies they followed for
the annotation, as well as general comments and remarks, if they had any. The annotators sent their
results within a period ranging from less than two weeks up to nine weeks. It is worth mentioning that
they annotated the definitions and worked on the corresponding student research projects in parallel
with attending other learning modules and classes.
2.5.1 Annotation Guidelines
Additional annotation guidelines, specific to this case study, were conceived especially for the project.
They followed some recommendations introduced in [19] and [20]. The guidelines include particular
characteristics of the quality criteria for definitions, relevant aspects that should be considered when
evaluating them, and the activities for doing so. They are listed in what follows in the form in which they
were presented to the annotators:
- How to proceed: You can select one column (i.e. one quality criterion) and go row by row (i.e.
definition by definition) to evaluate the same criterion for all rows. This could be faster than fixing
a row (i.e. fixing one definition) and then analyzing all columns (i.e. all quality criteria) for that row.
But you could also go the other way around because some columns are related or refer to similar
criteria, plus you need to consider the same definition only once. It is up to you!
- Write a 1 in a cell if the corresponding definition fulfils the quality criterion at the top of the
column. For example, if a definition d defines machine intelligence (or human intelligence or
intelligence, depending on the collection it belongs to), then write a 1 in the cell corresponding to
the quality criterion It defines the “what,” the thing to be defined. Leave the cell empty if not.
- Mark a cell in red (i.e. set the background color of the cell to red) or write an email asking for
clarification, in case you don’t have any idea about how to evaluate a given quality criterion for
that cell. Such cases will be discussed in the team later.
- Notice that you don’t have to justify your annotation. But, if you prefer, you could use the free
columns on the right to write any comments or questions related to some particularly difficult case
that needs discussion. This should not be the normal case, though.
- Annotate alone. Do not discuss with other annotators how to annotate a particular
definition, because this could introduce some bias in your or others’ thinking. If necessary, write
an email asking for clarification.
- Do not fix grammatical errors you might find in the definitions.
- How long did it take? Record the time you spend annotating whenever possible. This will be very
useful for the upcoming publication about the annotation process!
- Write an email when you are finished with the annotations!
- Got any new idea or suggestion that could be included in these guidelines? They are welcome!
Drop a line in any case.
- Extra: At the end of the annotation (or, better, during the process, if you prefer) write down your
“strategy,” i.e. what you did and how; which problems, difficulties, or positive things you found,
etc. This could be part not only of the research documentation about the annotation process
but also of your student research paper later!
As already mentioned, these guidelines for annotating definitions of intelligence were also
presented and explained to all annotators in the initial meeting.
3 RESULTS AND DISCUSSION
This section summarizes the most important results and lessons learned.
3.1 Feedback from the Annotators
The time spent on the annotation by each annotator was between 4.5 and 8.5 hours, with an average
time of 7 hours. One of the annotators did not record the time, citing as the reason the varying conditions
under which his annotation sessions took place (at home, at the university, on the train). A second
annotator reported having spent between 8 and 9 hours; in this case, the midpoint (8.5 hours) was used
when calculating the average. On average, each annotator thus invested about three minutes in each
definition (roughly 420 minutes over 147 or 148 definitions) and more than eight seconds in each of the
21 quality criteria.
Evaluating whether a definition is affirmative or not is easy: for humans, it is straightforward to detect
adverbs that denote negation. For example, the definition “[Intelligence is] the capacity to learn, reason,
and understand” is posed in an affirmative way; one does not even need to read it to the end. Yet,
evaluating whether the same definition is comprehensive might require a more complex thinking
process. This shows how complicated or time-consuming the annotation of a definition can be.
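As an illustration of why this particular criterion is comparatively easy to check, the following naive
heuristic sketch (ours; a real annotator's judgment is, of course, more nuanced than a keyword match)
flags definitions that contain common English negation markers:

```python
# Naive sketch: flag definitions that are NOT posed affirmatively by looking
# for common English negation markers. The annotators judged this by reading.
import re

NEGATION_MARKERS = {"not", "never", "no", "neither", "nor", "cannot"}

def looks_affirmative(definition: str) -> bool:
    """Return True if no obvious negation marker occurs in the definition."""
    tokens = re.findall(r"[a-z']+", definition.lower())
    return not any(t in NEGATION_MARKERS or t.endswith("n't") for t in tokens)

print(looks_affirmative("[Intelligence is] the capacity to learn, reason, "
                        "and understand."))                       # True
print(looks_affirmative("Intelligence is not merely memorization."))  # False
```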
Four annotators reported their individual strategies for annotating. All of them proceeded by fixing a
quality criterion and then annotating all definitions according to that criterion. General remarks
concerning the annotation process included concrete interpretations of the quality criteria. Such remarks
were reported by three annotators.
3.2 Inter-Annotator Agreement
The data from the annotators was easy to process once all annotations were available. Before that, the
project leader randomly spot-checked the annotations for consistency.
Then, the inter-annotator agreement (IAA) was computed following Cohen’s work [21]. Tab. 2 shows
the results for each collection from the Intelligence Corpus and each group of annotators, together with
averaged values. This part of the project was the particular research topic and focus of one of the
students.
Table 2. Cohen’s κ per collection and group of annotators.

                       A      B      C      D      Avg. per group
Annotators 1 and 2     0.390  0.455  0.344  0.411  0.400
Annotators 3 and 4     0.346  0.361  0.457  0.465  0.408
Annotators 5 and 6     0.404  0.431  0.372  0.361  0.392
Avg. per collection    0.380  0.416  0.391  0.412  Absolute: 0.4
The IAA within each group was between fair and moderate for all collections (i.e. Cohen’s κ ranging
from 0.344 to 0.465, interpreted according to Landis and Koch’s [22] scale).
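For reference, a minimal sketch of how Cohen's κ [21] can be computed for two annotators' binary
labels (1 = criterion fulfilled, 0 = not fulfilled), together with the Landis and Koch [22] bands used
above; the annotation data shown is made up for illustration:

```python
# Minimal sketch: Cohen's kappa for two annotators' binary labels,
# plus the Landis & Koch interpretation bands.
from collections import Counter

def cohens_kappa(a: list[int], b: list[int]) -> float:
    """kappa = (p_o - p_e) / (1 - p_e) for two equal-length label lists."""
    assert len(a) == len(b)
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n           # observed agreement
    ca, cb = Counter(a), Counter(b)
    labels = set(a) | set(b)
    p_e = sum((ca[l] / n) * (cb[l] / n) for l in labels)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

def landis_koch(kappa: float) -> str:
    """Map a kappa value to the Landis & Koch (1977) agreement bands."""
    if kappa < 0:
        return "poor"
    bands = [(0.20, "slight"), (0.40, "fair"), (0.60, "moderate"),
             (0.80, "substantial"), (1.00, "almost perfect")]
    return next(name for bound, name in bands if kappa <= bound)

# Toy example with made-up annotations for one criterion:
ann1 = [1, 1, 0, 1, 0, 0, 1, 1]
ann2 = [1, 0, 0, 1, 0, 1, 1, 1]
k = cohens_kappa(ann1, ann2)
print(f"kappa = {k:.3f} ({landis_koch(k)})")   # kappa = 0.467 (moderate)
```

In practice, a well-tested implementation such as sklearn.metrics.cohen_kappa_score can be used
instead.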
In general, the number of agreements among annotators was higher for the collection containing
definitions of human intelligence, followed by the collection from [18], which includes many dictionary
definitions of intelligence that, in general, are clearer and easier to understand. One possible
interpretation is that definitions of artificial intelligence, both those provided by participants in the survey
and those from the literature, still need some work regarding expressiveness.
The quality criteria with the highest IAA values were the simpler, more intuitive, and easier to
understand ones, as expected. However, the quality criteria for definitions with the highest number of
disagreements (and thus smaller IAA values among the annotators) were the following, in this
order:
- A good definition of intelligence is exclusive, in that it includes nothing which is not a part
of [machine | artificial | human] intelligence.
- A good definition defines the “why,” the purpose of the thing to be defined. It defines the
purpose of [machine | artificial | human] intelligence.
In future annotation processes, it might be advisable to elaborate on and explain better to the annotators
what certain criteria mean, as well as to use more already (correctly) annotated definitions as examples.
Similarly, it was analyzed which definitions of intelligence received the highest and lowest numbers of
agreements (or disagreements). For example, the annotators agreed more often when evaluating the
fulfilment of the quality criteria for the following definition:
“Intelligence is a very general mental capability that, among other things, involves the ability
to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly
and learn from experience.”
This result is not surprising: it is Gottfredson’s definition of intelligence [23]. Gottfredson’s is not
only a widely accepted definition of intelligence among experts in intelligence and allied fields [24], but
it was also the most accepted definition of human intelligence in the survey presented in [17]. The
annotators confirmed once again what a good definition of intelligence looks like.
3.3 Usage
Both the annotated corpus and the original collections of definitions of intelligence (see Tab. 1, Section
2.2) are available upon request. They could be used by interested readers and practitioners, for
instance, when learning about fundamental concepts like the concept of intelligence, in general, and of
AI, in particular.
As an example, we provide part of the Intelligence Corpus as a separate collection with 148 definitions
of intelligence that were annotated by one of the students. It can be found at
https://bit.ly/AnnotatedDefsIntelligence (see [25]) under the Creative Commons Attribution-ShareAlike 4.0
International (CC BY-SA 4.0) license. It contains the following information:
- 71 definitions of machine or artificial intelligence (from a total of 213) from collection A,
- 42 definitions of human intelligence (from a total of 125) from collection B,
- 12 definitions of intelligence (from a total of 34) from collection C, and
- 23 definitions of intelligence (from a total of 71) from collection D,
together with their annotations, i.e., whether they fulfil the 21 quality criteria for definitions (see Section 2.3).
Furthermore, all definitions considered in the survey on defining (machine) intelligence [17] are available
at https://goo.gl/KDPtKT, including their complete bibliographic information.
Finally, there is also an app that was developed by the project leader (also the supervisor of the student
research projects) for the purpose of supporting end users through the process of constructing a
definition. For example, all quality criteria for definitions are specified and exemplified there. The Defintly
app, as it is called, may also assist future annotators in their annotation processes (visit
https://defintly.glideapp.io/ for more).
3.4 Mentoring
There was enough prior experience available in mentoring and supervising student research projects of
the kind presented in this paper. Further, the course on AI was delivered by the same instructor, which
allowed for ad hoc discussions not only about the state of the mentioned student research
projects, but also about their content and goals in a broader setting. Other students also attended the
AI course, thereby enriching their general knowledge and the projects they were working on. Further, all
necessary information, not only that key to starting the annotation process but also that concerning both
project management and mentoring, was carefully prepared, discussed with the students, and used by
them effectively.
4 CONCLUSIONS
This paper presented an annotated corpus of definitions of intelligence, the Intelligence Corpus, as well
as details about its annotation, which was performed as part of student research projects in Computer
Science. The Intelligence Corpus forms part of a peculiar annotation case study that evaluates whether
definitions of human and machine intelligence satisfy desirable properties or quality criteria of good
definitions. Future work includes a thorough discussion of some of the quality criteria (like those
more difficult to interpret or annotate) and of how to ease further annotation processes. Furthermore, a
detailed, manually conducted quality control of all available annotations will be performed in the near
future. Occasionally, the corpus may be extended with new annotated definitions and/or new quality
criteria.
Other possible uses of the Intelligence Corpus include training on the process of defining a good
definition of any concept, which could be of interest to regulators or lawyers, for instance. In their case,
it is essential to deal with legal definitions of different terms and, at times, they must even craft the
definitions themselves. Examples from the Intelligence Corpus would illustrate desirable
properties of good definitions and help them in their work. In a similar vein, the Intelligence Corpus
could be a complement for students, in particular, and academics, in general, who are learning how to
conduct (or who are actually conducting) a concept analysis [26], like Philosophy students, for instance.
Last, but not least, further uses of the corpus involving machine learning techniques to analyze its
content are not ruled out.
REFERENCES
[1] H.O. Rugg, “Intelligence and Its Measurement: A Symposium,” Journal of Educational Psychology,
vol. 12, pp. 123–147, 1921.
[2] R.J. Sternberg and D.K. Detterman, “What is Intelligence?: Contemporary Viewpoints on its Nature
and Definition,” Ablex Publishing Corporation, Norwood, NJ., 1986.
[3] S.-Y. Yang and R.J. Sternberg, “Conceptions of intelligence in Ancient Chinese Philosophy,” Journal
of Theoretical and Philosophical Psychology, vol. 17, pp. 101–119, 1997.
[4] D. Monett, C.W.P. Lewis, and K.R. Thórisson, “Introduction to the JAGI Special Issue ‘On Defining
Artificial Intelligence’ – Commentaries and Author’s Response,” Journal of Artificial General
Intelligence, vol. 11, pp. 1–4, 2020.
[5] N.J. Nilsson, “The Quest for Artificial Intelligence: A History of Ideas and Achievements,” Cambridge
University Press, 2010.
[6] P. Wang, “On Defining Artificial Intelligence,” Journal of Artificial General Intelligence, vol. 10, no. 2,
pp. 1–37, 2019.
[7] E. Hunt and S.M. Jaeggi, “Challenges for Research on Intelligence,” Journal of Intelligence, vol. 1,
pp. 36–54, 2013.
[8] D. Monett, L. Hoge, and C.W.P. Lewis, “Cognitive Biases Undermine Consensus on Definitions of
Intelligence and Limit Understanding,” in Joint Proceedings of the IJCAI-2019 Workshops on
Linguistic and Cognitive Approaches to Dialog Agents and on Bridging the Gap Between Human
and Automated Reasoning (U. Furbach, S. Hölldobler, M. Ragni, R. Rzepka, C. Schon, J. Vallverdu,
and A. Wlodarczyk, eds.), pp. 51–58, Macau, China. CEUR-WS, 2019.
[9] B. Dickson, “5 european companies that are (really) advancing AI,” The Next Web, 2019. Retrieved
from https://thenextweb.com/artificial-intelligence/2019/03/29/5-european-companies-advancing-ai/.
[10] F. Chollet, “The Measure of Intelligence,” arXiv e-prints, arXiv:1911.01547 [cs.AI], 2019.
[11] D. Monett and C. Winkler, “Using AI to Understand Intelligence: The Search for a Catalog of
Intelligence Capabilities,” in Proceedings of the 3rd Workshop on Natural Language for Artificial
Intelligence (M. Alam, V. Basile, F. Dell’Orletta, M. Nissim, and N. Novielli, eds.), vol. 2521,
pp. 1–15, Rende, Italy. CEUR-WS, 2019.
[12] D. Kelley, “The Art of Reasoning: An Introduction to Logic and Critical Thinking,” W.W. Norton &
Company, New York, NY, fourth edition, 2014.
[13] S. Legg and M. Hutter, “Universal Intelligence: A Definition of Machine Intelligence,” Minds and
Machines, vol. 17, pp. 391–444, 2007b.
[14] D. Monett and C.W.P. Lewis, “Definitional Foundations for Intelligent Systems, Part I: Quality Criteria
for Definitions of Intelligence,” in Proceedings of The 10th Anniversary Conference of the Academic
Conference Association (J. Vopava, V. Douda, R. Kratochvil, and M. Konecki, eds.), pp. 73–80,
Prague, Czech Republic. MAC Prague Consulting Ltd., 2020.
[15] N. Ide, “Introduction: The Handbook of Linguistic Annotation,” in Handbook of Linguistic Annotation
(N. Ide and J. Pustejovsky, eds.), pp. 1–18. Springer, Dordrecht, 2017.
[16] M. Neves and J. Ševa, “An extensive review of tools for manual annotation of documents,” Briefings
in Bioinformatics, vol. 22, no. 1, pp. 146–163, 2021.
[17] D. Monett and C.W.P. Lewis, “Getting clarity by defining Artificial Intelligence – A Survey,” in
Philosophy and Theory of Artificial Intelligence (V.C. Müller, ed.), SAPERE vol. 44, pp. 212–214.
Springer, Berlin, 2018.
[18] S. Legg and M. Hutter, “A Collection of Definitions of Intelligence,” in Advances in Artificial General
Intelligence: Concepts, Architectures and Algorithms (B. Goertzel and P. Wang, eds.), vol. 157,
pp. 17–24. IOS Press, UK, 2007a.
[19] R. Klinger and P. Cimiano, “The USAGE review corpus for fine-grained multi-lingual opinion
analysis,” in Proceedings of the Ninth International Conference on Language Resources and
Evaluation, pp. 2211–2218, Reykjavik, Iceland. European Language Resources Association, 2014.
[20] M. Sänger, “Aspektbasierte Meinungsanalyse von Bewertungen mobiler Applikationen,” Master
Thesis, Humboldt-Universität zu Berlin, 2018.
[21] J. Cohen, “A coefficient of agreement for nominal scales,” Educational and Psychological
Measurement, vol. 20, pp. 37–46, 1960.
[22] J.R. Landis and G.G. Koch, “The measurement of observer agreement for categorical data,”
Biometrics, vol. 33, pp. 159–174, 1977.
[23] L.S. Gottfredson, “Mainstream science on intelligence: An editorial with 52 signatories, history, and
bibliography,” Intelligence, vol. 24, pp. 13–23, 1997.
[24] R.J. Haier, “The Neuroscience of Intelligence,” Cambridge University Press, New York, NY, 2017.
[25] D. Monett, “Examples of annotated definitions of intelligence,” The AGI Sentinel Initiative, AGISI.org,
2021. Retrieved from https://bit.ly/AnnotatedDefsIntelligence.
[26] A. Sloman, “The Computer Revolution In Philosophy: Philosophy, science and models of mind,”
Harvester Press, Sussex, revised, online edition, 2019.