Neural Agent-based Models to Study Language Contact using Linguistic Data
Vrije Universiteit Brussel
Bart de Boer
Vrije Universiteit Brussel
Abstract

In this paper, we propose an outline for linguistic research on language change, as observed in the languages of the world, using neural agent-based models of emergent communication. We describe how such models could be used to study morphological simplification, using a case study of language contact in Eastern Indonesia. A neural architecture is used to represent hypothesized cognitive mechanisms of language change: a generalization mechanism, the procedural/declarative model, and a phonological mechanism, the hyper & hypo articulation model, which involves a theory of mind of the listener.
1 Introduction

What happens to a language when an established population of its speakers comes into contact with a community of strangers who try to learn it? Agent-based computer simulations of interactions between speakers can be effective models to study this question [ ]. In this paper, we will outline how agent-based models with a neural model of production and perception can be used to study linguistic questions about language change, based on real-world data from the languages of the world. We will provide the desiderata a neural agent model should fulfill to be able to study these questions, and give a sketch of a possible model, which we plan to implement in the future. We want to infer general factors behind language change by seeing how these factors surface in case studies on real languages. Central to this paper is a case study of language contact in Eastern Indonesia. We will study the hypothesis that morphological simplification is caused by contact between native speakers of a language (L1) and adult learners, who learn the language as a second language (L2). We use neural networks as an architecture to implement two cognitive mechanisms that could lead to morphological simplification: a generalization mechanism, the procedural/declarative model, and a phonological mechanism for simplifying or clarifying utterances, the hyper & hypo articulation model, which involves a theory of mind of the listener.
2 Previous work
Agent-based models are used in many areas describing social or cultural processes, for example to describe social segregation or the spread of religion [ ]. Agent models have been used to describe the evolution of language [ ] and to describe concrete instances of language change, for example a change in word order in Dutch [ ]. More abstract models from language evolution have been used to study the influence of social factors (like population size and language contact) on linguistic structure [ ]. In addition to this work on agents for linguistic modelling, agent-based approaches to emergent communication have been developed in the natural language processing (NLP) community. Agents in these models use an abstract language to designate objects or images and have to agree on a name for a certain object [ ]. In recent agent models from NLP, deep neural networks are used as the comprehension and production model [ ]. Interesting analyses can be made of the language neural agents learn, such as the frequency distribution of symbols and word order [ ]. [ ] model contact between communities of deep neural agents, and the formation of a creole language. Some of these models are ultimately developed with the goal of constructing conversational agents in mind. We will use the same types of neural models, but apply them to study linguistic questions about how real languages change. [ ] suggest that, among other fields, linguistics and cognitive science could contribute hypotheses to test experimentally in neural models of emergent communication.

34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
3 Proposed approach

3.1 Data

As a case study, we consider the linguistic situation on the Alor and Pantar islands in Eastern Indonesia. Alorese, an Austronesian language, is spoken on the coasts of the islands, while landward, Papuan Alor-Pantar languages are spoken. Many L1 speakers of Alor-Pantar languages learn Alorese as a second language. Alorese lost all of its morphology when compared to closely related Lewoingu Lamaholot [ ], which has not been in contact with Alor-Pantar languages. It is assumed Alorese lost its morphology due to adult language contact [ ].

Figure 1: Case study: morphological simplification in Alorese, spoken on the Alor & Pantar islands in Indonesia.

We will specifically look at the verb morphology of Alorese: compared to Lamaholot, all verb suffixes, which mark the subject of the sentence on the verb, have been lost. For example, the third person plural of the verb lodo 'to go down' is lodo-ka (with the 3PL marker -ka) in Lewoingu Lamaholot, while it is lodo (without person marker) in Alorese. In Alorese, only verb prefixes, on a small number of vowel-initial verbs, have been retained. As initialization of our model, we use verb forms and their affixes from a grammar of Lewoingu Lamaholot [ ], which can be seen as the starting state of Alorese before it underwent morphological simplification. We use 56 verbs: these are the only verbs in the grammar explicitly classified as prefixing or suffixing (17 prefixing and 39 suffixing verbs). As this grammar is descriptive, there is no distributional data over forms: per verb concept, one verb form and one prefix/suffix per grammatical person is available. We compare the outcome of the language in the model to the current state of Alorese, from a grammar of Alorese [ ], and to demographic data about the proportion of L2 speakers in Alorese communities [ ]. For possible future research on the lexicon, the digital dataset LexiRumah, which includes Alorese and Alor-Pantar languages, could be used.
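To make the concept space for the verb task concrete, the combinations an agent can talk about can be enumerated as below. The six-person paradigm and the labels are illustrative assumptions, not the exact paradigm of Lewoingu Lamaholot; only the count of 56 verbs comes from the grammar.

```python
from dataclasses import dataclass
from itertools import product

# Hypothetical paradigm dimensions (assumed labels, for illustration only).
PERSONS = ["1SG", "2SG", "3SG", "1PL", "2PL", "3PL"]
TRANSITIVITY = ["intransitive", "transitive"]

@dataclass(frozen=True)
class Concept:
    verb: str        # lexical concept, e.g. "go_down"
    person: str      # grammatical person, e.g. "3PL"
    transitive: str  # whether the sentence contains an object

def build_concepts(verbs):
    """Enumerate every (verb, person, transitivity) combination."""
    return [Concept(v, p, t) for v, p, t in product(verbs, PERSONS, TRANSITIVITY)]

# With the 56 verbs from the grammar this yields 56 * 6 * 2 = 672 concepts.
```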
As data representation, we choose to stay close to the real language: agents communicate using real word forms, prefixes and suffixes. Every phoneme in these forms and affixes is represented as a string of phonetic features. By staying close to the real language, language-specific factors which make the case study interesting, such as the phonological retention of prefixes on vowel-initial verbs, can be included in the study. However, we want to test the validity of our model to describe general mechanisms of language change. Therefore, we plan to evaluate our model, with as much as possible the same method and data representation, on other case studies of morphological simplification, such as language contact between Scandinavian languages and Low German (a possible dataset is NorthEuraLex [ ]).
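A minimal sketch of this phoneme-level representation, assuming a toy inventory of four binary phonetic features; the real feature set and phoneme inventory are still to be chosen.

```python
# Toy binary feature inventory (assumed, for illustration only).
FEATURES = ("consonantal", "voiced", "back", "lateral")

# Feature values for the phonemes of the example forms lodo / lodo-ka.
PHONEMES = {
    "l": (1, 1, 0, 1),
    "o": (0, 1, 1, 0),
    "d": (1, 1, 0, 0),
    "k": (1, 0, 1, 0),
    "a": (0, 1, 0, 0),
}

def encode(form):
    """Encode a word form as a flat vector of phonetic feature values."""
    vector = []
    for phoneme in form:
        vector.extend(PHONEMES[phoneme])
    return vector

# Lamaholot lodoka (3PL 'go down') vs. the simplified Alorese form lodo:
full = encode("lodoka")  # suffixed form, 6 phonemes * 4 features
bare = encode("lodo")    # bare stem, 4 phonemes * 4 features
```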
3.2 Task

The task for agents is to successfully communicate about concepts in the world, roughly inspired by a Lewis signalling game [ ] or naming game [ ]. Every iteration, every agent in the population speaks: it picks a concept, produces a form based on that concept and sends it to the listener. In our case study, we look at verb morphology. We want to analyze the interaction between different grammatical persons (e.g. first person singular, 1SG) in the verb paradigm, and we want to look at transitivity (whether a verb has an object), because this determines the affixes being used. Therefore, a concept consists of a combination of a lexical concept (e.g. the verb 'to go'), a grammatical person (e.g. 1SG) and the transitivity (whether the sentence contains an object). The listener tries to infer the concept from the received form and points to the corresponding object. The speaker then points to the correct object. Based on this feedback, either only the listener, or both the speaker and the listener, update their internal model. We create a simulation with both L1 and L2 agents. We initialize L1 agents with (train on) a concept-form mapping from Lewoingu Lamaholot, a precursor of Alorese which has not undergone morphological simplification. The L2 agents, with a neighboring Papuan language as first language, are initialized with a random model. We do not initialize the agents with a Papuan language model, since the literature presupposes no L1 effect on morphology, but rather a general effect of L2 learning at an adult age [ ]. By running the model with and without adult language contact, and comparing the results to the current state of Alorese (where morphological simplification has taken place), we can test the hypothesis that adult language contact was responsible for morphological simplification.
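The interaction loop above can be sketched schematically. Here simple lookup-table agents stand in for the neural production and comprehension models, and the feedback rule (only the listener updates, and only on failure) is one illustrative choice among those discussed above.

```python
import random

class Agent:
    """Lookup-table stand-in for a neural production/comprehension model."""

    def __init__(self, mapping=None):
        # concept -> form; L1 agents start from the Lamaholot mapping,
        # L2 agents start with an empty (random) model.
        self.mapping = dict(mapping or {})

    def produce(self, concept):
        return self.mapping.get(concept, "?")

    def comprehend(self, form, concepts):
        # Point to the concept whose stored form matches the received form,
        # or guess randomly when the form is unknown.
        for concept in concepts:
            if self.mapping.get(concept) == form:
                return concept
        return random.choice(concepts)

    def update(self, concept, form):
        self.mapping[concept] = form

def interaction(speaker, listener, concepts):
    concept = random.choice(concepts)            # speaker picks a concept
    form = speaker.produce(concept)              # produces a form, sends it
    guess = listener.comprehend(form, concepts)  # listener points to an object
    success = guess == concept                   # speaker points to the correct one
    if not success:
        listener.update(concept, form)           # feedback: listener updates
    return success
```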
3.3 Investigating cognitive factors
In our model we will implement two cognitive mechanisms: Ullman's declarative/procedural model of language learning [ ], and a phonological component, Lindblom's H&H model [ ].

Figure 2: Cognitive mechanisms: the declarative/procedural model, used during comprehension and production, and the H&H model, used during production, re-entering the produced utterance in the comprehension system.
A number of theories account for the differences between L1 and L2 language processing leading to morphological simplification during adult language contact, including: a critical threshold age for learning languages [ ]; missing surface inflection [ ], which assumes that adult learners have knowledge of inflection but cannot realize it; a noisy-channel approach, in which L2 speakers have less information to decode [ ]; and the role of L1 knowledge when learning an L2 [ ]. We choose Ullman's declarative/procedural model as the cognitive mechanism of language learning and generalization. According to the declarative/procedural model, grammar is produced by a procedural cognitive system, while the lexicon is memorized in a declarative cognitive system. In L2 learners, linguistic forms which are normally produced in the procedural system, such as morphology, are memorized in the declarative system. We hypothesize that this computationally heavy memorization step in L2 speakers leads to simplification. According to the theory, depending on age of acquisition and experience, morphology may come to be produced more via the procedural system in L2 speakers as well. We do not claim that the declarative/procedural model is the only mechanism at play; some of the aforementioned mechanisms may also play a role. However, Ullman's mechanism is the perspective from which we will approach our problem.
Furthermore, we add a phonological component, because in the data of our Alorese case study, morphological simplification is also phonologically conditioned. We use Lindblom's hyper & hypo articulation theory: speakers produce a form more clearly or less clearly (e.g., drop an affix), depending on their estimation of its intelligibility to the listener. This requires the speaker to have a theory of mind about the listener. One option is re-entrance [ ]: the speaker interprets its utterance using its own language comprehension system, as if it were the listener. Another option would be to take the characteristics of the listener (e.g. L1 or L2) into account, but this assumes that these listener characteristics are available to the speaker. The H&H articulation component turns the problem of zero-shot learning upside down. Instead of burdening an L2 listener, who hears a form for the first time, with the task of inferring a concept, the speaker will try to adjust its pronunciation to the L2 listener.
3.4 Neural model
We want to implement the proposed cognitive mechanisms (section 3.3) in a neural network, which serves as the language comprehension and production model of every agent in the simulation. Implementing cognitive mechanisms in a neural model is challenging, since it is not trivial how a certain cognitive trait (e.g. generalization versus no generalization) can be translated into a neural network architecture. Therefore, we will propose some possible ideas, and draw from the literature on neural emergent communication. We want to develop a model that is cognitively plausible: it should consist of cognitive modules which can be postulated in humans, and the model should be able to exhibit, to some extent, human language processing behaviour in an agent-based simulation. At the same time, the implementation does not have to be neurally plausible: the structure of the neural network does not have to mimic the structure of the brain. A deep neural network is merely a powerful and robust model with which to implement our cognitive mechanisms. The network will perform a reinforcement learning task, where communicative success is the reward. As in [ ], we think a modular structure of the network can represent our cognitive mechanisms well. A specific challenge is the relatively small amount of data in our setting, which may call for specific network architectures or data augmentation.
We will implement a declarative and a procedural module in both L1 and L2 agents. The procedural system facilitates generalization over different concepts, grammatical persons and sentence transitivities. The declarative system performs no generalization, but instead memorizes concept-form mappings. It is an open question how this generalization versus no generalization can be implemented in a neural network. A possible approach could be to model the procedural system using a smaller number of nodes, and to add dropout and regularization, forcing it to generalize over training examples. The declarative system could consist of a larger number of nodes and layers, nudging it to overfit. In L1 agents, there is a stronger bias towards using the procedural module, while L2 agents use the declarative module more, but are able to shift to the procedural system after gaining more experience. The weights which modulate procedural versus declarative system usage should thus be initialized differently for L1 and L2 agents, but should be able to change with experience.
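One way this gating between the two routes could look is sketched below with plain Python stand-ins instead of network modules. The gating weights, their initial values and the experience update are our assumptions, chosen only to illustrate the L1/L2 asymmetry.

```python
import random

class DualRouteSpeaker:
    """Sketch of the declarative/procedural split in production."""

    def __init__(self, is_l2, suffixes):
        self.suffixes = suffixes  # generalized affix rules, e.g. {"3PL": "-ka"}
        self.memory = {}          # declarative store: (verb, person) -> form
        # L2 agents start biased towards the declarative route (assumed values).
        self.procedural_weight = 0.2 if is_l2 else 0.9

    def memorize(self, concept, form):
        self.memory[concept] = form

    def gain_experience(self, amount=0.01):
        # With experience, L2 speakers shift towards the procedural route.
        self.procedural_weight = min(0.9, self.procedural_weight + amount)

    def produce(self, verb, person):
        if random.random() < self.procedural_weight:
            # Procedural route: stem plus generalized affix rule.
            return verb + self.suffixes.get(person, "")
        # Declarative route: memorized whole form, or the bare stem if none.
        return self.memory.get((verb, person), verb)
```

In a neural implementation, `procedural_weight` would correspond to trainable gating weights between the two modules rather than a fixed scalar.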
The other cognitive mechanism, the H&H articulation model, could involve a step of re-entrance: how would the speaker's utterance be perceived if the speaker had to interpret it itself? This could possibly be implemented using a game setting like self-play [ ], where an agent plays against itself. Based on the estimated intelligibility of the utterance, the utterance is produced more clearly or more sloppily. This post-processing step can be performed in a separate module of the network. This module will need knowledge of how exactly a word form can be pronounced more clearly or more sloppily (e.g. by dropping an ending), possibly obtained by pre-training on language data.
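The re-entrance step could be sketched as follows. `comprehend` is assumed to map a form to the concept the speaker itself would infer from it, and the full and reduced forms are assumed to come from a separate (here hypothetical) reduction module.

```python
def hh_postprocess(speaker, concept, full_form, reduced_form):
    """H&H post-processing via re-entrance: hypo-articulate only when the
    speaker's own comprehension system still recovers the intended concept."""
    predicted = speaker.comprehend(reduced_form)  # re-enter the candidate form
    if predicted == concept:
        return reduced_form  # hypo-articulation: the sloppier form suffices
    return full_form         # hyper-articulation: keep the clear, full form
```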
4 Conclusion

We have described a dataset, task and model that show how neural models of language production and comprehension can be used in an agent-based setting to study language change, using data from the languages of the world. We sketched how the declarative/procedural model, responsible for linguistic generalization, and the hyper & hypo articulation model, which involves a theory of mind through re-entrance, could be implemented in a neural model. Further research is needed to determine the precise architectures to implement these mechanisms, specifically in a small-data setting. We hope the techniques proposed here help to develop models that can better explain language change, and by doing so, eventually shed light on human (pre)history.
Acknowledgments and Disclosure of Funding
This work was supported by funding from the Flemish Government under the Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen programme. PD was supported by a PhD Fellowship fundamental research (11A2821N) of the Research Foundation – Flanders (FWO).
References

Baptista, M., Gelman, S. A., and Beck, E. Testing the role of convergence in language acquisition, with implications for creole genesis. International Journal of Bilingualism 20, 3 (June 2016), 269–296.

Bloem, J., Versloot, A., and Weerman, F. An agent-based model of a historical word order change. In Proceedings of the Sixth Workshop on Cognitive Aspects of Computational Language Learning (Lisbon, Portugal, 2015), Association for Computational Linguistics, pp. 22–.

Chaabouni, R., Kharitonov, E., Dupoux, E., and Baroni, M. Anti-efficient encoding in emergent communication. In Advances in Neural Information Processing Systems (2019).

Chaabouni, R., Kharitonov, E., Lazaric, A., Dupoux, E., and Baroni, M. Word-order biases in deep-agent emergent communication. arXiv:1905.12330 [cs] (June 2019).

Dagan, G., Hupkes, D., and Bruni, E. Co-evolution of language and agents in referential games. arXiv preprint arXiv:2001.03361 (2020).

Dale, R., and Lupyan, G. Understanding the origins of morphological diversity: The linguistic niche hypothesis. Advances in Complex Systems 15, 03n04 (May 2012), 1150017.

de Boer, B. Self-organization in vowel systems. Journal of Phonetics 28, 4 (Oct. 2000).

Dellert, J., Daneyko, T., Münch, A., Ladygina, A., Buch, A., Clarius, N., Grigorjew, I., Balabel, M., Boga, H. I., Baysarova, Z., Mühlenbernd, R., Wahle, J., and Jäger, G. NorthEuraLex: A wide-coverage lexical database of Northern Eurasia. Language Resources and Evaluation (Nov. 2019).

Evtimova, K., Drozdov, A., Kiela, D., and Cho, K. Emergent Communication in a Multi-Modal, Multi-Step Referential Game. arXiv:1705.10369 [cs, math] (Apr. 2018).

Futrell, R., and Gibson, E. L2 processing as noisy channel language comprehension. Bilingualism: Language and Cognition 20, 4 (Sept. 2016), 683–684.

Graesser, L., Cho, K., and Kiela, D. Emergent Linguistic Phenomena in Multi-Agent Communication Games. arXiv:1901.08706 [cs] (Jan. 2019).

Iannaccone, L. R., and Makowsky, M. D. Accidental Atheists? Agent-Based Explanations for the Persistence of Religious Regionalism. Journal for the Scientific Study of Religion 46, 1 (Mar. 2007), 1–16.

Kaiping, G. A., and Klamer, M. LexiRumah: An online lexical database of the Lesser Sunda Islands. PLOS ONE 13, 10 (Oct. 2018), e0205250.

Klamer, M. A. F. A Short Grammar of Alorese (Austronesian). No. 486 in Languages of the World/Materials. LINCOM Europa, Muenchen, 2011.

Lazaridou, A., and Baroni, M. Emergent Multi-Agent Communication in the Deep Learning Era. arXiv:2006.02419 [cs] (July 2020).

Lazaridou, A., Peysakhovich, A., and Baroni, M. Multi-Agent Cooperation and the Emergence of (Natural) Language. arXiv:1612.07182 [cs] (Mar. 2017).

Lenneberg, E. H. Response to reviews of biological foundations of language. Journal of Communication Disorders 1, 4 (Oct. 1968), 320–322.

Lewis, D. Convention. Harvard University Press, Cambridge, MA, 1969.

Lindblom, B. Explaining Phonetic Variation: A Sketch of the H&H Theory. In Speech Production and Speech Modelling, W. J. Hardcastle and A. Marchal, Eds. Springer Netherlands, Dordrecht, 1990, pp. 403–439.

Lou-Magnuson, M., and Onnis, L. Social Network Limits Language Complexity. Cognitive Science 42, 8 (Nov. 2018), 2790–2817.

Lowe, R., Gupta, A., Foerster, J., Kiela, D., and Pineau, J. On the interaction between supervision and self-play in emergent communication. arXiv:2002.01093 [cs, stat].

Moro, F. R. Loss of Morphology in Alorese (Austronesian): Simplification in Adult Language Contact. Journal of Language Contact 12, 2 (Aug. 2019), 378–403.

Nishiyama, K., and Kelen, H. A Grammar of Lamaholot, Eastern Indonesia. Lincom.

Prévost, P., and White, L. Missing Surface Inflection or Impairment in second language acquisition? Evidence from tense and agreement. Second Language Research 16, 2 (Apr. 2000).

Reali, F., Chater, N., and Christiansen, M. H. Simpler grammar, larger vocabulary: How population size affects language. Proceedings of the Royal Society B: Biological Sciences 285, 1871 (Jan. 2018), 20172586.

Schelling, T. C. Dynamic models of segregation. The Journal of Mathematical Sociology 1, 2 (July 1971), 143–186.

Schepens, J., van der Slik, F., and van Hout, R. The effect of linguistic distance across Indo-European mother tongues on learning Dutch as a second language. In Approaches to Measuring Linguistic Differences, L. Borin and A. Saxena, Eds. De Gruyter, Berlin, Boston.

Smith, A. D. Models of language evolution and change. Wiley Interdisciplinary Reviews: Cognitive Science 5, 3 (2014), 281–293.

Steels, L. Synthesising the origins of language and meaning using co-evolution, self-organisation and level formation. In Approaches to the Evolution of Language. Cambridge University Press, 1998, pp. 384–404.

Steels, L. Language re-entrance and the 'inner voice'. Journal of Consciousness Studies 10, 4-5 (2003), 173–185.

Ullman, M. T. The neural basis of lexicon and grammar in first and second language: The declarative/procedural model. Bilingualism: Language and Cognition 4, 2 (Aug. 2001).

Ullman, M. T. A neurocognitive perspective on language: The declarative/procedural model. Nature Reviews Neuroscience 2, 10 (Oct. 2001), 717–726.