Instance Vectors
the mean of the vectors of the words and inflectional functions surrounding a target word token [7], computed from the semantic vectors generated by NDL
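As a toy illustration of this step: an instance vector is simply the mean of the context items' vectors. The words, the four dimensions, and all values below are invented for the sketch; in the study itself the vectors come from the NDL network.

```python
import numpy as np

# Toy stand-ins for NDL-derived semantic vectors; the words, the
# 4 dimensions, and all values are invented for illustration.
semantic_vectors = {
    "the":     np.array([0.1, 0.0, 0.2, 0.0]),
    "student": np.array([0.3, 0.5, 0.1, 0.2]),
    "PLURAL":  np.array([0.0, 0.1, 0.4, 0.3]),  # an inflectional function
}

def instance_vector(context_items, vectors):
    """Mean of the vectors of the words and inflectional
    functions surrounding a target word token."""
    return np.mean([vectors[item] for item in context_items], axis=0)

iv = instance_vector(["the", "student", "PLURAL"], semantic_vectors)
```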

A prediction of generic they semantics
Generic they is generic and singular, and shows remnants of plurality
Dominic Schmitz
Heinrich-Heine-Universität Düsseldorf
Dominic.Schmitz@uni-duesseldorf.de
Background & Motivation
besides the prototypical plural they, there are at least four other types of they [1]
(1) generic indefinite: Someone ran out of the classroom, but they forgot their backpack.
(2) generic definite: The ideal student completes the homework, but not if they have an emergency.
(3) specific definite ungendered: The math teacher is talented, but they hand back grades late.
(4) specific definite gendered: James is great at laundry, but they never wash their dishes.



while there is research on singular they from sociolinguistics and syntax [e.g. 1-4], there are as yet no semantic analyses of singular they, or of pronouns in general
RQ: What are the semantics of generic they?
Method
Naive Discriminative Learning (NDL)
based on well-established theory in cognitive psychology [5-6]
computes semantic vectors of words and inflectional features via cues and outcomes
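The error-driven rule underlying NDL is the Rescorla-Wagner update [5-6]. The sketch below is a minimal, hypothetical illustration: the cues (letter trigrams of they), the outcomes, and the learning rate are assumptions for the example, not the study's actual settings.

```python
import numpy as np

# Minimal Rescorla-Wagner sketch of NDL's error-driven learning.
# Cues (letter trigrams of "they") and outcomes are invented.
cues = ["#th", "the", "hey", "ey#"]
outcomes = ["THEY", "PLURAL"]

W = np.zeros((len(cues), len(outcomes)))    # cue-to-outcome weights
eta = 0.01                                  # learning rate (assumed)

def rw_update(W, present_cues, present_outcomes):
    c = np.array([1.0 if q in present_cues else 0.0 for q in cues])
    o = np.array([1.0 if q in present_outcomes else 0.0 for q in outcomes])
    error = o - c @ W                       # outcome minus summed activation
    return W + eta * np.outer(c, error)     # strengthen under-predicted links

# Repeated learning events: "they" used with plural reference.
for _ in range(1000):
    W = rw_update(W, {"#th", "the", "hey", "ey#"}, {"THEY", "PLURAL"})

activation = np.ones(len(cues)) @ W         # all cues present
```

With repeated exposure the summed activation of the outcomes approaches 1; NDL derives words' semantic vectors from such cue-to-outcome weight matrices.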
Linear Discriminative Learning (LDL)
requires no prior linguistic knowledge of underlying features [8-9]
maps forms onto meanings and vice versa; simulates the mental lexicon and its interrelations
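A minimal sketch of the LDL idea, with invented toy matrices: a form matrix C and a semantic matrix S are linked by linear mappings F (comprehension) and G (production), estimated here with the Moore-Penrose generalized inverse as in the Discriminative Lexicon framework [8-9]. The matrix sizes and values are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(2024)

# Toy matrices: 6 word forms, 6 form cues, 3 semantic dimensions.
# All sizes and values are invented for illustration.
C = rng.normal(size=(6, 6))   # form matrix (words x form cues)
S = rng.normal(size=(6, 3))   # semantic matrix (words x semantic dims)

F = np.linalg.pinv(C) @ S     # comprehension mapping: form -> meaning
G = np.linalg.pinv(S) @ C     # production mapping: meaning -> form

S_hat = C @ F                 # comprehension: predicted semantics
C_hat = S @ G                 # production: predicted forms
```

Because this toy C is square and (almost surely) full rank, comprehension is exact here; production is only approximate, since the rank-3 semantic matrix cannot fully reconstruct the forms.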
Discussion
generic they appears to be a generic singular pronoun with remnants of plurality
generic they is comprehended significantly better than plural they
generic they coactivates entries in the lexicon to the same degree as plural they does
semantic analyses of pronouns appear to be fruitful
the Discriminative Lexicon [9] is a framework fit to explore pronoun semantics
Linear Discriminative Learning: Background
[Figure: the LDL network. C: form matrix; S: semantic matrix; F: comprehension mapping (form to meaning); G: production mapping (meaning to form)]
ACKNOWLEDGEMENTS The author would like to thank the members of the Department of English Language and Linguistics at Heinrich-Heine-Universität Düsseldorf, with special thanks to the (former) members of the DFG project on the semantics of derivational morphology (440512447): Ingo Plag, Martin Schäfer, and Viktoria Schneider.
REFERENCES
[1] Conrod, K. (2020). Pronouns and gender in language. In The Oxford Handbook of Language and Sexuality. Oxford University Press. doi: 10.1093/oxfordhb/9780190212926.013.63
[2] Bjorkman, B. M. (2017). Singular they and the syntactic representation of gender in English. Glossa: A Journal of General Linguistics, 2(1). doi: 10.5334/gjgl.374
[3] Conrod, K. (2019). Pronouns raising and emerging. Seattle: University of Washington dissertation.
[4] Conrod, K. (2022). Abolishing gender on D. Canadian Journal of Linguistics/Revue canadienne de linguistique, 67(3), 216-241. doi: 10.1017/cnj.2022.27
[5] Rescorla, R. A. & Wagner, A. R. (1972). A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. In A. H. Black & W. F. Prokasy (eds.), Classical conditioning II: Current research and theory, 64-99. New York: Appleton-Century-Crofts.
[6] Wagner, A. R. & Rescorla, R. A. (1972). Inhibition in Pavlovian conditioning: Application of a theory. In R. A. Boakes & M. S. Halliday (eds.), Inhibition and learning, 301-334. London: Academic Press.
[7] Lapesa, G., Kawaletz, L., Plag, I., Andreou, M., Kisselew, M. & Padó, S. (2018). Disambiguation of newly derived nominalizations in context: A Distributional Semantics approach. Word Structure, 11(3), 277-312. doi: 10.3366/word.2018.0131
[8] Baayen, R. H., Chuang, Y.-Y., Shafaei-Bajestan, E. & Blevins, J. P. (2019). The discriminative lexicon: A unified computational model for the lexicon and lexical processing in comprehension and production grounded not in (de)composition but in linear discriminative learning. Complexity, 2019, 4895891. doi: 10.1155/2019/4895891
[9] Chuang, Y.-Y. & Baayen, R. H. (2021). Discriminative learning and the lexicon: NDL and LDL. In Oxford Research Encyclopedia of Linguistics. Oxford: Oxford University Press. doi: 10.1093/acrefore/9780199384655.013.375
F = C⁺S (comprehension mapping)
G = S⁺C (production mapping)
S = CF, C = SG
(C: form matrix; S: semantic matrix; C⁺, S⁺: generalized inverses)