Science topic

Lexical Semantics - Science topic

Explore the latest questions and answers in Lexical Semantics, and find Lexical Semantics experts.
Questions related to Lexical Semantics
  • asked a question related to Lexical Semantics
Question
3 answers
Hello!
I’m looking for native speakers of English who live in an English-speaking country and who don’t speak any Romance language (Spanish, Portuguese, Italian, French…).
My PhD research is in the field of language and cognition. More specifically, I’m looking into how speakers of different languages lexicalize motion events.
I’ve designed a video description task with 15 short video clips. The platform is mobile-friendly, and this survey will take no longer than 15 minutes of your time. And it’s pretty straightforward: the participants answer a few questions about themselves and then watch and describe what they see in the clips.
If you meet the requirements and would like to take part, just follow the link below: https://survey.phonic.ai/624d5e0a269545d4b2d0e359
Thank you very much for your help!
Renan Ferreira, Universidade Federal de Pelotas (Brazil)
Relevant answer
Answer
Thank you. And thanks for adding that English should be the only mother tongue.
  • asked a question related to Lexical Semantics
Question
19 answers
Is there evidence of sensory-motor activation during visual word recognition?
Relevant answer
Answer
Here are two more PowerPoints related to embodiment and lexical processing. These PowerPoints are based on our two vocabulary books, in which we develop a methodology driven by the source, not the target, of the metaphor. We call our book VOCABULARY PLUS: A SOURCE-BASED APPROACH.
  • asked a question related to Lexical Semantics
Question
7 answers
Hi guys!
I have a question about a framing-effect-like issue. Every one of us has the immediate feeling that there is a huge difference between saying, for instance, "you should respect the environment" and "we should respect the environment", or even "the environment should be respected".
The difference might lie in how such sentences are interpreted by our minds, and of course it affects compliance with the described behavior (i.e., "respect the environment").
I'm convinced that I'm no genius and that there must be a huge literature on such an effect, but I'm not skilled in these topics, so I'm calling for help. Any clues?
P.S.: I know that nudge units and behavioral intervention teams in general promote the "make it personal" recipe to increase compliance, but I wonder where such strategies come from. I'm particularly interested in understanding the differences between "you should" and "we should", that is, how grammatical phrasing (i.e., switching the person in the phrase) affects interpretation and the resulting compliance.
thanks in advance for any help
all the best,
Alessandro
Relevant answer
Answer
(1) You should do X / (2) we should do X / (3) X should be done. Note that these are deontic utterances, which are performative. They have no truth value but have variable speech-act force. That FORCE stems from the authority that backs up the speaker in context. In (1), the backing is, basically, "...because I say so!" = personal authority. In (2), there is a collective morality behind the speaker. And in (3), there is an appeal to rationality, so the speaker speaks in the name of what he thinks is best according to impersonal logic and knowledge.
  • asked a question related to Lexical Semantics
Question
6 answers
When I try to review theories about word meaning from a broad concept-related perspective, I find that only a few scholars regard word meaning as a concept, including Fodor, Bloom, and Borg. In contrast, many conceive of word meaning not as a concept but as a conceptual schema or template, including most Cognitive Linguists and Relevance Theorists.
Thus, I am wondering whether there are more scholars in the "word meaning is concept" camp. Can Plato's Idealism be included, as a seeming "label" view?
Great thanks.
Relevant answer
Answer
Hi Qiao
There are various possibilities for what these underspecified entities might be: a special kind of ‘lexical’ concept, a pro-concept, a schema or procedure or set of constraints on the kind of contentful concept they can be used to express/communicate.
  • asked a question related to Lexical Semantics
Question
2 answers
If you build your own corpus to address specific research questions, which method do you use to make sure it is saturated? I'm interested in methods because I work on digital data, and I wonder which method is more efficient and less time-consuming.
Relevant answer
Answer
In corpus design, the "saturation corpus" is associated with the concept of "representativeness", developed by Douglas Biber: <http://otipl.philol.msu.ru/media/biber930.pdf>.
Here are some other sources from the University of Lancaster that might interest you.
On methods, see for example a short paper from the University of Birmingham, and this quantitative approach to corpus representativeness: <http://www.lexytrad.es/assets/cl24_0.pdf>
  • asked a question related to Lexical Semantics
Question
12 answers
Hi, I am trying to solve the problem of an imbalanced dataset using SMOTE in text classification, while using TfidfTransformer and K-fold cross-validation. I want to solve this problem using Python code. It has actually taken me over two weeks, and I couldn't find any clear and easy way to solve it.
Do you have any suggestions about where exactly to look?
After implementing SMOTE, is it normal to get different accuracy results on the dataset?
Relevant answer
Answer
You need to fix the random seed so that you can replicate the result each time you perform the task.
HTH.
Dr. Samer Sarsam
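Beyond fixing the seed, the other common pitfall is applying SMOTE to the whole dataset before cross-validation, which leaks synthetic samples into the test folds. A minimal sketch of one way to keep the oversampling inside the folds, assuming scikit-learn and imbalanced-learn are installed; TfidfVectorizer is used here as a shorthand for CountVectorizer + TfidfTransformer, and `texts`/`labels` stand in for your own data:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import StratifiedKFold, cross_val_score
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline  # sampler-aware pipeline

# texts, labels = your documents and their (imbalanced) class labels
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("smote", SMOTE(random_state=42)),   # fixed seed -> reproducible resampling
    ("clf", MultinomialNB()),
])

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(pipeline, texts, labels, cv=cv, scoring="f1_macro")
print(scores.mean(), scores.std())

Because the sampler sits inside the pipeline, SMOTE is re-fitted on the training portion of each fold only, and the fixed random_state keeps the resampled results replicable across runs.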
  • asked a question related to Lexical Semantics
Question
7 answers
I am doing research on automatic summarization of technical articles, and I do not know what lexical knowledge can be used for this. 
Relevant answer
Answer
You might be interested in approaches to automatic summarization:
1. Androutsopoulos, Ion; Malakasiotis, Prodromos (2010): A Survey of Paraphrasing and Textual Entailment Methods. In: Journal of Artificial Intelligence Research, vol. 38, pp. 135–187.
2. Loza, Vanessa; Lahiri, Shibamouli; Mihalcea, Rada; Lai, Po-Hsiang (2014): Building a Dataset for Summarization and Keyword Extraction from Emails. In: Proceedings of the International Conference on Language Resources and Evaluation (LREC 2014).
3. Tseng, Yuen-Hsien; Lin, Chi-Jen; Lin, Yu-I (2007): Text mining techniques for patent analysis. In: Information Processing & Management 43 (5), pp. 1216–1247. DOI: 10.1016/j.ipm.2006.11.011.
  • asked a question related to Lexical Semantics
Question
7 answers
Which techniques do you recommend for quantitative term mapping? We are conducting a literature review to disambiguate a group of closely related technical words in education. Our aim is to provide a clear definition for each of them, based on mainstream use among researchers in the field. We were thinking of a quantitative analysis that would help us conduct a kind of cluster analysis of the meanings most often alluded to under each term. Can you recommend something?
Thanks!
Relevant answer
Answer
So the first step would be to 'lock down' the terms / lexicalizations. You can do a bit of exploratory reading to get to them, of course, but you should not just build up the totality of the corpus as you go; that will skew the data in some direction. Don't try to go wide by collecting many terms. Try to go deep by getting a few terms in as many different contexts as you can possibly lay your hands on. Typically you'd want your corpus to reflect the balance of contexts in the actual discourse, but often enough insights are gained from the odd low-count context, so do try to get those in, even at the expense of balance.
Then, 'lock down' the corpus size. Corpora grow to be virtual behemoths these days (up to millions and tens of millions of tokens). If you'd like to retain the option to close-read when necessary, you'd want to stay on the low end, or design for sub-corpora. Sub-corpora are extra nice because they allow for keyness analyses (texts where your terms are key are more likely to elaborate on them). So fix your size and go reach it, without changing your mind midway about what should go in and what shouldn't. Ideally, you'd want to crowd-source that part; it both speeds up the procedure and keeps you impartial. Set up a train/test split (dedicate part of your data to deriving your scheme and part to testing it on). You can also go for a cross-testing arrangement.
Perform standard collocation and collostructional analyses on your Train set, with your terms as KWIC nodes and wide enough spans on both sides. See if any interesting n-grams, including skip-grams, with discriminative power pop up (n-grams that mostly show up with one term but not the other). Run collocation analyses on the items in them too. If an n-gram strongly selects one term as a collocate in both directions and not another, well done: you've found a disambiguating feature. Add the feature to a lexicon. You'd want at least one disambiguating feature for every pair of terms. Once a scheme is in place, evaluate it against the Test set, observe where it hits and where it misses, and go back and refine as necessary.
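Purely as an illustration of the collocation step sketched above (not the answerer's own code), NLTK's collocation finders can surface candidate n-grams; `train_corpus.txt` is a hypothetical file holding the Train sub-corpus:

import nltk
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

# tokenize the Train sub-corpus (assumes the NLTK 'punkt' tokenizer data is installed)
tokens = nltk.word_tokenize(open("train_corpus.txt", encoding="utf-8").read().lower())

measures = BigramAssocMeasures()
finder = BigramCollocationFinder.from_words(tokens, window_size=5)  # wide span
finder.apply_freq_filter(3)  # ignore very rare pairs

# strongest collocations by pointwise mutual information
for pair in finder.nbest(measures.pmi, 20):
    print(pair)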
  • asked a question related to Lexical Semantics
Question
4 answers
What do you understand by the phrase "world to word"?
I used it for the title
"World to Word: Nomenclature Systems of Color and Species" https://mospace.umsystem.edu/xmlui/handle/10355/60517
to mean the practice of dividing a continuum into chunks with words.
The phrase has other uses, and I would like to know how it is used in your fields.
Relevant answer
Answer
The notions of world-to-word and word-to-world have to do with directions of fit between the intentionality (i.e. aboutness) of a mental state or a speech act and the world.
The truth of a particular belief or assertion depends on the state of the world, and if the belief or assertion is false, that is a mismatch that is remedied by changing one's belief or assertion. The words that characterize a belief or assertion must fit the state of the world.
On the other hand, if a desire or command is unfulfilled, that mismatch is remedied by making a change in the world. The world must be made to fit the words that characterize a desire or command.
  • asked a question related to Lexical Semantics
Question
5 answers
Lexico-grammatical competence as the basis of linguistic competence: its formation, components, and peculiarities in the communicative approach
Relevant answer
Answer
It is the ability to use and understand the lexical entries and grammatical structures of a language, which relates to viewing language as a system consisting of linguistic skills and components. Best
  • asked a question related to Lexical Semantics
Question
3 answers
Hi,
I'm looking for a free tool to recognize the terminology concepts in technical domains such as computer science and engineering.
Is there any available dictionary, gold standard, or tool for doing that? Why is there not much research in this direction?
Thank you,
  • asked a question related to Lexical Semantics
Question
1 answer
I am working on a text segmentation project. I need to build lexical chains from plain text using WordNet or some other lexical resource.
There are decision tree algorithms such as C4.5 that could be used to implement lexical chains, but since I am not very skilled in Python, it is hard for me to manipulate decision trees. Is there any Python package or code available for finding lexical chains?
Relevant answer
Answer
See the following article
Text Summarization Using Lexical Chains
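If you want to experiment in Python before settling on an algorithm, here is a deliberately simplified sketch of a WordNet-based chain builder (assumes NLTK with the WordNet data downloaded; real lexical-chain algorithms, such as Barzilay & Elhadad's, also score chain strength and disambiguate senses):

from nltk.corpus import wordnet as wn

def related(word_a, word_b):
    """True if two nouns share a synset or stand in a direct hypernym relation."""
    for sa in wn.synsets(word_a, pos=wn.NOUN):
        for sb in wn.synsets(word_b, pos=wn.NOUN):
            if sa == sb or sb in sa.hypernyms() or sa in sb.hypernyms():
                return True
    return False

def build_chains(nouns):
    """Greedily attach each noun to the first chain containing a related word."""
    chains = []
    for noun in nouns:
        for chain in chains:
            if any(related(noun, member) for member in chain):
                chain.append(noun)
                break
        else:
            chains.append([noun])
    return chains

print(build_chains(["dog", "canine", "puppy", "tree", "oak"]))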
  • asked a question related to Lexical Semantics
Question
5 answers
Is it possible to obtain an index between 0 and 1? I read that this index should always be negative when calculated with Gramulator (Treffers-Daller 2013), but I have 2% positive scores in a data sample...
Relevant answer
Answer
Does anyone use the VOCD-D command for lexical diversity?
  • asked a question related to Lexical Semantics
Question
5 answers
I want to know what treatment, if any, figurative meaning is given within Conceptual Semantics and the Parallel Architecture: what it is understood as, how it is formalized (if in any way), and what implications it has, e.g. for the interfaces between semantics and the rest of the grammar (syntax and phonology, but also morphology and the lexicon).
Relevant answer
Answer
Dear EWG,
On metaphor, check the following:
1. Goatly, Andrew (2007). Washing the Brain: Metaphor and Hidden Ideology. John Benjamins.
2. Ortony, Andrew (ed.) (1993). Metaphor and Thought. 2nd ed. Cambridge: Cambridge University Press.
Good luck.
  • asked a question related to Lexical Semantics
Question
3 answers
I am trying to measure angiogenesis, particularly the area covered and the morphology.
Relevant answer
Answer
Quantitation of angiogenesis and antiangiogenesis is carried out by calculating the total vessel length, the vessel branching points and/or the vessel density over defined image areas. The angiogenic effects can be measured by counting the number of blood vessels in a given area using a stereomicroscope.
Quantitation of angiogenesis may also be performed by observing the growth of chorioallantoic vessels through a fenestration in the eggshell (Ausprunk et al., 1975). Auerbach et al. (1974) introduced a modification by transferring the chick embryos with their foetal membranes to Petri dishes after 3 days of incubation. As soon as the chick embryo and extraembryonic membranes are in the Petri dish, the CAM appears on the surface and spreads as an even membrane over the whole dish.
Kindly go through published papers related to the quantification of angiogenesis; a few are listed below for your reference.
1. Staton, C.A., et al. (2004). Current methods for assaying angiogenesis in vitro and in vivo. PubMed Central (PMC).
2. Nowak-Sliwinska, P., et al. (2014). The chicken chorioallantoic membrane model in biology, medicine ... PubMed Central (PMC).
3. Lokman, N.A., et al. (2012). Chick Chorioallantoic Membrane (CAM) Assay as an In Vivo Model to ... PubMed Central (PMC).
4. Deryugina, E.I., et al. (2008). Chick Embryo Chorioallantoic Membrane Models to ... PubMed Central (PMC).
5. Evaluating Compounds Affecting Angiogenesis (PDF). SRI International.
6. Miller, W.J., et al. (2004). A novel technique for quantifying changes in vascular density ...
7. Bahramsoltani, M., et al. Quantitation of Angiogenesis and Antiangiogenesis In Vivo, Ex Vivo ... (PDF).
8. Arnold, S., et al. (2011). Lab on a Chick: A Novel In Vivo Angiogenesis Assay. Cambridge Core.
9. Zudaire, Enrique; Cuttitta, Frank (2012). (Book result on CAM angiogenesis assays and imaging.)
10. AlMalki, W.H., et al. (2014). Assessment methods for angiogenesis and current approaches for its ...
  • asked a question related to Lexical Semantics
Question
4 answers
Hello, can anyone recommend a straightforward and easily administrable task capturing executive inhibition of verbal (lexical-semantic and/or associative) stimuli (e.g. inhibiting free associations, or a kind of rapid lexical/semantic/associative go/no-go paradigm)? The task should generally reflect the control/filtering component of complex semantic retrieval during idea generation.
Thank you for any reference,
Sincerely,
Martin
Relevant answer
Answer
THE HAYLING TASK.
In this task you must, in the first part, complete a sentence with the expected word; the time is recorded for each item. In the second part (inhibition), you must complete the sentence with another word that bears no relation to the given sentence. The number of inhibition errors as well as the response time for each item is counted.
You can also use a Stroop paradigm (inhibition of reading; see the attached file).
  • asked a question related to Lexical Semantics
Question
3 answers
As shown in the attached pictures, I get different results from the POS tagger and the parser, even though they come from the same producer (Stanford).
For example, look at the POS tag for the word "على" in the Statment3 file:
in tagger the result is CD
in Parser the result is NNP
I am working on research about statistical Arabic grammar analysis, applying Naive Bayes classification and then optimizing the results using genetic algorithms.
I am searching for an efficient Arabic NLP tool that gives me the features that specify the grammar analysis (E'arab), but I really haven't found one. If anyone has an idea or an interest in this research field, please share your experience and knowledge.
Relevant answer
Answer
The really strange thing is that the word you are discussing is neither a proper noun (NNP) nor a cardinal number (CD). It's a closed class preposition, which neither the parser nor the tagger should be getting wrong.
Actually, I believe you have a different problem: From your image I see the model you have selected for the parser is arabicFactoredBuckwalter.ser.gz. This model is probably expecting Arabic in Buckwalter transliteration using Latin characters, see:
The input you are giving to the parser is in actual Arabic characters, which probably never appear in the data it is trained on. I think you need to convert your data to romanized Buckwalter style transliteration before you can use the parser. Alternatively I think there is also a model for native Arabic characters, called arabicFactored.ser.gz (see this guide: http://nlp.stanford.edu/software/parser-arabic-faq.shtml).
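As a small, purely illustrative sketch of that transliteration step (not part of the Stanford toolkit), a character map is all that is needed; the table below is deliberately partial, and a real converter would cover the full Arabic character inventory:

# Partial Buckwalter transliteration table (illustration only)
BUCKWALTER = {
    "\u0627": "A",  # alef
    "\u0628": "b",  # beh
    "\u0639": "E",  # ain
    "\u0644": "l",  # lam
    "\u0649": "Y",  # alef maksura
    "\u064a": "y",  # yeh
}

def to_buckwalter(text):
    """Map each Arabic character to its Buckwalter letter, leaving unknowns as-is."""
    return "".join(BUCKWALTER.get(ch, ch) for ch in text)

print(to_buckwalter("على"))  # -> "ElY"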
  • asked a question related to Lexical Semantics
Question
3 answers
The language in eighteenth century Virgin Islands Dutch Creole manuscripts at first sight differs from the spoken variety of this language which was recorded in the twentieth century. A closer look, using Bell's  (1984) Audience Design Model, shows that the influence of the Referee blurs the situation. Several words and constructions appear to be word for word translations from European religious source texts or were included to educate the audience. The texts also look more or less 'broken Dutch'-like because of etymological orthography, ignoring Creole pronunciation. I consider this to be the influence of the referee/the tradition, and not as influence of an author connecting best to his audience of Creole speakers. Do you agree with me? 
Relevant answer
Answer
Sounds plausible to me.
I would try to differentiate between "the tradition" and "the referee" in the way that the first criterion is impersonal, implicit, most likely unconscious, whereas the referee's influence can be very much intentional and reflected (e.g. if they state why they opt for a specific spelling or grammatical construction rather than another one; reflections like these can be found in quite a number of texts in historical creolistics with both decisions in favour of more European/standard-like structures and in favour of more basilectal ones.)
Then again, I wouldn't dismiss the idea of connecting to the audience altogether in those specific contexts where we know that slaves had been alphabetised in Dutch before and a more Dutch-like structure might build upon this previous knowledge.
Definitely a very fascinating question and an interesting approach!
  • asked a question related to Lexical Semantics
Question
45 answers
Do the formal languages of logic share so many properties with natural languages that it would be nonsense to separate them in more advanced investigations or, on the contrary, are formal languages a sort of ‘crystalline form’ of natural languages so that any further logical investigation into their structure is useless? On the other hand, is it true that humans think in natural languages or rather in a kind of internal ‘language’ (code)? In either of these cases, is it possible to model the processing of natural language information using formal languages or is such modelling useless and we should instead wait until the plausible internal ‘language’ (code) is confirmed and its nature revealed?
The above questions concern therefore the following possibly triangular relationship: (1) formal (symbolic) language vs. natural language, (2) natural language vs. internal ‘language’ (code) and (3) internal ‘language’ (code) vs. formal (symbolic) language. There are different opinions regarding these questions. Let me quote three of them: (1) for some linguists, for whom “language is thought”, there should probably be no room for the hypothesis of two different languages such as the internal ‘language’ (code) and the natural language, (2) for some logicians, natural languages are, in fact, “as formal languages”, (3) for some neurologists, there should exist a “code” in the human brain but we do not yet know what its nature is.
Relevant answer
Answer
To Velina (if I may): Imperatives are used to coordinate actions. The "generation of meaning" is a hypothetical process that not every theory must presuppose.
To Andre (if I may): Of course the receiver's role must be taken into account. The reason why this role has been omitted is the fact that it belongs to the "effect-part" in the basic formula of logical pragmatics: within the communication type C, normally if the sender S emits message P, then the effect E occurs on the side of receiver R. In short, if C, then [S:"P"]ER, where the formula [Cause]Effect is to be understood as a formula of dynamic logic. What is the scope of logical research in this context? Logic usually studies the syntax and semantics of a language in which a message P is formulated. Logical pragmatics studies language in use.
Regarding the claim "Obviously, in order to create artificially such a powerful (having the expressivity of natural language) device we will need to take many cognitive and psychological aspects into consideration": it would be useful to distinguish between the further development of the Leibnizian concept-script and the extension of logical research to pragmatics. In the latter case, not only psychological states but also the normative or social dimension must be taken into account. Significant contributions have been made to the development of logical pragmatics: the illocutionary logic of J. Searle and D. Vanderveken, the dynamic logic of J. van Benthem, and the normative pragmatics of R. Brandom.
  • asked a question related to Lexical Semantics
Question
1 answer
Does anyone know of a study that includes a full text sample--minimum 800 words, but the longer the better--with the lexical chains marked up? I am looking to demonstrate a visual method of identifying lexical chains, and would like to compare the analysis that can be done using the visual method against a manually (or computationally) completed analysis. If there is a gold standard, that would be great, but otherwise, any full-text example will do! Thanks in advance for your help.
Relevant answer
  • asked a question related to Lexical Semantics
Question
16 answers
Can you recommend sources about cognitive linguistics?
Relevant answer
Answer
I find the following very helpful: Geeraerts, Dirk (ed.). 2008. Cognitive Linguistics: Basic Readings. Berlin: de Gruyter, Mouton. (=Cognitive Linguistics Research ; 34)
From the publisher's website:
"Cognitive Linguistics: Basic Readings brings together twelve foundational articles, each of which introduces one of the basic concepts of Cognitive Linguistics, like conceptual metaphor, image schemas, mental spaces, construction grammar, prototypicality and radial sets. The collection features the founding fathers of Cognitive Linguistics: George Lakoff, Ron Langacker, Len Talmy, Gilles Fauconnier, and Charles Fillmore, together with some of the most influential younger scholars. By its choice of seminal papers and leading authors, Basic Readings is specifically suited for an introductory course in Cognitive Linguistics. This is further supported by a general introduction to the theory and, specifically, the practice of Cognitive Linguistics and by trajectories for further reading that start out from the individual chapters."
  • asked a question related to Lexical Semantics
Question
3 answers
Bantu languages
Relevant answer
Answer
 You might want to look at Michael R. Marlo's work.  He is at the University of Missouri-Columbia in the English Department and also on the Research Gate site.
  • asked a question related to Lexical Semantics
Question
12 answers
I am working on a project that requires me to find a semantic similarity index between documents. I currently use LSA, but that causes scalability issues, as I need to run the LSA algorithm on all documents every time a user uploads a new document. That is not a very efficient method. Are there any libraries available that let one compare two documents against a corpus to find the similarity index? I came across the Gensim package, but I'm not quite sure how to use it to implement LSA between two documents.
PS. I work on Python so if any libraries are available in Python let me know.
Relevant answer
Answer
Hi,
In general - the first method to test as a baseline is document similarity based on the vector space model - as pointed by Michael Gubanov.
The idea is that you represent documents as vectors of features, and compare documents by measuring the distance between these features.  There are multiple ways to compute features that capture the semantics of documents - but one method that is surprisingly effective is to compute the tf*idf encoding of the documents.  
The following Gensim classes allow you to experiment with this in an easy manner: https://radimrehurek.com/gensim/similarities/docsim.html and a way to "package" these in an easy to consume manner: http://radimrehurek.com/gensim/simserver.html
To construct the tf*idf representation of documents given a collection of documents, I would look at the sklearn library - and start at this tutorial which specifically covers this topic: https://github.com/amueller/scipy_2015_sklearn_tutorial
Besides the tf*idf which should definitely remain your baseline and first stop - you can try and experiment with more advanced semantic representations of the documents.  In this space - two approaches are possible: topic models and word embeddings.
Word embeddings are a way to capture similarity across words based on the contexts in which they appear.  The most well known word embedding model is word2vec.
Word2vec will perform word similarity in a useful manner - but to turn the word-level similarity measure to document-similarity requires further adaptation.
The excellent Radim Rehurek has published a Gensim-based tutorial on adapting the doc2vec model in the following articles with Tim Emerik:
This is an implementation of the following paper's method:
Quoc Le & Tomáš Mikolov: “Distributed Representations of Sentences and Documents”
We have experimented with doc2vec in our Lab at BGU and found it "surprising" in many tasks.  It is difficult to tune it to perform higher level tasks.
We are now experimenting with the following more recent method:
From Word Embeddings To Document Distances
Kusner et al, ICML 2015
The Python notebook of the code (with Sklearn) is available:
This is also a document similarity measure based on word2vec.
For topic models - the sequence of tutorials posted by the decidedly excellent Radim Rehurek can be followed step by step and provide an effective method to apply the method:
The same similarity interface that can be used over a vector space representation or tf*idf representation can also be applied to the output of the topic model transformation.
Each of the three approaches capture "semantic similarity" in a different way - vector space, word embedding and topic modelling. For practical applications - you need to experiment and compare their strengths and weaknesses.
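To make the tf*idf baseline concrete, here is a minimal Gensim sketch (an illustration, not code from the answer above); once the index is built, a newly uploaded document only has to be transformed and queried, so nothing like a full LSA re-run over the whole collection is needed:

from gensim import corpora, models, similarities

documents = ["the cat sat on the mat",
             "a dog chased the cat",
             "stock markets fell sharply today"]

texts = [doc.lower().split() for doc in documents]
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]

tfidf = models.TfidfModel(corpus)                       # fit tf*idf weights
index = similarities.MatrixSimilarity(tfidf[corpus])    # build the similarity index

# compare a new document against the whole collection
query_bow = dictionary.doc2bow("the cat chased a dog".lower().split())
sims = index[tfidf[query_bow]]                          # cosine similarities
print(list(enumerate(sims)))

The same index interface works unchanged over an LSI or LDA transformation of the corpus, which is how the topic-model variants mentioned above plug in.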
  • asked a question related to Lexical Semantics
Question
9 answers
Investigations of word semantics focusing on forms of word formation and their functions in memory lead to theories of lexicon organization.
Relevant answer
Answer
A nice introduction is:
Words in the Mind: An Introduction to the Mental Lexicon 4th Edition
by Jean Aitchison (Author)
Words in the Mind is all about words: how we learn them, remember them, understand them, and find the precise ones we wish to use. It also addresses the structure and content of the human word-store - the ‘mental lexicon’ - with particular reference to the spoken language of native English speakers. Great strides have been made in our understanding of the lexicon since the first three editions of Words in the Mind were published, and it has developed into a major interest of study among linguists, psychologists, sociologists, and those who teach English as a second language.
In addition to numerous updates and revisions, this latest edition features a wealth of new material, including an all-new chapter focusing exclusively on the brain and language. Enhanced coverage is also provided on lexical corpora - computerized databases - and on lexical change of meaning. Many of the notes and suggestions for further reading are also expanded and updated. Written by a true master of making scholarly concepts accessible, the fourth edition of Words in the Mind remains a rich and revealing resource for students and non-specialists alike, presenting the latest insights into the complex relationship between language, words, and the human mind.
About the Author
Jean Aitchison is Emeritus Rupert Murdoch Professor of Language and Communication at the University of Oxford. She is the author of numerous books on language, including Language Change: Progress or Decay? (Third Edition, 2001), The Word Weavers: Newshounds and Wordsmiths (2007), Aitchison's Linguistics (Seventh Edition, 2010), and The Articulate Mammal (Routledge Classics Edition, 2011).
  • asked a question related to Lexical Semantics
Question
2 answers
In their paper, Coltheart et al. (2001) state that the lexical route comprises 2 paths: lexical nonsemantic and lexical semantic. They say that the lexical semantic route has not yet been implemented.
Does anyone know if it has been implemented as of 2015?
Relevant answer
Answer
I'm pretty sure that Coltheart et al have never implemented the lexical-semantic route. The Coltheart et al 2001 paper is, to my knowledge, the most recent paper on this.
Probably the most developed computational model of semantic processes in reading is that of Harm & Seidenberg (2004, Psychological Review). But you probably already know about that. Plus, it's based on the PDP triangle framework and focuses on reading for meaning (not reading aloud), so it's probably not what you're looking for.
If PDP models are of interest, then see Woollams et al. (2007, Psychological Review). It implements a degraded "semantic" reading route within a PDP framework.
  • asked a question related to Lexical Semantics
Question
1 answer
I am working on the Article Choice Parameter Hypothesis proposed by Ionin, which theorizes that there are two article settings that L2 learners can have access to. The Samoan language exemplifies Setting I, which distinguishes articles based on specificity, while English exemplifies Setting II, distinguishing articles based on definiteness. If the hypothesis is true, Samoan must have se and le, denoting non-specific DPs and specific DPs respectively, differentiated by specificity. In other words, the article se must introduce both non-specific definite and non-specific indefinite DPs. I've searched through the literature yet can't find any evidence.
Relevant answer
Answer
Dear Quyen,
     The hypothesis is a very interesting one, but I haven't heard of it before. Could you give a reference? I'm not familiar with Samoan, but I work with a Native American language that has determiners which distinguish human vs. non-human. Would that fit his hypothesis?
    --Rudy
  • asked a question related to Lexical Semantics
Question
6 answers
Take any word, for ex.:
a branch of a plant, a branch of a nerve/blood vessel, a branch of an organization/family, a branch of a river, a branch of a road...
How does this correlate with the language-speech division?
Relevant answer
Answer
I would largely agree with Ulrike, that the distinction between "homonym" and "polyseme" is an artificial one, but the distinction is most often one made, not by linguists, but by dictionary makers (or their editorial staff, most of whom have no special training in linguistics), often based on purely economic concerns -- how many pages should be in a dictionary  for the publisher to make a profit. The fewer headwords, the more definitions that can be packed into a space. There is thus an economic incentive to have as few separate "homonyms" as possible. The criterion of "context of use", as in the bank - bank example, may often be employed. At other times, historical alterations in spelling, as in flour - flower may impel a decision. But I have occasionally been surprised to realize that a term I learned in one context was actually a metaphorical extension of a term I had known all along from another context.  Speakers of a language are for the most part totally unaware of the etymology of words, so this could only be an equally artificial criterion invented by linguists who are familiar with the history of the language.
  --Rudy
  • asked a question related to Lexical Semantics
Question
3 answers
Dear all,
I am looking for a collaborator with expertise in running psycholinguistic experiments in speech production, to undertake research on competing inflectional morphology, specifically when two or three inflectional forms consistent with the sentence context compete to be selected in speech production. I hold a PhD in General Linguistics and work as an assistant professor at Persian Gulf University (Iran), in the English Language and Literature Department, where I teach linguistics courses. I have so far focused my studies on lexical semantics, and at this phase I am interested in exploring lexical semantics through the lens of a psycholinguist, with a focus on lexical processing and access. Further details would be provided to anyone interested on demand. I am looking forward to hearing from an expert willing to help as a mentor and joint author.
Relevant answer
Answer
Dear Fatemeh 
We are conducting research in the area of language and literacy in bilinguals in my lab. We study diverse language combinations.
Best
Alexandra
  • asked a question related to Lexical Semantics
Question
5 answers
Several methods to induce the polarity of words over a domain have been proposed over the years. The majority fall into two classes: dictionary-based or corpus-based. But when you put all these methods together, you realize that even if you have the best technique for inducing the polarity values of words, the classification process will still make errors, because words in the same domain also have different connotations according to the context. So why not focus on context-dependent words directly instead of domain-dependent ones, or do we still need better methods to induce the polarity of words according to the domain?
Relevant answer
Answer
The answer is that the context is more dynamic than the domain, so it is more difficult to collect or identify context words (sentiment keywords) than domain words (sentiment keywords). Your proposal is a challenge in which you would need to apply some kind of semantic discrimination, in an effective way, to address this task. Maybe using a dynamic cluster of keywords for discriminating domain words could be a suitable idea. Deep learning is also a novel way of addressing dynamic contexts.
  • asked a question related to Lexical Semantics
Question
8 answers
Orthographic neighborhood is a strong constraint in word naming and lexical acceptability. Coltheart's N and OLD20 are numerical values expressing this neighborhood distance, implemented in WordGen and Wuggy, respectively. They however do not give the option to directly enter a predefined real word together with a corresponding pseudo-word to measure this distance. Do you know any software which could help us in this respect?
Relevant answer
Answer
R has a package just for this: "vwr".  (And it's all free!)
Once installed, there's a routine that does exactly what you want:
levenshtein.distance("apple", "paple")
returns 2.
You can also give it long lists of words, and there are routines for OLD20, data for different languages, etc.
Update: I compared ELP's OLD data (on a set of words I'm studying) with the results of vwr's old20() and found a correlation of .96***, so it appears both sources agree. Neither ELP nor Wuggy gives me OLD measures on all my words and non-words (thus lots of missing values), but vwr does.
Also, Wuggy and vwr were both written by Emmanuel Keuleers <emmanuel.keuleers@ugent.be>
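If you would rather stay in Python, the underlying measures are easy to compute directly; a rough sketch (not the vwr package itself), where `lexicon` stands in for your own reference word list:

def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def old20(word, lexicon):
    """Mean Levenshtein distance to the 20 closest lexicon entries (OLD20)."""
    distances = sorted(levenshtein(word, w) for w in lexicon if w != word)
    return sum(distances[:20]) / 20

print(levenshtein("apple", "paple"))  # 2, matching the vwr example above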
  • asked a question related to Lexical Semantics
Question
14 answers
According to semantic field theory, lexical sets can be arranged in groups of words whose meanings are closely interrelated. Could we consider interjections as a semantic field?
Thanks for your comments and answers.
Relevant answer
Answer
I don't think interjections can be put together into one same semantic field called "Highly charged emotions" since the component 'emotion' is not a part of their meaning.
Belonging to a semantic field presupposes that the "members" of that field share a common semantic component. This is true of course if we have the same definition of "Semantic field". Mine comes from Mel'cuk, Semantics, From Meaning to Text, 2013, John Benjamins, vol. 2, p. 318.
I explain myself with an example. The meaning of the interjection Aïe ! in French is 'I signal that I have pain', that of the interjection Wow! in English is approximately 'I signal that I admire (something)'. I can't see any trace of the meaning 'emotion' here.
And even if a definition including 'emotion' could be found for fr. Aïe ! (a Wierzbicka style definition : 'I feel something bad. Because of that, something happens in my body and I say something (...)'), nothing of the sort happens with Fr. Ah ? meaning approximately 'I signal that I am astonished at something that has just been said', or with En. Hi! meaning 'I signal that I salute you'.
To return to the original question "Can we consider Interjections as a semantic field?", I would answer "No" for the following reason: being an interjection pertains to syntax while belonging to a semantic field pertains to semantics. Those are two different fields, even if related.
In a very strained manner, interjections could possibly be assembled into a semantic field called 'signaled', because their meanings presumably all share that semantic component. But it is such a general field that one can hardly see what interest there could be in doing so.
  • asked a question related to Lexical Semantics
Question
3 answers
I am in search of a tool capable of providing meanings to idioms and phrases. I need the meaning as well as the similar terms which can be used instead of those phrases. For example: 'on Cloud nine' : in state of great happiness, very happy and so on.
Relevant answer
Answer
Can I get a corpus or something that I can integrate into my code?
  • asked a question related to Lexical Semantics
Question
3 answers
Most scientific texts give arguments about a subject; that is, they give logical, well-formed sentences.
I want to find out whether the arguments of a text are really arguments about the subject of the text, or whether the writing is just text that has no relation to the subject, which in some cases makes the reading difficult to understand.
Relevant answer
Answer
The field that studies how to find automatically the arguments in a text is called 'Argument mining'. It's an exciting subfield of NLP, but very much in its infancy.
I suggest you check the work of these people:
Chris Reed, Simone Teufel, Manfred Stede, Francine Moens, Patrick St. Dizier
A relevant journal is 'Argument and Computation'.
One important issue is that argument mining presupposes that you have a theory of what an argument is. This brings you to the field of Argumentation Theory. Your theory also needs to be quite sophisticated if you want to see whether arguments are really relevant to the issue being discussed as you seem to hint in your question. This means that you need a theory of argumentative relevance and of argument quality.
On Argumentation Theory I suggest reading the works of
Frans H. van Eemeren, Douglas Walton, James Freeman, Eddo Rigotti, Mark Vorobej.
  • asked a question related to Lexical Semantics
Question
1 answer
I am interested in standardization bodies at a European level.
Relevant answer
Answer
I am not completely sure what your question is, but if you are interested in standardization efforts in semantics, take a look at this workshop series: http://sigsem.uvt.nl/isa9/
  • asked a question related to Lexical Semantics
Question
3 answers
For using the triplet in SPARQL query.
Relevant answer
Answer
You may also take a look at ReVerb, which extracts relations that usually represent SVO triplets.
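As an illustration of how an extracted subject-verb-object triple can then be stored and queried with SPARQL (a sketch assuming Python's rdflib; the example triple and the http://example.org/ namespace are invented for the demonstration):

from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")
g = Graph()
# a hypothetical SVO triple as an OpenIE tool such as ReVerb might extract it
g.add((EX.Barack_Obama, EX.was_born_in, EX.Hawaii))

results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?place WHERE { ex:Barack_Obama ex:was_born_in ?place }
""")
for row in results:
    print(row.place)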
  • asked a question related to Lexical Semantics
Question
1 answer
I am looking for help in understanding the values of the concept "Weekend" in the lexical resource SenticNet
Weekend:
{'sensitivity': 0.0, 'attention': 0.734, 'pleasantness': 0.0, 'aptitude': 0.0}
polarity: 0.245
The concept is also linked to the concept "sleep": http://sentic.net/api/en/concept/weekend/
My questions:
a) I would expect "Weekend" to have some positive value for "Pleasantness". Why does it return zero (0)?
b) the fact that "Weekend" is linked to "Sleep" is intuitive but I would expect more diversity here. Why is "Sleep" the only linked concept?
Relevant answer
Answer
The answer is very simple: NOISE.
Every KB has this problem, especially common-sense KBs (see the noise in ConceptNet).
(Answer provided by the authors of SenticNet.)
  • asked a question related to Lexical Semantics
Question
3 answers
I am working on an ontology of language to build a lexical resource. I want to know what has already been done in this domain.
Relevant answer
Answer
You might also be interested in BabelNet:
  • asked a question related to Lexical Semantics
Question
4 answers
Does anyone know of a source where I can download a precompiled distributional thesaurus for German or of an easy to use tool to generate one myself from raw text?
Relevant answer
Answer
Yes, you can find a German thesaurus from news corpora here:
The program to construct it is also there: http://sourceforge.net/projects/jobimtext/
Let me know if you need something bigger or different,
Chris
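If you end up generating a distributional thesaurus yourself from raw text, nearest neighbours in a word-embedding space are one simple approximation (a sketch assuming gensim 4.x; `corpus.txt` is a placeholder for your own German corpus with one tokenized sentence per line, and "Haus" is just an example query word):

from gensim.models import Word2Vec

# one whitespace-tokenized sentence per line
sentences = [line.split() for line in open("corpus.txt", encoding="utf-8")]
model = Word2Vec(sentences, vector_size=300, window=5, min_count=5, workers=4)

# the nearest neighbours serve as the thesaurus entry for a word
print(model.wv.most_similar("Haus", topn=10))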
  • asked a question related to Lexical Semantics
Question
8 answers
I am going to write a paper about lexical awareness and how it can be affected by differences in context, time, and users.
Could you please advise me on some references you think might be useful and some ideas I could write about?
Relevant answer
Answer
You could also look into synchronic and diachronic linguistics: how words undergo lexical change through the ages, what they meant in the past and what they mean now. Just to give you a clue:
Some words lose their everyday meaning because the realm to which they belong has disappeared, as with the word "hauberk" (a piece of armour covering the neck).
For some other words, the realm remains but the word itself is replaced, as with the French word "gendarmerie" in Persian, which was later replaced by the English word "police"; there are also other words for conveying the meaning.
Some words acquire new meanings without losing the old meaning, as with "armour" (metal coverings that protected the body, the old meaning) and "armour" (hard covering that protects a vehicle, and also such vehicles themselves).
Some words acquire a new meaning and lose their old meaning, as with "computer" (someone who made calculations) and "computer" (a machine that carries out complex operations).
It might provide some food for thought to write about semantic change. In my MA dissertation, I wrote about how loanwords change in meaning when they move from Western European languages into Russian. I identified 13 types of semantic change, including:
1. Words loaned as such: "computer", "broker", etc.
2. Words with an expansion in meaning: "combine" (a harvesting machine) vs. "combine" (in Russian, any complex machine that does the work of several other machines).
3. Words with a concretization in meaning: "meeting" (almost any gathering of people) vs. "meeting" (in Russian, a mass gathering over political or burning issues of the day).
4. Words that change their meaning entirely: German "Werkstatt" (workshop) became Russian "verstak" (workbench).
And so on.
  • asked a question related to Lexical Semantics
Question
1 answer
No doubt, the spatial character of count nouns and perfective verbs stems from boundary detection. But, the perceptual process responsible for distinguishing count and mass nouns, as well as perfective and imperfective verbs, is not spatial in the (top-down) sense of being near or far away, as Langacker suggests, but spatial perception in the sense of spatial versus temporal presence.
Relevant answer
Answer
I would suggest that the issue is a bit more complex. A closer examination of construal level theory (CLT) indicates that the issues are not as discrete as they may seem initially. Langacker (2012) and several scholars (e.g., Alony, 2006; Fauconnier & Turner, 2008; Liberman et al., 2007) have recognized the cognitive interrelation of space, time, social distance, and degree of probability. Indeed, we see discussions of this interrelation as early as Lakoff & Johnson (1980). In these accounts, there is no difference with regard to cognitive processing across the four dimensions. Bar-Anan, Liberman, Trope, & Algom (2007) used a modified Stroop test as a way of confirmation.
  • asked a question related to Lexical Semantics
Question
9 answers
Semantics, on its own, remains helpless to understand meaning. Semantic theories are also debatable. So how can one devise research on purely semantic grounds? Fundamental questions of semantics are still unsolved. In a speech community, there are dissimilarities among speakers' idiolects and lexicons. If someone conducts research and arrives at a radical theory, it will still remain impractical for another context, interestingly, in the same community. It is a circular path where the end point is actually the point of another beginning.
Relevant answer
Answer
Research on specific types of communication, such as advertisements, is usually framed as an application of the theories that have been developed to describe language comprehension in general.
If you are not familiar with research on semantic processing of language, I would recommend starting with the sections of a cognitive psychology textbook, or a psychology of language textbook. You may also find useful information about research on the effects of advertisements in social psychology or social cognition textbooks.
These will include references to the primary literature that the textbook discussion has been developed from. One major issue with investigating television advertisements is that most research focuses on texts and written information, or spoken language. In contrast, television advertisements present complex visual and auditory stimuli in a manner that is difficult to study because the impact of the different types of information is hard to separate and harder to reintegrate within the different models that have been developed to describe the types of information presented.
I think that, in general, with information as complex as television commercials you will find that much research is focused on analyzing the memory for specific elements of the advertisements at different times after exposure. You'll also find research that focuses on decision making after viewing advertisements with differences among certain components of the advertisements as the dependent variables.
One thing to keep in mind is that when it comes to research on the impact of advertisements, research conducted by cognitive scientists and social psychologists in academic journals may represent only part of what is known about these types of information. It is likely that many large companies, especially those that have existed for as long or longer than scientific psychology have developed their own research and have kept it within the company as trade secrets rather than publishing the results.
  • asked a question related to Lexical Semantics
Question
7 answers
If we have a 2-word noun compound N1N2, then N2 is the head noun. Is this also applicable to 3-word and higher-order noun compounds? Is it always the case that N3 is the head noun in an N1N2N3 noun compound?
Relevant answer
Answer
No, the bracketing does not affect the head noun. I think it should work for 3-word NCs as well: if we try to reduce the 3-word NC to a 2-word one by considering the bracketing, it is always N3 that turns out to be the head.
((N1N2)N3): the head of N1N2 is N2 => (N2N3): the head is N3
(N1(N2N3)): the head of N2N3 is N3 => (N1N3): the head is N3
So N3 always turns out to be the head. That is probably the explanation.
  • asked a question related to Lexical Semantics
Question
2 answers
Or a C++ API for MSA NLP?
Relevant answer
Answer
You can check the Stanford tools (http://www-nlp.stanford.edu/projects/arabic.shtml), which are built in Java; you can use them for the morphological and syntactic levels.