Construction Grammar - Science topic
Questions related to Construction Grammar
It's all in the question, really. This topic must have been raised and discussed before, but I can't seem to find a good reference just yet. I'm writing a paper in which I make the case for a distinct construction (based, among other arguments, on the fact that its frequency has been increasing significantly over the years). But the still (arguably) low raw figures give the reviewers pause. I argue that the numbers in question are *not* low (especially because they now represent 17.5% of the uses of the main lexeme at the heart of that construction). But I was hoping there would be references that discuss precisely this issue, and especially the point that 'low' figures are still relevant (i.e. 'frequent enough') if they constitute a significant portion of an item's distribution. Any idea? Thanks!
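Not a reference, but one way to make the argument quantitative: a confidence interval shows whether a proportion like 17.5% is reliably non-negligible given the sample size. A minimal sketch, using invented counts (70 constructional uses out of 400 total uses of the lexeme) and a standard Wilson score interval:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - margin, centre + margin

# Hypothetical counts: 70 constructional uses out of 400 uses of the lexeme.
low, high = wilson_interval(70, 400)
print(f"observed 17.5%, 95% CI: [{low:.3f}, {high:.3f}]")
```

Even if the raw count looks 'low', an interval whose lower bound stays well above zero supports the claim that the construction accounts for a stable, non-trivial share of the lexeme's distribution.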
Dear venerable professors and research scholars,
I would welcome your critical comments, valuable opinions, scientific facts and thoughts, and supportive discussion on how structural grammar and IC analysis can be justified within recent pedagogical and enhancement trends in EIP for adult EFL learners.
I shall be sincerely thankful for your kind participation.
I am wondering whether someone has actually built an (adaptive) language-learning software system based on Brian MacWhinney's cue competition model.
Hi fellow researchers in cognitive linguistics. I am currently finishing my book on the teaching of phrasal verbs within an Applied Cognitive Construction Grammar framework. In order to reach the widest audience possible, I would like to translate the book's one-page preface into several languages, starting with Chinese (Mandarin, Cantonese) and Russian. If anyone is interested, please download the attached file (other languages are also welcome). It goes without saying that the sources of the selected translations will be credited in the book (plus a big "thanks" in the acknowledgements section). In addition, I will send a free interactive PDF of the book to those who generously collaborate with me on this task. Thanks to those interested.
This question comes from a simple observation that really puzzles me.
Constructions (in CxG) are defined as form-function pairings. My focus here is not so much on the function of constructions but rather on their form. If we take the ditransitive construction (as in I sent her a letter), for instance, it is often described as having the form in (1). However, in the literature, you also often find the form of the ditransitive construction discussed as in (2), as the 'double-object' construction.
(1) NP V NP NP
(2) SUBJ V OBJ OBJ2
The problem for me is that in (2), the 'form' of the ditransitive construction is not described in terms of syntactic properties (such as ‘NP’), but in terms of functional properties (an object is a function, not a form). So my question is simple: why use the description in (2), a semantic/functional description, instead of the description in (1) to talk about the formal properties of the ditransitive construction? Is there a particular purpose for using one instead of the other? Or is it simply because (1) might fail to properly differentiate the ditransitive construction from other syntactic patterns? (e.g. They elected him president, also an <NP V NP NP> pattern, yet supposedly instantiating a different (resultative?) construction) And in the latter case, is that not a problem for the theory?
These questions may have to do with the syntax/semantics interface, but should CxG therefore not address them more explicitly?
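One way to see why (1) underdetermines the construction is to treat it as a bare matching pattern: a purely categorial form cannot tell the ditransitive apart from the complex-transitive. A toy sketch (the tagged "parses" are hand-invented for illustration, not output of a real tagger):

```python
# A purely categorial pattern such as <NP V NP NP> cannot by itself
# distinguish the ditransitive from other constructions.
PATTERN = ["NP", "V", "NP", "NP"]

# Hand-tagged toy parses (invented for illustration).
ditransitive = [("I", "NP"), ("sent", "V"), ("her", "NP"), ("a letter", "NP")]
complex_transitive = [("They", "NP"), ("elected", "V"), ("him", "NP"), ("president", "NP")]

def matches(pattern, parse):
    """True if the parse's category sequence equals the pattern."""
    return [tag for _, tag in parse] == pattern

print(matches(PATTERN, ditransitive))        # True
print(matches(PATTERN, complex_transitive))  # True: same categorial form
```

Both sentences instantiate the same <NP V NP NP> string of categories, which is presumably why descriptions like (2) bring in relational labels (SUBJ, OBJ, OBJ2) to individuate the construction.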
I am wondering whether it is possible for a construction (and in particular more schematic/grammatical ones) to be realized by slightly different forms.
I seem to have identified a construction which has developed a second meaning. However, the use of the construction with this second meaning also involves a change in formal properties.
Shall I still consider this to be a case of polysemy?
The two meanings are without any doubt related.
I’m working on a paper on Likert-type scales, as well as on a statistical measure/test that sort of emerged by accident while working on the paper. However, I was hoping for some preliminary feedback (and what better place for it than RG?). Specifically, the following points are basically universally accepted by specialists whose fields are closely related to the matter:
1) There exists no 1-to-1 mapping between a word in a source language and a word in a target language. More simply, translation always involves information loss.
2) Even if we set aside the substantial evidence that lexemes aren’t the basic unit of language (and choose not to adopt a construction grammar), we are still left with polysemy.
Linguists on opposite sides of the fence, such as Jackendoff and Langacker, still agree that “words” are encyclopedic: there isn’t any mapping from a word to some “unit” of knowledge, information, brain activity, etc. That’s why, if one looks in a dictionary, one finds words defined by other words.
3) Even if we accept the modern version of grandmother neurons (“concept cells” that have been found to respond selectively to e.g. specific people, in ways that have led some researchers to claim that there exists a 1-to-1 mapping between such cells and concepts), nobody believes (and it is an empirically validated falsehood) that there exists any mapping between the conceptual representation via neural activity in one brain and that in another.
4) Finally, language is intricately involved in shaping thought and knowledge, in particular through the relationship of constructions (or lexemes, phrasal nouns, collocations, etc.) and concepts. However, there isn’t any 1-to-1 mapping between a particular instantiation of lexemes in a particular construction that is sufficiently stable such that an individual can separate the scale (which is usually completely conceptual, although sometimes also empirical as in e.g., frequency) from the individual responses (whether only the endpoints are labeled or all possible responses are) and keep these conceptual domains distinct. In other words, the items necessarily force the respondent’s novel conceptualization, making it impossible to treat a single participant’s response to a single item as somehow remotely precise.
What, then, is the justification for treating all participant responses as infinitely precise and corresponding to the exact same values as if all participants were observations of a single value?
Thanks for any and all input!
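To make point 4) concrete: if two respondents map the same five ordinal categories onto different internal spacings, identical response codes yield different "means", so treating the coded integers as exact shared values is an assumption, not a given. A minimal sketch with two invented mappings:

```python
# Two hypothetical respondents give the same ordinal answers, but their
# internal spacing of the five Likert categories differs (invented values).
responses = [2, 4, 4, 5, 3]  # answers coded 1..5

equal_spacing = {1: 1.0, 2: 2.0, 3: 3.0, 4: 4.0, 5: 5.0}
compressed_top = {1: 1.0, 2: 2.0, 3: 3.0, 4: 3.4, 5: 3.5}  # hypothetical

def mean_under(mapping, coded):
    """Mean of the responses under a given category-to-value mapping."""
    values = [mapping[r] for r in coded]
    return sum(values) / len(values)

print(mean_under(equal_spacing, responses))   # 3.6
print(mean_under(compressed_top, responses))  # 3.06: same answers, different "mean"
```

The identical response vector produces different means under the two mappings, which is one way of cashing out the claim that single-item responses are not infinitely precise observations of a shared underlying value.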
I'm reading about context-free grammars, and I understand how to eliminate left recursion, but I couldn't find out what the problem with left recursion actually is. Can anyone explain?
Thanks in advance
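The usual answer: left recursion is only a problem for top-down (recursive-descent / LL) parsers, which call the nonterminal's procedure again before consuming any input and so recurse forever. A minimal sketch for the toy grammar E -> E '+' n | n, rewritten without left recursion as E -> n ('+' n)*:

```python
# Toy grammar:  E -> E '+' n | n
# A naive recursive-descent parser for the left-recursive form would do:
#     def parse_E(): parse_E(); expect('+'); expect('n')
# i.e. call itself before consuming a token -> infinite recursion.
# After eliminating left recursion (E -> n E',  E' -> '+' n E' | eps),
# the parser always consumes a token before continuing:

def parse_E(tokens, i=0):
    """Parse E -> n ('+' n)* ; return the index just past the expression."""
    if i >= len(tokens) or tokens[i] != "n":
        raise SyntaxError(f"expected 'n' at position {i}")
    i += 1
    while i < len(tokens) and tokens[i] == "+":
        if i + 1 >= len(tokens) or tokens[i + 1] != "n":
            raise SyntaxError(f"expected 'n' at position {i + 1}")
        i += 2
    return i

print(parse_E(["n", "+", "n", "+", "n"]))  # 5: whole input consumed
```

Bottom-up (LR) parsers, by contrast, handle left recursion fine; it is even preferred there because it keeps the parse stack shallow.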
Hello to all. I can't find any contributions to machine translation using pregroup grammars or the Lambek calculus on the net. I am working on this and wanted to know whether there is any literature.
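Not literature, but for concreteness: in a pregroup grammar a string is grammatical iff the concatenation of its word types reduces to the sentence type s via the contractions x^l x -> 1 and x x^r -> 1. A minimal sketch with an invented toy lexicon, encoding adjoints as integers (0 = base type, -1 = left adjoint, +1 = right adjoint); the greedy left-to-right stack contraction used here suffices for this example, though it is not a complete reduction procedure for pregroups in general:

```python
# Pregroup types as lists of (base, adjoint) pairs; adjoint -1 = left, +1 = right.
# Contraction rule: (x, k) followed by (x, k+1) cancels, covering x^l x and x x^r.
def reduces_to(types, target):
    """Greedy stack-based contraction; True if the sequence reduces to target."""
    seq = [t for word_type in types for t in word_type]
    stack = []
    for base, adj in seq:
        if stack and stack[-1][0] == base and stack[-1][1] + 1 == adj:
            stack.pop()          # contract x^l x or x x^r
        else:
            stack.append((base, adj))
    return stack == target

# Toy lexicon (invented): "Alice" : n ;  "sees" : n^r s n^l ;  "Bob" : n
n, s = "n", "s"
alice = [(n, 0)]
sees = [(n, 1), (s, 0), (n, -1)]
bob = [(n, 0)]

print(reduces_to([alice, sees, bob], [(s, 0)]))  # True: reduces to s
```

The appeal for machine translation is that this check, and the type assignments driving it, are purely mechanical, which is presumably what a pregroup-based MT pipeline would build on.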
I am currently in the process of drafting a new research project on the use of Spanish subject personal pronouns. Basically, I am the only one at my university who is working on Hispanic Sociolinguistics, so I don't have any colleagues to discuss my ideas with. Therefore, I would like to ask the research community for their thoughts on the attached research project.
Thanks in advance,
I read the following example in one of my professor's notes.
1) We have an SLR(1) grammar G containing the following rule. We use an SLR(1) parser generator to produce a parse table S for G, and an LALR(1) parser generator to produce a parse table L for G.
A -> lambda (lambda is the empty string, of length 0)
Solution: the number of reduce (R) entries in S is greater than in L.
But on one site I read:
2) Suppose T1 and T2 are the parse tables created by SLR(1) and LALR(1) generators for a grammar G. If G is an SLR(1) grammar, which of the following is TRUE?
a) T1 and T2 do not differ at all.
b) The total number of non-error entries in T1 is lower than in T2.
c) The total number of error entries in T1 is lower than in T2.
The LALR(1) algorithm generates exactly the same states as the SLR(1) algorithm, but it can generate different actions; it is capable of resolving more conflicts than the SLR(1) algorithm. However, if the grammar is SLR(1), both algorithms will produce exactly the same machine (so (a) is right).
Could anyone tell me which of these is true?
EDIT: In fact, my question is why, for a given SLR(1) grammar, the parse tables of LALR(1) and SLR(1) are supposed to be exactly the same (error and non-error entries are equal, and the number of reduce entries is equal), yet for the grammar above the number of reduce entries in S is greater than in L.
It seems that Construction Grammar generally assumes a rather strictly vertical/hierarchical structure, from more substantial to more schematic levels, such that a given item instantiates one (and only one) construction on the next higher level of abstraction.
Is there any formalization of 'horizontal links', such as a similarity in only form or only meaning, but not both?
And could there be a "constructional polygamy", e.g. for structurally ambiguous cases, which could instantiate more than one abstract construction? (To try an example: "All that money I have to spend" could be deontic 'have to V' as well as transitive 'have X' + to-infinitive, so it might 'activate' both constructions...)
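One way to sketch the "constructional polygamy" idea is a constructicon represented as a graph in which an instance may carry links to more than one abstract construction, rather than sitting in a strict tree. Everything below, names and structure included, is invented purely for illustration:

```python
# A toy constructicon: an instance may link to several abstract
# constructions at once -- multiple parenthood rather than a strict hierarchy.
constructicon = {
    "have to V (deontic)": {"parents": []},
    "have X (transitive) + to-infinitive": {"parents": []},
    "all that money I have to spend": {
        # structurally ambiguous: linked to both abstract constructions
        "parents": ["have to V (deontic)", "have X (transitive) + to-infinitive"],
    },
}

def activated(item):
    """Return every abstract construction an item is linked to."""
    return constructicon[item]["parents"]

print(activated("all that money I have to spend"))
```

On such a representation, an ambiguous string simply has outgoing links to every abstraction it could instantiate, which seems close to what "activating both constructions" would require; horizontal form-only or meaning-only links could be modelled the same way, as typed edges.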