Chomskyan Grammar - Science topic
Explore the latest questions and answers in Chomskyan Grammar, and find Chomskyan Grammar experts.
Questions related to Chomskyan Grammar
To the most venerable professors and research scholars:
I invite your critical comments, valuable opinions, scientific facts and thoughts, and supportive discussion on how structural grammar and IC (immediate constituent) analysis can be justified in recent pedagogical and enhancement trends in EIP for EFL adult learners.
I shall be sincerely thankful for your kind participation.
Best,
Dr. Meenakshi
And what if, in addition to advancing “Artificial Intelligence”, we also investigated our “Natural Intelligence”?
For example, natural intelligence and research in neurodegenerative diseases.
While we are still at an early stage in answering some key questions about Natural Intelligence [NI] (such as what algorithms the mind uses), the rapidly advancing field of Artificial Intelligence [AI] has already begun to change our daily lives. Machine learning has revealed remarkable potential in healthcare, facilitating speech recognition, clinical image analysis, and medical diagnosis. For example, there is a growing need for automation of medical imaging, as it takes a great deal of time and resources to train an expert human radiologist. Deep learning AI architectures have been developed to analyze medical images of the brain, lungs, heart, breast, liver, and skeletal muscle, some of which have already been used in clinics to aid in disease diagnosis. Juana Maria Arcelus-Ulibarrena
Cf.
This question refers to "NATURAL INTELLIGENCE" [NI], not to "NATURALIST(IC) INTELLIGENCE".

Is it “true” that when anyone rewrites a “sentence” in Logical Form (LF) by deploying metalinguistic constants and variables, the ultimate output reveals the ‘true’ meaning of the given sentence? In LF, much attention is devoted to describing and understanding the ‘real world’. This supposedly logical-positivist “real” is incorporated into the logical analysis of sentences in the algorithmic chain from S-Structure to LF by deploying sentential calculus. LF mainly follows Fregean compositionality or its derivatives, such as the Katz-Fodorian model. The following questions may be asked:
1. What is “real” in this real world? (To answer such a question, one may take a clue from Russell’s An Inquiry into Meaning and Truth: “We all start from ‘naïve realism,’ i.e., the doctrine that things are what they seem. We think that grass is green, that stones are hard and that snow is cold. But physics assures us that the greenness of grass, the hardness of stones, and the coldness of snow are not the greenness, hardness and coldness that we know in our own experience, but something very different. The observer, when he (sic) seems to himself (sic) to be observing a stone, is really, if the physicist is to be believed, observing the effects of the stone upon himself (sic). Thus science seems to be at war with itself: when it most means to be objective, it finds itself plunged into subjectivity against its will. Naïve realism leads to physics, and physics, if true, shows that naïve realism is false. Therefore naïve realism, if true, is false; therefore it is false.” (1940: 15))
2. What happens if anyone puts Russell’s paradox (1913) into LF? How do we incorporate Gödel’s theorem to tackle a formal system like LF? According to Gödel’s theorem (1931), no formal system is complete enough to handle all the problems within its formal paradigm. If anyone puts a Gödelian proposition or Russell’s paradox (“One Calcuttan says that all Calcuttans are liars”) into the LF of an S-Structure, the total formal as well as mechanical algorithmic system for gauging meaning may collapse.
3. The Katz-Fodorian (1963) system of binary componential analysis ignores human beings’ prototypical cognition of meaning. As some cognitive scientists have observed, meaning as endorsed by human beings cannot be analyzed into the stipulated components, because humans understand meaning through prototypical cognition. What should we follow in semantic analysis: the technical intelligentsia’s critical discursive habit of paraphrasing, or the commonsense deployment of prototypes?
4. Let us switch over to another school and try to understand the semantic problems raised by continental philosophers (under the umbrella of so-called Post-Formalism/Post-Structuralism). These Post-Formalists talk about the plural meanings of non-disposable texts, as well as something called ‘surplus meaning’, which is not at all analyzable or quantifiable. According to them, the meaning-site is too slippery an area, and any endeavor to formalize such a site will end in vain. Do you think that they are neglecting ‘science’ and its formalism by promoting “un-scientific” non-formalism?
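As a concrete illustration of the Fregean compositionality mentioned above, here is a minimal sketch in Python. The toy model, lexicon, and function names are my own illustrative assumptions, not part of any of the cited frameworks: the point is only that the meaning of a simple sentence is computed by applying the predicate's denotation (a function) to the subject's denotation.

```python
# A toy compositional semantics: each word denotes either an individual
# (type e) or a function from individuals to truth values (type <e,t>).
# The meaning of [S NP VP] is obtained by function application: [[VP]]([[NP]]).
SLEEPERS = {"mary"}  # the extension of "sleeps" in this toy model

lexicon = {
    "mary":   "mary",                     # type e
    "john":   "john",                     # type e
    "sleeps": lambda x: x in SLEEPERS,    # type <e,t>
}

def interpret(subject, predicate):
    """Compose a simple subject-predicate sentence bottom-up."""
    return lexicon[predicate](lexicon[subject])

print(interpret("mary", "sleeps"))  # True in this model
print(interpret("john", "sleeps"))  # False in this model
```

Whether such a calculation delivers the 'true' meaning of a sentence, rather than merely its truth conditions in a stipulated model, is of course exactly what questions 1-4 above put in doubt.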
There are two or more approaches to explaining coordination in compounding: the lexicalist and the generativist. Does morphology-syntax theory work on coordinate compounds in English? If so, can anyone explain with examples?
I am working on the Article Choice Parameter hypothesis proposed by Ionin, who theorizes that there are two article settings that L2 learners can have access to. The Samoan language exemplifies Setting I, which distinguishes articles based on specificity, while English exemplifies Setting II, distinguishing articles based on definiteness. If the hypothesis is true, Samoan must have se and le, denoting non-specific and specific DPs respectively, differentiated by specificity. In other words, the article se must introduce both non-specific definite and non-specific indefinite DPs. I've searched through the literature yet can't find any evidence.
In Chomsky (1995) and (2000), the Inclusiveness Condition was introduced, which Chomsky argues is a principle for the efficient computation of 'perfect' languages. My question concerns the development of this condition (for instance, Chomsky replaced 'object' with 'feature' in his description of the condition when he stated that there must be no addition of new features in the course of the syntactic computation). Further, what is the status of bare phrase structure theory? Have any serious attempts been made to develop it? Is it still of interest to syntacticians? Thanks.
Hello to all. I can't find any contributions to machine translation using pregroup grammars or the Lambek calculus on the net. I am working on this and wanted to know whether there is any literature.
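Not an answer about the literature, but for readers unfamiliar with the formalism, the core of pregroup parsing is easy to sketch. Below is a minimal, illustrative Python reduction engine (the string encoding of types and the example word types are my own assumptions): a transitive verb gets type n^r · s · n^l, and a word string counts as a sentence iff the concatenation of its types reduces to s via the cancellations n · n^r → 1 and n^l · n → 1.

```python
# Minimal pregroup reduction: types are strings like "n", "n_r" (right
# adjoint), "n_l" (left adjoint), and "s". Adjacent pairs x · x_r and
# x_l · x cancel; a string is grammatical if its types reduce to ["s"].

def cancels(left, right):
    return right == left + "_r" or left == right + "_l"

def reduce_types(types):
    types = list(types)
    reduced = True
    while reduced:
        reduced = False
        for i in range(len(types) - 1):
            if cancels(types[i], types[i + 1]):
                del types[i:i + 2]   # cancel the adjacent adjoint pair
                reduced = True
                break
    return types

# "John sees Mary": n · (n_r s n_l) · n  reduces to  s
print(reduce_types(["n", "n_r", "s", "n_l", "n"]))  # ['s']
```

Note that this sketch only handles simple left/right adjoints, not iterated ones, and naive left-to-right cancellation; a real implementation would need the full adjoint structure.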
I read the following example in one of my professor's notes.
1) We have an SLR(1) grammar G as follows. We use an SLR(1) parser generator to produce a parse table S for G, and an LALR(1) parser generator to produce a parse table L for G.
S -> A B
A -> d A a
A -> λ (λ is the empty string, of length 0)
B -> a A b
Solution: the number of reduce (R) entries in S is greater than in L.
But on one site I read:
2) Suppose T1 and T2 are created with SLR(1) and LALR(1) for grammar G. If G is an SLR(1) grammar, which of the following is TRUE?
a) T1 and T2 do not differ at all.
b) The total number of non-error entries in T1 is lower than in T2.
c) The total number of error entries in T1 is lower than in T2.
Solution:
The LALR(1) algorithm generates exactly the same states as the SLR(1) algorithm, but it can generate different actions; it is capable of resolving more conflicts than the SLR(1) algorithm. However, if the grammar is SLR(1), both algorithms will produce exactly the same machine (so a is right).
Could anyone tell me which of them is true?
EDIT: In fact, my question is why, for a given SLR(1) grammar, the parse tables of LALR(1) and SLR(1) are said to be exactly the same (error and non-error entries are equal, and the number of reduce entries is equal), while for the above grammar the number of reduce entries in S is greater than in L.
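One way to see where the extra reduce entries in S can come from is to compute the FOLLOW sets: an SLR(1) table places "reduce A -> λ" on every terminal in FOLLOW(A) in every state containing that item, whereas LALR(1) uses per-state lookahead sets, which may be proper subsets of FOLLOW(A). Here is a quick sketch in Python for the grammar above (the data layout and helper names are my own):

```python
# FIRST/FOLLOW computation for the grammar in the question:
#   S -> A B,  A -> d A a | λ,  B -> a A b
EPS = "λ"
GRAMMAR = {
    "S": [["A", "B"]],
    "A": [["d", "A", "a"], []],   # [] encodes the empty production
    "B": [["a", "A", "b"]],
}
NONTERMS = set(GRAMMAR)

def first_of_seq(seq, first):
    """FIRST of a symbol sequence, given FIRST sets for nonterminals."""
    out = set()
    for sym in seq:
        if sym in NONTERMS:
            out |= first[sym] - {EPS}
            if EPS not in first[sym]:
                return out
        else:
            out.add(sym)
            return out
    out.add(EPS)                  # the whole sequence can derive λ
    return out

def compute_first():
    first = {nt: set() for nt in NONTERMS}
    changed = True
    while changed:
        changed = False
        for nt, prods in GRAMMAR.items():
            for prod in prods:
                f = first_of_seq(prod, first)
                if not f <= first[nt]:
                    first[nt] |= f
                    changed = True
    return first

def compute_follow(first, start="S"):
    follow = {nt: set() for nt in NONTERMS}
    follow[start].add("$")
    changed = True
    while changed:
        changed = False
        for nt, prods in GRAMMAR.items():
            for prod in prods:
                for i, sym in enumerate(prod):
                    if sym not in NONTERMS:
                        continue
                    rest = first_of_seq(prod[i + 1:], first)
                    add = rest - {EPS}
                    if EPS in rest:
                        add |= follow[nt]
                    if not add <= follow[sym]:
                        follow[sym] |= add
                        changed = True
    return follow

follow = compute_follow(compute_first())
print(follow["A"])   # {'a', 'b'}: SLR reduces A -> λ on both a and b
```

In a state where the LALR(1) lookahead set for A -> λ is only {a} (or only {b}), L would have an error entry where S has a reduce entry, which would be consistent with the professor's note; this affects only how soon an error is detected, not which strings are accepted. I am not asserting this as the definitive resolution of the two quoted claims, just as a way to check them concretely.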
My question is, for a given AGREE relationship between X and Y, how could one determine which category hosts the interpretable feature and which one hosts the uninterpretable one?
AGREE is driven by interpretable/uninterpretable feature pairs, e.g. uPhi/iPhi or uWH/iWH. For examples like phi-checking between T and a subject, it is taken for granted that the subject has interpretable phi features and that T has uPhi features, but that the subject has uT (or uCase) features while T has iT (or iCase) features.
One argument for this seems to be, in part, semantic: there is a semantic reality to, for example, number, which makes it plausible that number is interpretable on nouns while uninterpretable on verbs. (Although one must also concede that number on pluractional or reciprocal verbs is not all that far-fetched, which undermines this type of argument.) However, when looking at other constructions, or other types of syntactic interaction, it's not always clear that this type of argument works.
For example, in topicalization constructions such as (a,b,c), let's assume that the topicalized constituent moves to SpecTopicP to check a Topic feature. But is iTopic a feature on the moved constituent and uTopic on the head of TopicP? Or is it the other way around? Is this something that could be parameterized? More specifically, I'd like to know what *arguments* could be marshalled either way.
(a) Peter Florrick I could vote for.
With a grammar (in BNF form) and source code as input, ANTLR is generating an AST/parse tree with tokens as terminals and "nil" as the parent/root in this case. This is not an appropriate tree. I learned that the grammar has to be rewritten in order to generate a proper AST/parse tree, but I couldn't find any appropriate rules for rewriting the grammar.
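The answer depends heavily on the ANTLR version. ANTLR v3 supported AST construction directly in the grammar (via output=AST plus tree operators and rewrite rules), while ANTLR v4 dropped AST rewriting entirely: it always builds a parse tree, and you derive your own AST in a listener or visitor. As an illustrative v3-style fragment (the rule names here are my own, not from any particular grammar):

```antlr
// ANTLR v3 style (illustrative): enable AST output and shape the tree
// with operators: '^' makes a token the subtree root, '!' drops a token.
options { output = AST; }

expr : term (('+'^ | '-'^) term)* ;   // operators become subtree roots
term : atom (('*'^ | '/'^) atom)* ;
atom : INT
     | '('! expr ')'!                 // discard the parentheses
     ;
```

In v3, the "nil" root you describe typically appears when a rule yields a flat list of sibling nodes with no designated root token, which is exactly what the '^' operator (or a '->' rewrite rule) is meant to fix.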
The Chomsky hierarchy is a guideline on a language's expressive power. The linear feedback shift register is a very interesting "element" in the structure of a language, and there is a large base of theoretical literature on the subject.
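To connect the two notions: an LFSR has only finitely many states, so it is a finite-state machine, and the sets of output sequences it can produce sit at the regular (Type-3) level of the Chomsky hierarchy. A minimal Fibonacci-LFSR sketch in Python (the register size, tap positions, and names are illustrative choices of mine):

```python
def lfsr_stream(state, taps, n):
    """Run a Fibonacci LFSR for n steps.

    state: list of bits (index 0 is the input end, -1 the output end)
    taps:  indices XORed together to form the feedback bit
    Because the state space is finite, the output is eventually periodic,
    i.e. producible by a finite automaton (a Type-3 device).
    """
    state = list(state)
    out = []
    for _ in range(n):
        out.append(state[-1])
        feedback = 0
        for t in taps:
            feedback ^= state[t]
        state = [feedback] + state[:-1]   # shift right, insert feedback
    return out

# A 3-bit maximal-length LFSR: the 7-bit output pattern repeats forever
bits = lfsr_stream([1, 0, 0], taps=[1, 2], n=14)
print(bits)  # [0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1]
```

Here the register cycles through all 2^3 - 1 nonzero states, so the output has the maximal period 7; any language of such output strings is regular.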
Transformation analysis at the speech level
Are the "operators" of descriptive grammar (a) word allocation and (b) suffixes, while the operators of explanatory grammar are (a) semantics and (b) pragmatics?
I'm currently working on a comparison between Cognitive Construction Grammar and Minimalist syntax as theories for modeling language variation and change. I've received some training in Minimalism, but that was a long time ago, so my knowledge of that topic is a bit rusty. My supervisors are cognitive linguists too, and just about everybody I work with is a cognitive linguist, so no help there. Could someone review sections 6 and 7 of this paper (about 5 pages, no more) to check whether I got everything about right?
The Minimalist Program (Chomsky 1995, 1999, 2000) includes the idea that there is no distinction between morphology and syntax, and that all word formation takes place in the narrow syntax. However, what is the evidence for that? Also, what is the syntax of word formation? Is there any asymmetry in the structure of words?