Science topic

Language - Science topic

Language is a verbal or nonverbal means of communicating ideas or feelings.
Questions related to Language
  • asked a question related to Language
Question
8 answers
Many have criticized Chomsky’s theory of the Universal Grammar of language (e.g., Pinker, as described in Sihombing 2022), but the most effective criticisms have come from Daniel Everett, given that Chomsky (according to Everett) has never addressed them. Everett takes issue with two aspects of Chomsky’s theory: the evolutionary timeline of language for Homo sapiens[1] and the claim that all languages share a universal structure. On the evolution of language, Chomsky has proposed that language began some 60,000 years ago (Chomsky 2012). Everett’s contrary explanation (Everett 2017) is that rudimentary language started 2.5 million years ago in the South Pacific amongst Homo erectus, who are estimated to have had 62 billion neurons (24 billion short of Homo sapiens; Herculano-Houzel 2012), and for whom there is archeological evidence of skilled sailing throughout the south Pacific Ocean; navigation between territorial islands was done using the stars and sea currents, which would have required some form of communication between group members (Everett 2017)[2]. Also, at the time of Homo erectus there is evidence of an asteroid strike in the South Pacific, which could have accelerated the evolutionary process (as the strike of some 66 million years ago did for mammals) by bringing about the introduction of large, big-brained primates.
On the generalizability of Chomsky’s theory to all languages (including primitive languages), Everett (2006, 2016) spent many years in the Amazon basin of Brazil studying the Pirahã people, who have no written language or number system. Their history is transmitted across generations (two at most) entirely by word of mouth. The language has eight consonants, three vowels, and two tones. Sentences are very simple, with no embedded clauses such as, “John, who is a hunter, is an active individual.” Instead, the utterance would be: “John is a hunter. John is an active individual.” This structure, which lacks recursion, resembles the speech of children or adults first learning a language. Also, the language has no pronouns. Furthermore, it has a proximate tense (e.g., for the present) and a remote tense (e.g., for the past) but no perfect tense (a tense with no time stamp, e.g., “I have prepared some food”). The language does not permit the establishment of a creation myth, and the sense of time, e.g., historic time, is not well developed; much is set in the present. Hunting and foraging are a daily affair for the Pirahã people. The children are taught the names of all the plants and animals in the jungle, which can number in the thousands.
Accordingly, Chomsky’s theory fails to account for the evolutionary history of language. As well, his theory accounts only for complex, recursive languages, with little to say about more primitive languages such as the one spoken by the Pirahã people of Brazil. It is noteworthy that if a Pirahã child is raised in Sao Paulo in the Portuguese language, the child will have no problem mastering all the complexities of Portuguese, which has far more verb tenses than English, along with its number system and written script.
Neanderthals (Homo neanderthalensis), who occupied Northern Europe for much of their existence up until 40,000 years ago (Sansalone et al. 2023), were still present when Homo sapiens acquired the ability to generate speech sounds and thereby express their cognition (Chomsky 2012). Doreen Kimura, who spent most of her life studying how the brain processes human language by examining brain-damaged patients (Kimura 1993), believed that human language does not represent some type of species exceptionalism, but instead reflects characteristics of brain and body shaped by evolution, leaving genetic traces (e.g., of something like Chomsky’s universal grammar) in other species such as electric fish, songbirds, bats, elephants, dolphins, and whales. She argued that communication among early Homo sapiens some 500,000 years ago was non-verbal and gesture-based, but shifted to the vocal apparatus around that time (i.e., with the formation of a right-angled vocal tract; see Fig. 1.1 of Kimura 1993), allowing for the utterance of vowels. This notion runs contrary to the idea that some 60,000 years ago humans simply started producing language spontaneously (Chomsky 2012), with no clear link to evolution, brain, and behavior, an idea that has faced many challenges (Bizzi and Mussa-Ivaldi 1998; Changizi 2001b, 2003; Dawkins 1976; Dawkins and Dawkins 1976; Everett 2017; Fentress and Stilwell 1973; Gallistel 1980)[3].
As for Neanderthals, Sansalone et al. (2023) have recently opined that the Neanderthal neocortex was as sophisticated as the human neocortex, exhibiting a high degree of interareal integration that does not exist in other primates. The overlap between Neanderthals and Homo sapiens some 40,000 years ago permitted the sharing of genes between the two groups. A common language would have facilitated their genetic intimacy, and there is evidence that Neanderthals and Homo sapiens exchanged genes, in that a few percent of the genome of present-day Europeans is of Neanderthal origin. Perhaps before their extinction, Neanderthals possessed Chomsky’s universal grammar, a possibility that evolutionary biologists could investigate.
Another issue is that Chomsky’s theory emphasizes the rapid acquisition of language during childhood, which Chomsky attributes to a universal grammar programmed genetically in all humans (Chomsky 1965). A child does not need to spend time in school to master the verbal aspects of a language, which are acquired automatically between birth and adolescence, but reading and writing necessitate schooling. FOXP2 gene expression occurs in newborn humans and in newborn and adult songbirds, supporting the accelerated acquisition of language and songs, respectively (Rochefort et al. 2007). This acquisition is mediated by neurogenesis in the telencephalon (the neocortex of humans) and the hippocampus (Goldman and Nottebohm 1983; Rochefort et al. 2007); neurogenesis ceases by the age of twelve in humans (Charvet and Finlay 2018; Sanai et al. 2011; Sorrells et al. 2018). Neurogenesis may accelerate language learning in children, whereas it promotes the learning of songs for mate selection in adult songbirds.
One might expect the number of new words learned as a child to be much greater than the number learned after the age of 10 to 12, when neurogenesis begins to subside (Charvet and Finlay 2018; Sanai et al. 2011; Sorrells et al. 2018). According to Bloom and Markson (1998), by the age of ten children have learned an average of 23,651 words, an acquisition rate of 2,365 words per year; from the age of ten to eighteen they learn an average of 36,350 additional words, an acquisition rate of 4,544 words per year (based on children who attend school). Some of this increase in acquisition rate after the age of ten may be related to a child having more methods by which to consolidate information; on this point, the ability to speak, read, and write tends to accelerate after the age of ten, which should contribute to the efficiency of word consolidation and retrieval. Nonetheless, no one would dispute that language acquisition (through speaking and hearing) up to the age of 10 or 12 is relatively effortless and that word and phrase utterances are free of any accent (other than the parents’/teachers’ accent), even when multiple languages are being learned. These points were emphasized by Chomsky (1959) and used effectively to challenge Skinner’s Verbal Behavior theory of language (Skinner 1957).
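The acquisition rates quoted above follow from simple division; here is a short check, using the word counts from Bloom and Markson (1998) as quoted and assuming an 8-year span from ten to eighteen:

```python
# Word-acquisition rates from Bloom and Markson (1998), as quoted above.
words_by_age_10 = 23_651       # cumulative words learned by age ten
extra_words_by_18 = 36_350     # additional words learned from ten to eighteen

rate_before_10 = words_by_age_10 / 10   # over the first ten years
rate_10_to_18 = extra_words_by_18 / 8   # over the following eight years

print(round(rate_before_10))   # ~2,365 words per year
print(round(rate_10_to_18))    # ~4,544 words per year
```

The post-ten rate is nearly double the pre-ten rate, which is the puzzle the paragraph raises against a purely neurogenesis-driven account.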
Lastly, an analysis of 19 different languages including European and Asian languages revealed that the information transmission rate is comparable for all the languages at about 39 bits per second (Coupé et al. 2019). This means that the brain sets the same limits on language irrespective of language type, which bolsters Chomsky’s notion that there is a neuro-genetic structure in humans that controls the universal acquisition of language (Chomsky 1965).
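To unpack the 39-bits-per-second figure: n bits of information per second corresponds to selecting among 2^n equally likely alternatives per second. A quick check:

```python
# n bits per second <-> 2**n equally likely alternatives per second.
rate_bits = 39                 # cross-language estimate (Coupé et al. 2019)
alternatives = 2 ** rate_bits
print(f"{alternatives:,}")     # 549,755,813,888 alternatives per second
```

At roughly half a trillion alternatives per second, the channel capacity is strikingly uniform across the 17 languages surveyed, which is the basis of the universality claim above.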
Summary:
1. That language was acquired by Homo sapiens as late as 60,000 years ago may not be correct, since there is evidence that Homo erectus (an ancestor of Homo sapiens) some 2.5 million years ago may have needed this capability to coordinate navigation between territories in the south Pacific Ocean.
2. The theory of Universal Grammar does not account for all languages, particularly languages that have no recursive structure, such as the language of the Pirahã people of the Amazon. Nevertheless, advanced languages have a comparable information transfer rate, and Pirahã children can learn a recursive language, which suggests that all humans are genetically endowed with a common neural mechanism for the acquisition of language.
3. A universal grammar may be represented in non-human species. There is evidence that Neanderthals had a brain as advanced as that of Homo sapiens and therefore this species could have supported human-like language.
4. More English words are learned after the age of ten than before the age of ten, even though neurogenesis stops by or shortly after this age in humans. This, however, does not take away from the fact that before the age of ten children learn to speak effortlessly and without an accent, a point emphasized by Chomsky.
Footnotes:
[1] Chomsky is not sure whether language is affected by natural selection: when asked about this, he never gives a clear yes or no (Chomsky 2020-2023/YouTube).
[2] Soccer robots have both proprioception to note the position of their bodies as well as a visual sense to detect the ball, the goals, and the position of the other robots (Behnke and Strucker 2008). To communicate the location of the ball and other items with other robots, an allocentric coordinate system is used, much like that utilized by a group of electric fish (who use electricity to communicate), a pack of wolves (who use gestures and sounds to communicate), or a pod of killer whales (who use sounds to communicate) in pursuit of prey. Language may have evolved to enhance allocentric communication, as is required by soccer robots.
[3] For example, when an estimate is made of the exponent ‘d’ (a word-syllable quotient) relating the number of words (E) to the number of syllables (C) using the formula E = C^d (derived from Changizi 2001b), the value of ‘d’ turns out to be ~1.046 for humans [i.e., there are approximately 170,000 words of common usage in the English language and approximately 100,000 corresponding syllables, which yields d = log(170,000)/log(100,000) ≈ 1.046; the ‘E’ and ‘C’ values are based on the full, 20-volume Oxford English Dictionary]. This means that for the English language words and syllables have a combinatorial relationship, governed by an exponent of 1.046. Now what about birdsong? Much like human language, the number of birdsongs (E) varies as a function of the number of syllables (C), such that the exponent ‘d’ is estimated to be 1.23 (Changizi 2001b), which is even greater than the combinatorial estimate of 1.046 for human language.
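The value of d quoted in the footnote can be checked directly, reading the formula as E = C**d so that d = log(E)/log(C); E and C are the OED-based figures given above:

```python
import math

# Word-syllable exponent d from E = C**d (after Changizi 2001b):
# d = log(E) / log(C), with E and C as quoted in the footnote.
E = 170_000   # words in common English usage (20-volume OED)
C = 100_000   # corresponding syllables

d = math.log(E) / math.log(C)
print(round(d, 3))   # 1.046, matching the value quoted above
```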
Relevant answer
Answer
I agree with Dr. Pehar that "there is a genetic, inborn make-up in humans to acquire /.../ language", but, and this is the important thing, there are few reasons to believe that this genetic make-up takes the form of Chomsky's "universal grammar".
  • asked a question related to Language
Question
17 answers
I would like to understand the broad range of parameters that constitute a speaker of any given language being regarded as a 'native speaker' of the said language (as opposed to merely fluent in it or possessing a bilingual proficiency of it), and at what point this status is no longer applicable to those who have acquired a language via Second Language Acquisition (SLA).
Relevant answer
Answer
Please don't tantalize yourself. As long as you can communicate both orally and in writing in another language and you are understood, that is enough. The concept of 'native speaker ability' carries a linguistic bias and a specific ideology in which only 'native speakers' are the best users of a given language ... and this is WRONG in a globalized (scientific) world where everyone tries to communicate his/her ideas and research.
  • asked a question related to Language
Question
2 answers
Can we apply theoretical computer science to prove theorems in math?
Relevant answer
Answer
The pumping lemma is a valuable theoretical tool for understanding the limitations of finite automata and regular languages. It is not used for solving computational problems directly but is important for proving non-regularity and understanding the boundaries of regular languages.
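As an illustration of the kind of non-regularity argument mentioned above, here is a minimal sketch; the language L = {a^n b^n} and the pumping length p = 5 are illustrative choices, not taken from the answer:

```python
# Pumping-lemma illustration (not a proof engine): for L = { a^n b^n },
# pumping a block of a's in w = a^p b^p always yields a string outside L.
def in_L(s: str) -> bool:
    """Membership test for L = { a^n b^n : n >= 0 }."""
    n = len(s) // 2
    return s == "a" * n + "b" * n

p = 5                      # hypothetical pumping length
w = "a" * p + "b" * p      # w is in L and |w| >= p
assert in_L(w)

# With |xy| <= p, the pumpable part y falls entirely inside the a's,
# so pumping (here i = 2) produces more a's than b's: a string not in L.
for y_len in range(1, p + 1):          # x empty, y = a^y_len (illustrative splits)
    x, y, z = "", "a" * y_len, w[y_len:]
    assert not in_L(x + y * 2 + z)     # pumped string leaves L
print("every tested split of a^p b^p fails when pumped")
```

Since no valid decomposition survives pumping, {a^n b^n} cannot be regular, which is exactly the style of boundary argument the answer describes.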
  • asked a question related to Language
Question
4 answers
Soccer robots have both proprioception to note the position of their bodies as well as a visual sense that is egocentric to detect the ball, the goals, and the position of other robots (Behnke and Strucker 2008). To communicate the location of the ball and other items with other robots, an allocentric coordinate system is used, much like that utilized by a group of electric fish (who use electricity to communicate), a pack of wolves (who use gestures and sounds to communicate), or a pod of killer whales (who use sounds to communicate) in pursuit of prey. Perhaps, language evolved to enhance allocentric communication, as is required by soccer robots.
A staunch critic of Noam Chomsky, Daniel Everett has argued that language started some two million years ago (rather than 60,000 years ago; Chomsky 2012) with (bipedal) Homo erectus, who inhabited the South Pacific, used tools, and is suspected of having had the navigational skill to travel between islands (Everett 2016, 2017). To facilitate such travel, Everett has proposed that Homo erectus used allocentric communication, perhaps starting with gestures before evolving into the verbal behavior of Homo sapiens some 500,000 years ago (Kimura 1993). It is believed that Homo erectus evolved into Homo sapiens.
Relevant answer
Answer
Language likely evolved to enhance allocentric communication, which refers to the ability to communicate about objects, events, or entities outside of oneself. This form of communication is fundamental to human social interaction, allowing individuals to share information, coordinate actions, and build complex societies. The evolution of language provided a sophisticated tool for expressing thoughts, intentions, and observations, enabling humans to convey precise details about the external world. As social animals, humans benefit from the ability to discuss things that are not immediately present, such as distant resources, future plans, or abstract concepts. This allocentric communication likely played a crucial role in the survival and success of early human communities, as it allowed for more effective collaboration, problem-solving, and cultural transmission.
  • asked a question related to Language
Question
228 answers
What if, in addition to advancing “Artificial Intelligence,” we further investigated our “Natural Intelligence”!?
For example, Natural Intelligence and research in neurodegenerative diseases.
While we are still at an early stage in answering some key questions about Natural Intelligence [NI], such as what algorithms the mind uses, the rapidly advancing field of Artificial Intelligence [AI] has already begun to change our daily lives. Machine learning has shown remarkable potential in healthcare, facilitating speech recognition, clinical image analysis, and medical diagnosis. For example, there is a growing need for the automation of medical imaging, as it takes a lot of time and resources to train an expert human radiologist. Deep-learning AI architectures have been developed to analyze medical images of the brain, lungs, heart, breast, liver, and skeletal muscle, some of which have already been used in clinics to aid in disease diagnosis. Juana Maria Arcelus-Ulibarrena
Cfr.
This question does not refer to "NATURALISTIC INTELLIGENCE"; we are asking about "NATURAL INTELLIGENCE" [NI].
Relevant answer
What if, in addition to advancing “Artificial Intelligence,” we further investigated OUR “Natural Intelligence”!!?
....at the "base" of everything that will come in the future [which we will not know!]....there will always be "the soul"...this is the real question!
  • asked a question related to Language
Question
4 answers
I attended a lecture at the Baylor College of Medicine (~2019) where one of the questions was, “Does birdsong have anything to do with human language?” Noam Chomsky would say, “Absolutely not!” The speaker, who had just finished discussing how birdsong is influenced by dopamine, a neurotransmitter implicated in reward (incidentally a specialty of one of Chomsky’s critics, B.F. Skinner), was put off by the question, delivering a non-committal answer.
The late Doreen Kimura, who spent much of her life studying how the brain processes human language (Kimura 1993), needs to be mentioned here. Kimura believed that human language does not represent some type of exceptionalism, but rather a species-specific characteristic of the brain and body that was shaped by evolution. She argued that communication among early Homo sapiens some 500,000 years ago was non-verbal and gesture-based, but that later changes to the vocal apparatus (i.e., the formation of a right-angled vocal tract; see Fig. 1.1 of Kimura 1993) allowed for the production of vowels. This idea runs contrary to the notion that some 60,000 years ago humans just started producing language spontaneously (Chomsky 2012), with no clear link to evolution (Everett 2017) or to animal behavior and brain organization (Bolhuis et al. 2014). The notion that human language is an evolutionary outlier must be as wrong as the idea that the sun revolves around the earth (per Galileo, against whom the Catholic Church issued a criminal complaint in 1616).
Relevant answer
Answer
That is OK.
  • asked a question related to Language
Question
2 answers
When the eyes of a person are damaged, this causes complete blindness. Likewise, when Wernicke’s and Broca’s areas of neocortex are damaged, this causes complete aphasia: the loss of the ability to comprehend language as well as the ability to produce speech (Penfield and Roberts 1966). In the absence of the eyes, one can deliver electricity to various regions of the visual system, such as the lateral geniculate nucleus, the visual cortex, or the superior colliculus, to evoke fragments of visual perception (Schiller and Tehovnik 2015; Tehovnik and Slocum 2013), but complete vision has yet to be achieved using such a method during blindness. In the absence of Wernicke’s and Broca’s areas, no one has yet tried to recover language by activating subcortical sites that participate in language functions, such as the thalamus and cerebellum (Penfield and Roberts 1966; Schmahmann 1997; Tehovnik, Patel, Tolias et al. 2021; but see Ojemann 1991).
The neocortex contains the complete declarative code for language (Corkin 2002; Kimura 1993; Penfield and Roberts 1966). This compelled Pereira, Fedorenko et al. (2018) to use fMRI to collect signals focused on the language areas of neocortex (i.e., 50,000 voxels including the entire neocortex) as sixteen subjects were tested on 180 mental concepts contained in various sentences. It was found that the fMRI signal could predict the correct stimulus sentence at a rank accuracy of 74% correctness, on average (p < 0.01, Fig. 4b of Pereira, Fedorenko et al. 2018). Note that there was some variability in the ‘selected’ 5,000 (out of 50,000) most effective voxels per subject over time. fMRI does not have single-neuron resolution spatially and temporally, but it is now believed that spanning minutes to years the composition of neocortical neurons mediating behavior can vary such that a percentage of cells always remains tuned to a task but that the composition of that percentage fluctuates (Chen and Wise 1995ab; Gallego et al. 2020; Rokni, Bizzi et al. 2007). However, delivering a speech requires great precision of word order. This precision is maintained by neocortical-cerebellar loops that are instrumental in converting the declarative code of neocortex into an explicit motor code via the cerebellum (Gao et al. 2018; Guo, Hantman et al. 2021; Hasanbegović 2024; Zhu, Hasanbegović et al. 2023; Mariën et al. 2017; Ojemann 1983, 1991; Thach et al. 1992), which stores the efference-copy representation for automatic performance (Bell et al. 1997; Chen 2019; Cullen 2015; De Zeeuw 2021; Fukutomi and Carlson 2020; Loyola et al. 2019; Miles and Lisberger 1981; Noda et al. 1991; Shadmehr 2020; Tehovnik, Patel, Tolias et al. 2021; Wang et al. 2023). 
Patients with damage to Broca’s area can perform movement sequences (of the upper extremities) that have been overlearned but are impaired at learning new sequences (Kimura 1993), which means that under such conditions the remaining islands of neocortical and cerebellar connectivity are sufficient to generate previously learned movements. Thus, learning new sequences (especially as it pertains to language) requires that Broca’s area be intact.
Relevant answer
Answer
There is always a link between the organic, the hormonal, and the psychological sides of a human being, and all of these relations directly affect an individual's linguistic, social, and behavioral output. Language is part of the individual's system, practiced continuously, and any damage at the level of the organic apparatus immediately halts the production of speech; it is a sign that the individual has ceased to express. Yet amid all this we ask: where do our messages go that never left, that never traveled through the air or the ether?
Where does the individual store his interactions and emotions after damage to his organs?
The domain of language is spiritual to the utmost...!
  • asked a question related to Language
Question
4 answers
1. a. Who_j knows who_k heard what stories about himself_k?
b. John does (= John knows who_k heard what stories about himself_k).
2. a. Who_j knows what stories about himself_j who_k heard?
b. John does (= John knows what stories about himself_j who_k heard
/John knows who_k heard what stories about his_j own).
The examples (1a) and (2a) ask questions about the matrix subject 'who', with 'John' in (1b) and (2b) corresponding to the wh-constituents being answered. I am curious about the binding relations in these examples, particularly in (2). Can example (2a) be construed as a question targeting the matrix subject 'who', with 'himself' bound by the matrix subject?
Relevant answer
Answer
I don't think the English language is set up to nest separate questions this way, at least not grammatically. It is logical that if someone heard a story about themselves, then the question could always follow as to what that story was, so these two questions can be logically nested.
But I think you're trying to ask "what were the stories, if the person heard stories about themselves?" You can't do that by just using "what stories", since it becomes grammatically incorrect; to be correct you would need "which stories", but this then becomes a logical problem, because "which stories" implies the selection of stories has already been determined and a choice just needs to be made as to which one, which isn't the case here.
  • asked a question related to Language
Question
6 answers
Much has been made of the idea that humans are genetically programmed to learn languages at an early age, suggesting that learning plays a minor role in this process (Chomsky 1959). But we have argued that a large part of being able to speak at an information transfer rate exceeding 40 bits per second (i.e., over a trillion possibilities per second; Coupé et al. 2019; Reed and Durlach 1998) is due to having a decade-long formal education in one’s native and secondary languages (Tehovnik, Hasanbegović, Chen 2024). For example, Joseph Conrad, whose native language was Polish and who became a world-renowned writer, learned to write in English in his 20s (Wikipedia/Joseph Conrad/July 11, 2024). In what is now Poland, Conrad was mentored by his father, Apollo Korzeniowski, a writer who was later convicted by the Russian Empire as a political activist. To escape the political turmoil of eastern Europe, Conrad (to the dislike of his father) exiled himself to England, which marked the start of his writing career. And the rest we know: ‘Heart of Darkness’, ‘Lord Jim’, ‘Nostromo’, and so on.
Second-language learning by 20-year-olds was investigated by Hosoda et al. (2013). They recruited twenty-four Japanese university students who were sequential bilinguals, with the earliest age of learning English being seven. The students completed a 4-month intensive English training course to enhance their vocabulary. They learned 60 words per week for 16 weeks, a total of 960 words, which translates into an information transfer rate of 0.0006 bits per second (see Footnote 1), appreciably lower than the ~40 bits per second for producing speech (Coupé et al. 2019; Reed and Durlach 1998).
Furthermore, there is a belief that learning a language is accelerated in children as compared to adults (Chomsky 1959). By the age of eighteen, one can have memorized some 60,000 words of the English language (Bloom and Markson 1998; Miller 1996), which represents an information consolidation rate of 0.0006 bits per second (see Footnote 2), the same rate as that of the Japanese students learning English as a second language in adulthood (Hosoda et al. 2013).
Two conclusions can be drawn. First, consolidating a language is many orders of magnitude slower than delivering a speech (i.e., 0.0006 bits per second vs. 40 bits per second). Second, the idea that children learn languages at an accelerated rate may not be true. This needs to be properly investigated, however, with the rate of language learning (in bits per second) measured yearly from the neonatal period to adulthood. Also, there is more to language than just memorizing words, so linguists will need to design experiments covering all the major parameters of language and express these parameters in bits per unit time. It is time that linguistics (like neuroscience) became a quantitative discipline.
Footnote 1: Bit-rate calculation: if each word is made up of 4 letters (on average), then the bit rate of learning (using values from Reed and Durlach 1998) = 1.5 bits per letter × 4 letters/word × 960 words / 16 weeks = 360 bits per week = 0.0006 bits/sec. The learning period includes not only the time spent memorizing the words but also the time required to consolidate the information in the brain, which occurs during sleep and during moments of immobility (Dickey et al. 2022; Marr 1971; Wilson and McNaughton 1994). After the learning there was an increase in the grey-matter volume of Broca’s area, the head of the caudate nucleus, and the anterior cingulate cortex; as well, there was an increase in the white-matter volume of the inferior frontal-caudate pathway and of connections between Broca’s and Wernicke’s areas (Hosoda et al. 2013). The grey- and white-matter enhancement correlated with the extent of word memorization.
Footnote 2: Bit-rate calculation: memorizing 60,000 words in 18 years translates into 360,000 bits of information [i.e., 60,000 words × 4 letters per word × 1.5 bits per letter; Reed and Durlach 1998], or a word-consolidation rate of 55 bits per day (about 9 words per day) over eighteen years of life. Therefore, the rate is 0.0006 bits per second. For other details see Footnote 1.
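Both footnote calculations can be reproduced in a few lines; the constants (1.5 bits per letter, 4 letters per word) come from the footnotes, and the calendar conversions are standard:

```python
# Bit-rate arithmetic from Footnotes 1 and 2 (1.5 bits/letter, 4 letters/word,
# per Reed and Durlach 1998).
BITS_PER_WORD = 1.5 * 4
SEC_PER_WEEK = 7 * 24 * 3600

# Footnote 1: 960 words learned over 16 weeks (Hosoda et al. 2013).
bits_per_week = BITS_PER_WORD * 960 / 16           # 360 bits per week
rate_adult = bits_per_week / SEC_PER_WEEK          # ~0.0006 bits/s

# Footnote 2: 60,000 words consolidated over 18 years.
total_bits = BITS_PER_WORD * 60_000                # 360,000 bits
rate_child = total_bits / (18 * 365 * 24 * 3600)   # ~0.0006 bits/s

print(round(rate_adult, 4), round(rate_child, 4))  # both round to 0.0006
```

Both rates round to the same 0.0006 bits per second, which is the basis of the claim that child and adult consolidation rates are comparable.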
Relevant answer
Answer
Thank you for the suggestion, Krzysztof. Ed Tehovnik.
  • asked a question related to Language
Question
4 answers
Many have criticized Noam Chomsky’s theory of language (e.g., Pinker, as described in Sihombing 2022), but the most effective criticisms have come from Daniel Everett, given that Chomsky (according to Everett) has never addressed them. Everett has two issues with Chomsky’s theory: the evolutionary timeline of language for Homo sapiens and the lack of universality of language structure across all languages. On the evolution of language, Chomsky has proposed that language began some 60,000 years ago (Chomsky 2012). Everett’s contrary explanation is that rudimentary language started 2.5 million years ago in the South Pacific amongst Homo erectus [estimated to have had 62 billion neurons, 24 billion short of Homo sapiens; Herculano-Houzel 2012], for whom there is evidence of skilled sailing and expansion throughout the Pacific Ocean; such navigation (using the stars and currents) is believed to have depended on communication between group members (Everett 2017). Also, at this time there is evidence of an asteroid strike in the South Pacific, which could have accelerated the evolutionary process, as the strike of some 66 million years ago did in bringing about the large, big-brained mammals.
On the generalizability of Chomsky’s theory to all languages (including primitive languages), Everett (2006, 2016) spent many years in the Amazon basin of Brazil studying the Pirahã people, who have no written language or number system. Their history is transmitted across generations (two at most) entirely by word of mouth. The language has eight consonants, three vowels, and two tones. Sentences are very simple, with no embedded clauses such as, “John, who is a hunter, is an active individual.” Instead, the utterance would be: “John is a hunter. John is an active individual.” This structure, which lacks recursion, resembles the speech of children or adults first learning a language. Also, the language has no pronouns. Furthermore, it has a proximate tense (e.g., for the present) and a remote tense (e.g., for the past) but no perfect tense (a tense with no time stamp, e.g., “I have prepared some food”). The language does not permit the establishment of a creation myth, and the sense of time, e.g., historic time, is not well developed; much is set in the present. Hunting and foraging are a daily affair for the Pirahã people. The children are taught the names of all the plants and animals in the jungle, which can number in the thousands.
Accordingly, Chomsky’s theory fails to account for the evolutionary history of language, and it can only explain complex (recursive) languages, with little to say about more primitive languages such as the one spoken by the Pirahã people of Brazil. However, if a Pirahã child is raised in Sao Paulo in the Portuguese language, the child will master all the complexities of Portuguese, which has far more verb tenses than English, along with its number system and written script.
Relevant answer
Answer
I haven't read what Everett had to say about Chomsky's contribution to linguistics, but I think that, from the account given here, Everett's criticism of Chomsky is reductionist. While not a Chomskyan by persuasion, I believe that Chomsky's contribution amounts to more than the evolution of language and the generalization of his theory to all languages. If the account of Everett is not reductionist, I think Everett may have missed an important point that actually gave impetus to the theories of pragmatics and cognitive linguistics: Chomsky's deliberate avoidance of the intricacies of meaning and context in his formalist program. This takes me to the generalization problem referred to in this post. This is not a defense of Chomsky, but a point about the essence of theory. We all know that for a theory to count as one, it should be falsifiable. If the data of the Pirahã language contradict Chomsky's generalization, it simply means that his theory is not to be taken as an unfalsifiable theorem but as scaffolding in the evolution of linguistic theory at large.
  • asked a question related to Language
Question
5 answers
The hippocampal formation is central to the consolidation and retrieval of long-term declarative memory, memories that are stored throughout the neocortex with putative subcortical participation (Berger et al. 2011; Corkin 2002; Deadwyler et al. 2016; Hikosaka et al. 2014; Kim and Hikosaka 2013; Scoville and Milner 1957; Squire and Knowlton 2000; Tehovnik, Hasanbegović, Chen 2024; Wilson and McNaughton 1994). Subjects with hippocampal damage have great difficulty narrating stories (Hassabis et al. 2007ab), which can be viewed as a disruption of one’s stream of consciousness as it pertains to retrieving information. The retrieved stories, which are highly fragmented in hippocampal patients (Hassabis et al. 2007ab), are comparable to those evoked electrically by stimulating a single site in the parietal and temporal lobes (Penfield and Rasmussen 1952; Penfield 1958, 1959, 1975). Nevertheless, individuals with hippocampal damage can still engage others verbally, but the conversation is limited in that it is based on declarative memories that are not updated, making the hippocampectomized interlocutor seem out of touch (Corkin 2002; Knecht 2004). A rapid exchange of speech is dependent on an efference-copy representation, which is mediated through the cerebellum (Bell et al. 1997; Chen 2019; De Zeeuw 2021; Guell, Schmahmann et al. 2018; Loyola et al. 2019; Miles and Lisberger 1981; Noda et al. 1991; Shadmehr 2020; Tehovnik, Patel, Tolias et al. 2021; Wang et al. 2023).
Patient HM, who had bilateral damage to his hippocampal formation, had ‘blind memory’ (much like ‘blindsight’): when asked to name the president of the United States in the early 2000s he failed to recall the name, but when given a choice of three names (George Burns, George Brown, and George Bush) he was able to select George Bush (Corkin 2002). Therefore, his unconscious stores of information were intact (which is also true of blindsight for detecting high-contrast spots of light, Tehovnik, Patel, Tolias et al. 2021). HM also had memory traces of his childhood (a time well before his hippocampectomy), but the specifics were lost such that he could not describe even one event involving his mother or father (Corkin 2002). Although many presume that HM had memories of his childhood, these memories were so fragmented and lacking in content that referring to his childhood recollections as ‘long-term memories’ is questionable.
The idea that the brain becomes less active once a new task has been acquired through learning can be traced back to the experiments of Chen and Wise (1995ab) that were done in the supplementary motor area, Brodmann’s Area 6. Monkeys were trained to associate a visual image with a particular direction of saccadic eye movement, which could be up, down, left, or right of a centrally-located fixation of the eyes. For a significant proportion of neurons studied it was found that the activity of the cells decreased with overlearning an association. At the time of publication this counter-intuitive result was greeted with much skepticism. After reading the paper, Peter Schiller did not know what to make of the result since his results (seven years before) suggested that the supplementary motor area becomes more active and engaged once new tasks are learned (Mann, Schiller et al. 1988).
Years later, Hikosaka and colleagues continued this line of work to show that the diminution of activity with learning was a real neural phenomenon and that the diminished information was channeled to the caudate nucleus (Hikosaka 2019; Hikosaka et al. 2014; Kim and Hikosaka 2013), which is connected anatomically to the entire neocortex such that the head of the caudate innervates the frontal lobes whereas the tail of the caudate innervates the temporal lobes (Selemon and Goldman-Rakic 1985). Hikosaka (2019) has proposed that the memories of learned tasks are archived in the caudate nucleus, with new tasks stored in the head of the caudate and old tasks stored in the tail, perhaps for immediate use by the temporal lobes, which, if damaged, lose long-term memories, even those of one’s childhood (Corkin 2002; Squire et al. 2001).
That neurons throughout the brain (i.e., the cortex and subcortex) become less responsive to task execution once overlearned is a well-established fact (Lehericy et al. 2005). We have argued that this diminution of responsivity is the brain’s way of consolidating learned information efficiently, while reducing the energy expended for the evocation of a learned behavior (Tehovnik, Hasanbegović, Chen 2024). We and others (Lu and Golomb 2023) believe that all memories are stored according to the context of the memorization, which requires that a given site in the neocortex that contains a memory fragment such as a word or visual image be networked with other neurons to recreate the context, which we refer to as a declarative/conscious unit (Tehovnik, Hasanbegović, Chen 2024). When someone narrates a story, declarative/conscious units are concatenated in a string much like the serialization of the images of a film and this process involves both the neocortex and the cerebellum (Hasanbegović 2024).
Furthermore, a primary language (as compared to secondary languages) is stored in the neocortex and cerebellum in such a way that any damage to either structure often preserves the primary language while degrading the secondary languages (Mariën et al. 2017; Ojemann 1983, 1991; Penfield and Roberts 1966). All languages are networked separately in the brain (Ojemann 1991): a unique neocortical-cerebellar loop is summoned during the delivery of a speech in the chosen language (Tehovnik, Hasanbegović, Chen 2024). The language one thinks in (i.e., one’s counting language) is the language that is well archived and highly distributed (including areas of the brain that mediate mathematics), thus making the language more resistant to the effects of brain damage.
In conclusion, information stored in the brain is no different from information stored in a university library: the ancient texts are all housed in a special climate-controlled chamber, while the remaining texts including the most recent publications are made available to all students and professors. Indeed, it is our childhood memories that define us and therefore they deserve to be archived and protected in the brain. The details of how this happens will need to be disclosed.
Relevant answer
Answer
There are different theories of information and different understandings and definitions of information. In my field, Claude Shannon's model is used, according to which information is encoded by the entropy of the signal. However, when making artificial intelligence systems, as has become fashionable, one must remember that in humans information arises only in interaction between two sources - that which arrives from sensors and that which is generated by the brain. What is available stored as memory links or genetically hardwired makes the incoming stream informative. It seems that the hippocampal structure plays a role in the interaction of the two sources.
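The entropy measure mentioned above can be made concrete in a few lines of Python (my own illustration of Shannon's definition, not part of the original answer):

```python
from collections import Counter
from math import log2

def shannon_entropy(signal):
    """Average information per symbol, in bits, per Shannon's definition."""
    counts = Counter(signal)
    total = len(signal)
    return -sum((n / total) * log2(n / total) for n in counts.values())

# A uniform four-symbol signal carries log2(4) = 2 bits per symbol...
print(shannon_entropy("ABCD" * 25))  # 2.0
# ...while a redundant signal carries less, which is the sense in which
# a predictable incoming stream is less informative.
print(shannon_entropy("AAAB" * 25))  # ~0.81
```

The second result illustrates the answer's point: the part of the incoming stream that is already predictable contributes little information.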
Best!
  • asked a question related to Language
Question
4 answers
“It was Pavlov who showed that language was a consequence of the human cerebral complexity and that it objectified the superiority and specificity of the human brain with respect to animal brains. He perceived language as a special type of conditioned reflexes, a second system of signalization, the first one being that of gnosis and praxis of direct thinking by images. To each image will be substituted through education its verbal denomination. Since they name everything, instead of associating images, human beings can directly associate the corresponding names, a system more efficient in maximizing the abstraction capabilities of the human brain” [Chauchard 1960, p. 122, from Michaud 2019].
In short, Pavlov believed that the process of thinking is possessed by all animals (which runs contrary to the views of Chomsky 1965, 2012), and what happened to humans (between 2 and 0.5 mya, Everett 2016; Kimura 1993) is that they invented language (as they invented writing, the steam engine, and AI) by using the ‘thinking’ process of the neocortex to make associations between sounds and objects in the real world (a little like what ChatGPT does today, but more efficiently and at an accelerated rate during development). The universal grammar proposed by Chomsky (1965) is merely an acknowledgement that all Homo sapiens are of the same species and therefore have a common capacity to acquire language, which today includes reading and writing, both of which have become global requirements for citizenship by way of state-sponsored education from K through 12. Indeed, Pavlov’s view (unlike Chomsky’s) fits better with our understanding of evolution and human inventiveness (Michaud 2019), two notions ignored by Chomsky.
Relevant answer
Answer
Furthermore, regarding a previous comment defending Noam Chomsky, the commentator needs to read more Noam Chomsky to understand that Chomsky is a philosopher and not a biologist. In all my years at MIT, he never attended even one neuroscience seminar, and I very much doubt that he attended seminars on genetics or evolutionary biology.
  • asked a question related to Language
Question
7 answers
The terms Fictional Language, Fictitious Language, Artificial Language, and Constructed Language have been used interchangeably in the papers I have read. Are there differences among these terms, and which is preferable?
Relevant answer
Answer
I agree with Ira. By 'constructed' we should understand any given language that has been artificially created by human beings. They can be applied to real life (e.g. Esperanto) or fiction, hence 'fictional' (Tolkien, G.R.R. Martin, Star Trek, etc.). I personally find 'fictitious' a less clear-cut term, as it can both characterise languages created for 'fictional' purposes and how they are perceived (as false or not genuine) inside a work of fiction by its narrative voice(s) or characters. I hope this can be of help.
  • asked a question related to Language
Question
29 answers
Dear Professors,
I am Ziad Rabea, a high school student, and I am delighted to share one of my latest projects with you, seeking your valuable feedback. After years of research and development, I'd like to introduce L.B.F.C.T (Linguistic Barriers Free Coding Technology).
WorldLang is a new programming language featuring dynamic keyword importation and an integrated translator, which enables the translation of code and language keywords from one language to another dynamically.
Context and Motivation:
According to statistics from Statista and Ethnologue, native English speakers comprise about 380 million of the global population of 8 billion, approximately 4.7%. Additionally, those who speak English as a second language constitute about 13%, leaving over 82% of individuals worldwide who do not speak English. Given that the next Steve Jobs could emerge from this vast majority of non-English speakers, it is imperative to provide tools that foster innovation and creativity across linguistic boundaries.
Although there have been previous attempts to address this issue, such as Citrine and Supernova, they often fell short because of their reliance on localization. While creating a programming language that allows coding in one’s native language is a significant step, it does not solve the problem entirely and can even exacerbate it. For instance, a programming language tailored to a specific natural language would be unusable by anyone except speakers of that language, hindering collaborative development across diverse linguistic groups.
What's new? :
WorldLang is the world’s first programming language to feature dynamic keyword importation during the tokenization phase and an integrated translator. This allows users to download code written in one language, translate it into their native language, edit it, and then retranslate it back into the original language or any other language. This capability ensures that developers from different linguistic backgrounds can collaborate seamlessly. WorldLang is a global symphony of code.
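The dynamic keyword importation described above could, in principle, work via per-language keyword tables consulted at tokenization time. The sketch below is my own hypothetical illustration of that idea (the table contents and function names are invented, not WorldLang's actual API):

```python
# Hypothetical sketch of keyword-table translation at the token level;
# not WorldLang's actual implementation.
KEYWORDS = {
    "en": {"if": "IF", "print": "PRINT"},
    "es": {"si": "IF", "imprimir": "PRINT"},
}

def translate(code, src, dst):
    """Map each source-language keyword to its canonical token,
    then re-emit it using the target language's table."""
    to_canonical = KEYWORDS[src]
    from_canonical = {tok: word for word, tok in KEYWORDS[dst].items()}
    return " ".join(
        from_canonical[to_canonical[w]] if w in to_canonical else w
        for w in code.split()
    )

print(translate("if x print x", "en", "es"))  # si x imprimir x
```

Identifiers (here, `x`) pass through untouched; only the keywords are remapped, which is what would let the same program round-trip between languages.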
As a high schooler, I accept that my research skills may not be that good, but I would love to hear your thoughts, feedback, and suggestions on WorldLang.
If you are interested in collaborating or testing this new language, please feel free to reach out. Your expertise and insights will be invaluable in refining and improving this technology.
Thank you, and I look forward to an engaging discussion!
Best regards,
Ziad Rabea
Relevant answer
Answer
Dear Ziad Rabea ,
By taking a multilingual software development approach, you can improve the efficiency of your localization processes, reduce translation cost, and provide a better experience for your software's international users. Here's how you can switch to a multilingual software development process.
Regards,
Shafagat
  • asked a question related to Language
Question
1 answer
1) Identify the concrete situation.
2) Have empathy.
3) Either already know the language or have a sufficiently effective AI translator.
Relevant answer
Answer
To interpret something:
1. **Understand Context**: Consider the context and background information.
2. **Analyze Content**: Examine the details and main points.
3. **Identify Key Themes**: Determine the central themes or messages.
4. **Evaluate Significance**: Assess the importance and implications.
5. **Formulate Insight**: Develop your understanding or conclusion based on the analysis.
This approach helps in making sense of data, texts, or situations effectively.
  • asked a question related to Language
Question
3 answers
Ablation of the cerebellum does not abolish locomotion in mammals (Ioffe 2013); it merely induces atonia: body movements become clumsy, with postural and vestibular deficits. These deficits follow from the loss of proprioceptive and vestibular input to the cerebellum, which encodes where the body is with respect to itself and the outside world, i.e., with respect to the gravitational axis (Carriot et al. 2021; Demontis et al. 2017; Fuchs and Kornhuber 1969; Lawson et al. 2016; Miles and Lisberger 1981). Animals have difficulty crossing a balance beam following complete cerebellar damage, and the righting reflex is interrupted. Consciousness, which is a declarative attribute, is not affected following cerebellar damage (D’Angelo and Casali 2013; Petrosini et al. 1998; Tononi and Edelman 1998). As with cerebellar impairment, locomotion is not eliminated following neocortical ablation, but the sequencing of movement is severely affected (Vanderwolf 2007; Vanderwolf et al. 1978). Stepping responses can be evoked in spinal animals, but with a total loss of balance and muscular coordination, since both cerebellar and neocortical support is then absent (Audet et al. 2022; Grillner 2003; Sherrington 1910).
Following a stroke that affected the left mediolateral and posterior lobes of the cerebellar cortex (including the left dentate nucleus), it was found that the subject (aged 72), a (right-handed) war correspondent who had been versed in seven languages, could no longer communicate in his non-primary languages (see Fig. 1, Mariën et al. 2017): French, German, Slovenian, Serbo-Croatian, Hebrew, and Dutch (in the order of having learned the languages before the age of 40). Before the stroke, the subject used Dutch, French, and English regularly. After the stroke his primary language, English, remained intact. Most significantly, on the day of the stroke, all thinking in the second languages was abolished (see Footnote 1). One day following the stroke, however, the French language returned. Nevertheless, the remaining secondary languages were abnormal. Reading was better preserved than oral and written language, likely because reading depends mainly on scanning a page with the eyes and having an intact neocortex for word comprehension (fMRI revealed language activations in the neocortex and in the intact right cerebellar hemisphere, Mariën et al. 2017). Speaking and writing, on the other hand, are more dependent on the sequencing of multiple muscle groups, a task of the cerebellum (Heck and Sultan 2002; Sultan and Heck 2003; Thach et al. 1992). When the subject spoke or wrote in a non-primary language, English words would intrude. The naming of objects and actions verbally was impaired, and writing was severely disrupted. When high-frequency visual stimuli (objects, animals, etc.) were presented (1 month after the stroke), identifying an object with the correct word surpassed 80% correct for English, French, and Dutch, whereas it remained under 20% correct for German, Slovenian, Serbo-Croatian, and Hebrew.
Since the execution of behavior depends on loop integrity between the neocortex and cerebellum (Hasanbegović 2024), it is highly likely that damage to the cerebellum undermined this integrity such that the least overlearned routines—German, Slovenian, Serbo-Croatian, and Hebrew—were disturbed. Note that a functional left neocortex (of the right-handed subject) with a preserved right cerebellum was sufficient to execute the overlearned languages—English, French, and Dutch.
Based on our understanding of cerebellar function, if the entire cerebellum (including the subjacent nuclei) were damaged in the subject, we would expect that even English, the primary language, would be compromised, and most importantly, the learning of a new language would be rendered impossible, given the dependence of behavioral executions (and learning) on intact neocortical-cerebellar loops (Hasanbegović 2024; also see: Sendhilnathan and Goldberg 2000b; Thach et al. 1992). Thus, thinking is affected by damage to neocortical-cerebellar loops, which concurs with the behavioral findings of Hasanbegović (2024).
Footnote 1: Self-report by the patient about the day of the cerebellar stroke: “I was watching television at my apartment in Antwerp when suddenly the room seemed to spin around violently. I tried to stand but was unable to do so. I felt a need to vomit and managed to crawl to the bathroom to take a plastic bowl. My next instinct was to call the emergency service, but the leaflet I have outlining the services was in Dutch and for some reason, I was unable to think (or speak) in any language other than my native English. I have lived in Antwerp for many years and use Dutch (Flemish) on a day-to-day basis. I called my son-in-law, who speaks fluent English and he drove me to Middelheim Hospital. We normally speak English when together. I understood none of the questions asked to me in Dutch by hospital staff and they had to be translated back to me in English. My speech was slurred. I had lost some words, I was aware of that, but I cannot recall which words. I made no attempt to speak any of the other languages I know, and in the first hours of my mishap happening, I do not think I realized that I had other languages.” (Mariën et al. 2017, p. 19)
Figure 1. Human cerebellar cortex. The mediolateral and posterior lobes are indicated. The mediolateral lobe of the cerebellum (right and left) is part of the cortico-frontal-cerebellar language loop (Stoodley and Schmahmann 2009), and cerebellar grey matter density in bilingual speakers is correlated with language proficiency (Pliatsikas et al. 2014). Typically, the innervation of the left neocortical language areas is strongest to the right cerebellum in right-handed subjects (Van Overwalle et al. 2023). Illustration from figure 8 of Tehovnik, Patel, Tolias et al. (2021).
Relevant answer
Answer
Thank you for the kind words of encouragement (Ευχαριστώ για τα καλά λόγια).
  • asked a question related to Language
Question
2 answers
We can totally get the sentence meaning without them.
Relevant answer
Answer
Chinese does not use verb tenses or, as I understand it, the verb "to be", so listeners have to work out timing from context. But verb tenses are deeply ingrained in English.
Yes, we can decode the meaning, but omitting tenses makes the speaker sound uneducated, as if using "bad grammar", thus lowering credibility.
Note that when AI translates Chinese into English, it generally renders everything in the present tense, even when the text is clearly talking about the past or the future.
  • asked a question related to Language
Question
4 answers
Theta activity (~6-10 Hz) has been associated with transitions between different frames of consciousness, as studied using binocular rivalry (Dwarakanath, Logothetis 2023). This rhythm is modulated by neurons in the septal area by way of the hippocampus (Buzsáki 2006; Stewart and Fox 1990). A travelling theta wave occupies the posterior-anterior length of the hippocampus during locomotion along a track (Lubenov and Siapas 2009; Zhang and Jacobs 2015). Both excitatory (cholinergic) and inhibitory (GABAergic) neurons located within the septum are important for maintaining this rhythm (Stewart and Fox 1990). These neurons not only innervate the hippocampus, but they also affect the neocortex (Beaman et al. 2017; Bjordahl et al. 1998; Engel et al. 2016; Goard and Dan 2009; McLin et al. 2002; Miasnikov et al. 2009; Pinto et al. 2013; Tamamaki and Tomioka 2010; Vanderwolf 1969, 1990) so that the two regions can exhibit synchronized activations when tasks such as running along a track, playing a musical instrument, or delivering a speech are being executed. These behaviors require transitions between different frames of consciousness, as stored declaratively within the neocortex (Corkin 2002; Dwarakanath, Logothetis 2023; James 1890; Sacks 1976, 2012; Squire et al. 2001). Having both excitatory and inhibitory inputs to the neocortex (Stewart and Fox 1990; some 2/3 of neocortical neurons are excitatory and the remainder are inhibitory, Bekker 2011) allows for specific strings of consciousness to be concatenated, but only after overtraining, which diminishes the role of the cerebellar cortex (e.g., Lisberger 1984; Miles and Lisberger 1981). Thus, the concatenated items of the neocortex would need ready access to the brain stem and spinal cord nuclei to produce a sequence of behaviors (Kimura 1993; Vanderwolf 2007). For this to be accomplished there needs to be a fine interplay between the inhibitory and excitatory fibres of the neocortex.
Exactly how this happens sequentially remains to be deduced by careful experimentation, but we now have the technology to study this globally in the brain (e.g., Hasanbegović 2024).
The travelling wave via the hippocampus (Lubenov and Siapas 2009; Zhang and Jacobs 2015) must be paired with specific neocortical neurons to deliver a declarative expression, such as—"I want to be a scientist”—which is generated by the muscles controlled by the brain stem vocal apparatus (see Footnote 1). Each cycle of a travelling wave would sample a particular sequence of activations within the neocortex and across one cycle a specific collection of neurons would be sequenced, and items stored within each neuron delivered verbally. This process would be repeated—the repetition of unique strings of consciousness—until the completion of a speech. The cerebellar cortex would only be engaged while delivering a speech, if alterations needed to be made to the executable code, which would happen, for example, if someone from the audience asked a question. Such an alteration would require a volitional intervention by the speaker (i.e., by the neocortex) to interrupt the automatic running of the executable code as memorized.
Footnote 1: The reason humans have been endowed with speech is because the M1 pyramidal fibres innervate the vocal apparatus directly which is composed of the following cranial nerves: V, VII, X, and XII (Aboitiz 2018; Kimura 1993; Ojemann 1991; Penfield and Roberts 1966; Simonyan and Horwitz 2011; Vanderwolf 2007). This allows for maximal control over the speech muscles. It is known that most speech, irrespective of language type, can be transferred at about 40 bits per second (Coupé et al. 2019; Reed and Durlach 1998; Tehovnik and Chen 2015). One will need to investigate whether this limit is set by the number of pyramidal fibres dedicated to the production of speech [note that a brain-machine interface for speech was found to transfer 2.1 bits per second for neural recordings made in the speech area of M1 (Willett, Shenoy et al. 2023), which falls well short of the 40 bits per second needed for normal performance]. Some 100 of the 700 skeletal muscles of the human body are involved in the delivery of a speech to operate the vocal apparatus (Simonyan and Horwitz 2011).
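The ~40 bits per second figure cited above is a product of speaking rate and per-syllable information content. The numbers in the sketch below are purely illustrative (not data from Coupé et al. 2019); they are chosen only to show how a fast, low-density language and a slow, dense one can converge on a similar rate:

```python
def info_rate(syllables_per_second, bits_per_syllable):
    """Information rate of speech in bits per second."""
    return syllables_per_second * bits_per_syllable

# Hypothetical values: a fast but information-sparse language...
print(info_rate(8.0, 5.0))  # 40.0 bits/s
# ...and a slow but information-dense one arrive at the same rate.
print(info_rate(5.0, 8.0))  # 40.0 bits/s
```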
Relevant answer
Answer
That is one of the most amazing sequential descriptions I have ever seen put together! Very enjoyable and informative reading!
  • asked a question related to Language
Question
5 answers
When does a language become dead?
Relevant answer
Answer
When people stop using it in their communication for one reason or another. Different factors can lead to language death: historical, political, cultural, economic, social, and psychological. When? When most of these factors come together.
  • asked a question related to Language
Question
2 answers
Without a neocortex, language processing in humans is impossible (Kimura 1993; Ojemann 1983, 1991; Penfield and Roberts 1966), and without a hippocampus (but with an intact neocortex and cerebellum) new language associations cannot be consolidated into long-term memory (Corkin 2002). Noam Chomsky (1965), the father of modern linguistics, made two bold claims some 60 years ago. First, he declared that all humans have a universal grammar that is genetically based, which explains why language acquisition is so rapid in young children. Second, he proposed that a central process in language acquisition is a principle called ‘merge’, which takes two syntactic elements ‘a’ and ‘b’ and merges them to form ‘a + b’. For example, ‘the’ and ‘apple’ are combined to yield ‘the apple’. This process can apply to the results of its own output, such that ‘ate’ can be combined with ‘the apple’ to yield ‘ate the apple’. Language is thus built up from component parts using the process of Merge. The basic elements of language (whether auditory or visual) are stored in Wernicke’s and Broca’s areas in a declarative format (Corkin 2002; Penfield 1975; Penfield and Roberts 1966; Scoville and Milner 1957; Squire and Knowlton 2000; Squire et al. 2001) according to the learning history of an individual to create a linguistic map that is unique (Ojemann 1991).
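The Merge operation described above can be sketched in a few lines: a binary pairing whose output can itself be merged again, which is all that recursion requires (my illustration of the formal idea, not a claim about its neural implementation):

```python
def merge(a, b):
    """Chomsky-style Merge: combine two syntactic objects into one."""
    return (a, b)

np = merge("the", "apple")   # ('the', 'apple')
vp = merge("ate", np)        # Merge applied to its own output

def flatten(tree):
    """Read the terminal words back off the binary tree."""
    if isinstance(tree, tuple):
        return flatten(tree[0]) + " " + flatten(tree[1])
    return tree

print(flatten(vp))  # ate the apple
```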
The neocortex of mammals was designed to make associations at the synaptic level, which is well established (Hebb 1949, 1961, 1968; Kandel 2006; also see Pavlov 1927, p. 328, who found that classical conditioning is rendered ineffective 4.5 years after neocortical removal, while ‘vegetative’ conditioning remains intact, Gallistel 2022). Normally, electrical stimulation of M1 (i.e., motor cortex) yields a muscle twitch, but after electrical stimulation of M1 is temporally paired with electrical stimulation of V1 (i.e., visual cortex), electrical stimulation of V1 evokes a muscle twitch on its own (Baer 1905; Doty 1965, 1969). Furthermore, V1 conditioning is dependent on descending pyramidal fibres (Logothetis et al. 2010; Rutledge and Doty 1962; Tehovnik and Slocum 2013), which means subcortical circuits must be involved in the learning process. And we already know which subcortical structures are important here: the hippocampus consolidates the declarative information at the level of the neocortex (Corkin 2002; Penfield 1975; Penfield and Roberts 1966; Scoville and Milner 1957; Squire and Knowlton 2000; Squire et al. 2001; Swain, Thompson et al. 2011) and the cerebellum converts the declarative information into executable code, i.e., to drive the vocal cords for speaking and hand movements for writing (Tehovnik, Hasanbegović, Chen 2024).
Hence, the neocortex, the hippocampus, and the cerebellum together are necessary for humans to acquire language as envisioned by Chomsky (1965). This capacity evolved from mechanisms already existent in mammals/vertebrates (i.e., a telencephalon and a cerebellum) and was passed on to archaic Homo sapiens some five hundred thousand years ago (Kimura 1993), though some believe that the basic elements of language existed in Homo erectus 2.5 million years ago (Everett 2016).
Note: Activation of two microzones composed of Purkinje neurons in the cerebellar flocculus (one for horizontal movement and a second for vertical movement) using optogenetics induces precise movement of the ipsilateral eye of the mouse (from Fig. 5 of Blot, De Zeeuw et al. 2023). This precision is such that each eye has independent innervation for VOR and OKN (the independence allows the eyes to verge across different depth planes). Although we do not have the data for driving the vocal cords, distinct microzones must be activated when we learn to speak a language. This is how declarative information of the neocortex is converted into a motor response (a sound) during learning. No need to invoke abstract concepts to explain Chomsky’s ‘Merge’ since the brain is explainable biologically.
Relevant answer
  • asked a question related to Language
Question
7 answers
The 5th International Conference on Language, Art and Cultural Exchange (ICLACE 2024) will be held on May 17-19, 2024 in Bangkok, Thailand.
ICLACE 2024 aims to bring together innovative academics and industrial experts in the field of Language, Art and Culture in a common forum. The primary goal of the conference is to promote research and developmental activities in Language, Art and Culture; a further goal is to promote the interchange of scientific information between researchers, developers, engineers, students, and practitioners working all around the world.
The conference will be held every year, making it an ideal platform for people to share views and experiences in Language, Art and Culture and related areas. We warmly invite you to participate in ICLACE 2024 and look forward to seeing you in Bangkok, Thailand!
Important Dates:
Full Paper Submission Date: March 10, 2024
Registration Deadline: April 1, 2024
Final Paper Submission Date: April 28, 2024
Conference Dates: May 17-19, 2024
---Call For Papers---
The topics of interest for submission include, but are not limited to:
◕Language
· Philosophy of Language and International Communication, Language and National Conditions
· Oral Teaching, Chinese Language and Literature, Philosophy in Language
· Body Language Communication, Language Research and Development, Language Expression
· Analysis and Research on Teachers' teaching Language and Network Language
◕ Art
· Materials and Technology, Environmental Sculpture Modeling, Murals and Reliefs, Decorative Foundation, Aesthetics
· Public Facilities Design, Architecture and Environment Design, Space Form Design, Public Governance Change
· Exhibition Design, Art design, Digital Media Technology, Landscape Planning and Design, Gem design, Industrial Design
· Art Theory, Music and Dance, Drama and Film and Television, Fine Arts, Chinese Calligraphy and Painting, Film and Film Production
◕ Cultural Exchange
· Campus and Corporate Culture Construction, Adult Education and Special Education, Creative Culture Industry and Construction, Educational Research
· Chinese Traditional Culture and Overseas Culture, Comparative Study of Chinese and Foreign Literature, Comparison of Chinese and Foreign Cultures and Cross-Cultural Exchanges
· Regional Culture and Cultural Differences, Intangible Cultural Heritage, Cultural Confidence and Connotation
· Red Inheritance and Cultural Heritage, Cultural Industry, Drama, Philosophy and History
For More Details please visit:
Relevant answer
Answer
Thanks for the response.
  • asked a question related to Language
Question
4 answers
Are people more likely to mix up words if they are fluent in more languages? How? Why?
Relevant answer
Answer
Certainly! A person who is fluent in more than one language is more likely to code-switch and mix words from different languages into her L1. Language users, consciously or unconsciously, seek whatever makes things easier for themselves. The reasons for this interference vary:
1/ Similarities in pronunciation, grammar, and vocabulary among language systems such as French, English, and Spanish play an important role in a multilingual society. Where people know more than one language for historical reasons, mixing words becomes common when they communicate with speakers of other languages. A person who is fluent in French may easily mix in French words when writing or speaking English.
2/ Language dominance: a bilingual speaker who uses the second language all day at work and with colleagues may not be able to keep its words out of her mother tongue at home.
3/ Prestige is another reason why people mix words. For example, in Algeria a person who mixes French (a second language) words or sentences into Arabic is considered intellectual.
4/ In fact, interference and code-switching occur even within the same language. For instance, a person who lives or works in an area far from home may stand out because she uses different vocabulary and body language, and the same mixing happens when she uses her own variety back at home.
  • asked a question related to Language
Question
2 answers
Listening to the BBC this morning (Jan 13, 2024), I heard a program whose guests were inter-language translators of literary books. The question addressed by the host was: Will AI replace translators? After a group of translators of scholarly novels and poems was interviewed, it became clear that these folks will not be replaced any time soon, no matter how good the AI technology might now appear to be. The reason? Context!
For example, take the expression ‘Trump Matters’. A simple interpretation of this might be: “Yes, Trump is a human being and like all human beings he matters.” But someone who has absorbed a lot of current affairs as they pertain to the United States might interpret it as a play on the expression ‘Black Lives Matter’. If so, this introduces a whole new series of complexities for a translator. First, one must understand that ‘Black Lives Matter’ is a movement in the United States that rose to worldwide prominence after the death of George Floyd, an African American killed by a police officer kneeling on his neck. Within this context, the term ‘Trump Matters’ cannot be translated using the simple formulation; the translator must be familiar with the movement and with the intentions of the person who coined the term ‘Trump Matters’, who believes that Trump matters because he and his supporters are trying to suppress the history of the African American experience in the US, something the Ku Klux Klan did throughout the United States during American Reconstruction and beyond (1865-1960).
Neuroscientists now understand that objects/words are stored in the brain according to their context (Lu and Golomb 2023). Furthermore, when the neocortex/hippocampal complex is damaged one cannot learn new words (Corkin 2002); following cerebellar damage one cannot learn new movements as triggered by a declarative, conscious context such as an embedded word (Tehovnik, Hasanbegović, Chen 2024). Finally, following destruction of the language areas of neocortex, one is (forever) ‘blind’ to all words and phrases, for both their reception and production (Kimura 1993; Penfield and Roberts 1966), even if the cerebellum remains intact.
In short, AI will not be replacing the human brain anytime soon due to the problem of storing ‘context’. It is the storage of context along with the object that makes an individual’s recollection of history unique, which means Einstein, Kasparov, and Pelé cannot easily be converted from one to the other, brain-wise. So, all the nonsense of hooking up different brains to transfer their experiences (e.g., Pais-Vieira, Nicolelis et al. 2013) is just that: nonsense (see Tehovnik and Teixeira e Silva 2014).
Relevant answer
Answer
Until AI systems learn to determine meaning, translators can sleep peacefully. This is unachievable within a 5-10 year horizon.
  • asked a question related to Language
Question
3 answers
No one has the mental capacity to know all languages. Additionally, the more languages one is fluent in, the more likely that individual is to mix up words. Thus, knowing enough languages for survival is optimal, while artificial intelligence could, and potentially will, bridge language barriers. Of course, knowing three or more languages is somewhat of an advantage.
Relevant answer
Answer
Sure, a focused study helps to identify many particular points of strength in the language.
  • asked a question related to Language
Question
6 answers
this is what they say on etymoline.com:
"late 14c., auctorisen, autorisen, "give formal approval or sanction to," also "confirm as authentic or true; regard (a book) as correct or trustworthy," from Old French autoriser, auctoriser "authorize, give authority to" (12c.) and directly from Medieval Latin auctorizare, from auctor (see author (n.))."
Relevant answer
Answer
That is the way. The first step is the addition of the suffix -ize, which is used to create verbs from adjectives, to the root 'author' (in its adjectival meaning), creating the verb 'authorize' with the meaning 'to make something A ('author')'. After this, we add the negative prefix un-, which means 'the opposite or contrary action of V', creating 'unauthorize'. The chain digital > digitalize > undigitalize follows the same evolution.
  • asked a question related to Language
Question
5 answers
Could it be possible to access similar studies in commonly used languages other than English regarding the relationship between title length and citation in academic articles?
Relevant answer
Answer
Hello Prof Metin Orbay
I have also heard (but do not know whether there is proof) that if you make your article title a question, it is less likely to be read and cited. This could be an old wives' tale!
  • asked a question related to Language
Question
3 answers
Our world has many kinds of languages. Some languages make an important contribution to our lives; for example, we can learn much of science and technology transferred through them. For others, we may find no advantage in studying them, so it looks like a waste of time. What is your opinion on this topic?
Relevant answer
Answer
Thank you for your insight Rhianon Allen, and Victoria Sethunya.
  • asked a question related to Language
Question
3 answers
‘Entrance to courses is frequently restricted by high prerequisites in terms of prior academic performance (Arendt, Lange, & Wakefield, 1986; Crawford-Lange, 1985; Lange, 1987). This elitism is curious when one considers that it operates under the assumption that some students cannot learn a second language when virtually all students have achieved proficiency in a first language’ (Crawford & McLaren 2004, p. 141).
Should Higher Education institutes in native English-speaking countries request from Non-native English Speakers (NNES) English proficiency requirements for entry without mandating the same proficiency tests for Native English Speakers (NES)?
Some Higher Education institutes in native English-speaking countries require proof of proficiency from Non-native English-speaking individuals for entrance. There is no question that students need to communicate in the target culture language. However, these institutes enforce strict IELTS band scores for each language skill (reading, writing, speaking, and listening) from NNES but do not mandate that NES undertake the proficiency test. This assumes that NES are naturally skilled in reading, writing, speaking, and listening, whereas, in reality, not all NES have strong writing or reading skills.
Arguments to consider:
1) Some NNES might have exam anxiety, which puts them at a disadvantage when taking English proficiency tests.
2) Some topics in English proficiency tests are specific to NES cultures that NNES may be unfamiliar with.
3) NNES should have the opportunity to be accepted regardless of their English proficiency scores with options for prerequisite courses for improvement.
4) Different cultures have different writing styles. Language Tests assessors might not be familiar with these cultural differences, which may affect grading.
Relevant answer
Answer
English proficiency is a requirement at all institutions, with some differences from one institution to another depending on the discipline the student wants to study.
  • asked a question related to Language
Question
7 answers
As with le/la in French and der/die/das in German, some languages have genders for words and hence gendered articles. Grammatically, gender for words is a complete redundancy! Governments should cancel it officially as soon as possible so that people can also learn those languages more easily. One of the reasons English almost became the universal language is that its words are genderless!
"It's an inheritance from our distant past. Researchers believe that Proto-Indo-European had two genders: animate and inanimate. It can also, in some cases, make it easier to use pronouns clearly when you're talking about multiple objects."
As Mark Twain once wrote in reference to German:
A person’s mouth, neck, bosom, elbows, fingers, nails, feet, and body are of the male sex, and his head is male or neuter according to the word selected to signify it, and not according to the sex of the individual who wears it! A person’s nose, lips, shoulders, breast, hands, and toes are of the female sex; and his hair, ears, eyes, chin, legs, knees, heart, and conscience haven’t any sex at all…
Relevant answer
Answer
Each language has its own rules and structure. Governments have nothing to do with this. It is language specific. You can not change it in a fortnight.
Regards
Mustapha Boughoulid
  • asked a question related to Language
Question
2 answers
The question of transliteration (transcription) from Ukrainian Cyrillic to Latin in scientific texts is something every Ukrainian researcher has faced. However, I could not find a perfect solution, so I am asking for your opinion.
Previously, for the transliterations of Ukrainian texts, I used transliteration rules from 27.01.2010 (http://ukrlit.org/transliteratsiia), which are known in Ukraine but not always understandable for foreigners. E.g. my previous surname was also transliterated using this standard (Дарія Ширяєва -> Dariia Shyriaieva, and to be honest, I do not know any foreigner who can read my name correctly using this official transliteration, especially the four vowels in the line "iaie"...)
So, it was not an ideal option, but I was used to it, and it is an accepted transliteration. That's why I defended this transliteration in my discussions with others.
However, I see that many people follow the ISO 9 standard (https://en.wikipedia.org/wiki/ISO_9) as an international standard, which also seems not ideal to me (at least the last version). Also, recently I found the mention of a new transliteration standard (quite a strange one!): "DSTU 9112:2021. Cyrillic-Latin transliteration and Latin-Cyrillic retransliteration of Ukrainian texts. Writing rules" (https://uk.wikipedia.org/wiki/%D0%94%D0%A1%D0%A2%D0%A3_9112:2021).
Could you please explain how you transliterate Ukrainian texts, which standard you use and why?
Thank you very much!
Dariia
Relevant answer
Answer
The same. As far as I understand, 2010 style is still used in official documents. Plus, despite the shortcomings, this is the simplest option, for which the standard Latin alphabet layout is enough.
  • asked a question related to Language
Question
9 answers
In the UAE, other Arabic dialects are also used. I just want to examine students' attitudes towards English, Standard Arabic, and the spoken dialect using the matched guise technique. So which dialect should be used in the recorded dialect guise?
Relevant answer
  • asked a question related to Language
Question
6 answers
One of the answers would be sensory input, but I want to know what others think.
Relevant answer
Answer
Language is the birthright of every child of the universe. We all know that from the moment of birth every child is surrounded by sound. All human beings, irrespective of caste, creed, and religion, are social animals raised in the company of their society and culture, and as every human being grows, language comes automatically to the ear and mind. It is natural that every human being makes meaning and understanding through language.
This is my personal opinion.
  • asked a question related to Language
Question
10 answers
In Turkey, translation is used in the multiple-choice format in language proficiency exams. I wonder if there are any other examples around the world.
  • asked a question related to Language
Question
4 answers
Hi, everyone. :)
Language maintenance and language shift are an interesting topic. Speaking of Indonesia, our linguists note that as of 2022 Indonesia has 718 languages. Indonesia really cares about its existing languages.
One thing that is interesting, language maintenance and language shift are also influenced by geographical conditions.
To accommodate 718 different languages, Indonesia has a geographical condition of islands. If we move from island to island in Indonesia, the use of the language is very contrasting, there is contact of different languages ​​between us.
Some literature states that language maintenance and language shift are strongly influenced by the concentration of speakers in an area.
So, in the developments related to the topic of language maintenance and language shift regarding geographical conditions, to what extent have linguists made new breakthroughs in this issue?
I think that the study of language maintenance and language shift in relation to regions is like the study of food availability or state territory, in which the area becomes the main factor in this maintenance.
I put this question to all linguists: do you have a new point of view on the keywords language, maintenance, and geography?
Kind regards :)
Relevant answer
Answer
Language maintenance is the preservation of a language (usually an L1) despite the influence of external sociolinguistic forces (usually more powerful languages). Language shift is the transfer, replacement, or assimilation of (usually) an L1 by an L2, driven mainly by external sociolinguistic forces that lead a speech community to adopt a different language over time. This happens because speakers may perceive the new language as more prestigious, stable, and standardized than their (lower-status) L1. An example is the shift from first languages to a second language such as English.
Solution for language maintenance and protection from language shift rests on Social networks.
Social network deals with the relationships contracted with others, with the community structures and properties entailed in these relationships (Milroy, 1978,1980 &1987)
· It views social networks as a means of capturing the dynamics underlying speakers’ interactional behaviours and cultures.
The fundamental assumption is that people create their communities with meaningful framework in attaining stronger relationship for solving the problems of daily life.
Personal communities are constituted by interpersonal ties of different types, strengths, and structural relationships between links (varying in nature) but a stronger link can become the anchor to the network.
For close-knit network with strong ties
Such networks have the following characteristics, they are
  • Relatively dense = everyone would know everyone else (developing a common behavior and culture)
  • Multiplex = the actors would know one another in a range of capacities
Where do we find close-knit networks? In smaller communities, but also in cities, because of cultural and economic diversity, e.g. newer immigrant communities or highly educated individuals.
Functions:
  1. Protect interest of group
  2. Maintain and enforce local conventions and norms that are opposed to the mainstream -> linguistic norms, e.g. vernaculars, are maintained via strong ties within close-knit communities.
Network with weak ties
These networks have the following characteristics, they are:
  • Casual acquaintances between individuals
  • Associated with socially and geographically mobile persons
  • They often characterize the relations between groups
They lead to a weakening of the close-knit network structure -> such networks are prone to change, innovation, and influence between groups, and may lead to language shift/transfer/replacement.
  • asked a question related to Language
Question
3 answers
Hello all. I hope you are always in good health.
In the current era, what factors are most influential in language maintenance or language shift?
Generally, language maintenance and language shift involve attitudes, bilingualism, number of speakers, regional concentration, genealogy, etc.
Share your experience here. :)
Relevant answer
Answer
The factors are diverse and include political, social, demographic, economic, cultural, linguistic, psychological and institutional support factors. They are demonstrated in this article
  • asked a question related to Language
Question
11 answers
From Hamlet: “What a piece of work is a man, how noble in reason, how infinite in faculties, in form and moving how express and admirable, in action how like an angel, in apprehension how like a god: the beauty of the world, the paragon of animals!”
From Herder’s On the Origin of Language (Abhandlung über den Ursprung der Sprache): “... we perceive to the right and to the left why no animal can invent language, why no God need invent language, and why man, as man, can and must invent language."
When Shakespeare and Herder use the word “man”, do they mean that every individual human being, or all of humanity acting collectively, is noble in reason (per Hamlet) or creates language (per Herder)? Do they use the word “man” as representative of humanity, or do they mean that every individual human being warrants admiration?
Relevant answer
Answer
Very much both. But genius is individual.
  • asked a question related to Language
Question
33 answers
Is it a problem of philosophy, language, physics, thermodynamics, statistical mechanics, or brain physiology? Or something else? Or beyond understanding?
A physiological approach is discussed by Joseph LeDoux (in The Deep History of Ourselves, 2020) among other authors. A physics orientation is considered in Deepak Chopra, Sir Roger Penrose, Brandon Carter (How Consciousness Became the Universe: Quantum Physics, 2017). David Rosenthal has written several books of philosophy about consciousness. And Bedau 1997 and Chalmers 2006. Which is the right conceptual reference frame? Or is more than one required?
Relevant answer
Answer
  • asked a question related to Language
Question
37 answers
Fellow psychologists and people of great curiosity, greetings! Please help a novice with this topic. I was asked my opinion on how well our words represent our true thoughts and beliefs, which left me wondering whether there is empirical evidence on the subject. As it is not my field, I had quite a hard time finding the right words for the search engine. It would be great if you could suggest a few readings or simply share your thoughts!
(There are no parameters so far; it could be anything related to the topic: to what extent does language reflect attitude; what factors influence the truthfulness of words; when we change our attitude towards a certain topic, do our words adapt just as fast? etc.)
Thank you!
Steven
Relevant answer
Answer
There are cognitive skills that guide behavior. The deployment of these skills can be regarded as involving thought at various levels of awareness, including unconscious thought. Such know-how cannot be completely verbalized and indeed, some verbalization can interfere with the acquisition or exercise of the skill. Developers of AI drawing on human exemplars of expertise face this problem when they try to reduce skills to rules (rules are verbal) inasmuch as human experts often don't seem to employ a rule-based approach and even when they do invoke rules, their rules don't fully represent their modus operandi.
  • asked a question related to Language
Question
7 answers
I am interested in meaning-making practices associated with visual language and what that means for traditional curricula in the English-speaking Caribbean.
Relevant answer
Answer
In the Middle East, there is no interest in this topic
  • asked a question related to Language
Question
7 answers
Can anyone recommend a journal for submission? I am particularly looking for journals that (i) accept pieces in the 800 to 2000 word range, and (ii) that have no publication fees.
Relevant answer
Answer
Karl Pfeifer you can try Wiley or the International Journal of Linguistics, Literature, and Translation (IJLLT)
  • asked a question related to Language
Question
10 answers
In a variety of Persian I am studying, it appears that, regardless of stress position, all short (mono-moraic) vowels are reduced to schwa in open syllables. More precisely, all long (bi-moraic) vowels are kept intact, and short vowels have a surface representation only when they are the nucleus of a closed syllable. Has any research provided evidence of a language or variety that fits a similar phonological pattern?
Any information would be greatly appreciated.
Relevant answer
Answer
Johan Schalin Thanks for the reply. I will read it with great interest.
  • asked a question related to Language
Question
9 answers
Dear Friends,
Greeting.
Happy New Year. I wish everybody a prosperous New Year.
I'm thinking of a project to check which sounds (phonetics) are lost or promoted when switching from one alphabet to another. For example, in the switch from Arabic to Latin letters in Turkey, did the Latin alphabet preserve all Turkish sounds? What are the advantages and/or disadvantages of such a switch?
Has such work been carried out anywhere?
Best Regards,
ABDUL-SAHIB
Relevant answer
Answer
Dear عائشة عبد الواحد thank you for the invaluable answer.
  • asked a question related to Language
Question
19 answers
Hi all. A project I'm working on involves a two-way repeated measures ANOVA. The dependent variable is the transcriptional accuracy of sentences-in-noise (measured in proportions). The independent variables are the accent of the sentences (2 accents) and the visual prime (2 kinds of primes). The results show significant main effects of prime and accent and a significant two-way interaction between prime and accent (F(1, 30)=9.97, p=0.004). However, as shown in the attached line chart, the two lines are almost parallel. Moreover, post-hoc paired-sample t-tests confirmed that participants' accuracy with accent 2 (mean=0.77, s.d.=0.13) is significantly higher than with accent 1 (mean=0.51, s.d.=0.18) in the prime 1 condition, and similarly that accuracy with accent 2 (mean=0.68, s.d.=0.13) is significantly higher than with accent 1 (mean=0.31, s.d.=0.12) in the prime 2 condition. Does this indicate that the main effects of accent and prime do not depend on each other? If so, doesn't this contradict the significant interaction? Or does a significant two-way interaction only require that the difference between the group mean accuracies for accents 1 and 2 be smaller in the prime 1 condition than in the prime 2 condition, which in this case is true?
Thank you in advance!!!
Relevant answer
Answer
This is a good example of why it's important to not place too much importance on p-values.
The significant p-value tells you that the statistical procedure is able to identify an effect of the interaction against the noise of the variability of the data * . Looking at the plot, your eyes, too, tell you that the slopes are different relative to the variability in the data. The error bars on the red points overlap, and those on the blue points do not.
But this significant result does not tell you that the interaction effect is large, nor that it is of any practical importance. The plot is very helpful for the reader to understand the results. The difference in slopes is small relative to the difference in Accents, and probably relative to the difference in Primes. How you interpret these observations for your reader is up to you.
__________
* This description is not technically correct, but hopefully gives you a sense of the point I'm trying to make.
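A quick numerical check may help untangle this. A two-way interaction tests the difference of differences (the interaction contrast), not whether the lines cross or whether the simple effects disagree. A sketch using only the cell means reported in the question (the actual F-test is of course computed from the per-participant values, which are not reproduced here):

```python
# Cell means reported in the question (accuracy proportions).
acc = {("accent1", "prime1"): 0.51, ("accent2", "prime1"): 0.77,
       ("accent1", "prime2"): 0.31, ("accent2", "prime2"): 0.68}

# Simple effect of accent within each prime condition.
effect_p1 = acc[("accent2", "prime1")] - acc[("accent1", "prime1")]
effect_p2 = acc[("accent2", "prime2")] - acc[("accent1", "prime2")]

# The interaction contrast: how much the accent effect changes across primes.
interaction = effect_p1 - effect_p2
print(round(effect_p1, 2), round(effect_p2, 2), round(interaction, 2))
# 0.26 0.37 -0.11
```

So both simple effects can be significant in the same direction (lines that look "almost parallel") while the 0.11 change in the accent effect between prime conditions is still reliably nonzero given the within-participant variability; that is all the significant interaction claims.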
  • asked a question related to Language
Question
56 answers
Is the use of English in scientific articles a real need for an international working language, or a sign of long-lasting Colonialism that keeps limiting the development of perspectives emerging from non-native English speaking cultures?
Do we really need to publish in English? I think we do unless we find another international working language to communicate with colleagues, and people in general, who use a language different than ours. Remember that, throughout history, scholars have always found one or a small group of working languages to communicate with each other (Latin, German, French, among others).
But, now that we use English... do we have alternatives for communicating our findings in our own languages? Some people say we don't, because we have to invest every second of our time publishing in English. Others say that we must find a way to save some time to publish in our own language, in order to better develop our ideas and to better communicate with our own societies. There must be other perspectives out there... please let us know what you would do to reconcile the different alternatives and bring solutions into practice, and also tell us what your institutions are doing to address this issue.
Framework Readings (feel free to suggest more; I'll keep adding):
Relevant answer
Regardless of what English means in terms of colonialism, I am glad that the language of science has been standardized. Imagine doing a literature search and having to find relevant studies in more than 10 languages... Is it necessary to discuss whether we need to migrate to another language? Perhaps if your findings are of national relevance, you are absolutely free to publish your results in a journal in your native language. On the other hand, if you are aiming at an international audience, English-language journals are the way to go.
  • asked a question related to Language
Question
12 answers
My friend is looking for coauthors in the Psychology & Cognitive Neuroscience field. Basically you will be responsible for paraphrasing, creating figures, and collecting references for a variety of publications. Please leave your email address if you are interested. 10 hours a week are required, as there are a lot of projects to be done!
Relevant answer
Answer
Will message you.
  • asked a question related to Language
Question
9 answers
Hello,
We are working on a review of the relationship between language and the multiple-demand network. You will be responsible for addressing the reviewers' criticisms. Please leave your email address if you are interested.
Best,
W
Relevant answer
Answer
This would be a great question to post in our new free medical imaging question and answer forum ( www.imagingQA.com ). There are already a few fMRI questions on there and a number of fMRI users and experts in the community. If useful, please feel free to open a new topic at the link below :
  • asked a question related to Language
Question
19 answers
Most teachers agree that teaching the culture of native-speaking countries is valuable, but how MUCH should this be done?  Do you have a percentage in mind or other ways of saying how much of the course should be about culture?
And how does this fit in with the multi-cultural or meta-cultural perspective and rationale for learning the other language?
Has your perspective changed over time?
Relevant answer
Answer
Language and culture are two inseparable entities. Therefore, language learning is at once cultural learning. Mastery of the linguistic elements alone does not guarantee that one will be able to communicate through a language; mastering the cultural element is a must. This recognition has cultivated an awareness among foreign language teaching experts that language and culture are inseparable.
  • asked a question related to Language
Question
13 answers
I have a research project in which I should analyze the types of code-switching. However, I can't use Poplack's theory because my instructor said that it is too old. Any suggestions for newer theories?
Relevant answer
Answer
Garcia's or Canagarajah's concepts of translanguaging might help you.
  • asked a question related to Language
Question
3 answers
Hello,
Are there any studies in linguistics about the average information density per character according to language (in the written form)?
Actually, I'm looking for data (rankings, for instance) on the average information density per character (or for 100, 1000, etc. characters) for languages like English, French, Japanese, etc. (in their written, not spoken, form).
Thank you very much.
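I don't know of a single published ranking to point to, but as a rough do-it-yourself baseline, the unigram information density of a written sample can be estimated as its Shannon entropy per character; cross-linguistic comparison then requires comparable (ideally parallel) corpora. A minimal sketch in Python (the sample strings are placeholders, not real corpora):

```python
from collections import Counter
from math import log2

def entropy_per_char(text: str) -> float:
    """Unigram Shannon entropy in bits per character, estimated from the sample."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * log2(c / n) for c in counts.values())

print(entropy_per_char("abab"))  # 1.0 bit/char: two equiprobable symbols
```

Note that this unigram estimate ignores dependencies between characters; entropy-rate estimates from n-gram models or compressors give lower, more realistic figures, and comparing scripts (e.g. English letters vs. Japanese kanji) only makes sense over equivalent content, not per raw character.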
  • asked a question related to Language
Question
20 answers
I was trying to determine whether there are differences between the frequencies of words (lemmas) in a given language corpus starting with the letter K and those starting with the letter M: some 50,000 words starting with K and 54,000 starting with M altogether. I first tried using the chi-square test, but the comments below revealed that this was an error.
Relevant answer
Answer
Did you try Python word count?
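For what it's worth, the chi-square statistic for the most naive reading of this comparison (H0: K-initial and M-initial lemmas are equally numerous) is easy to compute by hand; whether that model is appropriate is exactly what the comments disputed, since corpus counts are rarely independent observations. A sketch with the counts given above (the function name is my own):

```python
def chi2_equal_counts(a: int, b: int) -> float:
    """Chi-square goodness-of-fit statistic for H0: both counts have the same expectation."""
    expected = (a + b) / 2
    return (a - expected) ** 2 / expected + (b - expected) ** 2 / expected

stat = chi2_equal_counts(50_000, 54_000)
print(round(stat, 1))  # 153.8, far above the 3.84 critical value (p = .05, df = 1)
```

With counts this large almost any difference comes out "significant", which is one reason raw significance is uninformative here; an effect size, or a model that respects how the lemma counts were generated, matters more.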
  • asked a question related to Language
Question
27 answers
Google services, which went from the best search engine to the backbone of the internet, are very useful for finding information, but sometimes the language in which that information is written is not the researcher's native one; for this reason translators are used to facilitate understanding.
Relevant answer
Answer
Google Translate is free, fast, and pretty accurate. Thanks to its massive database, the software can deliver decent translations that can help you get the main idea of a text...
  • asked a question related to Language
Question
13 answers
Hello! I am looking for Spanish, English and Chinese native speakers to participate in my final survey for my PhD thesis.
This is the direct link.
Thank you for your participation.
Relevant answer
Answer
Interesting
  • asked a question related to Language
Question
4 answers
We are developing a test for ad-hoc implicatures and scalar implicatures (SI) and are showing 3 images (of similar nature) to the participants: an image, the image with 1 added item, and the image with 2 added items.
Eg. Plate with pasta, a plate with pasta and sauce, a plate with pasta, sauce and meatballs.
A question for an ad-hoc is: My pasta has meatballs, which is my pasta?
Q. for an SI is: My pasta has sauce or meatballs, which is my pasta? (Pasta with sauce is the target item, since we are testing pragmatic implicatures, where 'or' means 'not both'.)
The item that causes many difficulties in making up questions is the image without any added items, i.e. the plate with plain pasta. How do we phrase the question so that it elicits this image as the target response, without using overly complex syntax?
Negation ("My plate has no sauce or meatballs", "My plate has only pasta, no sauce and no meatballs") seems like a complex structure to introduce as a counterbalance to the other types of items.
Has anyone tested something similar, without negation? We would be grateful for any kind of tips and hints.
Relevant answer
Answer
Could you just say: my plate has plain pasta?
  • asked a question related to Language
Question
7 answers
We are attempting to conduct research exploring the prosodic features of verbal irony read by Chinese EFL learners. We want to figure out:
1. the prosodic features of verbal irony read by Chinese learners;
2. the differences in prosodic features of verbal irony read by Chinese learners versus native speakers;
3. whether context (high and low) influences the reading of verbal irony.
Relevant answer
Answer
(LLS= Language Learning Strategies).
  • asked a question related to Language
Question
3 answers
Where can I get the code for the k-prototypes algorithm for mixed attributes? Has anyone implemented it in any language?
Relevant answer
Answer
I recently found an implementation of kprototypes in Python.
Besides, here is a useful example of kprototypes.
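For readers without access to a package, the core idea can be sketched in plain Python. This is a minimal illustration, not the API of any particular library: distance is squared Euclidean on numeric attributes plus `gamma` times the number of categorical mismatches, and prototypes are updated with the mean (numeric) and the mode (categorical).

```python
import random
from collections import Counter

def k_prototypes(points, k, gamma=1.0, n_iter=10, seed=0):
    """Minimal k-prototypes sketch for mixed data.

    Each point is (numeric_tuple, categorical_tuple). Distance is squared
    Euclidean on the numeric part plus gamma * categorical mismatches.
    """
    rng = random.Random(seed)
    centers = list(rng.sample(points, k))
    labels = [0] * len(points)

    def dist(p, c):
        d_num = sum((a - b) ** 2 for a, b in zip(p[0], c[0]))
        d_cat = sum(a != b for a, b in zip(p[1], c[1]))
        return d_num + gamma * d_cat

    for _ in range(n_iter):
        # assignment step: nearest prototype under the mixed distance
        labels = [min(range(k), key=lambda j: dist(p, centers[j]))
                  for p in points]
        # update step: mean for numeric dims, mode for categorical dims
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if not members:
                continue  # keep the old prototype for an empty cluster
            num = tuple(sum(m[0][d] for m in members) / len(members)
                        for d in range(len(members[0][0])))
            cat = tuple(Counter(m[1][d] for m in members).most_common(1)[0][0]
                        for d in range(len(members[0][1])))
            centers[j] = (num, cat)
    return labels
```

A production implementation (e.g., the `kprototypes` class mentioned above) adds smarter initialization and a data-driven choice of `gamma`.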
  • asked a question related to Language
Question
5 answers
Or not.
Harry Jerison in his 1991 book Brain Size and the Evolution of Mind, at p. 89 has:
Mind is a necessary brain adaptation that organizes otherwise unmanageable amounts of neural information into a representation of the external world.
Is Jerison right?
Relevant answer
Answer
Neurons are not the best level of abstraction when speaking of mind. You don't talk about myocytes when you discuss soccer. Concepts, symbols, and the various types of interactions between them are more appropriate building blocks of mind.
Regards,
Joachim
  • asked a question related to Language
Question
8 answers
I am looking for any resources which may be useful in a study I am conducting on the impacts that language may have on our perception of crimes. I will be using headlines which convey a particular crime in a variety of lights; one which may appear to justify the perpetrator's actions, and one which portrays the crime in a neutral, non-biased way. I am looking for sources/previous studies which may back up this idea.
Relevant answer
Answer
Yes, it can. I would also suggest using CDA methodology. A word's dictionary and contextual meanings may be used as a starting point.
  • asked a question related to Language
Question
2 answers
How to track language change?
Relevant answer
Answer
My offering has been in relation to: "The Surname - Where has it gone?"
Has it died or simply become obsolete?
My observation is from the medical context, when conversing with patients.
Using first names outside the confines of family and friends creates an erroneous sense of intimacy and social equivalence, which seems to pervade everyday professional and business activities but may have some limitations in the patient-doctor relationship.
Guy Walters' offerings in the Nov 2020 issue of 'The Spectator' magazine may be more generally applicable.
  • asked a question related to Language
Question
19 answers
What data or physics supports innateness or, on the contrary, the idea that language is a creation of society? Historically, from Herder through David Hume to Jespersen, Sapir, Whorf, and Zipf, language was considered to have been created by societies. Beginning around 1960, the idea of language as a genetically innate human capacity began to have influence. Who is right?
Relevant answer
Answer
Dear Robert Shour, I agree with Prof. Farangis Shahidzade post.
  • asked a question related to Language
Question
12 answers
I have been studying Zen in general and koans in particular for a while, and their applications in business.
The formulation of these koans at first glance seems absurd and an austere waste of time, at least it did to me at first, but I suddenly started to see the logic behind it.
My troubles at the moment are:
1) How would I generate such koans, where my aim would be to seek answers that satisfy two divergent goals, tasks, concepts, etc.?
And second,
2) If I somehow manage to generate such a thing, how would I present it to my audience?
As a statement, a question, a puzzle, a riddle, or anything else?
The above is the object of my next publication, and it seems my brain is too small to handle it, therefore I am asking for your help in generating some koans for the business world.
Many thanks in advance
Many thanks in advance
Relevant answer
Answer
Can you give me more clarification on this subject
  • asked a question related to Language
Question
5 answers
The Publication Manual of APA (7th edition) has a very useful chapter on bias-free language. I would like to know if you've come across such chapters or sections in other publication manuals or style guides.
Relevant answer
Answer
My pleasure, Jakob.
  • asked a question related to Language
Question
22 answers
I would love to hear what people have come across in relation to language accessibility in publications. Ideally the journal focuses on entomology and/or biodiversity, but I am also curious, on a broader scale, whether language-friendly journals exist.
Relevant answer
Answer
Dear Erin Krichilsky I'm just wondering why you are looking for "a journal that accepts publications in two languages or at least is bilingual friendly". What is it good for to publish in different languages? We used to publish our research papers in German back in the 1970's and 1980's, but then we realized that the papers were not read by many researchers abroad. Then we switched to English to make sure that our papers are read worldwide (and eventually cited).
  • asked a question related to Language
Question
15 answers
What are racism's effects on language acquisition?
Whether on a personal or institutional level, please share your experiences.
Thank you.
Relevant answer
Answer
Racism, like any other form of discrimination, such as untouchability or other marginalization, invariably affects language acquisition. In the case of academic language acquisition, such as teaching English as a second language, there are observed and marked indications that the degree of language learning is not the same as for 'majority' or 'mainstream' learners.
  • asked a question related to Language
Question
3 answers
This is the procedure I have tried so far, but I couldn't fix it.
As per my understanding, here are some definitions:
- lexical frequencies, that is, the frequencies with which correspondences occur in a dictionary or, as here, in a word list;
- lexical frequency is the frequency with which the correspondence occurs when you count all and only the correspondences in a dictionary.
- text frequencies, that is, the frequencies with which correspondences occur in a large corpus.
- text frequency is the frequency with which a correspondence occurs when you count all the correspondences in a large set of pieces of continuous prose ...;
You will see that lexical frequency produces much lower counts than text frequency, because in lexical frequency each correspondence is counted only once per word in which it occurs, whereas text frequency counts each correspondence multiple times, depending on how often the words in which it appears occur.
When referring to frequency of occurrence, two different frequencies are used: type and token. Type frequency counts a word once.
So I understand that lexical frequencies probably deal with types, counting each word once, and text frequencies deal with tokens, counting words multiple times in a corpus; therefore, for the latter, we need to take into account the corpus frequency of the words in which those phonemes and graphemes occur.
So far I managed phoneme frequencies as it follows
Phoneme frequencies:
Lexical frequency: (single count of a phoneme per word / total number of phonemes counted in the word list) * 100 = lexical frequency % of a specific phoneme in the word list.
Text frequency is similar, but I fail when trying to add in the frequencies of the words in the word list: (all counts of a phoneme per word / total number of phonemes counted in the word list) * 100 vs. (sum of the corpus frequencies of the target words that contain the phoneme / total sum of the frequencies of all the words in the list) = text frequency % of a specific phoneme in the word list.
PLEASE HELP ME TO FIND A FORMULA ON HOW TO CALCULATE THE LEXICAL FREQUENCY AND THE TEXT FREQUENCY of phonemes and graphemes.
Relevant answer
Answer
Hello,
To calculate the lexical frequency of simple or complex units, WordSmith or AntConc is usually used.
Regards
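The type-based (lexical) vs. token-based (text) distinction described in the question above can be sketched in Python. This is one possible reading of the question's formulas, using letters as stand-in graphemes; the function and variable names are illustrative:

```python
from collections import Counter

def grapheme_frequencies(word_freqs):
    """word_freqs maps each word to its corpus (token) frequency.

    Returns (lexical %, text %) per grapheme. Lexical frequency counts a
    grapheme once per word type; text frequency counts every occurrence,
    weighted by how often the word occurs in the corpus.
    """
    lex = Counter()
    txt = Counter()
    for word, freq in word_freqs.items():
        for g in set(word):       # once per word type -> lexical
            lex[g] += 1
        for g in word:            # every occurrence, weighted -> text
            txt[g] += freq
    lex_total = sum(lex.values())
    txt_total = sum(txt.values())
    lex_pct = {g: 100 * c / lex_total for g, c in lex.items()}
    txt_pct = {g: 100 * c / txt_total for g, c in txt.items()}
    return lex_pct, txt_pct

lex_pct, txt_pct = grapheme_frequencies({"aba": 10, "bc": 5})
# "b" appears in both word types -> lexical 50%; "a" dominates the
# token-weighted count -> text 50%
```

The same skeleton extends to phonemes or grapheme-phoneme correspondences: replace the per-letter loops with a loop over the correspondence list of each word.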
  • asked a question related to Language
Question
874 answers
Do you know any aphorisms, old sayings, parables, folk proverbs, etc. on science, wisdom and knowledge, ...?
Please, quote.
Best wishes
Relevant answer
Answer
All too often a clear conscience is merely the result of a bad memory.
  • asked a question related to Language
Question
5 answers
We are conducting a research about the language use of Manobo students on social media specifically facebook, twitter and instagram. Your input could surely enhance the said endeavor.
Thank you very much!
Relevant answer
Answer
These studies can be found on many websites
  • asked a question related to Language
Question
4 answers
My question concerns the rather unclear issue of error correlation that many scholars encounter while conducting SEM analyses. Scholars quite often report procedures of correlating error terms to enhance the overall goodness of fit of their models. Hermida (2015), for instance, provided an in-depth analysis of this issue and pointed out that there are many cases within social science studies where researchers do not provide appropriate justification for the error correlation. I have read in Harrington (2008) that measurement errors can be the result of similar or closely related meanings of words and phrases in the statements that participants are asked to assess. Another option to justify such a correlation was connected to longitudinal studies and an a priori justification for the error terms, which might be based on the nature of the study variables.
In my personal case, I have two items with Modification indices above 20.
lhs op rhs mi epc sepc.lv sepc.all sepc.nox
12 item1 ~~ item2 25.788 0.471 0.471 0.476 0.476
After correlating the errors, the model fit appears just great (the model consists of 5 first-order latent factors and 2 second-order latent factors; n=168; number of items: around 23). However, I am concerned with how to justify the error term correlations. In my case, the wording of the two items appears very similar: With other students in English language class I feel supported (item 1) and With other students in English language class I feel supported (item 2) (Likert scale from 1 to 7). According to Harrington (2008), this is enough to justify the correlation between the errors.
However, I would appreciate any comments on whether justification of similar wording of questions seems enough for proving error correlations.
Any further real-life examples of wording the items/questions or articles on the same topic are also well-appreciated.
Relevant answer
Answer
Dear Artem and Marcel,
there are two problems with post-hoc correlating errors:
1) The error covariance is causally unspecific (as any correlation is). If one possibility is true, namely that both items additionally measure an omitted latent variable, then estimating the error covariance will fit the model, but the omitted latent variable is still not explicitly contained in the model. This may be unproblematic if this latent is just the response reaction to a specific word contained in both items, but sometimes it may be a substantial latent variable missing from the model, whose omission will bias the effects of other, included latent variables.
2) While issue #1 still presumes that the factor model is correct (but the items *in addition* share a further cause), the need for estimating error covariances can appear as a sign of a fundamental misspecification of the factor model: if the factor model is too simple (e.g., you test a 1-factor model whereas the true structure contains more factors), then the only proposal the algorithm can make is to estimate error covariances. These can be interpreted as the valves in a technical system: opening the valves will reduce the pressure but not solve the problem. On the contrary: your model will fit, but it is worse than before.
One simple ad-hoc test is to estimate the error covariance and then include further variables in the model which correlate with (or receive/emit effects from/on) the latent target variable. You will often see that the model which fitted one minute ago (due to the estimation of the error covariance) again shows a substantial misfit, as the factor model is still wrong and cannot explain the new restrictions and correlations between the indicators and the newly added variables.
Please note that the goal in CFA/SEM is not to get a fitting model! The (mis)fit of the model is just a tool to evaluate causal correctness. If data fit were the essential goal, then SEM modeling would be easy: just saturate the model and you always get a perfect fit.
One aspect is the post-hoc justification of the error covariances: I remember once reading MacCallum (I think it was him), who wrote that he knows no colleague who would not have enough fantasy to come up with an idea explaining a post-hoc need for an error covariance. :)
Hence, besides the causal issues noted above, there are statistical problems with regard to overfitting and capitalization on chance (as with any other post-hoc change of the model). That is: better look at your items before doing the model testing and think about whether there could be reasons that lead to an error covariance.
One example is the longitudinal case, where error covariances between the same items are expected and are included from the beginning.
If you have to include the error covariances post hoc, carefully consider other potential reasons (mainly the more fundamental issues noted in #2) and replicate the study. But replication in a causal inference context should always imply an enlargement of the model (i.e., including new variables).
Best,
Holger
  • asked a question related to Language
Question
23 answers
Dear Research Colleagues,
Are you familiar with studies on language acquisition in early simultaneous trilingual children that show whether there are any delays in their language development? I am familiar with several studies on early simultaneous bilinguals indicating that such speakers are not significantly delayed in language acquisition. I wonder if trilinguals differ from mono- and bilinguals in how fast they acquire their languages.
I will appreciate your feedback.
Thank you.
Pleasant regards,
Monika
  • asked a question related to Language
Question
6 answers
By using examples of sonnets from source language.
Relevant answer
Good answer Soma Chakraborty
  • asked a question related to Language
Question
30 answers
I'm doing a comparative study on social media language used by native and non-native speakers, with special reference to Instagram. I am planning to use discourse analysis. What is your take on this? Could anyone please suggest what else could be used?
Relevant answer
Answer
I wonder why you are carrying out this study?
What questions are you trying to answer, through examining social media language from different speakers in this way?
Why are these questions interesting?
If you are clear about your own answers to questions like these, you will be better placed to judge which analytical methods are likely to be appropriate.