Science topic

Language - Science topic

Language is a verbal or nonverbal means of communicating ideas or feelings.
Questions related to Language
  • asked a question related to Language
Question
15 answers
🔵 Discussion Post for ResearchGate:
🔹 Introduction:
Language is more than just a tool for communication—it plays a fundamental role in shaping the way people think, behave, and interact in society. The structure, phonetics, and expression patterns of a language may influence individual personalities, collective behaviors, and even the cultural identity of a society.
📌 Does language determine how people in different societies process emotions, solve problems, and interact socially? 📌 Or is it the culture of a society that shapes the language they speak?
For example:
  • Some languages are direct and assertive (e.g., German, Russian, Arabic, Hebrew). Do these languages encourage decisiveness and structured thinking in their speakers?
  • Some languages are softer, musical, and expressive (e.g., French, Korean, Portuguese, Turkish). Do they foster diplomacy, emotional intelligence, or poetic expression?
  • Some languages use indirect phrasing and high-context communication (e.g., Japanese, Chinese, Persian). Do these influence social harmony and implicit understanding?
  • Some languages are highly adaptable and pragmatic (e.g., English). Does this flexibility contribute to a culture of innovation and open-mindedness?
🔹 Key Questions for Discussion:
1. Does the structure of a language shape the personality of its speakers? 📌 Are speakers of grammatically rigid languages (such as German, Hebrew, and Turkish) more likely to be methodical and precise? 📌 Do speakers of languages with flexible word order (such as Persian and English) develop more adaptive and creative thinking?
2. Do phonetics and speech style influence social interactions? 📌 Do societies with softer, more melodious languages (such as Portuguese, Turkish, Persian) exhibit more relaxed and friendly interactions? 📌 Do societies with harder, guttural phonetics (such as Dutch, Arabic, Hebrew) foster more direct and assertive social interactions?
3. Can language influence how people perceive time and decision-making? 📌 Some languages explicitly distinguish between past, present, and future (e.g., English, French, Turkish), while others (such as Mandarin, Hebrew, Persian) treat time as more fluid. 📌 Does this difference affect how speakers plan for the future and approach decision-making?
4. How does language influence emotional expression and societal norms? 📌 Some cultures have rich vocabulary for emotions (e.g., Japanese with "wabi-sabi" for transient beauty, or German with "schadenfreude" for joy in others' misfortune). 📌 Hebrew, Persian, and Turkish all have poetic and deeply expressive linguistic traditions. Does this linguistic diversity lead to greater emotional awareness among speakers?
5. Can modifying a society’s language change its cultural values? 📌 Turkey’s language reform under Atatürk simplified the language and replaced many Persian and Arabic influences. Did this impact Turkish cultural identity? 📌 English’s evolution from Old English to modern times has removed many complexities. Has this linguistic change influenced the cultural adaptability of English-speaking societies? 📌 Persian has absorbed words from Arabic, French, and English over centuries. Has this influenced Persian cultural identity and worldviews?
6. Does language complexity correlate with problem-solving ability and creativity? 📌 Do societies with more complex linguistic structures show higher cognitive flexibility? 📌 Or do simpler languages allow for clearer, more efficient communication?
🔹 Call for Participation:
🔹 Does language simply reflect a society’s mindset, or does it actively shape how people think, feel, and behave? 🔹 We invite linguists, cognitive scientists, sociologists, and cultural researchers to share insights on how language influences personality, social behavior, and collective identity.
Relevant answer
Answer
According to cognitive linguistics, natural language is a system of prompts/shortcuts/cues for accessing embodied sensations/experiences which are much richer than the prompts themselves. To put it into proper perspective, language is not the only system of prompts. Other systems include mathematics, physics and science in general, and technology, but also arts including music. The body itself is an important system of prompts, usually unconscious. To answer your question (Does Language Shape Who We Are?), language shapes what we do and not who we are. Best, Wes
  • asked a question related to Language
Question
2 answers
Dear colleagues,
We are very pleased to invite you to submit your latest research results, developments, and ideas to the 2025 3rd International Conference on Language and Cultural Communication (ICLCC 2025), which will be held in Sydney, Australia, from May 16 to 18, 2025.
Please visit the official website for more information:
***Call for Papers***
The topics of interest for submission include, but are not limited to:
◕ Language
Linguistics
Chinese/Foreign Languages and Literature
Applied Linguistics
Evolution of Language
Foreign Language Education
Language Acquisition
Research on Classroom Teaching Methods
Language and Culture
......
◕ Cultural Communication
Communication Science
Propagation Behavior
Journalism and Communication
Journalism
Public Relations
Brand Marketing
Digital Communication Technology and Application
Network and New Media
......
****Publication****
All papers will be reviewed by two or three expert reviewers from the conference committees. After a careful reviewing process, all accepted papers will be published in ASSEHR - Advances in Social Science, Education and Humanities Research (ISSN: 2352-5398) and submitted to CPCI and CNKI for indexing.
***Important Dates****
Full Paper Submission Date: April 16, 2025
Registration Deadline: April 26, 2025
Final Paper Submission Date: May 05, 2025
Conference Dates: May 16-18, 2025
***Paper Submission***
Please send the full paper (Word + PDF) to the submission system:
Relevant answer
Answer
Can anyone participate from any part of the world?
  • asked a question related to Language
Question
7 answers
🚨 New Insights on the Role of Accent in Language Learning 🚨
In traditional language learning, we emphasize vocabulary and grammar, while accent training is often considered secondary. But is that the right approach?
I’ve recently come across two key observations that suggest accent plays a much bigger role in comprehension and effective communication than we often assume:
1️⃣ Personal Observation: When an Accent Makes English Easier to Understand
I recently watched a Persian speaker speaking English with a Persian accent at a fast, natural pace. Surprisingly, I found their speech much easier to understand than that of a native English speaker!
I tested this by showing the clip to another person (who has no background in linguistics), and they also found the Persian-accented English more comprehensible than the native English version.
💡 Key Takeaways: ✅ Listening comprehension isn’t just about vocabulary and grammar—it’s heavily influenced by phonetic familiarity and accent patterns. ✅ If an accent affects how we process speech, shouldn’t accent training be introduced earlier in language education rather than later?
2️⃣ Real-World Example: Trump Needed a Translator for English!
A recent high-profile event further highlights this issue. During a meeting between Donald Trump and Indian Prime Minister Narendra Modi, a translator repeated English sentences in English for Trump!
👉 Both Trump and Modi were speaking English. 👉 Yet, Trump needed help understanding English—likely due to the Indian accent of reporters or Modi himself.
💡 Implications for Language Learning:
  • Even fluent English speakers struggle with accents. If Trump needed a translator, how can language learners expect to understand native speakers without explicit accent training?
  • Should accent be a core focus from the start, just like vocabulary and grammar?
  • AI-powered pronunciation analysis and speech training could help learners adjust to different accents from the early stages.
Final Thoughts & Open Questions
🔹 Should accent training be prioritized equally alongside vocabulary and grammar in early language learning? 🔹 How can AI-driven pronunciation tools help learners develop an intuitive understanding of accents? 🔹 Have you experienced a situation where accent played a critical role in communication, even when you knew the language?
I’d love to hear your thoughts!
Relevant answer
Answer
This is a complex question. I think that the answer requires a balance.
On the one hand, if people use bad grammar and limited or wrong vocabulary, they may be judged negatively (which is unfair, but it still happens).
On the other hand, many people believe that the FIRST goal of language education is simply to be understood by others speaking the language and to understand what they are saying.
English is an international language. Not even all native speakers speak with the same accent. Sometimes the accents of native speakers are so different that it is hard to understand each other. But sometimes the accents are just different enough to be appealing.
In thinking about this, I would think not so much in terms of accent but rather pronunciation: things like correct vowel sounds, stress on the proper syllables, and inflection.
People in every culture bring the speech patterns of their native language to English. If communication is not effective, the listener's English listening skills may share the responsibility. For example, as a Midwestern American native speaker, I had to learn to understand English spoken by Chinese natives. I also once had trouble understanding an English speaker in Scotland because I was not accustomed to her strong accent.
  • asked a question related to Language
Question
3 answers
I am working on an idea that I believe has the potential to transform how we approach accent training in second-language acquisition. Traditionally, learners focus on mimicking native speakers' pronunciation in the target language (e.g., watching movies or listening to native speakers). However, my hypothesis introduces a reverse approach:
Instead of imitating native speakers of the target language directly, learners could observe how native speakers of their target language pronounce their (learners’) native language.
For example, an English learner who speaks Persian as their first language could study how native English speakers speak Persian. By understanding the rhythm, tone, and pronunciation tendencies of English speakers in Persian, the learner can mimic these patterns and apply them to their English speaking.
In addition, advancements in AI-driven accent detection and analysis now provide opportunities to identify these subtle cross-linguistic nuances. AI could make this process more accessible, practical, and effective by analyzing speech patterns and providing real-time feedback.
Questions for Discussion:
  1. Has this reverse approach to accent training been explored or implemented in any studies or projects that you are aware of?
  2. What role could AI play in refining and scaling this method?
  3. Are there potential limitations or challenges you foresee with this approach?
I look forward to hearing your thoughts, feedback, and suggestions!
Relevant answer
Answer
Your hypothesis offers an intriguing shift in perspective for second-language accent training. Traditionally, the focus has been on mimicking native speakers of the target language, but your reverse approach could potentially leverage learners' existing linguistic frameworks to facilitate accent acquisition. Observing how native speakers of the target language pronounce the learner’s native language could indeed highlight specific phonetic and rhythmic patterns that could be useful.
AI’s role in analyzing these cross-linguistic nuances is particularly compelling. Real-time accent detection and feedback could make this method scalable and accessible to a wide range of learners. It could also identify subtle patterns that are difficult for humans to discern, providing a more tailored and efficient learning experience.
In terms of challenges, one potential limitation might be the variability in how individuals speak their native language, which could introduce inconsistencies. Furthermore, learners may find it difficult to apply these insights in spontaneous speech, where the pressure to perform accurately could override the learned patterns.
Overall, this approach, combined with AI-driven tools, could offer a more personalized and efficient path for accent training. I look forward to seeing how this concept evolves.
  • asked a question related to Language
Question
2 answers
Single-cell organisms can be conditioned (Saigusa et al. 2008); therefore, it should be expected that single cells in the neocortex can also be conditioned (Prsa et al. 2017). In the study of Prsa et al. (2017), a cell was conditioned in the motor cortex of a mouse (as evidenced by two-photon imaging), and feedback of successful conditioning was achieved by optogenetic activation of cells in the somatosensory cortex. A head-fixed mouse was rewarded with a drop of water following volitional discharge of a motor cell using the method of Fetz (1969). The conditioning was achieved after 5 minutes of practice. Furthermore, a group of three cells was conditioned such that two cells were made to fire at a high rate and one cell was made to fire at a low rate, which indicates the inherent plasticity of the nervous system.
This is the first example of the brain having single-cell resolution for transferring information. It is thus not surprising that the brain of a human being (which contains ~ 100 billion neurons) can transfer 40 bits per second (over a trillion possibilities per second, i.e. 2^40 per second) when engaged in language execution (Reed and Durlach 1998), but only after many years of training. If we assume that each cell in the human brain has (on average) at least 10 levels of firing-frequency, then 100 billion neurons should be able to transfer 1 trillion output possibilities (i.e. 10 x 100 billion) or about 40 bits of information, and all done in a second.
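As a quick sanity check of the arithmetic above, the following minimal sketch reproduces the back-of-the-envelope estimate (the 100-billion-neuron count and the assumption of 10 firing-frequency levels per cell are taken directly from the paragraph; they are illustrative figures, not measured values):

```python
import math

neurons = 100e9   # ~100 billion neurons in the human brain (per the text)
levels = 10       # assumed firing-frequency levels per neuron (per the text)

# 10 levels x 100 billion neurons = 1 trillion output possibilities
possibilities = levels * neurons

# Information needed to specify one of those possibilities, in bits
bits = math.log2(possibilities)

print(f"{possibilities:.0e} possibilities ≈ {bits:.1f} bits")  # ~39.9 bits, i.e. about 40
print(f"2**40 = {2**40:.2e}")  # ≈ 1.10e12, i.e. over a trillion per second
```

The point of the check is simply that log2 of one trillion is just under 40, matching the 40 bits per second reported for language execution (Reed and Durlach 1998).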
And to free up memory space for running the heart and lungs, we have information chunking (Miller 1956), so that a concept like ‘E = mc^2’ (as developed by Einstein) can be memorized and used to extract pertinent information stored in any physics library (Clark 1998; Varela et al. 1991). The availability of books following the development of the printing press in 1436 (by Johannes Gutenberg) has contributed to world literacy by amplifying the information available to the human brain. Artificial intelligence will, no doubt, further enhance the amplification. In fact, much of what I have written over the years has been supported by Google/ResearchGate/AI—and this is without ever using ChatGPT to compose a text.
Relevant answer
Answer
Dear Moonlit Melody, I would suggest that using single-celled organisms to study Parkinsonism will not work, since Parkinson’s is an attribute of multicellular animals that use dopamine to initiate ‘drive’. However, single-celled organisms could be used to study the genetic basis of learning and memory, as was done by Eric Kandel (2006), who developed methods to investigate such things in the mammalian nervous system. I suspect that a comparable genetic mechanism for learning would be found in the single-cell case, and if not, this would be an interesting topic for evolutionists. As for using biology as a metaphor to understand how AI works, Elon Musk has made predictions that his brain implants will accelerate the amount of information transmitted from the current average level of 10 bits per second (1,024 possibilities, 2^10) to over a million bits per second. Plunging a bunch of wires into the brain willy-nilly is about as useful as expecting someone who does not understand a foreign language to extract meaning from that language. Even the cochlear implant, which changes the way one’s language sounds, requires over a year of training—and music appreciation is never fully recovered. In short, the brain has to be treated as a system composed of functional modules before meaning can be extracted from its neurons and synapses.
  • asked a question related to Language
Question
5 answers
DUMBING-DOWN THE WORLD
Who is responsible for the dumbing-down of the world's population? Language is the medium of consensus and of discord. As cultures seek a meaningful world-view, volubility increases with the need for more words. Concepts become redundant and are discarded; words are invented, and reification changes reality. Modern science, religion – and contemporary education – are based on the presumption of Pluralism (not Realism or Materialism, as is commonly held). The primacy of a metaphysical perspective has all but been abandoned. The vast body of academic scholarship over the last century was predicated on the catastrophic Oxford Model and is mostly meaningless and redundant. An incomprehensible waste of resources and lives supporting the crumbling edifice of Ivory Tower-based education. That culture, based on erroneous assumptions, must take responsibility for the world's stultified condition. The world-view of the vast majority is based on harm, through ignorance, lies and deception.
Relevant answer
Answer
This is an old gripe, the gripe of every generation. The world has always been dumb.
  • asked a question related to Language
Question
9 answers
When a person who does not speak the local language first comes into contact with the criminal justice system, the very first thing that needs to be determined in order to communicate with him is whether he needs the assistance of an interpreter. If the person has zero knowledge of the local language, that question is not difficult to answer. If he speaks a few words but not much, the answer is also simple.
However, for a person who can communicate in daily life in the local language but whose level might not be good enough to follow court procedures and debates, the question is more complex. In most jurisdictions I have studied, this question seems to be answered subjectively by whoever is tasked with that duty that day; there is no standard language proficiency test, nor a demarcated minimum level of proficiency needed to refuse the services of an interpreter.
I would like to know if in the USA and in other countries, a standard test or procedure exists to test the language proficiency of the accused/witness/victim for whom the local language might be their second language.
Thanks!
Relevant answer
Answer
Yes, in Pakistan, the courts have provisions to ensure the right to a fair trial, which includes the assistance of interpreters when needed. While there is no singular "standard procedure" codified across all courts, the general framework for determining the need for interpreters is guided by principles of justice and provisions within laws like the Code of Criminal Procedure (CrPC), 1898, and the Constitution of Pakistan, 1973.
Key Points on Interpreter Provisions:
  1. Right to a Fair Trial: Article 10-A of the Constitution guarantees the right to a fair trial, which implies that an accused or party who cannot understand the language of the court is entitled to assistance.
  2. Court's Discretion: The presiding judge assesses the need for an interpreter based on the circumstances of the case. This can include cases where: the accused or witness does not understand the language of the court; a party speaks a regional language or dialect not commonly used in court; or the case involves foreign nationals.
  3. Applications by Parties: A party to the case may file an application requesting an interpreter. The court then considers the necessity and arranges for an interpreter, often from an official panel or external sources.
  4. Code of Criminal Procedure (CrPC), 1898: Section 361 requires that evidence in languages not understood by the court be translated. Section 364 mandates that statements made in a language not understood by the accused must be interpreted.
  5. Qualified Interpreters: The courts aim to ensure interpreters are neutral, qualified, and fluent in both languages. If no court-appointed interpreters are available, external qualified interpreters may be engaged.
  6. Payment and Costs: Typically, the state bears the cost of interpreters in criminal cases to uphold the principles of justice, while in civil cases, the costs may be borne by the requesting party unless otherwise directed by the court.
This procedural flexibility ensures linguistic barriers do not impede justice. However, the practice may vary depending on the jurisdiction, the nature of the case, and available resources.
  • asked a question related to Language
Question
3 answers
It has been proposed that consciousness is mediated at the level of the neocortex according to a string of declarative, conscious units, which encodes a sequence of sounds or visual objects. A single electric pulse delivered to a neocortical pyramidal fibre after a brief discharge of action potentials renders the fibre inactive for 100 ms or so, since the pulse activates a collateral that engages GABAergic neurons that inhibit the fibre for the purpose of excitability regulation (Chung and Ferster 1998; Krnjević 1974; Krnjević et al. 1966abc; Krnjević and Schwartz 1967; Schiller and Malpeli 1977; Tehovnik and Slocum 2007a). If pulses are delivered in a 10-Hz train, then an activated pyramidal fibre can be inhibited for up to 100 ms between pulses (Logothetis et al. 2010). Thus, 10-Hz stimulation can be used to inhibit a declarative, conscious unit as an animal (including a human) is made to execute a task that is based on a sequence of events such as concatenated sounds or a movie clip, each perceived, imagined, or evidenced using the motor system.
The neocortex is composed of vertically-aligned pyramidal fibres (20 to 40 neurons at any one depth) that are grouped in micro-columns, with each column measuring 30 μm in diameter and believed to encode a single feature (Peters and Sethares 1991). If one uses indwelling electrodes to pass current, a small region of the neocortex (i.e., a 100 to 200 μm diameter sphere of tissue) can be activated with currents of 2 to 5 μA (@ 0.2-ms pulse duration) to evoke or disable perception, which is estimated to drive from 60 to 250 vertically-aligned pyramidal fibres (Peters and Sethares 1991; Schmidt et al. 1996; Tehovnik and Slocum 2007ab, 2013).[1]
As discussed in a previous chapter, the electrical stimulation work of Penfield and Ojemann (Ojemann 1983, 1991; Penfield and Roberts 1966) has been central to the idea that elements in the neocortex can be inhibited by activating specific loci of the cortex to interrupt the generation of speech in alert patients.[2] One of the most important observations made in these studies is that information pertaining to language is stored uniquely in the neocortex: no two individuals have the same language map. This makes good sense, since learning a language (or learning any other faculty) is based on the history of learning, as well as genetic makeup. Therefore, to deduce what is stored within the neocortex of an individual, different declarative, conscious units must be interrupted electrically in the neocortex for different streams of consciousness. This will be no easy task, since the neocortex of humans has a storage capacity of tens of trillions of bits.[3]
To make this line of work more manageable, an understanding of how language is stored in the neocortex is paramount. Each faculty, whether a distinct language or mathematics, is stored by an independent network of neurons (Ojemann 1983, 1991; Rojas 2021). And depending on how a language is learned from childhood, one can spontaneously develop a verbal network without developing reading and writing networks.[4] This suggests that every faculty is anchored to specific sensorimotor transformations: speaking is dependent on sound and the vocal apparatus, reading is dependent on vision (and audition for some) and eye movements, and writing is dependent on vision and hand movements (see: Ojemann 1991).
So, how does the foregoing generalize to other species? Elephants, dolphins, and whales have an advanced communication system that has yet to be deciphered (e.g., Antunes et al. 2011). The songbird, however, has a well-studied telencephalon that is known to store songs (Goldman and Nottebohm 1983; Rochefort et al. 2007), whereby the methods of Penfield and Ojemann could be used to interrupt various aspects of song generation. Thus, the stream of consciousness can be studied across different species by disabling declarative, conscious units electrically at various locations along the neural strings per species, but for this to be feasible the ethology of an animal must be well understood as it is for humans [5] and songbirds.
Finally, since information is ultimately stored at the synaptic level (Hebb 1949), methods will need to be developed to disable a single synapse to study consciousness; it is the synaptic connectivity of a neuron that determines the context within which a declarative attribute is defined per neuron and each human neocortical neuron has, on average, about 10,000 synapses.
Footnotes:
[1] Distinct colors can be evoked from the visual cortex using currents between 2 and 10 μA (Schmidt et al. 1996). To readily evoke such perception using low currents, high-impedance electrodes (i.e., > 2 MΩ) that induce a high charge density are recommended (Tehovnik et al. 2009).
[2] Typically, electrical stimulation delivered to the neocortex was used to identify the language areas of the neocortex in patients who were about to have regions of the neocortex removed to treat severe epilepsy (Ojemann 1983, 1991; Penfield and Roberts 1966). In these studies, naming, reading, verbal memory, mimicry of facial movements, and phoneme identification were assessed per stimulation site, typically in Wernicke’s and Broca’s areas. The map size for the primary language was always smaller than the map size for the secondary languages (this difference is related to automaticity as argued in an earlier chapter). Surface stimulation was used with large-tipped electrodes; therefore, the current was in the milliampere range (1-4 mA), and the duration of stimulation train (i.e., the duration of inhibition) was under 15 seconds. Frequency of pulses was 60 Hz (rather than 10 Hz), and such stimulation never evoked sensations or linguistic utterances.
[3] A technology may eventually be developed to disrupt individual synapses, but currently the disruption of small groups of neurons is what is available (Tehovnik and Slocum 2007ab).
[4] Children learn verbal languages readily, but they require intensive study to read and write in a particular language. The symbols produced for writing (and for mathematics) are cultural inventions (Rojas et al. 2021).
[5] The human neocortex has a surface area of about 1,800 cm^2 (Van Essen et al. 2018). If the neocortex is composed of microcolumns measuring 30 μm in diameter (Peters and Sethares 1991), which would be the minimal size of a declarative, conscious unit devoid of its connections, then the neocortex should contain 64 x 10^6 declarative, conscious units, with each operating in parallel to encode a single feature (Logothetis et al. 2010; Murayama et al. 2011; Rutledge and Doty 1962; Schiller and Tehovnik 2015; Tehovnik and Slocum 2013). And each feature is stored according to context (Lu and Golomb 2023), which is determined by the connectivity profile per context. Each neuron in the neocortex has about 10,000 synaptic contacts, on average, suggesting that an unlimited number of contexts per feature can be stored (estimate from chapter 18).
Relevant answer
Answer
Dear Dr. Steven Henderson,
Thank you for your reply. Yes, of course I am interested, and if you have thoughts on how we might deal with the vast data that may have to be collected, I would appreciate your thoughts on that, as well. Ed Tehovnik.
  • asked a question related to Language
Question
8 answers
Many have criticized Chomsky’s theory of the Universal Grammar of language (e.g., Pinker, as described in Sihombing 2022), but the most effective criticisms have come from Daniel Everett, given that Chomsky (according to Everett) has never addressed them. Everett has two issues with Chomsky’s theory of language: the evolutionary timeline of language for Homo sapiens[1] and the lack of universality of language structure across all languages. On the evolution of language, Chomsky has proposed that language began some 60,000 years ago (Chomsky 2012). Everett’s contrary explanation (Everett 2017) is that rudimentary language started 2.5 million years ago in the South Pacific amongst Homo erectus, who are estimated to have had 62 billion neurons (24 billion short of Homo sapiens; Herculano-Houzel 2012), and for whom there is archaeological evidence that they were skilled sailors with territories throughout the South Pacific Ocean; navigation between territorial islands was done using the stars and sea currents, which would have required some form of communication between group members (Everett 2017)[2]. Also, at the time of Homo erectus, there is evidence of an asteroid strike in the South Pacific, which could have accelerated the evolutionary process (as it did for mammals 66 million years ago) by bringing about the introduction of large, big-brained primates.
On the generalizability of Chomsky’s theory to all languages (including primitive languages): Everett (2006, 2016) spent many years in the Amazon basin of Brazil studying the Pirahã people, who have no written language or number system. Their history is transmitted across generations (two at most) entirely by word of mouth. The language has eight consonants, three vowels, and two tones. The sentences are very simple, with no embedded clauses such as “John, who is a hunter, is an active individual.” Instead, the utterance would be: “John is a hunter. John is an active individual.” This structure, which lacks recursion, resembles the way children or adults speak when they begin to learn a language. The language also has no pronouns. Furthermore, it has a proximate tense (e.g., for the present) and a remote tense (e.g., for the past) but no perfect tense, a tense with no time stamp, e.g., “I have prepared some food.” The language does not permit the establishment of a creation myth. The sense of time, e.g., historic time, is not well developed. Much is set in the present. Hunting and foraging are a daily affair for the Pirahã people. The children are taught the names of all the plants and animals in the jungle, which can number in the thousands.
Accordingly, Chomsky’s theory fails to account for the evolutionary history of language. As well, his theory accounts only for complex, recursive languages, with little to say about more primitive languages such as the one spoken by the Pirahã people of Brazil. It is noteworthy that if a Pirahã child is raised in São Paulo in the Portuguese language, the child will have no problem mastering all the complexities of Portuguese, which has far more verb tenses than English but a similar number system, as well as a comparable written script.
Neanderthals (Homo neanderthalensis), who occupied Northern Europe for much of their existence up until 40,000 years ago (Sansalone et al. 2023), were around when Homo sapiens were endowed with an ability to generate speech sounds and therefore able to express their cognition (Chomsky 2012). Doreen Kimura, who spent most of her life studying how the brain processes human language by examining brain-damaged patients (Kimura 1993), believed that human language does not represent some type of species exceptionalism, but instead represents a characteristic of the brain and the body that was shaped by evolution thereby leaving traces of its genetics (e.g., Chomsky’s universal grammar) in other species such as the electric fish, song-birds, bats, elephants, dolphins, whales, and non-Homo sapiens. She argued that communication of early Homo sapiens some 500,000 years ago was non-verbal and gesture-based, but changed to the vocal apparatus at this time (i.e., by the formation of a right-angled vocal tract, see Fig. 1.1 Kimura 1993) allowing for the utterance of vowels. This notion runs contrary to the idea that some 60,000 years ago humans just started producing language spontaneously (Chomsky 2012), with no clear link to evolution, brain, and behavior despite many challenges to this idea (Bizzi and Mussa-Ivaldi 1998; Changizi 2001b, 2003; Dawkins 1976; Dawkins and Dawkins 1976; Everett 2017; Fentress and Stilwell 1973; Gallistel 1980)[3].
As for Neanderthals, Sansalone et al. (2023) have recently opined that the Neanderthal neocortex was as sophisticated as the human neocortex, exhibiting a high degree of interareal integration that does not exist in other primates. The overlap between Neanderthals and Homo sapiens 40,000 years ago permitted the sharing of genes between the two groups, and a common language would have facilitated this genetic intimacy; indeed, there is evidence that the two groups exchanged genes, for over 6% of the genomes of Europeans are Neanderthal. Perhaps before their extinction, Neanderthals possessed Chomsky’s universal grammar, a conjecture that evolutionary biologists could verify.
Another issue is that Chomsky’s theory emphasizes the rapid acquisition of language during childhood, which Chomsky attributes to a universal grammar programmed genetically in all humans (Chomsky 1965). A child does not need to spend time in school to master the verbal aspects of a language, which are acquired automatically between birth and adolescence, whereas reading and writing necessitate schooling. FOXP2 gene expression occurs in newborn humans and in newborn and adult songbirds for the accelerated acquisition of language and songs, respectively (Rochefort et al. 2007). This acquisition is mediated by neurogenesis in the telencephalon (the neocortex in humans) and the hippocampus (Goldman and Nottebohm 1983; Rochefort et al. 2007); neurogenesis ceases by the age of twelve in humans (Charvet and Finlay 2018; Sanai et al. 2011; Sorrells et al. 2018). Neurogenesis may accelerate language learning in children, whereas it promotes the learning of songs for mate selection in adult songbirds.
One might expect that the number of new words learned as a child should be much greater than the number of new words learned after the age of 10 to 12, when neurogenesis begins to subside (Charvet and Finlay 2018; Sanai et al. 2011; Sorrells et al. 2018). According to Bloom and Markson (1998), by the age of ten children have learned an average of 23,651 words, an acquisition rate of 2,365 words per year, and from the age of ten to eighteen they learn an average of 36,350 additional words, an acquisition rate of 4,544 words per year (these figures are based on children who attend school). Some of this increase in acquisition rate after the age of ten may be related to a child having more methods by which to consolidate information; on this point, the ability to speak, read, and write tends to accelerate after the age of ten, which should contribute to the efficiency of word consolidation and retrieval. Nonetheless, few would dispute that language acquisition (through speaking and hearing) up to the age of 10 or 12 is relatively effortless and that word and phrase utterances are free of any accent (other than the parents’/teachers’ accent), even when multiple languages are being learned. These points were emphasized by Chomsky (1959) and used effectively to challenge Skinner’s Verbal Behavior theory of language (Skinner 1957).
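The acquisition rates implied by the Bloom and Markson (1998) figures quoted above are simple to verify; a minimal sketch (the word counts are the estimates quoted in the text, not new data):

```python
# Word-acquisition rates implied by Bloom and Markson (1998), as quoted above
words_by_age_10 = 23_651
rate_birth_to_10 = words_by_age_10 / 10        # words per year, ages 0-10
additional_words_10_to_18 = 36_350
rate_10_to_18 = additional_words_10_to_18 / 8  # words per year, ages 10-18

print(round(rate_birth_to_10))  # ≈ 2365 words/year
print(round(rate_10_to_18))     # ≈ 4544 words/year
```

The post-age-ten rate is thus nearly double the earlier rate, which is the point at issue in the paragraph above.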
Lastly, an analysis of 19 different languages including European and Asian languages revealed that the information transmission rate is comparable for all the languages at about 39 bits per second (Coupé et al. 2019). This means that the brain sets the same limits on language irrespective of language type, which bolsters Chomsky’s notion that there is a neuro-genetic structure in humans that controls the universal acquisition of language (Chomsky 1965).
Summary:
1. That language was acquired by Homo sapiens as late as 60,000 years ago may not be correct, since there is evidence that Homo erectus (an ancestor of Homo sapiens) some 2.5 million years ago may have required this capability to organize the communications needed to navigate between territories in the South Pacific Ocean.
2. The theory of Universal Grammar does not account for all languages, particularly languages that have no recursive structure, such as the language of the Pirahã people of the Amazon. Nevertheless, advanced languages have a comparable information transfer rate, and Pirahã children can learn a recursive language, which suggests that all humans are genetically endowed with a common neural mechanism for the acquisition of language.
3. A universal grammar may be represented in non-human species. There is evidence that Neanderthals had a brain as advanced as that of Homo sapiens and therefore this species could have supported human-like language.
4. More English words are learned after the age of ten than before the age of ten, even though neurogenesis stops by or shortly after this age in humans. This, however, does not take away from the fact that before the age of ten children learn to speak effortlessly and without an accent, a point emphasized by Chomsky.
Footnotes:
[1] Chomsky is not sure whether language was shaped by natural selection: when asked about this, he never gives a clear yes or no (Chomsky 2020-2023/YouTube).
[2] Soccer robots have both proprioception to note the position of their bodies as well as a visual sense to detect the ball, the goals, and the position of the other robots (Behnke and Strucker 2008). To communicate the location of the ball and other items with other robots, an allocentric coordinate system is used, much like that utilized by a group of electric fish (who use electricity to communicate), a pack of wolves (who use gestures and sounds to communicate), or a pod of killer whales (who use sounds to communicate) in pursuit of prey. Language may have evolved to enhance allocentric communication, as is required by soccer robots.
[3] For example, when an estimate is made for the value of ‘d’ (a word–syllable exponent) relating the number of words (E) to the number of syllables (C) using the formula ‘E = C^d’ (derived from Changizi 2001b), the value of ‘d’ turns out to be ~1.046 for humans [i.e., there are approximately 170,000 words of common usage in the English language and approximately 100,000 corresponding syllables, which yields ‘d’ = 1.046 (the ‘E’ and ‘C’ values are based on the full, 20-volume Oxford English Dictionary)]. This means that for the English language words and syllables have a combinatorial relationship: the number of words grows as the number of syllables raised to the power 1.046. Now what about birdsong? Much as for human language, the number of birdsongs (E) varies as a function of the number of syllables (C), such that the exponent ‘d’ is estimated to be 1.23 (Changizi 2001b), which is even greater than the combinatorial estimate of 1.046 for human words over syllables.
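The exponent in footnote [3] is just a log ratio (solving E = C^d for d) and can be checked numerically; a minimal sketch, using the word and syllable counts quoted above:

```python
import math

def combinatorial_exponent(n_words, n_syllables):
    """Solve E = C**d for d, i.e. d = log(E) / log(C)."""
    return math.log(n_words) / math.log(n_syllables)

# English: ~170,000 words of common usage vs. ~100,000 syllables
# (the OED-based estimates quoted in footnote [3])
d_english = combinatorial_exponent(170_000, 100_000)
print(round(d_english, 3))  # ≈ 1.046
```

The same function applied to birdsong counts would return the 1.23 exponent reported by Changizi (2001b), given his E and C values.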
Relevant answer
Answer
I agree with Dr. Pehar that "there is a genetic, inborn make-up in humans to acquire /.../ language", but, and this is the important thing, there are few reasons to believe that this genetic make-up has the form of Chomsky's "universal grammar".
  • asked a question related to Language
Question
21 answers
I would like to understand the broad range of parameters that constitute a speaker of any given language being regarded as a 'native speaker' of the said language (as opposed to merely fluent in it or possessing a bilingual proficiency of it), and at what point this status is no longer applicable to those who have acquired a language via Second Language Acquisition (SLA).
Relevant answer
Answer
Please, don't torment yourself. As long as you can communicate both orally and in writing in another language and are understood, that is what matters. The concept of 'native speaker ability' carries a linguistic bias and a specific ideology, in which only 'native speakers' are the best users of a given language ... and this is WRONG in a globalized (scientific) world where everyone tries to communicate his/her ideas and research.
  • asked a question related to Language
Question
2 answers
Can we apply the theoretical computer science for proofs of theorems in Math?
Relevant answer
Answer
The pumping lemma is a valuable theoretical tool for understanding the limitations of finite automata and regular languages. It is not used for solving computational problems directly but is important for proving non-regularity and understanding the boundaries of regular languages.
  • asked a question related to Language
Question
4 answers
Soccer robots have both proprioception to note the position of their bodies as well as a visual sense that is egocentric to detect the ball, the goals, and the position of other robots (Behnke and Strucker 2008). To communicate the location of the ball and other items with other robots, an allocentric coordinate system is used, much like that utilized by a group of electric fish (who use electricity to communicate), a pack of wolves (who use gestures and sounds to communicate), or a pod of killer whales (who use sounds to communicate) in pursuit of prey. Perhaps, language evolved to enhance allocentric communication, as is required by soccer robots.
A staunch critic of Noam Chomsky, Daniel Everett has argued that language started some two million years ago (rather than 60,000 years ago; Chomsky 2012) with (bipedal) Homo erectus, who inhabited the South Pacific, used tools, and are suspected of having had the navigational skill to travel between islands (Everett 2016, 2017). To facilitate such travel, Everett has proposed that Homo erectus used allocentric communication, perhaps starting with gestures before evolving into verbal behavior in Homo sapiens some 500,000 years ago (Kimura 1993). It is believed that Homo erectus evolved into Homo sapiens.
Relevant answer
Answer
Language likely evolved to enhance allocentric communication, which refers to the ability to communicate about objects, events, or entities outside of oneself. This form of communication is fundamental to human social interaction, allowing individuals to share information, coordinate actions, and build complex societies. The evolution of language provided a sophisticated tool for expressing thoughts, intentions, and observations, enabling humans to convey precise details about the external world. As social animals, humans benefit from the ability to discuss things that are not immediately present, such as distant resources, future plans, or abstract concepts. This allocentric communication likely played a crucial role in the survival and success of early human communities, as it allowed for more effective collaboration, problem-solving, and cultural transmission.
  • asked a question related to Language
Question
4 answers
I attended a lecture at the Baylor College of Medicine (~ 2019) where one of the questions was “Does birdsong have anything to do with human language?” Noam Chomsky would say, “Absolutely not!” The speaker who had just finished discussing how birdsong is influenced by dopamine, a neurotransmitter that has been implicated in reward (incidentally a specialty of one of Chomsky’s critics, B.F. Skinner) was put off by the question, delivering a non-committal answer.
The late Doreen Kimura, who spent much of her life studying how the brain processes human language (Kimura 1993), needs to be mentioned here. Kimura believed that human language does not represent some type of exceptionalism, but rather is just a species-specific characteristic of the brain and body that was shaped by evolution. She argued that the communication of early Homo sapiens some 500,000 years ago was non-verbal and gesture-based, but that later changes to the vocal apparatus (i.e., the formation of a right-angled vocal tract, see Fig. 1.1 of Kimura 1993) allowed for the production of vowels. This idea runs contrary to the notion that some 60,000 years ago humans just started producing language spontaneously (Chomsky 2012), with no clear link to evolution (Everett 2017) or to animal behavior and brain organization (Bolhuis et al. 2014). The notion that human language is an evolutionary outlier must be as wrong as the idea that the sun revolves around the earth (a view Galileo challenged in 1616, at which time a criminal complaint was issued against him by the Catholic Church).
Relevant answer
Answer
That is OK.
  • asked a question related to Language
Question
3 answers
When the eyes of a person are damaged this causes complete blindness. Likewise, when Wernicke’s and Broca’s areas of neocortex are damaged this causes complete aphasia, losing the ability to comprehend language as well as the ability to produce speech (Penfield and Roberts 1966). In the absence of the eyes, one can deliver electricity to various regions of the visual system such as the lateral geniculate nucleus, the visual cortex, or the superior colliculus, for instance, to evoke fragments of visual perception (Schiller and Tehovnik 2015; Tehovnik and Slocum 2013), but complete vision has yet to be achieved using such a method during blindness. In the absence of Wernicke’s and Broca’s areas no one has yet tried to recover language by activating subcortical sites that participate in language functions such as the thalamus and cerebellum, for example (Penfield and Roberts 1966; Schmahmann 1997; Tehovnik, Patel, Tolias et al. 2021; but see Ojemann 1991).
The neocortex contains the complete declarative code for language (Corkin 2002; Kimura 1993; Penfield and Roberts 1966). This prompted Pereira, Fedorenko et al. (2018) to use fMRI to collect signals focused on the language areas of neocortex (i.e., 50,000 voxels including the entire neocortex) as sixteen subjects were tested on 180 mental concepts contained in various sentences. It was found that the fMRI signal could predict the correct stimulus sentence at a rank accuracy of 74%, on average (p < 0.01, Fig. 4b of Pereira, Fedorenko et al. 2018). Note that there was some variability over time in the ‘selected’ 5,000 (out of 50,000) most effective voxels per subject. fMRI does not have single-neuron resolution spatially or temporally, but it is now believed that, over spans of minutes to years, the composition of neocortical neurons mediating a behavior can vary such that a percentage of cells always remains tuned to a task but the composition of that percentage fluctuates (Chen and Wise 1995ab; Gallego et al. 2020; Rokni, Bizzi et al. 2007). However, delivering a speech requires great precision of word order. This precision is maintained by neocortical-cerebellar loops that are instrumental in converting the declarative code of neocortex into an explicit motor code via the cerebellum (Gao et al. 2018; Guo, Hantman et al. 2021; Hasanbegović 2024; Zhu, Hasanbegović et al. 2023; Mariën et al. 2017; Ojemann 1983, 1991; Thach et al. 1992), which stores the efference-copy representation for automatic performance (Bell et al. 1997; Chen 2019; Cullen 2015; De Zeeuw 2021; Fukutomi and Carlson 2020; Loyola et al. 2019; Miles and Lisberger 1981; Noda et al. 1991; Shadmehr 2020; Tehovnik, Patel, Tolias et al. 2021; Wang et al. 2023).
Patients with damage to Broca’s area can perform movement sequences (of the upper extremities) that have been overlearned but are impaired at learning new sequences (Kimura 1993), which means that under such conditions the remaining islands of neocortical and cerebellar connectivity are sufficient to generate previously learned movements. Thus, learning new sequences (especially as it pertains to language) requires that Broca’s area be intact.
Relevant answer
Answer
There is always a connection between a person's organic side, hormonal side, and psychological side... and all of these relationships directly affect the individual's linguistic, social, and behavioral output. Language is part of the individual's system, exercised continuously, and any damage at the level of the organic apparatus leads immediately to a halt in speech production, a sign that the individual has been cut off from expression. Yet in light of all this we may ask: where do the messages go that never came out and never traveled through the air or the ether?
Where does the individual store his interactions and emotions after damage has struck his organs?
The domain of language is spiritual to the utmost degree...!
  • asked a question related to Language
Question
4 answers
1. a. Who_j knows who_k heard what stories about himself_k?
b. John does (= John knows who_k heard what stories about himself_k).
2. a. Who_j knows what stories about himself_j who_k heard?
b. John does (= John knows what stories about himself_j who_k heard
/ John knows who_k heard what stories about his_j own).
The examples (1a) and (2a) ask questions about the matrix subject 'who', with the italicized 'John' in (1b) and (2b) corresponding to the wh-constituents being answered. I am curious about the binding relations in these examples, particularly in (2). Can example (2a) be construed as a question targeting the matrix subject 'who', with 'himself' bound by the matrix subject?
Relevant answer
Answer
I don't think the English language is set up to nest separate questions this way, at least not to do so and remain grammatically correct. It is logical that if someone heard a story about themselves, then the question could always follow as to what that story was, so these two questions can be logically nested.
But I think you're trying to ask "what were the stories, if the person heard stories about themselves?" You can't do that by just using "what stories", since it becomes grammatically incorrect; to be correct you need to use "which stories", but this then becomes a logical problem, because "which stories" implies the selection of stories has already been determined and a choice just needs to be made as to which one, which isn't the case here.
  • asked a question related to Language
Question
6 answers
Much has been made of the idea that humans are genetically programmed to learn languages at an early age, suggesting that learning plays a minor role in this process (Chomsky 1959). But we have argued that a large part of being able to speak at an information transfer rate exceeding 40 bits per second (i.e., over a trillion possibilities per second; Coupé et al. 2019; Reed and Durlach 1998) is due to having a one-decade-long formal education in one’s native and secondary languages (Tehovnik, Hasanbegović, Chen 2024). For example, Joseph Conrad, whose native language was Polish and who became a world-renowned writer, learned to write in English in his twenties (Wikipedia/Joseph Conrad/July 11, 2024). In what is now Poland, Conrad was mentored by his father, Apollo Korzeniowski, a writer who was later convicted by the Russian Empire as a political activist. To escape the political turmoil of eastern Europe, Conrad (to the dislike of his father) exiled himself to England, which marked the start of his writing career. And the rest we know about: ‘Heart of Darkness’, ‘Lord Jim’, ‘Nostromo’, and so on.
The study of second language learning by 20 year olds was investigated by Hosoda et al. (2013). They recruited twenty-four Japanese university students who were serially bilingual with the earliest age of learning English at seven years of age. The students completed a 4-month training course in intensive English study to enhance their vocabulary. They learned 60 words per week for 16 weeks yielding a total of 960 words, which translates into an information transfer rate of 0.0006 bits per second (see Footnote 1), which is appreciably lower than the transfer rate of ~ 40 bits per second for producing speech (Coupé et al. 2019; Reed and Durlach 1998).
Furthermore, there is a belief that learning a language is accelerated in children as compared to adults (Chomsky 1959). By the age of eighteen, one can have memorized some 60,000 words in the English language (Bloom and Markson 1998; Miller 1996), which represents an information consolidation rate of 0.0006 bits per second (see Footnote 2), the same rate as that experienced by the Japanese students learning English as a second language as adults (Hosoda et al. 2013).
Two conclusions can be drawn: First, consolidating a language is many orders of magnitude slower than delivering a speech (i.e., 0.0006 bits per second vs. 40 bits per second). Second, the idea that children learn languages at an accelerated rate may not be true. This needs to be properly investigated, however, whereby the rate of language learning (in bits per second) is measured yearly starting neonatally and ending in adulthood. Also, there is more to language than just memorizing words, so linguists will need to design experiments covering all the major parameters of language and express these parameters in terms of bits per unit time. It is time that linguistics (like neuroscience) becomes a quantitative discipline.
Footnote 1: Bit-rate calculation: if each word is made up of 4 letters (on average), then the bit rate of learning (using values from Reed and Durlach 1998) = 1.5 bits per letter x 4 letters/word x 960 words/16 weeks = 360 bits per week = 0.0006 bits/sec. The learning period includes not only the time spent memorizing the words, but also the time required to consolidate the information in the brain, which occurs during sleep and during moments of immobility (Dickey et al. 2022; Marr 1971; Wilson and McNaughton 1994). After the learning there was an increase in the grey matter volume of Broca’s area, the head of the caudate nucleus, and the anterior cingulate cortex; as well, there was an increase in the white matter volume of the inferior frontal-caudate pathway and of connections between Broca’s and Wernicke’s areas (Hosoda et al. 2013). The grey and white matter enhancement correlated with the extent of word memorization.
Footnote 2: Bit-rate calculation: Memorizing 60,000 words in 18 years translates into 360,000 bits of information [i.e., 60,000 words x 4 letters per word x 1.5 bits per letter; Reed and Durlach 1998], or a word consolidation rate of 55 bits per day (or 9 words per day) over eighteen years of life. Therefore, the rate is 0.0006 bits per second. For other details see Footnote 1.
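The arithmetic in Footnotes 1 and 2 can be reproduced in a few lines; a sketch, with the 1.5 bits/letter and 4 letters/word constants taken from the values quoted above (Reed and Durlach 1998):

```python
BITS_PER_LETTER = 1.5   # Reed and Durlach (1998) estimate, as used in the footnotes
LETTERS_PER_WORD = 4    # average word length assumed in the footnotes

def learning_rate_bits_per_sec(n_words, n_seconds):
    """Information consolidation rate for memorizing n_words over n_seconds."""
    return n_words * LETTERS_PER_WORD * BITS_PER_LETTER / n_seconds

SECONDS_PER_WEEK = 7 * 24 * 3600
SECONDS_PER_YEAR = 365 * 24 * 3600

# Footnote 1: 960 words over 16 weeks (Hosoda et al. 2013)
r_adults = learning_rate_bits_per_sec(960, 16 * SECONDS_PER_WEEK)
# Footnote 2: 60,000 words over 18 years (Bloom and Markson 1998; Miller 1996)
r_children = learning_rate_bits_per_sec(60_000, 18 * SECONDS_PER_YEAR)

print(f"{r_adults:.4f} {r_children:.4f}")  # both print as 0.0006 bits/sec
```

Both rates land at ~0.0006 bits per second, which is the equivalence the argument above rests on.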
Relevant answer
Answer
Thank you for the suggestion, Krzysztof. Ed Tehovnik.
  • asked a question related to Language
Question
4 answers
Many have criticized Noam Chomsky’s theory of language (e.g., Pinker as described in Sihombing 2022), but the most effective criticisms have come from Daniel Everett, given that Chomsky (according to Everett) has never addressed the criticisms. Everett has two issues with Chomsky’s theory: the evolutionary timeline of language for Homo sapiens and the lack of universality of language structure for all languages. On the evolution of language, Chomsky has proposed that language began some 60,000 years ago (Chomsky 2012). Everett’s contrary explanation is that rudimentary language started 2.5 million years ago in the South Pacific amongst Homo erectus [who are estimated to have had 62 billion neurons/24 billion short of Homo sapiens, Herculano-Houzel 2012] for which there is evidence that they were skilled sailors expanding throughout the Pacific Ocean, the navigation of which (using the stars and currents) is believed to depend on having communication between group members (Everett 2017). Also, at this time there is evidence of an asteroid strike in the South Pacific, which could have accelerated the evolutionary process, as it did 64 million years ago, bringing about the large, big-brained mammals.
On the generalizability of Chomsky’s theory to all languages (including primitive languages), Everett (2006, 2016) spent many years in the Amazon basin of Brazil studying the Pirahã people, who have no written language or number system. To transmit their history across generations (two at most) it is all done by word-of-mouth. The language has eight consonants, three vowels, and two tones. The sentences are very simplistic, with no embedded clauses such as, “John, who is a hunter, is an active individual.” Instead, the utterance would be: “John is a hunter. John is an active individual.” This language structure is apparent when children or adults begin to learn a language (thereby having no recursive structure). Also, the language has no pronouns. Furthermore, it has a proximate tense (e.g., for the present) and a remote tense (e.g., for the past) but no perfect tense, a tense with no time stamp, e.g., “I have prepared some food.” The language does not permit the establishment of a creation myth. The sense of time, e.g., historic time, is not well developed. Much is set in the present. Hunting and foraging are a daily affair for the Pirahã people. The children are taught the names of all the plants and animals in the jungle, which can number in the thousands.
Accordingly, Chomsky’s theory fails to account for the evolutionary history of language. And his theory can only explain complex (recursive) languages, with little to say about more primitive languages such as the one spoken by the Pirahã people of Brazil. However, if a Pirahã child is raised in Sao Paulo in the Portuguese language, the child will master all the complexities of Portuguese, which has far more verb tenses than English and a similar number system, as well as a written script.
Relevant answer
Answer
I haven't read what Everett had to say about Chomsky's contribution to linguistics, but I think that, from the account given here, Everett's criticism of Chomsky is reductionist. While not a Chomskyan by persuasion, I believe that Chomsky's contribution is more than the evolution of language and the generalizing of his theory to all languages. If the account about Everett is not reductionist, I think Everett may have missed an important point that actually gave impetus to the theories of pragmatics and cognitive linguistics, which is Chomsky's deliberate avoidance of the intricacies of meaning and context in his formalist program. This takes me to the generalization problem referred to in this post. This is not a defense of Chomsky, but about the essence of theory. We all know that for a theory to be one, it should be able to be falsifiable. If the data of the Pirahã language contradict Chomsky's generalization, it simply means that his theory is not to be taken as an unfalsifiable theorem but as a scaffolding in the evolution of linguistic theory at large.
  • asked a question related to Language
Question
5 answers
The hippocampal formation is central to the consolidation and retrieval of long-term declarative memory, memories that are stored throughout the neocortex with putative subcortical participation (Berger et al. 2011; Corkin 2002; Deadwyler et al. 2016; Hikosaka et al. 2014; Kim and Hikosaka 2013; Scoville and Milner 1957; Squire and Knowlton 2000; Tehovnik, Hasanbegović, Chen 2024; Wilson and McNaughton 1994). Subjects with hippocampal damage have great difficulty narrating stories (Hassabis et al. 2007ab), which can be viewed as a disruption of one’s stream of consciousness as it pertains to retrieving information. The retrieved stories, which are highly fragmented in hippocampal patients (Hassabis et al. 2007ab), are comparable to those evoked electrically by stimulating a single site in the parietal and temporal lobes (Penfield and Rasmussen 1952; Penfield 1958, 1959, 1975). Nevertheless, individuals with hippocampal damage can still engage others verbally, but the conversation is limited in that it is based on declarative memories that are not updated, making the hippocampectomized interlocutor seem out of touch (Corkin 2002; Knecht 2004). A rapid exchange of speech is dependent on an efference-copy representation, which is mediated through the cerebellum (Bell et al. 1997; Chen 2019; De Zeeuw 2021; Guell, Schmahmann et al. 2018; Loyola et al. 2019; Miles and Lisberger 1981; Noda et al. 1991; Shadmehr 2020; Tehovnik, Patel, Tolias et al. 2021; Wang et al. 2023).
Patient HM, who had bilateral damage of his hippocampal formation, had ‘blind memory’ (much like ‘blindsight’): when asked to name the president of the United States in the early 2000’s he failed to recall the name, but when given a choice of three names: George Burns, George Brown and George Bush he was able to select George Bush (Corkin 2002). Therefore, his unconscious stores of information were intact (which is also true of blindsight for detecting high-contrast spots of light, Tehovnik, Patel, Tolias et al. 2021). As well, HM had memory traces of his childhood (a time well before his hippocampectomy), but the specifics were lost such that he could not describe even one event about his mother or father (Corkin 2002). Although many presume that HM had memories of his childhood, these memories were so fragmented and lacking in content that referring to his childhood recollections as ‘long-term memories’ is questionable.
The idea that the brain becomes less active once a new task has been acquired through learning can be traced back to the experiments of Chen and Wise (1995ab) that were done in the supplementary motor area, Brodmann’s Area 6. Monkeys were trained to associate a visual image with a particular direction of saccadic eye movement, which could be up, down, left, or right of a centrally-located fixation of the eyes. For a significant proportion of neurons studied it was found that the activity of the cells decreased with overlearning an association. At the time of publication this counter-intuitive result was greeted with much skepticism. After reading the paper, Peter Schiller did not know what to make of the result since his results (seven years before) suggested that the supplementary motor area becomes more active and engaged once new tasks are learned (Mann, Schiller et al. 1988).
Years later, Hikosaka and colleagues continued this line of work to show that the diminution of activity with learning was a real neural phenomenon and that the diminished information was channeled to the caudate nucleus (Hikosaka 2019; Hikosaka et al. 2014; Kim and Hikosaka 2013), which is connected anatomically to the entire neocortex such that the head of the caudate innervates the frontal lobes whereas the tail of the caudate innervates the temporal lobes (Selemon and Goldman-Rakic 1985). Hikosaka (2019) has proposed that the memories of learned tasks are archived in the caudate nucleus, whereby new tasks are stored in the head of the caudate and old tasks are stored in the tail of the caudate—perhaps for immediate use by the temporal lobes, damage to which disrupts long-term memories, even those of one’s childhood (Corkin 2002; Squire et al. 2001).
That neurons throughout the brain (i.e., the cortex and subcortex) become less responsive to task execution once overlearned is a well-established fact (Lehericy et al. 2005). We have argued that this diminution of responsivity is the brain’s way of consolidating learned information efficiently, while reducing the energy expended for the evocation of a learned behavior (Tehovnik, Hasanbegović, Chen 2024). We and others (Lu and Golomb 2023) believe that all memories are stored according to the context of the memorization, which requires that a given site in the neocortex that contains a memory fragment such as a word or visual image be networked with other neurons to recreate the context, which we refer to as a declarative/conscious unit (Tehovnik, Hasanbegović, Chen 2024). When someone narrates a story, declarative/conscious units are concatenated in a string much like the serialization of the images of a film and this process involves both the neocortex and the cerebellum (Hasanbegović 2024).
Furthermore, a primary language (as compared to secondary languages) is stored in the neocortex and cerebellum in such a way that any damage to either structure often preserves the primary language while degrading the secondary languages (Mariën et al. 2017; Ojemann 1983, 1991; Penfield and Roberts 1966). All languages are networked separately in the brain (Ojemann 1991): a unique neocortical-cerebellar loop is summoned during the delivery of a speech in the chosen language (Tehovnik, Hasanbegović, Chen 2024). The language one thinks in (i.e., one’s counting language) is the language that is well archived and highly distributed (including areas of the brain that mediate mathematics), thus making the language more resistant to the effects of brain damage.
In conclusion, information stored in the brain is no different from information stored in a university library: the ancient texts are all housed in a special climate-controlled chamber, while the remaining texts including the most recent publications are made available to all students and professors. Indeed, it is our childhood memories that define us and therefore they deserve to be archived and protected in the brain. The details of how this happens will need to be disclosed.
Relevant answer
Answer
There are different theories of information and different understandings and definitions of information. In my field, Claude Shannon's model is used, according to which information is encoded by the entropy of the signal. However, when making artificial intelligence systems, as has become fashionable, one must remember that in humans information arises only in interaction between two sources - that which arrives from sensors and that which is generated by the brain. What is available stored as memory links or genetically hardwired makes the incoming stream informative. It seems that the hippocampal structure plays a role in the interaction of the two sources.
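As a minimal illustration of Shannon's measure mentioned above (my own sketch, not tied to any particular system), the entropy of a discrete signal is computed from its symbol frequencies; a constant signal carries no information, while a uniformly distributed one carries the maximum per symbol:

```python
import math
from collections import Counter

def shannon_entropy(signal):
    """Shannon entropy H = sum_i -p_i * log2(p_i), in bits per symbol."""
    counts = Counter(signal)
    n = len(signal)
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

print(shannon_entropy("aaaa"))  # -> 0.0 (a constant signal is uninformative)
print(shannon_entropy("abab"))  # -> 1.0 (two equiprobable symbols: 1 bit each)
```

This also illustrates the point about two sources: the entropy of the incoming stream only becomes information relative to the receiver's model of what was expected.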
Best!
  • asked a question related to Language
Question
4 answers
“It was Pavlov who showed that language was a consequence of the human cerebral complexity and that it objectified the superiority and specificity of the human brain with respect to animal brains. He perceived language as a special type of conditioned reflexes, a second system of signalization, the first one being that of gnosis and praxis of direct thinking by images. To each image will be substituted through education its verbal denomination. Since they name everything, instead of associating images, human beings can directly associate the corresponding names, a system more efficient in maximizing the abstraction capabilities of the human brain” [Chauchard 1960, p. 122, from Michaud 2019].
In short, Pavlov believed that the process of thinking is possessed by all animals (which runs contrary to the views of Chomsky 1965, 2012), and what happened to humans (between 2 and 0.5 mya, Everett 2016; Kimura 1993) is that they invented language (as they invented writing, the steam engine, and AI) by using the ‘thinking’ process of the neocortex to make associations between sounds and objects in the real world (a little like what ChatGPT does today, but more efficiently and at an accelerated rate during development). The universal grammar proposed by Chomsky (1965) is merely an acknowledgement that all Homo sapiens are of the same species and therefore have a common capacity to acquire language, which today includes reading and writing, both of which have become global requirements for citizenship by way of state-sponsored education from K to 12. Indeed, Pavlov’s view (unlike Chomsky’s) fits better with our understanding of evolution and human inventiveness (Michaud 2019), two notions ignored by Chomsky.
Relevant answer
Answer
Furthermore, regarding a previous comment defending Noam Chomsky, the commentator needs to read more Noam Chomsky to understand that Chomsky is a philosopher and not a biologist. In all my years at MIT, he never attended even one neuroscience seminar, and I very much doubt that he attended seminars on genetics and evolutionary biology.
  • asked a question related to Language
Question
7 answers
The terms Fictional Language, Fictitious Language, Artificial Language, and Constructed Language have been used interchangeably in papers that I have read. Are there differences between these terms, and which is preferable?
Relevant answer
Answer
I agree with Ira. By 'constructed' we should understand any given language that has been artificially created by human beings. They can be applied to real life (e.g. Esperanto) or fiction, hence 'fictional' (Tolkien, G.R.R. Martin, Star Trek, etc.). I personally find 'fictitious' a less clear-cut term, as it can both characterise languages created for 'fictional' purposes and how they are perceived (as false or not genuine) inside a work of fiction by its narrative voice(s) or characters. I hope this can be of help.
  • asked a question related to Language
Question
29 answers
Dear Professors,
I am Ziad Rabea, a high school student, and I am delighted to share one of my latest projects with you, seeking your valuable feedback. After years of research and development, I'd like to introduce L.B.F.C.T (Linguistic Barriers Free Coding Technology).
WorldLang is a new programming language featuring dynamic keyword importation and an integrated translator, which enables the translation of code and language keywords from one language to another dynamically.
Context and Motivation:
According to statistics from Statista and Ethnologue, native English speakers comprise about 380 million of the global population of 8 billion, approximately 4.7%. Additionally, those who speak English as a second language constitute about 13%, leaving over 82% of individuals worldwide who do not speak English. Given that the next Steve Jobs could emerge from this vast majority of non-English speakers, it is imperative to provide tools that foster innovation and creativity across linguistic boundaries.
Although there have been previous attempts to address this issue, such as Citrine and Supernova, they often fell short due to the concept of localization. While creating a programming language that allows coding in one’s native language is a significant step, it does not solve the problem entirely and can even exacerbate it. For instance, a programming language tailored to a specific language would be unusable by anyone except speakers of that language, hindering collaborative development across diverse linguistic groups.
What's new? :
WorldLang is the world’s first programming language to feature dynamic keyword importation during the tokenization phase and an integrated translator. This allows users to download code written in one language, translate it into their native language, edit it, and then retranslate it back into the original language or any other language. This capability ensures that developers from different linguistic backgrounds can collaborate seamlessly. WorldLang is a global symphony of code.
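A minimal sketch of the idea described above, written by me for illustration and not taken from WorldLang's actual implementation: if the tokenizer maps localized keywords to language-neutral canonical tokens, the same token stream can be re-rendered using any language's keyword table. The keyword tables and function names here are assumptions.

```python
# Illustrative sketch only — not WorldLang's real code.
# Per-language keyword tables map localized keywords to canonical tokens.
KEYWORDS = {
    "en": {"if": "IF", "print": "PRINT"},
    "es": {"si": "IF", "imprimir": "PRINT"},
}

def tokenize(source, lang):
    """Map each word to a canonical keyword token, or keep it as an identifier."""
    table = KEYWORDS[lang]
    return [table.get(word, ("IDENT", word)) for word in source.split()]

def render(tokens, lang):
    """Re-emit the canonical token stream using another language's keywords."""
    reverse = {tok: kw for kw, tok in KEYWORDS[lang].items()}
    return " ".join(t[1] if isinstance(t, tuple) else reverse[t] for t in tokens)

tokens = tokenize("si x imprimir x", "es")   # Spanish-keyword source
print(render(tokens, "en"))                   # -> if x print x
```

Because identifiers pass through untouched while keywords round-trip through canonical tokens, a program can be translated into a collaborator's language, edited, and translated back, which appears to be the collaboration model WorldLang is aiming for.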
As a high schooler, I accept that my research skills may not be that good, but I would love to hear your thoughts, feedback, and suggestions on WorldLang.
If you are interested in collaborating or testing this new language, please feel free to reach out. Your expertise and insights will be invaluable in refining and improving this technology.
Thank you, and I look forward to an engaging discussion!
Best regards,
Ziad Rabea
Relevant answer
Answer
Dear Ziad Rabea ,
By taking a multilingual software development approach, you can improve the efficiency of your localization processes, reduce translation costs, and provide a better experience for your software's international users.
Regards,
Shafagat
  • asked a question related to Language
Question
1 answer
1) Identify the concrete situation.
2) Have empathy.
3) Either already know the language or have a sufficiently effective AI translator.
Relevant answer
Answer
To interpret something:
1. Understand Context: Consider the context and background information.
2. Analyze Content: Examine the details and main points.
3. Identify Key Themes: Determine the central themes or messages.
4. Evaluate Significance: Assess the importance and implications.
5. Formulate Insight: Develop your understanding or conclusion based on the analysis.
This approach helps in making sense of data, texts, or situations effectively.
  • asked a question related to Language
Question
3 answers
Ablation of the cerebellum does not abolish locomotion in mammals (Ioffe 2013); it merely induces atonia: body movements become clumsy with postural and vestibular deficits, which is related to the negation of both proprioceptive and vestibular input to the cerebellum, which encodes where the body is with respect to itself and the outside world, i.e., with respect to the gravitational axis (Carriot et al. 2021; Demontis et al. 2017; Fuchs and Kornhuber 1969; Lawson et al. 2016; Miles and Lisberger 1981). Animals have difficulty crossing a balance beam following complete cerebellar damage and the righting reflex is interrupted. Consciousness, which is a declarative attribute, is not affected following cerebellar damage (D’Angelo and Casali 2013; Petrosini et al. 1998; Tononi and Edelman 1998). As with cerebellar impairment, following neocortical ablation, locomotion is not eliminated but the sequencing of movement is severely affected (Vanderwolf 2007; Vanderwolf et al. 1978). Stepping responses can be evoked in spinal animals, but with a total loss of balance and muscular coordination since both cerebellar and neocortical support is now absent (Audet et al. 2022; Grillner 2003; Sherrington 1910).
Following a stroke that affected the left mediolateral and posterior lobes of the cerebellar cortex (including the left dentate nucleus), it was found that the subject (aged 72), a (right-handed) war correspondent who had been versed in seven languages, could no longer communicate in his non-primary languages (see Fig. 1, Mariën et al. 2017): French, German, Slovenian, Serbo-Croatian, Hebrew, and Dutch (in the order of having learned the languages before the age of 40). Before the stroke, the subject used Dutch, French, and English regularly. After the stroke his primary language, English, remained intact. Most significantly, on the day of the stroke, all thinking in the second languages was abolished (see Footnote 1). One day following the stroke, however, the French language returned. Nevertheless, the remaining secondary languages were abnormal. Reading was better preserved than oral and written language, likely because reading is dependent mainly on scanning a page with the eyes and having an intact neocortex for word comprehension (fMRI revealed language activations in the neocortex and in the intact right cerebellar hemisphere, Mariën et al. 2017). Speaking and writing, on the other hand, are more dependent on the sequencing of multiple muscle groups, a task of the cerebellum (Heck and Sultan 2002; Sultan and Heck 2003; Thach et al. 1992). When speaking or writing in a non-primary language, English words would intrude. The naming of objects and actions verbally was impaired, and writing was severely disrupted. When high-frequency visual stimuli (objects, animals, etc.) were presented visually (1 month after the stroke), identifying an object with the correct word surpassed 80% correctness for English, French, and Dutch, whereas it remained under 20% correctness for German, Slovenian, Serbo-Croatian, and Hebrew.
Since the execution of behavior depends on loop integrity between the neocortex and cerebellum (Hasanbegović 2024), it is highly likely that damage to the cerebellum undermined this integrity such that the least overlearned routines—German, Slovenian, Serbo-Croatian, and Hebrew—were disturbed. Note that a functional left neocortex (of the right-handed subject) with a preserved right cerebellum was sufficient to execute the overlearned languages—English, French, and Dutch.
Based on our understanding of cerebellar function, if the entire cerebellum (including the subjacent nuclei) were damaged in the subject, we would expect that even English, the primary language, would be compromised, and most importantly, the learning of a new language would be rendered impossible, given the dependence of behavioral executions (and learning) on intact neocortical-cerebellar loops (Hasanbegović 2024; also see: Sendhilnathan and Goldberg 2000b; Thach et al. 1992). Thus, thinking is affected by damage to neocortical-cerebellar loops, which concurs with the behavioral findings of Hasanbegović (2024).
Footnote 1: Self-report by the patient about the day of the cerebellar stroke: “I was watching television at my apartment in Antwerp when suddenly the room seemed to spin around violently. I tried to stand but was unable to do so. I felt a need to vomit and managed to crawl to the bathroom to take a plastic bowl. My next instinct was to call the emergency service, but the leaflet I have outlining the services was in Dutch and for some reason, I was unable to think (or speak) in any language other than my native English. I have lived in Antwerp for many years and use Dutch (Flemish) on a day-to-day basis. I called my son-in-law, who speaks fluent English and he drove me to Middelheim Hospital. We normally speak English when together. I understood none of the questions asked to me in Dutch by hospital staff and they had to be translated back to me in English. My speech was slurred. I had lost some words, I was aware of that, but I cannot recall which words. I made no attempt to speak any of the other languages I know, and in the first hours of my mishap happening, I do not think I realized that I had other languages.” (Mariën et al. 2017, p. 19)
Figure 1. Human cerebellar cortex. The mediolateral and posterior lobes are indicated. The mediolateral lobe of the cerebellum (right and left) is part of the cortico-frontal-cerebellar language loop (Stoodley and Schmahmann 2009), and cerebellar grey matter density in bilingual speakers is correlated with language proficiency (Pliatsikas et al. 2014). Typically, the innervation of the left neocortical language areas is strongest to the right cerebellum in right-handed subjects (Van Overwalle et al. 2023). Illustration from figure 8 of Tehovnik, Patel, Tolias et al. (2021).
Relevant answer
Answer
Ευχαριστώ για τα καλά λόγια, thank you for the words of encouragement.
  • asked a question related to Language
Question
2 answers
We can totally get the sentence meaning without them.
Relevant answer
Answer
Chinese does not use verb tenses or the verb "to be", as I understand it, so listeners have to work out timing from context. But verb tenses are deeply ingrained in English.
Yes, we can decode the meaning, but it makes the speaker sound uneducated, as if using "bad grammar", thus lowering credibility.
Note that when AI translates Chinese into English, it generally translates as present tense, even when the text is clearly talking about something in the past or the future.
  • asked a question related to Language
Question
4 answers
Theta activity (~ 6-10 Hz) has been associated with transitions between different frames of consciousness, as studied using binocular rivalry (Dwarakanath, Logothetis 2023). This rhythm is modulated by neurons in the septal area by way of the hippocampus (Buzsáki 2006; Stewart and Fox 1990). A travelling theta wave occupies the posterior-anterior length of the hippocampus during locomotion along a track (Lubenov and Siapas 2009; Zhang and Jacobs 2015). Both excitatory (cholinergic) and inhibitory (GABAergic) neurons located within the septum are important for maintaining this rhythm (Stewart and Fox 1990). These neurons not only innervate the hippocampus, but they also affect the neocortex (Beaman et al. 2017; Bjordahl et al. 1998; Engel et al. 2016; Goard and Dan 2009; McLin et al. 2002; Miasnikov et al. 2009; Pinto et al. 2013; Tamamaki and Tomioka 2010; Vanderwolf 1969, 1990) so that the two regions can exhibit synchronized activations when tasks such as running along a track, playing a musical instrument, or delivering a speech are being executed. These behaviors require transitions between different frames of consciousness, as stored declaratively within the neocortex (Corkin 2002; Dwarakanath, Logothetis 2023; James 1890; Sacks 1976, 2012; Squire et al. 2001). Having both excitatory and inhibitory inputs to the neocortex (Stewart and Fox 1990; some 2/3 of neocortical neurons are excitatory and the remainder are inhibitory, Bekker 2011) allows for specific strings of consciousness to be concatenated, but only after overtraining, which diminishes the role of the cerebellar cortex (e.g., Lisberger 1984; Miles and Lisberger 1981). Thus, the concatenated items of the neocortex would need to have ready access to the brain stem and spinal cord nuclei to produce a sequence of behaviors (Kimura 1993; Vanderwolf 2007). For this to be accomplished there needs to be a fine interplay between the inhibitory and excitatory fibres of the neocortex.
Exactly how this happens sequentially remains to be deduced by careful experimentation, but we now have the technology to study this globally in the brain (e.g., Hasanbegović 2024).
The travelling wave via the hippocampus (Lubenov and Siapas 2009; Zhang and Jacobs 2015) must be paired with specific neocortical neurons to deliver a declarative expression, such as—"I want to be a scientist”—which is generated by the muscles controlled by the brain stem vocal apparatus (see Footnote 1). Each cycle of a travelling wave would sample a particular sequence of activations within the neocortex and across one cycle a specific collection of neurons would be sequenced, and items stored within each neuron delivered verbally. This process would be repeated—the repetition of unique strings of consciousness—until the completion of a speech. The cerebellar cortex would only be engaged while delivering a speech, if alterations needed to be made to the executable code, which would happen, for example, if someone from the audience asked a question. Such an alteration would require a volitional intervention by the speaker (i.e., by the neocortex) to interrupt the automatic running of the executable code as memorized.
Footnote 1: The reason humans have been endowed with speech is because the M1 pyramidal fibres innervate the vocal apparatus directly which is composed of the following cranial nerves: V, VII, X, and XII (Aboitiz 2018; Kimura 1993; Ojemann 1991; Penfield and Roberts 1966; Simonyan and Horwitz 2011; Vanderwolf 2007). This allows for maximal control over the speech muscles. It is known that most speech, irrespective of language type, can be transferred at about 40 bits per second (Coupé et al. 2019; Reed and Durlach 1998; Tehovnik and Chen 2015). One will need to investigate whether this limit is set by the number of pyramidal fibres dedicated to the production of speech [note that a brain-machine interface for speech was found to transfer 2.1 bits per second for neural recordings made in the speech area of M1 (Willett, Shenoy et al. 2023), which falls well short of the 40 bits per second needed for normal performance]. Some 100 of the 700 skeletal muscles of the human body are involved in the delivery of a speech to operate the vocal apparatus (Simonyan and Horwitz 2011).
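The ~40 bits per second figure in Footnote 1 can be sketched as simple arithmetic: an information rate is the per-syllable entropy multiplied by the syllabic rate. The specific numbers below are my own illustrative choices, roughly in the spirit of Coupé et al. (2019), not values taken from that paper.

```python
# Illustrative sketch: speech information rate = bits/syllable * syllables/second.
def info_rate(bits_per_syllable, syllables_per_second):
    """Information transfer rate of speech, in bits per second."""
    return bits_per_syllable * syllables_per_second

# Hypothetical figures: a denser-syllable language spoken more slowly...
print(info_rate(7.0, 6.0))  # -> 42.0 bits/s
# ...and a lighter-syllable language spoken faster land near the same rate.
print(info_rate(5.0, 8.0))  # -> 40.0 bits/s
```

The interesting empirical claim is that languages trade syllable complexity against speech rate so that the product hovers near the same ~40 bits/s ceiling.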
Relevant answer
Answer
That is one of the most amazing sequential descriptions I have ever seen put together! Very enjoyable and informative reading!
  • asked a question related to Language
Question
5 answers
When does a language become dead?
Relevant answer
Answer
A language dies when people stop using it for communication, for one reason or another. Different factors can lead to language death: historical, political, cultural, economic, social, and psychological. When? When most of these factors come together.
  • asked a question related to Language
Question
2 answers
Without a neocortex language processing in humans is impossible (Kimura 1993; Ojemann 1983, 1991; Penfield and Roberts 1966) and without a hippocampus (but with an intact neocortex and cerebellum) new language associations cannot be consolidated into long-term memory (Corkin 2002). Noam Chomsky (1965), the father of modern linguistics, made two bold claims some 60 years ago. First, he declared that all humans have a universal grammar that is genetically based and that explains why language acquisition is so rapid in young children. Second, he proposed that a central process in language acquisition is a principle called ‘merge’, which takes two syntactic elements ‘a’ and ‘b’ and merges them to form ‘a + b’. For example, ‘the’ and ‘apple’ are combined to yield ‘the apple’. This process can apply to the results of its own output such that ‘ate’ can be combined with ‘the apple’ to yield ‘ate the apple’. Language is thus built-up from component parts using a process called Merge. The basic elements of language (whether auditory or visual) are stored in Wernicke’s and Broca’s areas in a declarative format (Corkin 2002; Penfield 1975; Penfield and Roberts 1966; Scoville and Milner 1957; Squire and Knowlton 2000; Squire et al. 2001) according to the learning history of an individual to create a linguistic map that is unique (Ojemann 1991).
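The Merge principle described above can be stated very compactly in code. This is my own illustrative rendering (the nested-tuple representation is an assumption, not Chomsky's notation): Merge takes two syntactic objects and forms a new constituent, and because it can apply to its own output, arbitrarily deep structures arise from one operation.

```python
# Illustrative sketch of Chomsky's Merge as binary set-formation.
def merge(a, b):
    """Combine two syntactic objects into a new constituent."""
    return (a, b)

np = merge("the", "apple")   # 'the' + 'apple' -> ('the', 'apple')
vp = merge("ate", np)        # Merge applies to its own output
print(vp)                    # -> ('ate', ('the', 'apple'))
```

The recursion, not the pairing itself, is what gives the operation its generative power: a two-line function suffices to build unboundedly complex constituents.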
The neocortex of mammals was designed to make associations at the synaptic level, which is well established (Hebb 1949, 1961, 1968; Kandel 2006; also see Pavlov 1927, pp. 328 who found that classical conditioning is rendered ineffective 4.5 years after neocortical removal, but ‘vegetative’ conditioning is intact, Gallistel 2022). Normally, electrical stimulation of M1 (i.e., motor cortex) yields a muscle twitch, but after electrical stimulation of M1 is temporally paired with the electrical stimulation of V1 (i.e., the visual cortex) then electrical stimulation of V1 evokes a muscle twitch on its own (Baer 1905; Doty 1965, 1969). Furthermore, V1 conditioning is dependent on descending pyramidal fibres (Logothetis et al. 2010; Rutledge and Doty 1962; Tehovnik and Slocum 2013), which means subcortical circuits must be involved in the learning process. And we already know which subcortical structures are important here: the hippocampus consolidates the declarative information at the level of the neocortex (Corkin 2002; Penfield 1975; Penfield and Roberts 1966; Scoville and Milner 1957; Squire and Knowlton 2000; Squire et al. 2001; Swain, Thompson et al. 2011) and the cerebellum converts the declarative information into executable code, i.e., to drive the vocal cords for speaking and hand movements for writing (Tehovnik, Hasanbegović, Chen 2024).
Hence, the neocortex, the hippocampus, and the cerebellum together are necessary for humans to acquire language as envisioned by Chomsky (1965). And this capacity evolved from mechanisms already existent in mammals/vertebrates (i.e., a telencephalon and a cerebellum) and was passed on to archaic Homo sapiens some five hundred thousand years ago (Kimura 1993), though some believe that the basic elements of language existed in Homo erectus 2.5 million years ago (Everett 2016).
Note: Activation of two microzones composed of Purkinje neurons in the cerebellar flocculus (one for horizontal movement and a second for vertical movement) using optogenetics induces precise movement of the ipsilateral eye of the mouse (from Fig. 5 of Blot, De Zeeuw et al. 2023). This precision is such that each eye has independent innervation for VOR and OKN (the independence allows the eyes to verge across different depth planes). Although we do not have the data for driving the vocal cords, distinct microzones must be activated when we learn to speak a language. This is how declarative information of the neocortex is converted into a motor response (a sound) during learning. No need to invoke abstract concepts to explain Chomsky’s ‘Merge’ since the brain is explainable biologically.
Relevant answer
  • asked a question related to Language
Question
7 answers
The 5th International Conference on Language, Art and Cultural Exchange (ICLACE 2024) will be held on May 17-19, 2024 in Bangkok, Thailand.
ICLACE 2024 aims to bring together innovative academics and industrial experts in the fields of Language, Art and Culture in a common forum. The primary goal of the conference is to promote research and developmental activities in Language, Art and Culture; a further goal is to promote the interchange of scientific information among researchers, developers, engineers, students, and practitioners working all around the world.
The conference is held every year, making it an ideal platform for people to share views and experiences in Language, Art and Culture and related areas. We warmly invite you to participate in ICLACE 2024 and look forward to seeing you in Bangkok, Thailand!
Important Dates:
Full Paper Submission Date: March 10, 2024
Registration Deadline: April 1, 2024
Final Paper Submission Date: April 28, 2024
Conference Dates: May 17-19, 2024
---Call For Papers---
The topics of interest for submission include, but are not limited to:
◕Language
· Philosophy of Language and International Communication, Language and National Conditions
· Oral Teaching, Chinese Language and Literature, Philosophy in Language
· Body Language Communication, Language Research and Development, Language Expression
· Analysis and Research on Teachers' teaching Language and Network Language
◕ Art
· Materials and Technology, Environmental Sculpture Modeling, Murals and Reliefs, Decorative Foundation, Aesthetics
· Public Facilities Design, Architecture and Environment Design, Space Form Design, Public Governance Change
· Exhibition Design, Art design, Digital Media Technology, Landscape Planning and Design, Gem design, Industrial Design
· Art Theory, Music and Dance, Drama and Film and Television, Fine Arts, Chinese Calligraphy and Painting, Film and Film Production
◕ Cultural Exchange
· Campus and Corporate Culture Construction, Adult Education and Special Education, Creative Culture Industry and Construction, Educational Research
· Chinese Traditional Culture and Overseas Culture, Comparative Study of Chinese and Foreign Literature, Comparison of Chinese and Foreign Cultures and Cross-Cultural Exchanges
· Regional Culture and Cultural Differences, Intangible Cultural Heritage, Cultural Confidence and Connotation
· Red Inheritance and Cultural Heritage, Cultural Industry, Drama, Philosophy and History
For More Details please visit:
Relevant answer
Answer
Thanks for the response.
  • asked a question related to Language
Question
4 answers
Are people more likely to mix up words if they are fluent in more languages? How? Why?
Relevant answer
Answer
Certainly! A person who is fluent in more than one language is more likely to code-switch and mix words from different languages into her L1. Language users, consciously or unconsciously, seek what is easiest for themselves. Reasons for this interference vary:
1/ Similarities in pronunciation, grammar, and vocabulary among language systems such as French, English, and Spanish play an important role in a multilingual society. Where more than one language is known for historical reasons, mixing words becomes common when people communicate with speakers of other languages. A person who is fluent in French may easily mix in French words when using English; the same happens to learners who mix in French words when writing or speaking English.
2/ Language dominance: a bilingual speaker who uses the second language all day at work and with colleagues may not be able to keep its words from slipping in when using her mother tongue at home.
3/ Prestige is another reason why people mix words. For example, in Algeria a person who uses French (second-language) words or sentences alongside Arabic is considered intellectual.
4/ Actually, language interference and code-switching occur even within the same language. For instance, a person who lives or works in an area far from home may be noticed because she uses different vocabulary and body language; the same happens when words from that variety are mixed in when she speaks her own language at home.
  • asked a question related to Language
Question
2 answers
I was listening to the BBC this morning (Jan 13, 2024); the guests were translators of literary books. The question addressed by the host of the program was: will AI replace translators? After a group of translators of scholarly novels and poems was interviewed, it became clear that these folks will not be replaced any time soon, no matter how good the AI technology might now appear to be. The reason? Context!
For example, take the expression ‘Trump Matters’. Now a simple interpretation of this might be: “Yes, Trump is a human being and like all human beings he matters.” But someone who has absorbed lots of current affairs as it pertains to the United States might interpret it as a play on words based on the expression ‘Black Lives Matter’. If so, this introduces a whole new series of complexities for a translator. First one must understand that ‘Black Lives Matter’ is a movement in the United States that responded to the death of an African American, George Floyd, who was choked to death by a police officer. Within this context, the term ‘Trump Matters’ cannot be translated using the simple formulation and the translator must be familiar with the movement and with the intentions of the person who coined the term ‘Trump Matters’, who believes that he matters because he and his supporters are trying to suppress the history of the African American experience in the US, which is something that the Ku Klux Klan did throughout the United States within the period of American Reconstruction and beyond (1865-1960).
Neuroscientists now understand that objects/words are stored in the brain according to their context (Lu and Golomb 2023). Furthermore, when the neocortex/hippocampal complex is damaged one cannot learn new words (Corkin 2002); following cerebellar damage one cannot learn new movements as triggered by a declarative, conscious context such as an embedded word (Tehovnik, Hasanbegović, Chen 2024). Finally, following destruction of the language areas of the neocortex, one is (forever) ‘blind’ to all words and phrases for both their reception and production (Kimura 1993; Penfield and Roberts 1966) even if the cerebellum remains intact.
In short, AI will not be replacing the human brain anytime soon due to the problem of storing ‘context’. It is the storage of context along with the object that makes an individual’s recollection of history unique, which means Einstein, Kasparov, and Pelé cannot easily be converted from one to the other, brain-wise. So, all the nonsense of hooking up different brains to transfer their experiences (e.g., Pais-Vieira, Nicolelis et al. 2013) is just that: nonsense (see Tehovnik and Teixeira e Silva 2014).
Relevant answer
Answer
Until AI systems learn to determine meaning, translators can sleep peacefully. This is unachievable within a 5-10 year horizon.
  • asked a question related to Language
Question
3 answers
No one has the mental capacity to know all languages. Additionally, the more languages one is fluent in, the more likely that individual is to mix up words. Thus, knowing enough languages for survival is optimal, while artificial intelligence could, and potentially will, bridge language barriers. Of course, knowing three or more languages is somewhat of an advantage.
Relevant answer
Answer
Sure, focused study helps to reveal many particular points of strength in a language.
  • asked a question related to Language
Question
6 answers
this is what they say on etymonline.com:
"late 14c., auctorisen, autorisen, "give formal approval or sanction to," also "confirm as authentic or true; regard (a book) as correct or trustworthy," from Old French autoriser, auctoriser "authorize, give authority to" (12c.) and directly from Medieval Latin auctorizare, from auctor (see author (n.))."
Relevant answer
Answer
That is the way. The first step is the addition of the suffix -ize, which is used to create verbs from adjectives and nouns, to the root 'author', creating the verb 'authorize' with the meaning of 'making something A ('author')'. After this, we add the negative prefix un-, which means 'the opposite or contrary action of V', creating 'unauthorize'. The same evolution follows the chain: digital > digitalize > undigitalize.
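The derivation chain in this answer can be sketched as two composable string operations. This is a toy illustration of my own (real English -ize suffixation involves spelling adjustments this sketch ignores):

```python
# Toy sketch of the derivational morphology discussed above.
def ize(stem):
    """Suffix -ize: derive a verb from an adjective/noun (spelling rules ignored)."""
    return stem + "ize"

def un(verb):
    """Prefix un-: derive the reverse/contrary action of a verb."""
    return "un" + verb

print(un(ize("author")))   # -> unauthorize
print(un(ize("digital")))  # -> undigitalize
```

Composing the two functions in order (suffix first, then prefix) mirrors the claimed historical sequence author > authorize > unauthorize.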
  • asked a question related to Language
Question
5 answers
Would it be possible to access similar studies, in commonly used languages other than English, on the relationship between title length and citations in academic articles?
Relevant answer
Answer
Hello Prof Metin Orbay
I have also heard (but do not know of proof) that if you phrase your article title as a question, it is less likely to be read and cited. This could be an old wives' tale!
  • asked a question related to Language
Question
3 answers
Our world has many kinds of languages. Some languages make important contributions to our lives; for example, from such a language we can learn many sciences and benefit from technology transfer. But for other languages we may not find any advantage in studying them, which can look like a waste of our time. What is your opinion on this topic?
Relevant answer
Answer
Thank you for your insight Rhianon Allen, and Victoria Sethunya.
  • asked a question related to Language
Question
3 answers
‘Entrance to courses is frequently restricted by high prerequisites in terms of prior academic performance (Arendt, Lange, & Wakefield, 1986; Crawford-Lange, 1985; Lange, 1987). This elitism is curious when one considers that it operates under the assumption that some students cannot learn a second language when virtually all students have achieved proficiency in a first language’ (Crawford & McLaren 2004, p. 141).
Should Higher Education institutes in native English-speaking countries request from Non-native English Speakers (NNES) English proficiency requirements for entry without mandating the same proficiency tests for Native English Speakers (NES)?
Some Higher Education institutes in native English-speaking countries require proof of proficiency from Non-native English-speaking individuals for entrance. There is no question that students need to communicate in the target culture language. However, these institutes enforce strict IELTS band scores for each language skill (reading, writing, speaking, and listening) from NNES but do not mandate that NES undertake the proficiency test. This assumes that NES are naturally skilled in reading, writing, speaking, and listening, whereas, in reality, not all NES have strong writing or reading skills.
Arguments to consider:
1) Some NNES might have exam anxiety, which puts them at a disadvantage when taking English proficiency tests.
2) Some topics in English proficiency tests are specific to NES cultures that NNES may be unfamiliar with.
3) NNES should have the opportunity to be accepted regardless of their English proficiency scores with options for prerequisite courses for improvement.
4) Different cultures have different writing styles. Language Tests assessors might not be familiar with these cultural differences, which may affect grading.
Relevant answer
Answer
English proficiency is required by virtually all institutions, with some differences from one institution to another depending on the discipline the student wants to study.
  • asked a question related to Language
Question
7 answers
As with le/la in French, der/die/das in German, and other languages, there are genders for words, and hence for articles, in some languages. Grammatical gender for words is a complete redundancy! Governments should officially abolish it as soon as possible so that people can also learn those languages more easily. One reason English almost became the universal language is that its words are genderless!
"It's an inheritance from our distant past. Researchers believe that Proto-Indo-European had two genders: animate and inanimate. It can also, in some cases, make it easier to use pronouns clearly when you're talking about multiple objects."
As Mark Twain once wrote in reference to German:
A person’s mouth, neck, bosom, elbows, fingers, nails, feet, and body are of the male sex, and his head is male or neuter according to the word selected to signify it, and not according to the sex of the individual who wears it! A person’s nose, lips, shoulders, breast, hands, and toes are of the female sex; and his hair, ears, eyes, chin, legs, knees, heart, and conscience haven’t any sex at all…
Relevant answer
Answer
Each language has its own rules and structure. Governments have nothing to do with this. It is language-specific. You cannot change it in a fortnight.
Regards
Mustapha Boughoulid
  • asked a question related to Language
Question
2 answers
The question of transliteration (transcription) from Ukrainian Cyrillic to Latin in scientific texts is something every Ukrainian researcher has faced. However, I could not find a perfect solution, so I am asking for your opinion.
Previously, for the transliterations of Ukrainian texts, I used transliteration rules from 27.01.2010 (http://ukrlit.org/transliteratsiia), which are known in Ukraine but not always understandable for foreigners. E.g. my previous surname was also transliterated using this standard (Дарія Ширяєва -> Dariia Shyriaieva, and to be honest, I do not know any foreigner who can read my name correctly using this official transliteration, especially the four vowels in the line "iaie"...)
So, it was not an ideal option, but I was used to it, and it is an accepted transliteration. That's why I defended this transliteration in my discussions with others.
However, I see that many people follow the ISO 9 standard (https://en.wikipedia.org/wiki/ISO_9) as an international standard, which also seems not ideal to me (at least the last version). Also, recently I found the mention of a new transliteration standard (quite a strange one!): "DSTU 9112:2021. Cyrillic-Latin transliteration and Latin-Cyrillic retransliteration of Ukrainian texts. Writing rules" (https://uk.wikipedia.org/wiki/%D0%94%D0%A1%D0%A2%D0%A3_9112:2021).
Could you please explain how you transliterate Ukrainian texts, which standard you use and why?
Thank you very much!
Dariia
Relevant answer
Answer
The same. As far as I understand, 2010 style is still used in official documents. Plus, despite the shortcomings, this is the simplest option, for which the standard Latin alphabet layout is enough.
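For anyone who wants to apply the 2010 rules programmatically, the core of the table can be encoded as a simple mapping. This is only a partial, illustrative sketch (the mapping below covers just a subset of letters, and the official standard also has positional rules, e.g. word-initial є -> "ye" versus "ie" elsewhere, which a character-by-character pass ignores):

```python
# Partial sketch of the 2010 official Ukrainian-to-Latin transliteration.
# Positional rules (word-initial ye/yi/yu/ya) and case handling are omitted.
TABLE = {
    "а": "a", "б": "b", "в": "v", "г": "h", "д": "d", "е": "e",
    "и": "y", "і": "i", "ї": "i", "й": "i", "к": "k", "л": "l",
    "м": "m", "н": "n", "о": "o", "п": "p", "р": "r", "с": "s",
    "т": "t", "у": "u", "ф": "f", "ш": "sh", "я": "ia", "є": "ie",
    "ю": "iu", "ь": "",
}

def transliterate(text: str) -> str:
    """Character-by-character transliteration (ignores positional rules)."""
    return "".join(TABLE.get(ch.lower(), ch) for ch in text)

print(transliterate("Ширяєва"))  # -> "shyriaieva" (case handling omitted)
```

Note that even this naive pass reproduces the "iaie" vowel run discussed above, which is exactly why the 2010 standard is hard for foreigners to read.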
  • asked a question related to Language
Question
9 answers
In the UAE, several Arabic dialects are used. I want to examine students' attitudes towards English, Standard Arabic, and the spoken dialect using the matched-guise technique. Which dialect should be used in the recorded dialect guise?
Relevant answer
  • asked a question related to Language
Question
6 answers
One of the answers would be sensory input, but I want to know what others think.
Relevant answer
Answer
Language comes naturally to every child. Every human being, irrespective of caste, creed, or religion, is a social animal who grows up in the company of a society and culture, and as a child grows, language enters the ear and mind automatically. It is natural that every human being comes to understanding through language.
This is my personal opinion.
  • asked a question related to Language
Question
10 answers
In Turkey, translation is used in the multiple-choice format in language proficiency exams. I wonder if there are any other examples around the world.
  • asked a question related to Language
Question
4 answers
Hi, everyone. :)
Language maintenance and language shift is an interesting topic. Speaking of Indonesia, our linguists note that as of 2022 Indonesia has 718 languages, and Indonesia really cares about preserving them.
Interestingly, language maintenance and language shift are also influenced by geographical conditions.
Accommodating 718 different languages, Indonesia is geographically an archipelago. As we move from island to island, language use contrasts sharply, and different languages come into contact.
Some literature states that language maintenance and language shift are strongly influenced by the concentration of speakers in an area.
So, regarding developments on the topic of language maintenance and language shift in relation to geographical conditions, to what extent have linguists made new breakthroughs on this issue?
I think that studying language maintenance and shift in relation to regions is like studying food availability or state territory, in that the area becomes the main factor in the language's defense.
I put this question to all linguists: do you have a new point of view on the keywords language, maintenance, and geography?
Kind regards :)
Relevant answer
Answer
Language maintenance is the maintenance of a language (usually an L1) despite the influence of external sociolinguistic forces (usually more powerful languages). Language shift is the shift, transfer, replacement, or assimilation of (usually) an L1 towards an L2, due mainly to external sociolinguistic forces influencing a speech community to move to a different language over time. This happens because speakers may perceive the new language as more prestigious, stabilized, and standardized than their (lower-status) L1. An example is the shift from first languages to second languages such as English.
A solution for language maintenance and protection from language shift rests on social networks.
Social network deals with the relationships contracted with others, with the community structures and properties entailed in these relationships (Milroy, 1978,1980 &1987)
It views social networks as a means of capturing the dynamics underlying speakers’ interactional behaviours and cultures.
The fundamental assumption is that people create their communities with meaningful framework in attaining stronger relationship for solving the problems of daily life.
Personal communities are constituted by interpersonal ties of different types, strengths, and structural relationships between links (varying in nature) but a stronger link can become the anchor to the network.
For close-knit network with strong ties
Such networks have the following characteristics, they are
  • Relatively dense = everyone would know everyone else (developing a common behavior and culture)
  • Multiplex = the actors would know one another in a range of capacities
Where do we find close-knit networks? In smaller communities, but also in cities, because of cultural and economic diversity, e.g. newer immigrant communities or highly educated individuals.
Functions:
  1. Protect interest of group
  2. Maintain and enforce local conventions and norms that are opposed to the mainstream -> linguistic norms, e.g. vernaculars, are maintained via strong ties within close-knit communities.
Network with weak ties
These networks have the following characteristics, they are:
  • Casual acquaintances between individuals
  • Associated with socially and geographically mobile persons
  • They often characterize the relations between groups
These lead to a weakening of close-knit network structure -> such networks are prone to change, innovation, and influence between groups, and may lead to language shift/transfer/replacement.
  • asked a question related to Language
Question
3 answers
Hello all. I hope you are always in good health.
In the current era, what factors are most influential in language maintenance or language shift?
Generally, language maintenance and language shift involve attitudes, bilingualism, number of speakers, regional concentration, genealogy, etc.
Share your experience here. :)
Relevant answer
Answer
The factors are diverse and include political, social, demographic, economic, cultural, linguistic, psychological and institutional support factors. They are demonstrated in this article
  • asked a question related to Language
Question
11 answers
From Hamlet: “What a piece of work is a man, how noble in reason, how infinite in faculties, in form and moving how express and admirable, in action how like an angel, in apprehension how like a god: the beauty of the world, the paragon of animals!”
From Herder’s On the Origin of Language (Abhandlung über den Ursprung der Sprache): “... we perceive to the right and to the left why no animal can invent language, why no God need invent language, and why man, as man, can and must invent language."
When Shakespeare and Herder use the word “man”, do they mean that every individual human being, or all of humanity acting collectively, is noble in reason (per Hamlet) or creates language (per Herder)? Do they use the word “man” as representative of humanity, or do they mean that every individual human being warrants admiration?
Relevant answer
Answer
Very much both. But genius is individual.
  • asked a question related to Language
Question
36 answers
Is it a problem of philosophy, language, physics, thermodynamics, statistical mechanics, or brain physiology? Or something else? Or beyond understanding?
A physiological approach is discussed by Joseph LeDoux (in The Deep History of Ourselves, 2020) among other authors. A physics orientation is considered in Deepak Chopra, Sir Roger Penrose, Brandon Carter (How Consciousness Became the Universe: Quantum Physics, 2017). David Rosenthal has written several books of philosophy about consciousness. And Bedau 1997 and Chalmers 2006. Which is the right conceptual reference frame? Or is more than one required?
Relevant answer
Answer
  • asked a question related to Language
Question
37 answers
Fellow psychologists and people of great curiosity, greetings! Please help a novice with this topic. I was asked my opinion on how well our words represent our true thoughts and beliefs, which left me wondering whether there is empirical evidence on the subject. As it is not my field, I had quite a hard time finding the right words for the search engine. It would be great if you could suggest a few readings or simply share your thoughts!
(There are no parameters so far; thus, it could be anything related to the topic: to what extent does language reflect attitude; what factors influence the truthfulness of words; when we change our attitude towards a certain topic, do our words adapt just as fast? etc.)
Thank you!
Steven
Relevant answer
Answer
There are cognitive skills that guide behavior. The deployment of these skills can be regarded as involving thought at various levels of awareness, including unconscious thought. Such know-how cannot be completely verbalized and indeed, some verbalization can interfere with the acquisition or exercise of the skill. Developers of AI drawing on human exemplars of expertise face this problem when they try to reduce skills to rules (rules are verbal) inasmuch as human experts often don't seem to employ a rule-based approach and even when they do invoke rules, their rules don't fully represent their modus operandi.
  • asked a question related to Language
Question
230 answers
What if, in addition to advancing “Artificial Intelligence”, we further investigated our “Natural Intelligence”?
for example, Natural Intelligence and Research in Neurodegenerative diseases.
While we are still at an early stage in answering some key questions about Natural Intelligence [NI] [such as what algorithms the mind uses] the rapidly advancing Artificial Intelligence [AI] has already begun to change our Daily Lives. Machine learning has brought to light remarkable potential in healthcare, facilitating speech recognition, clinical image analysis, and medical diagnosis. For example, there is a growing need for automation of medical imaging, as it takes a lot of time and resources to train an Expert Human Radiologist. Deep learning AI architectures have been developed to analyze medical images of the brain, lungs, heart, breast, liver, skeletal muscle, some of which have already been used in clinics to aid in disease diagnosis. Juana Maria Arcelus-Ulibarrena
Cfr.
This question refers to NATURAL INTELLIGENCE [NI], not to NATURALIST(IC) INTELLIGENCE.
Relevant answer
  • asked a question related to Language
Question
7 answers
I am interested in meaning-making practices associated with visual language and what that means for traditional curricula in the English-speaking Caribbean.
Relevant answer
Answer
In the Middle East, there is no interest in this topic
  • asked a question related to Language
Question
8 answers
Can anyone recommend a journal for submission? I am particularly looking for journals that (i) accept pieces in the 800 to 2000 word range, and (ii) that have no publication fees.
Relevant answer
Answer
Here is a free one (Academic Voices) that serves a similar function:
However, I was hoping for a journal that was roughly in the same ballpark as my grammatical topics.
  • asked a question related to Language
Question
10 answers
In a variety of Persian I am studying, it appears that, regardless of the stress position, all short (monomoraic) vowels are reduced to schwa in all open syllables. More clearly, all long (bimoraic) vowels are kept intact, and the short vowels have a surface representation only if they are the nucleus of a closed syllable. Has any research provided evidence of a language or a variety that fits a similar phonological pattern?
Any information would be greatly appreciated.
Relevant answer
Answer
Johan Schalin Thanks for the reply. I will read it with great interest.
  • asked a question related to Language
Question
9 answers
Dear Friends,
Greeting.
Happy New Year. I wish everybody a prosperous New Year.
I'm thinking of a project to check which sounds (phonetics) are lost or promoted when switching from one alphabet to another. For example, in the switch from Arabic letters to Latin in Turkey, did the set of Latin letters preserve all Turkish sounds? What are the advantages and/or disadvantages of such a switch?
Has such work been carried out anywhere?
Best Regards,
ABDUL-SAHIB
Relevant answer
Answer
Dear عائشة عبد الواحد thank you for the invaluable answer.
  • asked a question related to Language
Question
19 answers
Hi all. A project I'm working on involves the use of a two-way repeated measures ANOVA. The dependent variable is the transcriptional accuracy of sentences-in-noise (measured in proportions). The independent variables are accents of the sentences (2 accents) and visual primes (2 kinds of primes). The results show that there were significant main effects of primes and accents and a significant two-way interaction between primes and accents (F(1, 30)=9.97, p=0.004). However, as shown in the attached line chart, the two lines are almost parallel. Moreover, post-hoc paired-sample t-tests confirmed that participants' accuracy with accent2(Mean=0.77, s.d.=0.13) is significantly higher than accuracy with accent1(Mean=0.51, s.d.=0.18) in prime 1 condition, and similarly, participants' accuracy with accent2 (Mean=0.68, s.d.=0.13) is significantly higher than accuracy with accent1(Mean=0.31, s.d.=0.12) in prime 2 condition. Does this indicate that the main effects of accent and prime are not dependent on each other? If so, isn't this contradictory with the result suggesting significant interaction? Or is it that the occurrence of a significant 2-way interaction only requires that the difference between the group mean accuracies with accent 1 and 2 was smaller in prime 1 condition than in prime 2 condition, which in this case is true.
Thank you in advance!!!
Relevant answer
Answer
This is a good example of why it's important to not place too much importance on p-values.
The significant p-value tells you that the statistical procedure is able to identify an effect of the interaction against the noise of the variability of the data * . Looking at the plot, your eyes, too, tell you that the slopes are different relative to the variability in the data. The error bars on the red points overlap, and those on the blue points do not.
But this significant result does not tell you that the interaction effect is large, nor that it is of any practical importance. The plot is very helpful for the reader to understand the results. The difference in slopes is small relative to the difference in Accents, and probably relative to the difference in Primes. How you interpret these observations for your reader is up to you.
__________
* This description is not technically correct, but hopefully gives you a sense of the point I'm trying to make.
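To make this concrete, the size of the interaction can be read straight off the cell means reported in the question as a "difference of differences" (this sketch simply assumes those quoted means):

```python
# Cell means as reported in the question (assumed accurate).
means = {
    ("prime1", "accent1"): 0.51, ("prime1", "accent2"): 0.77,
    ("prime2", "accent1"): 0.31, ("prime2", "accent2"): 0.68,
}

# Accent effect within each prime condition.
accent_effect_p1 = means[("prime1", "accent2")] - means[("prime1", "accent1")]
accent_effect_p2 = means[("prime2", "accent2")] - means[("prime2", "accent1")]

# The interaction is how much the accent effect changes across primes.
interaction = accent_effect_p2 - accent_effect_p1

print(f"accent effect under prime 1: {accent_effect_p1:.2f}")
print(f"accent effect under prime 2: {accent_effect_p2:.2f}")
print(f"interaction (difference of differences): {interaction:.2f}")
```

The interaction here (about 0.11) is small relative to the main effects (about 0.26 to 0.37), which is exactly why the lines look nearly parallel despite the significant p-value.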
  • asked a question related to Language
Question
56 answers
Is the use of English in scientific articles a real need for an international working language, or a sign of long-lasting Colonialism that keeps limiting the development of perspectives emerging from non-native English speaking cultures?
Do we really need to publish in English? I think we do unless we find another international working language to communicate with colleagues, and people in general, who use a language different than ours. Remember that, throughout history, scholars have always found one or a small group of working languages to communicate with each other (Latin, German, French, among others).
But, now that we use English... do we have alternatives to communicate our findings in our own languages? Some people say we don't, because we have to invest every second of our time publishing in English. Others say that we must find a way to save some time to publish in our own language, in order to better develop our ideas and to better communicate with our own societies. There must be other perspectives out there... please let us know what you would do to reconcile the different alternatives and bring solutions into practice, and also tell us what your institutions are doing to address this issue.
Framework Readings (feel free to suggest more. I´ll keep adding):
Relevant answer
Regardless of what English means in terms of colonialism, I am glad that the language of science has been standardized. Imagine doing a literature search only to find relevant studies in more than 10 languages... Is it necessary to discuss whether we need to migrate to another language? If your findings are of national relevance, you are absolutely free to publish your results in a native-language journal. On the other hand, if you are aiming at an international audience, English-language journals are the way to go.
  • asked a question related to Language
Question
12 answers
My friend is looking for coauthors in the Psychology & Cognitive Neuroscience field. Basically, you will be responsible for paraphrasing, creating figures, and collecting references for a variety of publications. Please leave your email address if you are interested. Ten hours a week are required, as there are a lot of projects to be done!
Relevant answer
Answer
Will message you.
  • asked a question related to Language
Question
9 answers
Hello,
We are working on a review regarding the relationship between language and the multiple-demand network. You will be responsible for addressing the reviewers' criticisms. Please leave your email address if you are interested.
Best,
W
Relevant answer
Answer
This would be a great question to post in our new free medical imaging question and answer forum ( www.imagingQA.com ). There are already a few fMRI questions on there and a number of fMRI users and experts in the community. If useful, please feel free to open a new topic at the link below :
  • asked a question related to Language
Question
19 answers
Most teachers agree that teaching the culture of native-speaking countries is valuable, but how MUCH should this be done?  Do you have a percentage in mind or other ways of saying how much of the course should be about culture?
And how does this fit in with the multi-cultural or meta-cultural perspective and rationale for learning the other language?
Has your perspective changed over time?
Relevant answer
Answer
Language and culture are two inseparable entities; therefore, language learning is at once cultural learning. Mastery of the linguistic elements alone does not guarantee that one will be able to communicate through a language. Mastering the cultural element is a must, and this recognition has cultivated an awareness among foreign-language teaching experts that language and culture are inseparable.
  • asked a question related to Language
Question
13 answers
I have a research project in which I should analyze the types of code-switching. However, I can't use Poplack's theory because my instructor said that it is too old. Any suggestions for newer theories?
Relevant answer
Answer
García's or Canagarajah's concepts of translanguaging might help you.
  • asked a question related to Language
Question
3 answers
Hello,
Are there any studies in linguistics about the average information density per character according to language (in the written form)?
Actually, I'm looking for data (rankings, for instance) on the average information density per character (or for 100, 1000, etc. characters) for languages like English, French, Japanese, etc. (in their written, not spoken, form).
Thank you very much.
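As a rough illustration of what "information density per character" can mean operationally, a unigram Shannon entropy per character can be estimated from any text sample. This is only a toy sketch; real cross-language comparisons use far larger corpora and better language models than a unigram count:

```python
import math
from collections import Counter

def entropy_per_char(text: str) -> float:
    """Unigram Shannon entropy (bits per character) of a text sample."""
    counts = Counter(text)
    n = len(text)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

print(round(entropy_per_char("abab"), 2))  # 1.0 (two equiprobable symbols)
```

With two equiprobable characters the estimate is exactly 1 bit per character; richer alphabets and skewed distributions give the between-language differences the question asks about.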
  • asked a question related to Language
Question
20 answers
I was trying to determine whether there are differences in the frequencies of words (lemmas) in a given language corpus starting with the letter K versus the letter M: some 50,000 words starting with K and 54,000 starting with M altogether. I first tried using the chi-square test, but the comments below revealed that this was an error.
Relevant answer
Answer
Did you try Python word count?
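Before any significance test, the raw counts behind such a comparison can be tallied in a few lines. This toy sketch uses an invented word list in place of the real lemma corpus:

```python
from collections import Counter

# Stand-in for the real lemma list (illustrative data only).
corpus = ["kite", "map", "kettle", "morning", "key", "mask", "kite"]

# Count how many entries begin with each letter.
by_initial = Counter(word[0] for word in corpus)

print(by_initial["k"], by_initial["m"])  # 4 3
```

For the real question, run the same tally over the lemma list (for type counts) or over running text (for token counts), then feed the two totals into whatever test the statisticians recommend.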
  • asked a question related to Language
Question
27 answers
Google services, which went from the best search engine to the backbone of the internet, are very useful for finding information, but sometimes the language in which that information is found is not the researcher's native one; for this reason, translators are used to facilitate understanding.
Relevant answer
Answer
Google Translate is free, fast, and pretty accurate. Thanks to its massive database, the software can deliver decent translations that can help you get the main idea of a text...
  • asked a question related to Language
Question
13 answers
Hello! I am looking for Spanish, English and Chinese native speakers to participate in my final survey for my PhD thesis.
This is the direct link.
Thank you for your participation.
Relevant answer
Answer
Interesting
  • asked a question related to Language
Question
4 answers
We are developing a test for ad-hoc and scalar implicatures (SI) and are showing 3 images (of a similar nature) to the participants: an image, the image with 1 added item, and the image with 2 added items.
E.g. a plate with pasta; a plate with pasta and sauce; a plate with pasta, sauce, and meatballs.
A question for an ad-hoc implicature is: My pasta has meatballs; which is my pasta?
A question for an SI is: My pasta has sauce or meatballs; which is my pasta? (The pasta with sauce is the target item, since we are testing pragmatic implicatures, where 'or' means 'not both'.)
The item that causes many difficulties in making up questions is the image without any added items, i.e. the plate with plain pasta. How do we phrase the question so that it elicits this image as the target response, without using overly complex syntax?
Negation ("My plate has no sauce or meatballs", "My plate has only pasta, no sauce and no meatballs") seems like a complex structure to introduce as a counterbalance to the other types of items.
Has anyone tested something similar, without negation? We would be grateful for any kind of tips and hints.
Relevant answer
Answer
Could you just say: my plate has plain pasta?
  • asked a question related to Language
Question
7 answers
We are attempting a study to explore the prosodic features of verbal irony read by Chinese EFL learners. We want to figure out:
1. the prosodic features of verbal irony read by Chinese learners;
2. the differences in prosodic features of verbal irony read by Chinese learners versus native speakers;
3. whether context (high and low) influences the reading of verbal irony.
Relevant answer
Answer
(LLS= Language Learning Strategies).
  • asked a question related to Language
Question
3 answers
Where can I get code for the K-prototypes algorithm for mixed attributes? Has anyone implemented it in any language?
Relevant answer
Answer
I recently found an implementation of kprototypes in Python.
Besides, here is a useful example of kprototypes.
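If no ready-made package fits, the core of K-prototypes, Huang's (1997) mixed dissimilarity, is easy to sketch yourself: squared Euclidean distance on the numeric attributes plus gamma times the number of categorical mismatches. This shows only the distance measure, not the full assignment/update clustering loop, and the gamma weight here is an arbitrary placeholder:

```python
def kproto_distance(x, y, num_idx, cat_idx, gamma=1.0):
    """K-prototypes dissimilarity between two mixed-attribute records."""
    numeric = sum((x[i] - y[i]) ** 2 for i in num_idx)       # squared Euclidean
    categorical = sum(x[i] != y[i] for i in cat_idx)          # mismatch count
    return numeric + gamma * categorical

a = (1.0, 2.0, "red")
b = (2.0, 2.0, "blue")
print(kproto_distance(a, b, num_idx=[0, 1], cat_idx=[2]))  # 2.0
```

In practice gamma is usually tuned to the spread of the numeric attributes so that neither attribute type dominates the clustering.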
  • asked a question related to Language
Question
5 answers
Or not.
Harry Jerison in his 1991 book Brain Size and the Evolution of Mind, at p. 89 has:
Mind is a necessary brain adaptation that organizes otherwise unmanageable amounts of neural information into a representation of the external world.
Is Jerison right?
Relevant answer
Answer
Neurons are not the best level of abstraction when speaking of mind. You don't talk about myocytes when discussing soccer. Concepts, symbols, and the various types of interactions between them are more appropriate building blocks of mind.
Regards,
Joachim
  • asked a question related to Language
Question
8 answers
I am looking for any resources which may be useful in a study I am conducting on the impacts that language may have on our perception of crimes. I will be using headlines which convey a particular crime in a variety of lights; one which may appear to justify the perpetrator's actions, and one which portrays the crime in a neutral, non-biased way. I am looking for sources/previous studies which may back up this idea.
Relevant answer
Answer
Yes, it can. I would also suggest using CDA methodology. A word's dictionary and contextual meanings may be used as a starting point.
  • asked a question related to Language
Question
2 answers
How to track language change?
Relevant answer
Answer
My offering has been in relation to: "The Surname - Where has it gone?"
Has it died or simply become obsolete?
My observation is in the medical context when conversing with patients
Using first names outside of the confines of family and friends creates an erroneous sense of intimacy and social equivalence, which seems to pervade everyday professional and business activities but may have some limitations in the patient-doctor relationship.
Guy Walters' offerings in the Nov 2020 issue of 'The Spectator' magazine may be more generally applicable.
  • asked a question related to Language
Question
19 answers
What data or physics supports innateness or on the contrary, the idea that language is a creation of society? Historically, from Herder through David Hume to Jespersen, Sapir, Whorf, Zipf , language was considered to have been created by societies. Beginning around 1960 the idea of language as a genetically innate human capacity began to have influence. Who is right?
Relevant answer
Answer
Dear Robert Shour, I agree with Prof. Farangis Shahidzade post.
  • asked a question related to Language
Question
12 answers
I have been studying Zen in general and koans in particular for a while, and their applications in BUSINESS.
The formulation of these koans at first glance seems absurd and an austere waste of time (at least it did to me at first), but I suddenly started to see the logic behind them.
My troubles at the moment are:
1) How would I generate such koans, where my aim would be to seek answers that satisfy two divergent goals, tasks, concepts, etc.?
And second:
2) If I somehow manage to generate such a thing, how would I present it to my audience? A statement, a question, a puzzle, a riddle, anything else?
The above is the object of my next publication, and it seems my brain is too small to handle it; therefore, I am asking for your help in generating some koans for the business world.
Many thanks in advance
Relevant answer
Answer
Can you give me more clarification on this subject
  • asked a question related to Language
Question
5 answers
The Publication Manual of APA (7th edition) has a very useful chapter on bias-free language. I would like to know if you've come across such chapters or sections in other publication manuals or style guides.
Relevant answer
Answer
My pleasure, Jakob.
  • asked a question related to Language
Question
22 answers
I would love to hear what people have come across in relation to language accessibility in publications. Ideally the journal focuses on entomology and/or biodiversity, but I am also just curious, on a broader scale, whether language-friendly journals exist.
Relevant answer
Answer
Dear Erin Krichilsky I'm just wondering why you are looking for "a journal that accepts publications in two languages or at least is bilingual friendly". What is it good for to publish in different languages? We used to publish our research papers in German back in the 1970's and 1980's, but then we realized that the papers were not read by many researchers abroad. Then we switched to English to make sure that our papers are read worldwide (and eventually cited).
  • asked a question related to Language
Question
15 answers
What are racism's effects on language acquisition?
Whether on a personal or institutional level, please share your experiences.
Thank you.
Relevant answer
Answer
Racism, like any other form of discrimination, such as untouchability or other marginalization, invariably affects language acquisition. In the case of academic language acquisition, such as teaching English as a second language, there are observed and marked indications that the degree of language learning is not the same as for "majority" or "mainstream" learners.
  • asked a question related to Language
Question
3 answers
This is the procedure I have been trying so far, but I could not get it to work.
As per my understanding, here are some definitions:
- lexical frequencies, that is, the frequencies with which correspondences occur in a dictionary or, as here, in a word list;
- lexical frequency is the frequency with which the correspondence occurs when you count all and only the correspondences in a dictionary.
- text frequencies, that is, the frequencies with which correspondences occur in a large corpus.
- text frequency is the frequency with which a correspondence occurs when you count all the correspondences in a large set of pieces of continuous prose ...;
You will see that lexical frequency produces much lower counts than text frequency, because in lexical frequency each correspondence is counted only once per word in which it occurs, whereas text frequency counts each correspondence multiple times, depending on how often the words in which it appears occur.
When referring to the frequency of occurrence, two different frequencies are used: type and token. Type frequency counts a word once.
So I understand that lexical frequencies probably deal with types, counting each word once, while text frequencies deal with tokens, counting words multiple times in a corpus; therefore, for the latter, we need to take into account the frequency of the words in which the phonemes and graphemes occur.
So far I managed phoneme frequencies as it follows
Phoneme frequencies:
Lexical frequency: (single count of a phoneme per word / total number of counted phonemes in the word list) * 100 = lexical frequency % of a specific phoneme in the word list.
Text frequency is similar, but I fail when trying to incorporate the word frequencies from the word list: (all counts of a phoneme per word / total number of counted phonemes in the word list) * 100, versus (sum of the word frequencies of the target words that contain the phoneme / total sum of the frequencies of all the words in the list) = text frequency % of a specific phoneme in the word list.
Please help me to find a formula for calculating the lexical frequency and the text frequency of phonemes and graphemes.
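One way to pin down the two formulas is to compute them directly. The sketch below is a minimal illustration (the words, transcriptions, and corpus frequencies are invented): lexical frequency counts each word type once, while text frequency weights every word by its corpus frequency.

```python
from collections import Counter

# Toy word list: phonemic transcription plus a made-up corpus frequency.
words = {
    # word: (phonemes, corpus frequency)
    "cat":  (["k", "ae", "t"], 120),
    "kick": (["k", "ih", "k"], 45),
    "tack": (["t", "ae", "k"], 10),
}

def lexical_frequency(words):
    """Type-based: each word counted once, regardless of corpus frequency."""
    counts = Counter()
    for phonemes, _freq in words.values():
        counts.update(phonemes)
    total = sum(counts.values())
    return {p: 100 * c / total for p, c in counts.items()}

def text_frequency(words):
    """Token-based: each word weighted by its corpus frequency."""
    counts = Counter()
    for phonemes, freq in words.values():
        for p in phonemes:
            counts[p] += freq
    total = sum(counts.values())
    return {p: 100 * c / total for p, c in counts.items()}

print(lexical_frequency(words)["k"])  # share of /k/ across word types
print(text_frequency(words)["k"])     # share of /k/ weighted by corpus frequency
```

With this setup the two percentages genuinely diverge for /k/, because "kick" contains the phoneme twice and the three words differ in corpus frequency.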
Relevant answer
Answer
Hello,
To calculate the lexical frequency of simple or complex units, WordSmith or AntConc is usually used.
Regards
  • asked a question related to Language
Question
874 answers
Do you know any aphorisms, old sayings, parables, folk proverbs, etc. on science, wisdom and knowledge, ...?
Please, quote.
Best wishes
Relevant answer
Answer
All too often a clear conscience is merely the result of a bad memory.
  • asked a question related to Language
Question
5 answers
We are conducting a research about the language use of Manobo students on social media specifically facebook, twitter and instagram. Your input could surely enhance the said endeavor.
Thank you very much!
Relevant answer
Answer
These studies can be found on many websites
  • asked a question related to Language
Question
4 answers
My question is connected to the rather unclear issue of error correlation that many scholars encounter while conducting SEM analysis. Scholars quite often report procedures of correlating the error terms to enhance the overall goodness of fit of their models. Hermida (2015), for instance, provided an in-depth analysis of this issue and pointed out that there are many cases in social science studies in which researchers do not provide appropriate justification for the error correlation. I have read in Harrington (2008) that measurement errors can be the result of similar or near-identical meanings of the words and phrases in the statements that participants are asked to assess. Another option to justify such a correlation is connected to longitudinal studies and an a priori justification for the error terms, which might be based on the nature of the study variables.
In my personal case, I have two items with Modification indices above 20.
     lhs  op  rhs      mi     epc  sepc.lv  sepc.all  sepc.nox
12  item1 ~~  item2  25.788  0.471    0.471     0.476     0.476
After correlating the errors, the model fit appears just great (the model consists of 5 latent factors of the first order and 2 latent factors of the first order; n = 168; around 23 items). However, I am concerned with how to justify the error term correlations. In my case, the wording of the two items appears very similar: "With other students in English language class I feel supported" (item 1) and "With other students in English language class I feel supported" (item 2) (Likert scale from 1 to 7). According to Harrington (2008), this is enough to justify the correlation between the errors.
However, I would appreciate any comments on whether similar wording of questions is a sufficient justification for error correlations.
Any further real-life examples of item/question wording, or articles on the same topic, are also much appreciated.
Relevant answer
Answer
Dear Artem and Marcel,
there are two problems with post-hoc correlated errors:
1) The error covariance is causally unspecific (as any correlation is). If one possibility is true, namely that both items additionally measure an omitted latent variable, then estimating the error covariance will make the model fit, but the omitted latent variable is still not explicitly contained in the model. This may be unproblematic if that latent variable is just the response reaction to a specific word contained in both items, but sometimes it may be a substantive latent variable missing from the model, whose omission will bias the effects of the other latent variables it contains.
2) While issue #1 still presumes that the factor model is correct (but that the items *in addition* share a further cause), the need to estimate error covariances can appear as a sign of a fundamental misspecification of the factor model: if the factor model is too simple (e.g., you test a 1-factor model whereas the true structure contains more factors), then the only proposal the algorithm can make is to estimate error covariances. These can be interpreted as the valves in a technical system: opening the valves will reduce the pressure but not solve the problem. On the contrary, your model will fit, but it is worse than before.
One simple ad-hoc test is to estimate the error covariance and then include further variables in the model which correlate with (or receive/emit effects from/on) the latent target variable. You will often see that the model which had fitted a minute ago (due to the estimation of the error covariance) again shows a substantial misfit, as the factor model is still wrong and cannot explain the new restrictions and correlations between the indicators and the newly added variables.
Please note that the goal in CFA/SEM is not to get a fitting model! The (mis)fit of the model is just a tool to evaluate its causal correctness. If data fit were the essential goal, then SEM modeling would be easy: just saturate the model and you always get a perfect data fit.
One aspect is the post-hoc justification of error covariances: I remember once reading MacCallum (I think it was him), who wrote that he knows no colleague who would not have enough fantasy to come up with an idea explaining a post-hoc need for an error covariance. :)
Hence, besides the causal issues noted above, there are statistical problems with regard to overfitting and capitalization on chance (as with any other post-hoc change to the model). That is: better look at your items before doing the model testing and think about whether there could be reasons that would lead to an error covariance.
One example is the longitudinal case, where error covariances between the same items are expected and are included from the beginning.
If you have to include the error covariances post hoc, carefully consider other potential reasons (mainly the more fundamental issues noted in #2) and replicate the study. But replication in a causal inference context should always imply an enlargement of the model (i.e., including new variables).
Best,
Holger
  • asked a question related to Language
Question
23 answers
Dear Research Colleagues,
Are you familiar with studies on language acquisition in early simultaneous trilingual children that show whether there are any delays in their language development? I am familiar with several studies on early simultaneous bilinguals indicating that such speakers are not significantly delayed in language acquisition. I wonder if trilinguals differ from mono- and bilinguals in how fast they acquire their languages.
I will appreciate your feedback.
Thank you.
Pleasant regards,
Monika
  • asked a question related to Language
Question
6 answers
By using examples of sonnets from the source language.
Relevant answer
Good answer Soma Chakraborty
  • asked a question related to Language
Question
30 answers
I'm doing a comparative study on social media language used by native and non-native speakers, with special reference to Instagram. I am planning on using discourse analysis. What is your take on this? Could anyone please suggest what else could be used?
Relevant answer
Answer
I wonder why you are carrying out this study?
What questions are you trying to answer, through examining social media language from different speakers in this way?
Why are these questions interesting?
If you are clear about your own answers to questions like these, you will be better placed to judge which analytical methods are likely to be appropriate.
  • asked a question related to Language
Question
15 answers
Dear Colleague,
We would be grateful if you could respond to the questionnaires and also distribute them among your colleagues, students, and networks.
We would like to ask if you would be so kind as to complete the following online questionnaires for a cross-cultural research study designed to investigate the relationship between CALL literacy and the attitudes of language teachers and students towards Computer-Assisted Language Learning (CALL).
Teachers and students who have previously answered the questionnaire say that it took about 10-20 minutes to do so. Your help would be very much appreciated.
Rest assured that all the personal data provided through the questionnaire will be kept strictly confidential in our reports. Your personal data will not be disclosed nor used for any purpose other than educational research.
As a cross-cultural study, I need a good number of data from different countries. Please circulate this post through your networks.
Your input is really important for our study.
If you are both a teacher and a student please respond to both questionnaires.
Thank you in advance for your help and cooperation.
Regards,
Dara Tafazoli
Mª Elena Gómez Parra
Cristina A. Huertas Abril
University of Cordoba, Córdoba, Spain
Relevant answer
Answer
Gladly! And I'll convey your questionnaire to my students as well.
It is an interesting questionnaire, although quite long, but I hope your project will benefit from it.
I wish you good luck with your research!
  • asked a question related to Language
Question
3 answers
Software engineering: software effort estimation.
Relevant answer
Answer
There are several methods of software development effort estimation which are based on different size metrics, such as Function Points, Object Points, and Use Case Points. The methods based on these metrics use different environmental and technical factors which influence software development effort. I want to do research on Use Case Point-based software development effort estimation, so I need a dataset of industrial software projects in which the characteristics of the projects are given in terms of Use Case Point metrics.
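As background, the Use Case Points metric mentioned above can be computed with Karner's published weights and adjustment factors (actor weights 1/2/3, use case weights 5/10/15, TCF = 0.6 + 0.01*TFactor, ECF = 1.4 - 0.03*EFactor, and a default 20 person-hours per UCP). The project counts and factor ratings in this sketch are hypothetical:

```python
# Unadjusted Actor Weights: simple=1, average=2, complex=3
uaw = 2 * 1 + 1 * 2 + 1 * 3        # 2 simple, 1 average, 1 complex actor

# Unadjusted Use Case Weights: simple=5, average=10, complex=15
uucw = 3 * 5 + 4 * 10 + 2 * 15     # 3 simple, 4 average, 2 complex use cases

uucp = uaw + uucw                  # unadjusted use case points

# TFactor/EFactor are the weighted sums of the 13 technical and
# 8 environmental factor ratings; the values here are invented.
tfactor, efactor = 30.0, 17.5
tcf = 0.6 + 0.01 * tfactor         # technical complexity factor
ecf = 1.4 - 0.03 * efactor         # environmental complexity factor

ucp = uucp * tcf * ecf
effort_hours = ucp * 20            # Karner's default productivity factor
print(round(ucp, 2), round(effort_hours, 1))
```

A dataset for UCP-based estimation research would need exactly these ingredients per project: the actor and use case counts by complexity class, the 21 factor ratings, and the actual effort spent.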
  • asked a question related to Language
Question
3 answers
In his important poem "Little Gidding", T. S. Eliot makes the soul of a dead man say of his, and by implication all our, lives:
Since our concern was speech, and speech impelled us
To purify the dialect of the tribe
And urge the mind to aftersight and foresight,
….
DOES human speech “urge the mind to aftersight and foresight”?
If so, that seems to me very important!
Relevant answer
Answer
Human speech does indeed urge the mind to foresight and aftersight, but only up to a point, and in my view that is not very far compared with information taken into the brain visually. The latter is the main channel for the creation of memory and thought, through the conversion of images into stored holographic structures which are capable of being re-invoked given sufficient stimulus from later experience. Apologies for my somewhat technical approach to this interesting statement by Eliot, but I wrote a paper in November 2019 (attached) which seems somewhat relevant to it. It is a bone of contention with me how inadequate language is for conveying an understanding of the way the minds of others work in the accurate transmission of information. Unfortunately, it is one of the most convenient means we currently have at our disposal for transmitting understanding from one to another, but this will change when we know more about the operation of mind and memory, a subject on which I have been toiling for far too long.
Nick
  • asked a question related to Language
Question
7 answers
I'm often very skeptical about the language decisions and policies issued by governments or self-proclaimed language authorities to control the way people use language. Nevertheless, I'm just curious to know if there is evidence for the (partial or full) success of such a top-down approach in some contexts.
Relevant answer
Answer
Hi Moustafa Amrate. This is a really thought-provoking question. In the past, that was certainly the case in many different European contexts; see, for example, what the prescriptivists tried to do with the English language (Dryden, for instance, wanted to "fix" the language) and the mostly failed attempts of the Royal Society to reform the language in the 17th century. Today, three examples of academies that are regarded as authorities on how language should be used are: the Académie française in France, which gives prescriptive rules on what good French should be, even though I don't think it has measures to enforce its recommendations (but I'm not sure); the Royal Spanish Academy in Spain, which gives recommendations for standardisation across the many Spanish-speaking countries (with controversies about what "real" Spanish is, considering that Standard Spanish is simply a regional variety, Castilian, something that is true of EVERY standard); and, in Italy, the Accademia della Crusca, although I suspect it is not formally involved in language planning policies as in the other contexts.
It is no coincidence that all the languages I am referring to here are Romance languages, with a long history of well-established prescriptivism.
Hope this may be useful; this is not my area of study, just some comments :)
  • asked a question related to Language
Question
10 answers
In the Kenyan context, mother tongue is regarded as the language of the school catchment area. First language is regarded as the language acquired before any other. Some vernaculars are both mother tongues and first languages.
Relevant answer
Answer
International Mother Language Day 21 February
  • asked a question related to Language
Question
14 answers
Can you think of a research work, or a way, to prove that "a certain bag of words has more value/worth/creativity than another set of words"?
For example: "enjoy" is more proper than "chill", or "observe" carries more weight than "see".
Relevant answer
Answer
I am not sure that you can prove that one word is better than another, but you can set criteria to judge against.
For example, in the context in which you use it, "chill" is slang and "enjoy" is not. "Observe" is more elevated vocabulary than "see." Elevated vocabulary that is not slang could be among your criteria for evaluating language.
But which vocabulary is best depends on the situation. In some situations, "chill" may be arguably better, or more appropriate, than "enjoy."
  • asked a question related to Language
Question
20 answers
Some philosophers/mathematicians (e.g., Tarski) laid some emphasis on construing a language that does not admit of contradictions, and were even ready to pay the price (if you want to call it thus) of excluding semantic terms and the like. I came to ask myself if it is actually a problem (rather than an advantage) of a language that it is able to express many things (including contradictions). What do you think?
Relevant answer
Answer
Paraconsistent logics embrace contradictions to a certain extent rather than exclude them.
"A primary motivation for paraconsistent logic is the conviction that it ought to be possible to reason with inconsistent information in a controlled and discriminating way. The principle of explosion [= the classical rule than any proposition whatsoever follows from a contradiction] precludes this, and so must be abandoned." Wikipedia
  • asked a question related to Language
Question
3 answers
For instance, language A uses different verbs for different sexes, whereas language B does not. Thus, language A can save some words when describing the sex of the subject, compared with language B.
Relevant answer
Law cannot preserve languages in any absolute way; it is rather the individual's maintenance of their own language, and their commitment to communication, that does.
  • asked a question related to Language
Question
3 answers
Hi, I am a German university student (business administration and psychology) and I am going to write my bachelor thesis.
I would like to research a correlation between stress and language. For the following points I need your help:
- different options for stress induction
- or unsolvable tasks for stress induction
- or questionnaires for stress induction
I know about the Trier Social Stress Test and the socially evaluative cold-water stress test, so I need other options. The best option for me would be a computer-aided stress test.
I hope you can help me and make my student life a little easier :-).
Best regards,
Timo Köhler
Relevant answer
Answer
You might like to try a variation of the unsolvable tracing task used by Roberts et al (2019). It's more typically referred to as a 'frustration tolerance task' or an 'ego depletion paradigm' than a stress induction per se, but it serves a similar purpose. Other challenging cognitive tests (e.g. serial sevens task) can also be used in the same way.
Source:
Roberts, A. C., Yap, H. S., Kwok, K. W., Car, J., Chee-Kiong, S. O. H., & Christopoulos, G. I. (2019). The cubicle deconstructed: Simple visual enclosure improves perseverance. Journal of Environmental Psychology, 63, 60-73.
  • asked a question related to Language
Question
8 answers
You tend to see papers being published which contain a lot of grammatical and language errors.
These errors can make the papers very difficult to read and can in some cases really hurt the credibility of the work. Some journals do a good job at correcting these things, but some journals do nothing.
Should they do a better job or is it entirely up to the authors to fix these things? Once the paper is published, the authors are typically required to transfer the copyright, so I feel like it's the journals' obligation to fix these things.
The same questions can be asked about figure quality.
It would be great to hear from editors, both academic and professional.
Relevant answer
Answer
I agree with all the above comments: it is both the writers' and the journals' responsibility to see to it that published papers look and read professionally. There are third and fourth lines of responsibility, however: graduate programs ought to be teaching professional (i.e., journal-quality) writing, and professors ought to be mentoring their students in writing, both by demanding solid writing on exams, term papers, theses, and dissertations (even going so far as lowering grades for less-than-professional writing, and modeling such writing in their comments) and by making students rewrite their theses and dissertations until they are up to snuff.
  • asked a question related to Language
Question
2 answers
What is the best Instructional Design Model (IDM) to follow when designing an AR learning app for language acquisition for non-speakers aged 23-30?
Relevant answer
Answer
Hi Eshrak,
This is an interesting question and, as with any interesting question, there is no straightforward answer to it. The ID models that may (partially) fit are many, ranging from the "canonical" instructional systems design ADDIE model (needs analysis, design, development, implementation, evaluation) to rapid prototyping (AKA rapid application development) and D3 (data-driven development); the latter takes full advantage of data and machine learning. In 2017, van Merriënboer and Kirschner published a book on the instructional design of training programs for complex learning, covering courses, curricula, and environments (e.g., AI). This is the full citation: Van Merriënboer, J. J., & Kirschner, P. A. (2017). Ten steps to complex learning: A systematic approach to four-component instructional design. Routledge.
You may also find relevant the following summary of the empirical literature on the elements of effective design of computer-based interventions:
Mayer, R.E. (2008). Applying the science of learning: Evidence-based principles for the design of multimedia instruction. American psychologist, 63(8), p.760.
Kim, Y., & Baylor, A. L. (2016). Research-based design of pedagogical agent roles: A review, progress, and recommendations. International Journal of Artificial Intelligence in Education, 26(1), 160-169.
Please let me know if you have more questions.
Best,
  • asked a question related to Language
Question
11 answers
According to Noam Chomsky, "the Martian language might not be so different from human language after all.”  And, "if a Martian visited Earth, it would think we all speak dialects of the same language, because all terrestrial languages share a common underlying structure” — he must mean "universal grammar."  Others also believe that since the laws of the universe are supposedly the same everywhere, the language alien civilizations use might be fundamentally similar.  Stephen Krashen, on the other hand, wrote "It is possible that alien language will be completely different from human languages." Do you think alien language would be similar to or different from human language?  
Relevant answer
Answer
Quite an appealing discussion! Any postulates on the topic can only be speculative until we finally meet an alien race. However, literature, and more specifically SF, has contributed some invaluable ideas. The Encyclopedia of Fictional and Fantastic Languages by Tim Conley and Stephen Cain gathers loads of such examples.
For instance, Ted Chiang's Heptapod A and Heptapod B. The former, as described in the novella, sounds like "a wet dog shaking the water out of its fur" (119), that is, a sound unpronounceable for human physiology. The latter, in turn, was so different that it enabled its speakers to perceive time in a non-linear way.
There are also cases like the Kesh language, described in Ursula Le Guin's Always Coming Home, which is phonetically more similar to human languages but quite distinct when it comes to grammar, which again follows from physiological differences between humans and the aliens.
Most of the cases point in one direction: if the alien species are physiologically similar to humans, so are their languages; whereas if the aliens' physical build-up is different, so is their language. As Michael W. Marek has mentioned, the unimaginably different cultures the alien races might have developed over millennia of existence may result in quasi-untranslatable languages; we can already see similar cases among human languages. That makes total sense, in my opinion.
  • asked a question related to Language
Question
16 answers
Our language is the origin and the building material of the formal languages of math and physics. Artificial intelligence machines even create their own languages.
Is there research on creating new languages in order to create new science, or to simplify and make current science more understandable? Or is it just my fantasy? Maybe if a person could see, say, in the infrared range, they could invent new words? Maybe we should go in this direction?
How would one create a new language that describes our world and is qualitatively different from today's? Maybe we should study other creatures, like dolphins?
Relevant answer
Answer
Yes, we can make science clearer and more powerful with new language, but we cannot neglect English. Since English is currently firmly established as the principal language of international scientific communication, researchers nevertheless continue to publish their work in languages other than English. We encourage the research community to tackle this issue and to propose approaches both for integrating non-English scientific knowledge effectively and for enhancing the multilingualism of new and existing knowledge that is available only in English.
  • asked a question related to Language
Question
15 answers
Dear RG Colleagues,
I need a reference(s) that I can cite in a research paper that will support the commonly accepted claim: it is easier to learn a foreign language that is linguistically similar to our native language (or our second/third language that we already know).
Thank you!
Monika
Relevant answer
Answer
Hakan Ringbom's (2007) Cross-linguistic Similarity in Foreign Language Learning is devoted to this issue. He mentions that the degree of congruence between the systems determines how much facilitation there will be in language learning.
  • asked a question related to Language
Question
177 answers
When, where, and by whom were they implemented? Why do you think they were successful?
Centuries of linguistic imposition associated with colonial expansion, followed by the monolingual policies of governments seeking to create national identities, and more recently the global expansion of corporate power and communications networks, have taken their toll on many languages, to the point where some have become extinct and others are faced with the challenge of revitalizing themselves to avoid extinction. Some language communities have had more success than others in meeting this challenge and fortifying their mother tongues. I am interested in reading more about these efforts, and I think that the diverse, multicultural composition of ResearchGate makes it an ideal forum for discussing this topic.
I am attaching the English version of the Universal Declaration of Linguistic Rights (Barcelona, 1996) as an initial contribution to the discussion.
Relevant answer
Psycholinguistics/Hemispheric Lateralization of Language
Contents
  • 1 Introduction
  • 2 The History of Discoveries
    • 2.1 Jean Baptiste Bouillaud and Simon Alexandre Ernest Aubertin
    • 2.2 Paul Broca
    • 2.3 Carl Wernicke
  • 3 Methods of Assessing Lateralization
    • 3.1 Lesion Studies
    • 3.2 Split Brain Studies
    • 3.3 Wada test
    • 3.4 Functional transcranial Doppler ultrasonography
    • 3.5 Electrical stimulation, TMS and Imaging
  • 4 Cerebral Dominance: Language Functions of The Left and Right Hemispheres
  • 5 Anatomical Asymmetries
  • 6 Proposed Correlations
    • 6.1 Handedness
    • 6.2 Sex Differences
    • 6.3 Sign Language and Bilingualism
    • 6.4 Culture and Language Lateralization
  • 7 Reorganization following brain injury
  • 8 Learning Exercise: 8 Questions on Hemispheric Language Lateralization
  • 9 References
Introduction
Hemispheric lateralization refers to the distinction between the functions of the right and left hemispheres of the brain. If one hemisphere is more heavily involved in a specific function, it is often referred to as being dominant (Bear et al., 2007). Lateralization is of interest with regard to language, as language is believed to be a heavily lateralized function: certain aspects of language are localized in the left hemisphere, while others are found in the right, with the left hemisphere most often dominant. This was initially proposed by early lesion-deficit models and studies with split-brain patients, and has been shown in more recent years through tests like the Wada test and imaging studies. Studies have shown that there are anatomical asymmetries in and around the regions associated with language, and each hemisphere has been shown to play its own separate role in the production and comprehension of speech. The hemispheric lateralization of language functions has been suggested to be associated with handedness, sex, bilingualism and sign language use, and variation across cultures. It has also been proposed that reorganization occurs following brain injury, involving a shifting of lateralized function, as long as the injury occurs early in life.
The History of Discoveries
Jean Baptiste Bouillaud and Simon Alexandre Ernest Aubertin
French physician Jean Baptiste Bouillaud (1796-1881) was one of the earliest proponents of hemispheric language lateralization. On February 21, 1825, Bouillaud presented a paper to the Royal Academy of Medicine in France which suggested that, because so many human tasks are performed using the right hand (such as writing), the left hemisphere might be the in control of that hand. This observation implies that language, at the core of writing, would be localized in the left hemisphere. It was already known at this time that motor function was primarily controlled by the hemisphere ipsilateral to the side of the body through lesion studies. Bouillaud also proposed that speech is localized in the frontal lobes, a theory that was carried on by Bouillaud’s son-in-law Simon Alexandre Ernest Aubertin (1825-1893), who went on to work with famed French neurologist Paul Broca in 1861. Together, Aubertin and Broca examined a patient with a left frontal lobe lesion who had lost nearly all ability to speak; this case and several others similar to it became the basis behind the earliest theories of language lateralization.
Paul Broca, image obtained from Clower, W. T., Finger, S. (2001)
Paul Broca
French neurologist Paul Broca (1824-1880) is often credited as being the first to expound upon this theory of language lateralization. In 1861, a 51-year-old patient named Leborgne came to Broca; Leborgne was almost completely unable to speak and suffered from cellulitis of the right leg. Leborgne was able to comprehend language but was mostly unable to produce it. He responded to almost everything with the word “tan” and thus came to be known as Tan. Broca theorized that Tan must have a lesion of the left frontal lobe, and this theory was confirmed in autopsy when Tan died later that year (Bear et al., 2007). In 1863, Broca published a paper in which he described eight cases of patients with damage to the left frontal lobe, all of whom had lost their ability to produce language, and included evidence of right frontal lesions having little effect on articulate speech (Bear et al., 2007). These findings led Broca to propose, in 1864, that the expression of language is controlled by a specific hemisphere, most often the left (Bear et al., 2007). “On parle avec l’hemisphere gauche,” Broca concluded (Purves et al., 2008)- we speak with the left hemisphere.
Carl Wernicke
German anatomist Carl Wernicke (1848-1904) is also known as an early supporter of the theory of language lateralization. In 1874, Wernicke found an area in the temporal lobe of the left hemisphere, distinct from that which Broca had described, which disrupted language capabilities (Bear et al., 2007). He then went on to provide the earliest map of left hemisphere language organization and processing.
Methods of Assessing Lateralization
Lesion Studies
A good deal of what we know about language lateralization comes from studying the loss of language abilities following brain injury (Bear et al., 2007). Aphasia, the partial or complete loss of language abilities occurring after brain damage, is the source of much of the information on this subject (Bear et al., 2007). As shown in the studies of Bouillaud, Aubertin, Broca and Wernicke described above, lesion studies combined with autopsy reports can tell us a great deal about the localization of language, which in turn has supplied information on lateralization. Lesion studies have shown not only that the left cerebral hemisphere is most often dominant for language, but also that the right hemisphere generally is not, as lesions in the right hemisphere rarely disturb speech and language function (Bear et al., 2007).
The danger of relying on lesion studies is, of course, that they may overemphasize the relevance of particular localized areas and their associated functions. The connection between brain regions and behaviours is not always simple, and often depends on a larger network of connections. This is reflected in the fact that the severity of an individual’s aphasia is often related to the amount of tissue damaged around the lesion itself (Bear et al., 2007). It is also known that the severity of the deficit differs depending on whether the area was removed surgically or the damage was caused by stroke. Strokes tend to affect both the cortex and subcortical structures because the middle cerebral artery, a frequent site of stroke, supplies blood both to the cortical areas associated with language and to deeper structures such as the basal ganglia. As such, surgically produced lesions tend to have milder effects than those resulting from stroke (Bear et al., 2007).
An example of a study involving language in a split-brain patient. The individual says he does not see anything, because the dominant left hemisphere cannot "speak". Image obtained from Experiment Module: What Split Brains Tell Us About Language
Split Brain Studies
Studies of patients who have had commissurotomies (split-brain patients) have provided significant information about language lateralization. Commissurotomy is a surgical procedure in which the hemispheres are disconnected by cutting the corpus callosum, the massive bundle of some 200 million axons connecting the right and left hemispheres (Bear et al., 2007). Following this procedure, almost all communication between the hemispheres is lost, and each hemisphere then acts independently of the other. What is striking about split-brain patients with regard to language lateralization is that when a word is presented to the right hemisphere of a patient whose left hemisphere is dominant, and the patient is asked to name the word, they will say that nothing is there. This is because, although the right hemisphere “saw” the word, it is the left hemisphere that “speaks.” If the same word is presented to the left hemisphere, the patient is able to verbalize the response (Bear et al., 2007). As such, split-brain patients have provided substantial evidence that language function is generally lateralized in the left hemisphere.
Wada test
The Wada test was created by Juhn Wada at the Montreal Neurological Institute in 1949, and was designed specifically to study lateralization. A fast-acting barbiturate such as sodium amytal is injected into the carotid artery on one side (although current procedures prefer a catheter inserted into the femoral artery), and is then transported to the cerebral hemisphere on that side. It anaesthetizes that side of the brain for approximately 10 minutes, after which the anaesthetic begins to wear off and the disrupted functions gradually return, often displaying aphasic errors (Bear et al., 2007; Wada and Rasmussen, 1960). While the patient is anaesthetized, their ability to use language is assessed. If the left hemisphere is anaesthetized and is the dominant hemisphere, the patient loses all ability to speak, whereas if the left hemisphere is anaesthetized but the right hemisphere is dominant, the patient continues to speak throughout the procedure (Bear et al., 2007).
In a study published in 1977, Brenda Milner used the Wada test to demonstrate that 98% of right-handed people and 70% of left-handed people have a dominant left hemisphere with regards to language and speech function. Her results also showed that 2% of right-handed people have a dominant right hemisphere, which is the same percentage of patients that display aphasia following a lesion to the right hemisphere (Branch et al., 1964).
This procedure is also used prior to brain surgery in order to determine the dominant hemisphere, so as to avoid removal of an area associated with speech and language.
Functional transcranial Doppler ultrasonography
Functional transcranial Doppler ultrasonography (fTCD) is a non-invasive method for examining event-related changes in cerebral blood flow velocity in the middle cerebral arteries (Knecht et al., 1998). This technique can reliably assess which hemisphere is dominant, and to what extent, with regard to language lateralization. Studies using fTCD have shown a linear relationship between handedness and language (Knecht et al., 2000).
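The fTCD measure rests on a simple idea: compare task-related blood-flow velocity between the two middle cerebral arteries and express the difference as a laterality index, with positive values indicating left-hemisphere dominance. A minimal Python sketch of that idea follows; the function name and sample values are hypothetical, and the published index of Knecht et al. (1998) is more involved (event-related averaging of velocity changes relative to a pre-stimulus baseline), so treat this as an illustration only.

```python
import statistics

def laterality_index(left_mca, right_mca):
    """Simplified laterality index: mean relative difference (%) in
    blood-flow velocity between the left and right middle cerebral
    arteries during a language task. Positive values point toward
    left-hemisphere dominance. (Illustrative sketch only; not the
    event-related index of Knecht et al., 1998.)"""
    diffs = [
        (l - r) / ((l + r) / 2.0) * 100.0  # percent difference per sample
        for l, r in zip(left_mca, right_mca)
    ]
    return statistics.mean(diffs)

# Hypothetical velocity samples (cm/s) recorded during a word-generation task
left = [62.0, 63.5, 64.2, 63.8]
right = [58.0, 58.5, 59.1, 58.9]
print(f"LI = {laterality_index(left, right):.1f}%")  # positive, i.e. left-lateralized
```

A symmetric pair of recordings would yield an index near zero, which is the intuition behind fTCD's ability to grade the *extent* of dominance rather than give only a binary left/right answer.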
Electrical stimulation, TMS and Imaging
Electrical stimulation was pioneered by Wilder Penfield and his colleagues at the Montreal Neurological Institute in the 1930s, and helped to identify certain lateralized areas associated with speech and language. Electrical stimulation is the application of an electrical current directly to the cortical tissue of a conscious patient. Penfield found that stimulating frontal or temporal regions of the left hemisphere with an electrical current accelerated the production of speech. He also found that stimulation can inhibit complex functions like language: applying a current to the left-hemisphere areas associated with speech production while the patient is engaged in speech disrupts this behaviour (Penfield, 1963). This procedure is performed during surgery while the skull is opened, and as such it is not a commonly used method of assessment.
Transcranial Magnetic Stimulation (TMS) is a non-invasive procedure, often combined in studies with MRI, which has helped to map the regions associated with speech, showing lateralization to be dominant in the left hemisphere. TMS has also shown that, following brain injury, it is more likely that it is the tissue surrounding the lesion that acts in a compensatory way rather than the opposite hemisphere providing compensation. The major drawback of TMS is, of course, the fact that the magnetic stimulation must pass through the scalp, skull, and meninges before stimulating the brain region of choice.
Imaging studies have proven incredibly useful in determining the lateralization of language abilities. Functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) have been able to show the complex circuitry associated with speech and language; their findings have also proven consistent with previous lesion studies, as well as with Penfield’s electrical stimulation (Bear et al., 2007). There has been some controversy regarding the bilateral activation seen in some fMRI studies; its cause is unknown, but it has been suggested that the right hemisphere may be involved in aspects of speech that are not measured by tests such as the Wada procedure (Bear et al., 2007). A significant finding is that fMRI results during the developmental years show activation during speech and the use of language mainly in the left hemisphere, providing further evidence in support of left-hemisphere dominance (Bear et al., 2007).
Cerebral Dominance: Language Functions of The Left and Right Hemispheres
The perisylvian cortex of the left hemisphere is involved in language production and comprehension, which is why it is often referred to as dominant, or said to "speak" (Ojemann, G. A., 1991; Purves et al., 2008). Roger Sperry and his colleagues’ split-brain studies have shown that the left hemisphere is also responsible for lexical and syntactic language (grammatical rules, sentence structure), writing and speech (Purves et al., 2008). Other aspects of language thought to be governed in most people by the left hemisphere include the audition of language-related sounds, recognition of letters and words, phonetics and semantics.
The right hemisphere, though generally not dominant in terms of linguistic ability, has its role in the use of language. Split-brain studies present evidence that, despite the right hemisphere having no “speech,” it is still able to understand language through the auditory system. It also has a small amount of reading ability and word recognition. Lesion studies of patients who have right hemisphere lesions show a reduction in verbal fluency and deficits in the understanding and use of prosody. Patients who have had their right hemisphere surgically removed (hemispherectomy) show no aphasia, but do show less obvious deficiencies in areas such as verbal selection and understanding of metaphor. It has thus been concluded that the right hemisphere is most often responsible for the prosodic and emotional elements of speech and language (Purves et al., 2008).
Anatomical Asymmetries
The structural differences between the right and left hemispheres may play a role in the lateralization of language. In the nineteenth century, anatomists observed that the left hemisphere’s Sylvian fissure (lateral sulcus) is longer and less steep than that of the right (Bear et al., 2007). In 1980, Graham Ratcliffe and his colleagues combined evidence of this asymmetry of the Sylvian fissure, shown in carotid angiograms, with the results of Wada testing. They found that individuals with speech regions located in the left hemisphere had a mean difference of 27 degrees in the angle of the blood vessels leaving the posterior end of the Sylvian fissure, while those with language located in the right hemisphere had a mean angle of zero degrees.
Asymmetry of the planum temporale. Image obtained from Labspace:Understanding Dyslexia
In the 1960s, Norman Geschwind and his colleagues at Harvard Medical School found that the planum temporale, the superior portion of the temporal lobe, is larger in the left hemisphere in almost two thirds of humans (Geschwind & Levitsky, 1968), an observation later confirmed with MRI (Bear et al., 2007; Purves et al., 2008). This asymmetry exists even in the brain of the human fetus (Bear et al., 2007). The correlation of this asymmetry with the left hemisphere’s language dominance is disputed by many, because only 67% of people show this structural asymmetry while 97% show left-hemispheric dominance. Another problem in examining asymmetry of the planum temporale is how the anterior and posterior borders of the region are defined, as investigators differ on this definition. This is especially a problem when the transverse gyrus of Heschl, used to mark the anterior border of the planum temporale, appears doubled (which is not unusual). Opinions differ as to whether the second transverse gyrus should be defined as lying within the planum temporale or outside it (Beaton, A. A., 1997).
Proposed Correlations
Handedness
The correlation between handedness and hemispheric lateralization is described in the results of the Wada test, discussed above. The majority of the population is right-handed (approximately 90%), and Wada test results suggest that in 93% of people the left hemisphere is dominant for language (Bear et al., 2007). A linear relationship between handedness and language has been shown using fTCD in a study by Knecht et al. (2000); they found a 27% incidence of right-hemisphere dominance in their group of left-handers, consistent with the notion of a linear relationship between handedness and the incidence of right-hemisphere dominance. This study used a word-generation task, and its authors acknowledge that a measurement of prosody or other suspected right-hemisphere functions might show a different relationship with handedness (Knecht et al., 2000). It is also true that correlation does not necessarily imply causation, and it has been suggested that there is no direct relationship between handedness and language at all, as the majority of left-handers also have their language lateralized in the left hemisphere (Purves et al., 2008). Handedness is, however, a physical example of functional asymmetry, and it is certainly possible that a more substantial connection between handedness and language will be found.
Sex Differences
The tendency for women to score higher than men on language-related tasks may reflect the fact that women also tend to have a larger corpus callosum than men, indicating more neural connections between the right and left hemispheres. fMRI studies show that women have more bilateral activation than men when performing rhyming tasks, and PET studies show the same during reading tasks. This bilateral activation may imply the use of what are thought to be right-hemisphere language abilities, such as prosody and intonation. Research has also shown that women have a greater ability to recover from left-hemisphere brain damage; the imaging evidence, combined with these recovery outcomes, has led to the controversial suggestion that language is more unilateral in men than in women.
Sign Language and Bilingualism
Sign language has been shown to be lateralized in the left hemisphere of the brain, in the left frontal and temporal lobes. This is known through lesion studies, in which left-hemisphere lesions in the areas associated with language impaired patients’ ability to sign, while right-hemisphere lesions in the same areas produced no linguistic deficit (Hickok et al., 1998). Lesions in the right hemisphere of signers did, however, limit the use of spatial information encoded iconically (that is, when the sign visually resembles its referent). This is in keeping with the belief that visuo-spatial ability is a right-hemisphere function, and suggests that the right hemisphere’s role in sign language lies in its non-linguistic features.
Bilingualism is thought to be an overlapping of populations of neurons corresponding to each language, all of which are located in the frontal and temporal regions of the left hemisphere associated with speech comprehension and speech production.
Culture and Language Lateralization
When thinking of language, there is a tendency to focus on the language in which you think; however, it has been proposed that the lateralization of language functions can vary from culture to culture. Asian languages show more bilateral activation during speech than European languages, likely because they make far greater use of right-hemisphere abilities, for example prosody, and of spatial processing for the more “pictorial” Chinese characters; Native American languages also show a good deal of bilateral activity.
Reorganization following brain injury
Studies have been done following brain injury to determine the level of recovery of language and speech ability, and whether or not recovery depends on lateralized function. Bryan Woods and Hans-Lukas Teuber looked at patients with prenatal and early postnatal brain injury located in either the right or the left hemisphere, and drew several conclusions. First, if the injury occurs very early, language ability may survive even after left-hemisphere brain damage. Second, they found that an appropriation of language regions by the right hemisphere is responsible for the survival of these abilities, but that as a consequence visuo-spatial ability tends to be diminished. Third, right-hemisphere lesions have the same effect in prenatal and early postnatal patients as they do in adults. Brenda Milner and Ted Rasmussen used the Wada test to determine that early brain injury can cause left, right or bilateral speech dominance, and that those who retained left-hemisphere dominance had damage outside both the anterior (Broca’s) and posterior (Wernicke’s) speech zones, while those whose dominance shifted to the right hemisphere most often had damage to these areas. Milner and Rasmussen also found that brain damage occurring after the age of 5 does not cause a shift in lateralization, but rather a reorganization within the hemisphere, potentially recruiting surrounding areas to take responsibility for some aspects of speech.
In patients who have had hemispherectomy of the left hemisphere, the right hemisphere can often gain considerable language ability. When the procedure is performed in adulthood, speech comprehension is usually retained (though speech production suffers severe deficits); reading ability is limited, and there is usually no writing ability at all.
Learning Exercise: 8 Questions on Hemispheric Language Lateralization
1. In terms of hemispheric lateralization and split-brain patients (individuals who have had commissurotomies), if the word “pencil” were presented to the right visual field of a split-brain patient and they were asked to report what they had seen, the patient would respond:
a) by selecting a pencil with the contralateral hand
b) by saying the word “pencil”
c) by saying “nothing is there”
d) by selecting a pencil with the ipsilateral hand
2. The left hemisphere is responsible for all aspects of syntax, except parsing. True or false?
3. What is the structural evidence given to explain the fact that women tend to score higher than men on language-related tasks? What implications might this have on gender differences in patients with aphasia?
4. What 3 conclusions did Bryan Woods and Hans-Lukas Teuber draw regarding the reorganization of language ability following brain injury? Would there be differences in such reorganization in people who are hearing impaired?
5. Through what anatomical system is the right hemisphere able to understand language? What happens to language ability following a removal of the right hemisphere? In what ways do individuals who have had their right hemisphere removed differ from split-brain patients?
6. What were the symptoms of the patient “Tan” which, when presented to neurologist Paul Broca in 1861, propelled Broca to his theory regarding hemispheric language lateralization? Based on current methods of assessment, would Broca's theory still be considered valid today? Why or why not?
7. Which type of study would be best used in order to assess anatomical asymmetry and why?
8. Which type of study is most useful in assessing the connection between hemispheric language lateralization and handedness, and why?
References
Beaton, A. A. (1997). The Relation of Planum Temporale Asymmetry and Morphology of the Corpus Callosum to Handedness, Gender, and Dyslexia: A Review of the Evidence. Brain and Language 60, 255–322
Bear, M. F., Connors, B. W., Paradiso, M. A. (2007). Neuroscience: Exploring the Brain, 3rd edition. Lippincott Williams & Wilkins: USA.
Branch, C., Milner, B., Rasmussen, T. (1964). Intracarotid Sodium Amytal for the Lateralization of Cerebral Speech Dominance. Journal of Neurosurgery, Vol. 21, No. 5, pp 399-405.
Clower, W. T., Finger, S. (2001). Discovering Trepanation: The Contribution of Paul Broca. Neurosurgery, Vol. 49, No. 6, pp 1417-1426.
Geschwind, N., Levitsky, W. (1968). Human Brain: Left-Right Asymmetries in Temporal Speech Region. Science, New Series, Vol. 161, No. 3837, pp. 186-187.
Hickok, G., Bellugi, U., Klima, E. S. (1998). The neural organization of language: evidence from sign language aphasia. Trends in Cognitive Sciences, Vol. 2, No. 4, pp 129-136.
Jay, T. B. (2003). The Psychology of Language. Prentice Hall: New Jersey, USA.
Knecht, S., Deppe, M., Ebner, A., Henningsen, H., Huber, T., Jokeit, H, Ringelstein, E.-B. (1998). Noninvasive Determination of Language Lateralization by Functional Transcranial Doppler Sonography : A Comparison With the Wada Test. Stroke, Vol. 29, pp 82-86.
Knecht, S., Deppe, M., Drager, B., Bobe, L., Lohmann, H., Ringelstein, E.-B., Henningsen, H. (2000). Language lateralization in healthy right-handers. Brain, Vol. 123, pp 74-81.
Kolb, B., Whishaw, I. Q. (2009). Fundamentals of Human Neuropsychology, 6th edition. Worth Publishers: USA.
Ojemann, G. A. (1991). Cortical Organization of Language. The Journal of Neuroscience, Vol. 7, pp 2281-2287.
Penfield, W. (1963). The Brain's Record of Auditory and Visual Experience. Brain, Vol. 86, No. 4, pp. 595-696.
Purves, D., Augustine, G. J., Fitzpatrick, D., Hall, W. C., LaMantia, A., McNamara, J. O., White, L. E. (2008). Neuroscience, 4th edition. Sinauer Associates, Inc.: Massachusetts, USA.
Wada, J., Rasmussen, T. (1960). Intracarotid Injection of Sodium Amytal for the Lateralization of Cerebral Speech Dominance: Experimental and Clinical Observations. Journal of Neurosurgery, Vol. 17, No. 2.
  • asked a question related to Language
Question
4 answers
With a group of other teachers, I am currently writing course syllabuses for various Common European Framework of Reference (CEFR) levels. My view is that I can't include all the contents of a given CEFR level in a course, only the essentials. However, other teachers disagree and say I should include everything, even contents that will not be explicitly taught.
Relevant answer
Answer
This is a very interesting point you're making. The CEFR, as its name states, is a reference, not something concrete; hence some overlap between bands. I would be more willing to give students a copy of the CEFR chart for them to refer to throughout the course, so they can see what they are doing. As mentioned by Sandy Arief, this would be a great place to start. Good luck. https://www.cambridgeenglish.org/Images/126011-using-cefr-principles-of-good-practice.pdf
  • asked a question related to Language
Question
4 answers
Hello,
I am looking for literature or research that focuses on how different structural or agency-based factors influence the strengthening of different language hierarchies in school settings.
We conducted some research on Hungarian language teaching for Hungarian minority students in Romanian-language schools. The short story is the following: these classes are optional, and the curriculum and how the classes should be organized are unclear. In this context, the goals formulated by teachers (language revitalization, Hungarian as a basic element of Hungarian identity for students) seem out of step with the actual language teaching practices and with the teachers', schools' and students' attitudes toward the language. Through this, teachers unintentionally reify the perceived unimportance of Hungarian and the asymmetry between the Romanian and Hungarian languages.
So can you suggest some literature that could help me contextualize our findings?
I am familiar with Shohamy's Hidden language strategies, Tollefson's book on inequalities and some of Ricento's work.
Relevant answer
Answer
Hi, in my paper on assessment policy change, "The Challenges of Implementing Assessment Policy Change and the Mitigating Factors for Success at Schools in Malaysia," I relied on Halasz (2002), Fullan (2005), Priestley (2005), and Priestley et al. (2010). In describing teachers' reaction to change and reform where they feel professionally marginalized, disempowered and afflicted by bureaucracy, I relied on sources from Ball (2008), Goodson (2003), and Levin (2008). Though I did not specifically zoom in on language, it is part of the overall context. I think you will find this literature indirectly useful.
  • asked a question related to Language
Question
21 answers
Dear Colleagues,
According to Ethnologue (2005) there are 7099 living languages in the world. I imagine this number may have changed. Could you provide me with a more current number and a citable source?
Many thanks,
Monika
Relevant answer
Answer
  • asked a question related to Language
Question
4 answers
"Txtng: The Gr8 Db8" is the name of the famous book on texting written by David Crystal.
What is the name of the language used in social media?
Is it texting, text messages, textism, netspeak, thumbspeak net write, ICT English, computer mediated communication, internet language, chat language? Does it have an agreed upon name?
Relevant answer
Answer
As a field of study still in its infancy, I don't think there is an agreed-upon definition for the various terms out there. Crystal himself referred to it as 'netspeak and textspeak' in his glossary, but that is from 2004, so it may not reflect current usage. 'Textese' also appears commonly in recent scholarly literature. Perhaps you could run keyword searches in some research databases and see what comes up more frequently?
  • asked a question related to Language
Question
3 answers
Texting in Arabic is called Franco-Arabic. Can anyone let me know its etymology? Thank you very much indeed!
Relevant answer
Answer
I don't think that franco-arabe means texting someone in Arabic. As the hyphenation suggests, it is a mixture of Arabic and French, a case of code-mixing in the dialects of Morocco, Algeria, and Tunisia resulting from French cultural colonization. True, in texting, French is most often transliterated into Arabic, but that does not make such texting franco-arabe.
  • asked a question related to Language
Question
12 answers
I am researching whether dance can be considered a language, and different languages' effects on movement quality. I am interested in how dancing to spoken word, and interpreting it, affects dancers' cognition. Are we listening to the rhythm of the words and sentences using our embodied movement vocabulary, or are dancers trying to interpret the spoken language in its literal meaning if they understand it? Is it possible to translate spoken word with movement alone and allow the audience to understand it without previous knowledge of the spoken language? Is it possible to understand spoken language through movement only, even in its simplest context?
Relevant answer
Answer
Hello Karolina,
I've just happened on your discussion. We are always communicating through movement, intentionally or not, aren't we? I work and dance a lot with people who don't have verbal language, so yes, dancing, moving and touch are their language. I'm also interested in your ideas around being multilingual. It is not something I'd considered before, how speaking a different language changes the language of the body. However, since I moved to France three and a half years ago and observed my children become bilingual, I am fascinated by how the tone of the body (as well as the voice, of course), the quality of movement and the gestures change when they speak French. They quite literally become different people, and I think they respond differently to situations depending on whether they are 'in' English or French. You are probably experiencing that when you teach in different languages: are you teaching from a different place, attitude, orientation? I think it is more than just sound and rhythm. Just thoughts, but I'm interested in your research...
Thanks Lisa
  • asked a question related to Language
Question
6 answers
Use of smartphones, Microsoft Word and autocorrect has changed the way youngsters read and write these days. Is technology to blame for this, or the laziness of our generation, or something else?
Relevant answer
Answer
Sometimes.