Consciousness - Science topic
Consciousness is the sense of awareness of self and of the environment.
Questions related to Consciousness
Background Summary:
The relationship between science and spirituality has long been a subject of debate, often framed as a dichotomy—science grounded in empirical evidence and logical reasoning, and spirituality rooted in faith, intuition, and transcendence. Historically, the two have been seen as separate, with science focusing on the material world and spirituality addressing the metaphysical. However, recent advancements in various fields of science, particularly quantum physics, neuroscience, and cosmology, have begun to blur the lines between these traditionally distinct realms.
As science continues to uncover the mysteries of consciousness, the nature of reality, and the origins of the universe, new questions arise: Can spirituality, which encompasses the search for meaning and the experience of the divine, be reconciled with scientific inquiry? Can modern science, in its pursuit of understanding the fundamental laws of nature, leave room for concepts such as God, purpose, and interconnectedness?
Furthermore, many contemporary thinkers and scientists argue that spirituality could be incorporated into modern scientific frameworks. Practices like meditation, mindfulness, and consciousness studies have already demonstrated measurable impacts on human health and cognition. The exploration of how spiritual experiences can be understood and validated within scientific paradigms opens exciting opportunities for interdisciplinary research.
The human brain is a complex structure that includes psychological, neurological, metacognitive, socio-emotional, and many other levels. These layers manifest through a multidimensional mystery called consciousness. According to psycholinguistics, meaning is the main unit of consciousness. From your perspective, what are the key elements of consciousness?
“Genes are destiny and nurture finalizes this destiny; therefore, every organism has a distinct conscious experience making the Hard Problem irrelevant.” (Tehovnik et al. 2025, in: Automaticity & Consciousness, in preparation)
“27. Consciousness in Perspective.
Having ascertained that human consciousness is able to separate from physical reality and interact with other intelligences in other dimensions within the universe, and that it is both eternal and destined for ultimate return to the Absolute we are faced with the question: "So what happens then?"
Since memory is a function of consciousness and therefore enjoys the same eternal character as the consciousness which accounts for its existence it must be admitted that when consciousness returns to the Absolute it brings with it all the memories it has accumulated through experience in reality. The return of consciousness to the Absolute does not imply an extinction of the separate entity which the consciousness organized and sustained in reality. Rather, it suggests a differentiated consciousness which merges with and participates in the universal consciousness and infinity of the Absolute without losing the separate identity and accumulated self-knowledge which its memories confer upon it. What it does lose is the capability for generation of independent thought holograms, since that can be done only by energy in motion. In other words, it retains the power to perceive but loses the power of will or choice. In exchange, however, this consciousness participates in the all-knowing infinite continuum of consciousness which is a characteristic of energy in the ever present. Consequently, it is accurate to observe that when a person experiences the out-of-body state he is, in fact, projecting that eternal spark of consciousness and memory which constitutes the ultimate source of his identity to let it play in and learn from dimensions both inside and outside the time-space world in which his physical component currently enjoys a short period of reality.” (https://www.cia.gov/readingroom/docs/cia-rdp96-00788r001700210016-5.pdf)
To quantify the afterlife exactly, we must first confirm, and then chart, the probable multiverse through engineering. Then we must engineer a machine to find where one’s individuality goes in the multiverse after death in this universe.
Can only first-person reports measure consciousness?
It has been proposed that consciousness is mediated at the level of the neocortex according to a string of declarative, conscious units, which encodes a sequence of sounds or visual objects. A single electric pulse delivered to a neocortical pyramidal fibre after a brief discharge of action potentials renders the fibre inactive for 100 ms or so, since the pulse activates a collateral that engages GABAergic neurons that inhibit the fibre for the purpose of excitability regulation (Chung and Ferster 1998; Krnjević 1974; Krnjević et al. 1966abc; Krnjević and Schwartz 1967; Schiller and Malpeli 1977; Tehovnik and Slocum 2007a). If pulses are delivered in a 10-Hz train, then an activated pyramidal fibre can be inhibited for up to 100 ms between pulses (Logothetis et al. 2010). Thus, 10-Hz stimulation can be used to inhibit a declarative, conscious unit as an animal (including a human) is made to execute a task that is based on a sequence of events such as concatenated sounds or a movie clip, each perceived, imagined, or evidenced using the motor system.
The neocortex is composed of vertically-aligned pyramidal fibres (20 to 40 neurons at any one depth) that are grouped in micro-columns, with each column measuring 30 μm in diameter and believed to encode a single feature (Peters and Sethares 1991). If one uses indwelling electrodes to pass current, a small region of the neocortex (i.e., a 100 to 200 μm diameter sphere of tissue) can be activated with currents of 2 to 5 μA (at a 0.2-ms pulse duration) to evoke or disable perception, which is estimated to drive from 60 to 250 vertically-aligned pyramidal fibres (Peters and Sethares 1991; Schmidt et al. 1996; Tehovnik and Slocum 2007ab, 2013).[1]
As discussed in a previous chapter, the electrical stimulation work of Penfield and Ojemann (Ojemann 1983, 1991; Penfield and Roberts 1966) has been central to the idea that elements in the neocortex can be inhibited by activating specific loci of the cortex to interrupt the generation of speech in alert patients.[2] One of the most important observations made in these studies is that information pertaining to language is stored uniquely in the neocortex: no two individuals have the same language map. This makes good sense, since learning a language (or learning any other faculty) is based on the history of learning, as well as genetic makeup. Therefore, to deduce what is stored within the neocortex of an individual, different declarative, conscious units must be interrupted electrically in the neocortex for different streams of consciousness. This will be no easy task, since the neocortex of humans has a storage capacity of tens of trillions of bits.[3]
To make this line of work more manageable, an understanding of how language is stored in the neocortex is paramount. Each faculty, whether it is a distinct language or mathematics, is stored by an independent network of neurons (Ojemann 1983, 1991; Rojas 2021). And based on how a language is learned from childhood, one can spontaneously develop a verbal network without developing a reading and writing network.[4] This suggests that every faculty is anchored to specific sensorimotor transformations: speaking is dependent on sound and the vocal apparatus, reading is dependent on vision (and audio for some) and eye movements, and writing is dependent on vision and hand movements (see: Ojemann 1991).
So, how does the foregoing generalize to other species? Elephants, dolphins, and whales have an advanced communication system that has yet to be deciphered (e.g., Antunes et al. 2011). The songbird, however, has a well-studied telencephalon that is known to store songs (Goldman and Nottebohm 1983; Rochefort et al. 2007), whereby the methods of Penfield and Ojemann could be used to interrupt various aspects of song generation. Thus, the stream of consciousness can be studied across different species by disabling declarative, conscious units electrically at various locations along the neural strings per species, but for this to be feasible the ethology of an animal must be well understood as it is for humans [5] and songbirds.
Footnotes:
[1] Distinct colors can be evoked from the visual cortex using currents between 2 and 10 μA (Schmidt et al. 1996). To readily evoke such perception using low currents, high-impedance electrodes (i.e., > 2 MΩ) that induce a high charge density are recommended (Tehovnik et al. 2009).
[2] Typically, electrical stimulation delivered to the neocortex was used to identify the language areas of the neocortex in patients who were about to have regions of the neocortex removed to treat severe epilepsy (Ojemann 1983, 1991; Penfield and Roberts 1966). In these studies, naming, reading, verbal memory, mimicry of facial movements, and phoneme identification were assessed per stimulation site, typically in Wernicke’s and Broca’s areas. The map size for the primary language was always smaller than the map size for the secondary languages (this difference is related to automaticity, as argued in an earlier chapter). Surface stimulation was used with large-tipped electrodes; therefore, the current was in the milliampere range (1-4 mA), and the duration of the stimulation train (i.e., the duration of inhibition) was under 15 seconds. The frequency of pulses was 60 Hz (rather than 10 Hz), and such stimulation never evoked sensations or linguistic utterances.
[3] A technology may eventually be developed to disrupt individual synapses, but currently the disruption of small groups of neurons is what is available (Tehovnik and Slocum 2007ab).
[4] Children learn verbal languages readily, but they require intensive study to read and write in a particular language. The symbols produced for writing (and for mathematics) are cultural inventions (Rojas et al. 2021).
[5] The human neocortex has a surface area of about 1,800 cm^2 (Van Essen et al. 2018). If the neocortex is composed of microcolumns measuring 30 μm in diameter (Peters and Sethares 1991), which would be the minimal size of a declarative, conscious unit devoid of its connections, then the neocortex should contain 64 x 10^6 declarative, conscious units, with each operating in parallel to encode a single feature (Logothetis et al. 2010; Murayama et al. 2011; Rutledge and Doty 1962; Schiller and Tehovnik 2015; Tehovnik and Slocum 2013). And each feature is stored according to context (Lu and Golomb 2023), which is determined by the connectivity profile per context. Each neuron in the neocortex has about 10,000 synaptic contacts, on average, suggesting an unlimited number of contexts per feature can be stored (estimate from chapter 18).
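For readers who want to see the arithmetic behind the ~64 x 10^6 figure, here is a minimal Python sketch. The per-unit footprint used below (a circle of 30-μm radius, i.e., roughly 60-μm centre-to-centre spacing) is an illustrative assumption chosen to reproduce the count quoted in footnote [5]; the exact number depends on how the micro-columns are assumed to tile the cortical sheet.

```python
import math

# Back-of-the-envelope count of declarative, conscious units (footnote [5]).
# Assumption: each unit occupies an effective circular footprint of 30-um
# radius on the cortical sheet; this spacing is an illustrative choice that
# reproduces the ~64 million figure cited in the text.

surface_area_m2 = 1800e-4                    # neocortical surface area, 1,800 cm^2
unit_footprint_m2 = math.pi * (30e-6) ** 2   # assumed footprint per unit

units = surface_area_m2 / unit_footprint_m2
print(f"declarative, conscious units ~ {units:.1e}")   # ~6.4e7
```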
The rate of glucose consumption by the neocortex is reduced by over 80% during anesthesia (Sibson et al. 1998), which disables the synapses (Richards 2002) that are supported by glial tissue (Engl and Attwell 2015). It is the synapses that provide the brain with its computational power (Hebb 1949). Disconnected (pig) neurons on life support (e.g., Sestan 2018)[1] have no ability to transfer information, and some might argue that such cells have been reduced to having a computational power below that of a single-cell organism, the amoeba (Saigusa et al. 2008), since they have been taken out of their ‘social’ environment for the expected programming between individual neural members. The organizational life of a multicellular organism is no trivial matter, requiring that each cell be subjected to some biological constraints (Albert et al. 2002) in exchange for the energy efficiency obtained per cell, which scales as the 3/4th power of an organism’s body mass (DeLong et al. 2010; Kleiber 1947; Wells 2007). This process has been shaped by 500 million years of evolution. We are a long way from being able to dump a bunch of disconnected neurons into a dish and have them self-organize into a superorganism that supports consciousness, a topic well studied by the entomologist E.O. Wilson (Wilson 2012). Indeed, this calls for taking evolution seriously so that one day we might really be able to engineer a superorganism, which is not trivial (also see ‘converting rodents into humans’[2]).
Footnotes:
[1] The Yale researcher, Prof. Nenad Sestan, has managed to keep pig brains that were detached from the body and on life support (i.e., a warm blood supply mediated by pumps) alive for up to 36 hours (Regalado 2018). This result created quite a sensation at the National Institutes of Health, with some even suggesting that this could yield the possibility of studying consciousness and the brain in the absence of the body. A notable observation was that the EEG activity of the pig brains was flat. What is clear from this is that a body is necessary to give the brain life through feedback (Tehovnik and Chen 2015). The challenge now is to determine how much of the body (or its prosthetic equivalent) is sufficient to provide function to a brain. A similar misconception to the one that motivated Sestan (2018) has arisen among investigators who have hooked up two brains via wires to create the illusion that significant information can be transmitted between them (cf. Pais-Vieira et al. 2013 & Tehovnik and Teixeira e Silva 2014).
[2] Converting rodents into humans: Brain tissues from humans, called organoids, have been implanted into the brains of mice raising the possibility of having human brain cells incorporated into the rodent biostructure (Mansour et al. 2018). Some have speculated that this could endow rodents with an enhanced cognitive ability if the number of human cells were numerous enough (Begley 2017). Note that there are some 71 million neurons in a mouse brain, so this would require a significant addition of organized tissue. A fear persists amongst bioethicists that such implants might give rodents some degree of humanness: i.e., augmented consciousness. But injections of neural tissue into a foreign body are riddled with incompatibilities such as problems with blood supply, immunity, and functional connectivity.
The bird brain, unlike the human brain, regularly injects itself with new neurons via neurogenesis, and therefore it might provide clues about the challenges of adding new neurons into another’s nervous system (Barnea and Pravosudov 2011). Cell proliferation, cell migration via glia, and cell replacement are some of the steps that make up neurogenesis. Ridding the brain of old cells is also part of the process (as anyone who has ever received chemotherapy understands). Moreover, this process is tightly regulated. For example, in the canary, neural augmentation occurs in the vocal control nuclei during periods of song and mating (Goldman and Nottebohm 1983). The point behind emphasizing this detail is to show that for the new neurons to contribute, one might need to reprogram the existing tissues—neurons, glia, epithelia—so that the new neurons are accepted and utilized effectively. At this point, injections of human neurons into a rodent brain may be more prone to producing cognitive deficits than cognitive enhancements.
With the rapid advancement of artificial intelligence, some believe that one’s consciousness (post-death) will be stored in the cloud with the possibility of it being hooked up to a biological or prosthetic body, thereby yielding a doppelgänger of the original (Figure 33; see Nicolelis 2011, p. 61). Furthermore, it is presumed that irrespective of the body type, all the characteristics of consciousness will be preserved. This thinking is severely flawed, since consciousness is shaped by the sensory feedback it receives through the body (Birbaumer 2006; Fetz 1969; Fetz and Baker 1973; Fetz and Finocchio 1971, 1972; Tehovnik and Chen 2015; Wyler and Burchiel 1978; Wyler et al. 1979). Imagine having Einstein’s consciousness attached to Pelé’s body, a body that was conditioned for football and not physics. This is where René Descartes’s mind-body duality fails neuroscience and biology (Noble and Noble 2023).
The following quotation from Donald Hebb is instructive:
“Yerkes (1912) trained an earthworm to choose one arm of a T-maze, using electric shock as punishment for error and the moist burrow as reward for correct choice. The habit was acquired in twenty trials, 2 days at ten trials per day, about what might be necessary for the laboratory rat. No errors were made on the third day, though the behaviour was somewhat inconsistent in the following week as between good days and bad days (even worms have them). Yerkes then removed the brain, or principal ganglia, by cutting off the head—the anterior four and a half segments. The animal continued to respond correctly, showing that there were sufficient synaptic modifications in the remaining ganglia to mediate the response—until the new head regenerated, at which time the habit was lost. The noise generated by the new ganglia, the irrelevant neural activity of the uneducated brain, was sufficient to disrupt learning completely.” (Hebb 1961, p. 78)
In the foregoing, the new brain cells of the worm had to be reprogrammed by the new feedback coming through the body. What this indicates is that the interaction between the brain, which houses one’s consciousness, and the body assures that every one of us is unique, and that upon death there will never be another Einstein or Pelé, and therefore it will be the written and visual records that will keep these individuals (including us) alive after death.
Figure 33. A brain that can be plugged into any body such that the body will assume the personality of the brain automatically.
Intuitively, employing a natural approach to quantum computing appears to be a better prospect than the current technical, mechanical, artificial approach.
In quantum mechanics, the act of observation collapses the wave function, determining a particle’s state.
Similarly, quantum consciousness plays a role in shaping our reality.
Some theories suggest that conscious observation influences the outcome of events.
Consider the famous Schrödinger’s cat thought experiment: Until observed, the cat exists in a superposition of alive and dead states. Consciousness collapses this superposition into a definite reality.
The poster symbolizes the power of Donald Trump and his neo-Nazi movement to ‘Make America Great Again’. Be warned: his playbook is totally Hitlerian, and he therefore should not be portrayed (as some in the comedy-club world have) as a political buffoon of no consequence. To start, the Democratic Party and its supporters need to wake up. And it is time that Americans form a Real 3rd Party (under the leadership of a Bernie-Sanders-like candidate) to give Americans choice at the ballot box; otherwise they will wake up to Donald Trump as president (for life) this November. Please tell me why I am wrong.
Sherrington (1906) predicted that the neocortex mediates all program changes to movement (which is regulated by consciousness/learning, Hebb 1949, 1968), whereas the cerebellum maintains a steady flow of movement once the changes are put in place (that is, once the efference-copy code is reset in the cerebellum via neocortical intervention, Chen 2019; Cullen 2015; De Zeeuw 2021; Fukutomi and Carlson 2020; Loyola et al. 2019; Miles and Lisberger 1981; Noda et al. 1991; Shadmehr 2020; Tehovnik et al. 2021; Wang et al. 2023). It is well accepted that the human brain is composed of 86 billion neurons, with the neocortex accounting for 16 billion and the cerebellum for 69 billion, leaving some one billion for the remaining structures (Herculano-Houzel 2009). The remaining one billion neurons (1% of the total) are left for sorting out functional details as they pertain to the olfactory bulb for the sense of smell and the thalamus for relaying gustatory, somatosensory, vestibular, auditory, and visual information. Moreover, the superior colliculus is for orienting toward and away from external stimuli, the hypothalamus is connected to the hormonal and vascular systems, various brain-stem nuclei such as the locus coeruleus are for transitioning between wakefulness and sleep, and the substantia nigra mediates behavioral drives, i.e., the speed of emotive responses. Lastly, we cannot forget the innervation of the autonomic, ocular, and skeletal nuclei situated in the brain stem and spinal cord to finalize glandular secretions and muscle contractions. The 1% of neurons in the human brain—without which death would ensue—is present in all vertebrates. Thus, what distinguishes humans from other vertebrates is the ratio of neurons utilized for information storage in the telencephalon (i.e., neocortex of mammals) and cerebellum versus the neurons in the brain stem and spinal cord, which are required to maintain the organism (see Figure 28). It is obvious that consciousness must scale with this ratio in vertebrates (Hebb 1968; Koch 2013; Morgan 1900). As for invertebrates, a similar segregation between the capacity to store information and to maintain the organism must exist. Just how segregated these two properties are amongst the ganglia of invertebrates needs clarification. Indeed, ants have a communication system based on pheromones with a throughput of 1.4 bits per second (Tehovnik et al. 2021), which is based on a 20-item pheromone alphabet (Hölldobler and Wilson 1990; McIndoo 1914).[1] It is unclear where this alphabet is stored, but some have suggested that information related to pheromone communication is housed separately from the general olfactory sense (Nishikawa et al. 2012).
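As a quick illustration of the storage-to-maintenance ratio referred to above, the following Python sketch uses the neuron counts cited from Herculano-Houzel (2009); the grouping into ‘storage’ versus ‘maintenance’ neurons simply follows the paragraph’s own partition.

```python
# Ratio of 'storage' neurons (neocortex + cerebellum) to 'maintenance'
# neurons (brain stem, spinal cord, and other structures), using the
# counts cited in the text (Herculano-Houzel 2009).

total       = 86e9
neocortex   = 16e9
cerebellum  = 69e9
maintenance = total - neocortex - cerebellum     # ~1e9 neurons

ratio = (neocortex + cerebellum) / maintenance
print(f"storage : maintenance ~ {ratio:.0f} : 1")                        # ~85 : 1
print(f"maintenance share of all neurons ~ {maintenance / total:.1%}")   # roughly 1%, as stated in the text
```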
Footnote:
[1] The bit rate is low because the olfactory system is slow acting, taking over a second to be engaged (McIndoo 1914).
Figure 28. The vertebrate brain is made up of the telencephalon (cerebrum that includes the hippocampus), the cerebellum, the optic tectum, and the olfactory bulb. Not labelled is the brain stem, and not shown is the spinal cord. The cerebellum in the lamprey and amphibian is small and therefore not marked; it sits on top of the brain stem. The telencephalon co-evolved with the cerebellum, since the two structures work in tandem for regulating sensation and movement and they are connected anatomically in all vertebrates (Cheng et al. 2014; Murakami 2017; Murray et al. 2017; Nieuwenhuys 1977; Xu et al. 2016). The sizes of the brains are not to scale.
Many people do not take into consideration the possibility of explaining consciousness using QM and its formalism. Why?
As mentioned, multicellular organisms came about during the Cambrian period some 500 million years ago (Bronfman et al. 2016; Cisek 2019). A notion that is common to evolutionary biologists is that as the mass of an organism increases, the energy consumed per unit mass decreases (whole-body metabolic rate scales as the ¾ power of body mass; Figure 25). This relationship is known as Kleiber’s law (DeLong et al. 2010; Kleiber 1947; Wells 2007), and it is related to the ratio between the surface area of an object (i.e., the body) and its volume and to the central regulation of tissue metabolism by nervous and hormonal mechanisms (Kleiber 1947).[1] For a large organism to thrive (i.e., to reach its reproductive age; Melville’s Moby Dick, a sperm whale, would have taken over 20 years of growth, Whitehead 2003)[2], the environment needs to remain relatively stable long enough for the largest organisms to mature; the dinosaurs of 64 million years ago went extinct when the environment underwent a radical change: an asteroid hit the earth, which caused an ecological maelstrom (Alvarez et al. 1979; Condie 2011; Kumar and Hedges 1998).[3] Hence, based on Kleiber’s law, a multicellular organism is more energy efficient per unit mass than a single-cell organism, and therefore evolution is biased in favor of promoting larger organisms, with the limitation that this requires a period of ecological stability to support an extended maturation process. Indeed, the larger the organism, the longer it takes to achieve full size, since adding new cells through mitosis takes time.
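To make the scaling argument concrete, here is a small Python sketch; the ¾ exponent comes from the text, while the example body masses and the normalization constant are illustrative assumptions. It also shows how the exponent can be recovered as the slope of a log-log plot, as noted in the caption of Figure 25.

```python
import math

# Kleiber's law sketch: whole-body metabolic rate B scales as M^(3/4),
# so energy used per unit mass (B/M) scales as M^(-1/4).
# The constant k and the example masses are illustrative assumptions.

def metabolic_rate(mass_kg, k=1.0, exponent=0.75):
    return k * mass_kg ** exponent

for mass in (0.02, 70.0, 40_000.0):          # roughly mouse, human, sperm whale
    print(f"M = {mass:>9.2f} kg  ->  relative B/M = {metabolic_rate(mass) / mass:.3f}")

# The exponent is the slope of the log-log plot (Figure 25):
m1, m2 = 0.02, 40_000.0
slope = math.log10(metabolic_rate(m2) / metabolic_rate(m1)) / math.log10(m2 / m1)
print(f"recovered exponent ~ {slope:.2f}")   # ~0.75
```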
Invertebrate and vertebrate longevity is related to the total number of cells that need to be added during development via mitosis after insemination (Figure 26).[4] For example, the full development of a fruit fly takes two weeks, and it is composed of ~ 5 million cells at adulthood (Li et al. 2022); the full development of a human takes ~ 18 years (over 450 times longer), and it is composed of 30 trillion cells at adulthood (Hatton et al. 2023). Therefore, longevity scales with the total number of body cells required at sexual maturity (Tyshkovskiy et al. 2023). For the fruit fly, the rate of cell addition after insemination is 2.5 x 10^6 cells per week, whereas for the human, the rate of cell addition after insemination is 31 x 10^9 cells per week—note the ten-thousand-fold acceleration for humans, which is a property of ‘compound’ replication.[5] [6] Of course, the replication process in multicellular organisms is highly regulated (Barnea and Pravosudov 2011; Tyshkovskiy et al. 2023), otherwise an organism would be terminated by the willy-nilly reproduction of cells [e.g., Burkitt’s lymphoma is an example of unconstrained replication, such that the tumor doubles in size every 2 to 3 days displacing the organs of the gut if not treated (Tehovnik 2017)]. It is noteworthy that tissue-gene expression is a better predictor of longevity than is body weight (Tyshkovskiy et al. 2023). Thus, longevity is genetically programmed as are the other attributes that distinguish different species, such as the differential number of neurons between species; the more neurons the greater the presumed information capacity of a brain (Figure 27).
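A short Python sketch of the cell-addition arithmetic above; the inputs are the figures quoted in the text, and the 972-week human figure is detailed in footnote [5] below.

```python
# Average rates of cell addition after insemination, using the figures
# quoted in the text: ~5e6 cells over ~2 weeks for the fruit fly, and
# ~30e12 cells over 972 weeks (36 prenatal + 936 postnatal) for the human.

fly_rate_per_week   = 5e6 / 2
human_rate_per_week = 30e12 / (36 + 18 * 52)

print(f"fruit fly: {fly_rate_per_week:.1e} cells/week")    # ~2.5e6
print(f"human:     {human_rate_per_week:.1e} cells/week")  # ~3.1e10
print(f"fold difference: ~{human_rate_per_week / fly_rate_per_week:,.0f}x")  # ~12,000x, i.e., on the order of ten-thousand-fold
```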
Vertebrates have existed since the Cambrian period, and their basic brain structure—the telencephalon (which includes the hippocampus), the cerebellum, the brain stem, and the spinal cord—has been preserved since this time. The lamprey, which is considered an ancient species with an evolutionary history of 450 million years (Grillner 2003), has all the component parts of a vertebrate brain (Figure 28). Birds and mammals, as we understand them today, largely came about after the extinction of the dinosaurs 64 million years ago, and they are known to have a greater brain-to-body size ratio than fishes, amphibians, and reptiles, including dinosaurs (Jerison 2009); note that 500 million years ago the dominant species lived mainly in water, since the land was not yet populated by plants (Darwin 1859). The augmentation of brain size in birds and mammals during the post-dinosaur period would have increased the overall energy consumption of an animal (Herculano-Houzel 2011), particularly as it relates to the telencephalon (see the enhanced relative size of the telencephalon as compared to other brain structures in birds and mammals, Figure 28), which may have been offset by a reduction in body size. Indeed, Kleiber’s law is operative in all animals, including those with an augmented telencephalon (Kleiber 1947; Kolokotrones et al. 2010; Moses et al. 2016).
As discussed, conscious operations are continuous during wakefulness and are sustained by the mammalian telencephalon, whose consumption of energy is 20 times higher per neuron than that of the neurons in the subcortex, and whose consumption is independent of whether an animal is mobile or immobile, a propensity that is not true of the neurons in the subcortex (Figure 7 of Chapter 4). It is unknown whether this fact about mammals generalizes to birds and other vertebrates (this needs immediate study—which could then be extended to the ganglia of invertebrates). Nevertheless, the telencephalon of birds, reptiles, amphibians, and fishes, albeit relatively smaller than the other brain regions, is like the telencephalon of mammals in that it combines and stores sensorimotor information. For example, songbirds store learned melodies in the telencephalon and, like humans, they utilize the projections from the telencephalon that directly innervate the neurons in the brain stem to vocalize the melodies (Aboitiz 2018; Goldman and Nottebohm 1983; Rochefort et al. 2007; Simonyan and Horwitz 2011).[7][8] The telencephalon of fishes contains neurons that are engaged in making sensorimotor associations, and they store that information for long-term use (Gómez et al. 2016; Murray et al. 2017; Rodríguez-Expósito et al. 2017; Zacks and Jablonka 2023).
The telencephalon (or the neocortex) supports consciousness evolutionarily by increasing the number of cortical neurons as a function of developmental duration, the number of years between insemination and sexual maturity (see Figure 29). To demonstrate the compounding nature of neocortical mitosis, the total number of neurons present in adulthood can be divided by developmental duration. From the smallest to the largest mammals (i.e., from the mouse to the killer whale) this value varies from 0.08 x 10^9 neurons per year to 2.2 x 10^9 neurons per year, a 28-fold difference while holding time constant. This difference is because the base (or principal) before a replication is small for the mouse and large for the killer whale. In the case of the killer whale, it is composed of a large number of cells requiring protracted environmental stability, while taking advantage of Kleiber’s law of energy efficiency. But the large number of neurons in the neocortex should facilitate an animal’s ability to solve problems related to survival (this will be discussed in a future chapter).
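A minimal sketch of the per-year comparison above; the two rates are the ones quoted in the text (adult neocortical neuron count divided by developmental duration), and no additional data are assumed.

```python
# Neocortical neurons added per year of development, smallest vs. largest
# mammal considered in the text (adult neuron count / developmental years).

mouse_rate_per_year        = 0.08e9
killer_whale_rate_per_year = 2.2e9

fold = killer_whale_rate_per_year / mouse_rate_per_year
print(f"fold difference ~ {fold:.0f}x")   # ~28x, as stated in the text
```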
For consciousness to prevail, one needs to appreciate the embodied brain, which requires never separating the brain from the body (Clark 1998; Gibson 1979; Tehovnik and Chen 2015; Varela et al. 1991) and which runs contrary to Dualism, a biologically-bankrupt philosophy that severs the brain from the body as originally conceptualized by René Descartes. Embodiment can be traced back 600 million years to our ancestor, the amoeba (Lahr et al. 2019), which is totally embodied (the organelles and genetic material controlling motivational states are enclosed by one outer membrane), and which existed well before multicellular organisms.
It has been suspected for over 100 years that even single-celled animals have some degree of consciousness, given that they must survive the elements by learning to find food and avoid predators (Hebb 1968; Koch 2013; Morgan 1900; Nakagaki et al. 2000; Saigusa et al. 2008), and this is despite their short lifespan of two days or so in the case of the amoeba, which reproduces asexually and whose genetic variability is assured by mutations caused by interactions with the outside world (Noble and Noble 2024)[9]. All animals have a living constitution or ‘state of mind’ that is modifiable over the duration of their short or long lives (Hebb 1968), and which can be transmitted to the next generation if the changes are stored in the sexual or asexual reproductive cells at the nuclear and extra-nuclear level. Be aware that an egg carries with it all the extra-nuclear material of a parent, material that has been modified by parental experience, since no cell is completely isolated from the outside world, including the reproductive cells (Noble and Noble 2024).
Now let us consider the comparative cognitive capacity (or comparative intelligence) between species. This should be evaluated according to the duration of a species’ evolutionary presence. Crocodiles have survived at least two extinctions (Platt et al. 2013), which translates into an earthly presence of at least 200 million years. Humans, on the other hand, with their tremendous capacity for discovery and innovation (but only in recent times), have been present for only a fraction of this time, under 0.5 million years (Kimura 2003); based on all the environmental damage produced by humans thus far (Rockström and Gaffney 2021), it remains unclear whether they will match the crocodile, evolutionarily. Some in the AI world believe that machines are being given the capacity for too much intelligence (e.g., Hinton 2024). Perhaps evolution has endowed humans with too much intelligence—and without upgrading their emotionality. Just open any copy of The Economist magazine and it will become clear that economic growth—at all costs—is a human obsession, which does not make us that different from the common ant, which also operates according to a growth-bust model of existence (Wilson 2012).
Summary.
1. Kleiber’s law indicates that as multicellular organisms get larger, the amount of energy utilized per unit weight (or per unit cell) diminishes. In a stable environment this propensity should encourage the evolution of larger organisms. Larger organisms, however, may not have enough time to reach sexual maturity before being terminated by an unstable environment, which was the case for the dinosaurs. An advantage of being a smaller organism (and thus having a short longevity) is that genes of a species can be altered rapidly, as experiments on fruit flies have illustrated (a species of choice by geneticists).
2. Consciousness is a property of mammals such that the neurons in the telencephalon exhibit a high and steady metabolic rate per neuron during wakefulness. The same needs to be verified for the telencephalon of other vertebrates.
3. The vertebrate telencephalon, which also includes the hippocampus, acts as a storage facility of learned information by altering the synapses during development and adulthood. The expression of the stored information through the motor system represents consciousness.
4. Going from the smallest to the largest mammals, the number of neurons in the telencephalon (i.e., the neocortex of mammals) exhibits an explosive addition of neurons. This addition requires protracted environmental stability, even though the large number of neurons should enhance an animal’s propensity to solve problems related to survival.
5. The embodied brain for the mediation of consciousness can be traced back to the amoeba.
6. The comparative intelligence of an animal should be assessed according to a species’ evolutionary longevity. A species that has overcome many mass extinctions, such as the crocodile, deserves to be called intelligent.
7. And yes, humans live in an egocentric universe, cognitively.
Footnotes:
[1] The optimal shape to reduce heat loss is a sphere, which has the lowest surface area to volume ratio, but most animals are not shaped like a sphere. And the least optimal shape is a flat plane with little volume. The surface area of humans is related to body mass by a 2/3 power (Kleiber 1947), and a slim person would have a higher power value than a stout person. And to hold heat in a body, an animal can grow body hair (as do dogs) or behaviorally alter the surface area of the body to decrease or increase heat transfer, and in the case of humans clothing can be used for this purpose. Regulating body temperature is part of an animal’s goal-directed behavior and is regulated by the hypothalamus (Mogenson 1977).
[2] Large animals might have fewer predators due to a size advantage, although a whale can be overcome by a well-organized group of humans (Melville 1851).
[3] On record, evolving multicellular organisms have endured five mass extinctions due to environmental collapse, and after each extinction it is presumed that evolution underwent an acceleration to fill in the ecological void created by the collapse (Condie 2011; Kumar and Hedges 1998). Homo sapiens are the result of such a collapse 64 million years ago.
[4] Note that some animals reproduce using asexual reproduction, but sexual reproduction is most common in the animal world for it enhances survivability by mixing the genes (Noble and Noble 2023).
[5] By the age of 18 there are 972 weeks of development: 36 weeks prenatally and 936 weeks postnatally. The rate calculation is based on a total number of cells at adulthood of 30 x 10^12, and this value was divided by 972 weeks to yield 31 x 10^9 cells per week.
[6] Something that distinguishes fishes, amphibians, and reptiles from birds and mammals is that the former classes typically continue to grow (defined as the growth of the vertebral column) for the duration of their lives (i.e., these animals have indeterminate growth, Hariharan et al. 2016), whereas birds and mammals grow to adulthood, after which they live out their life at a fixed size/length (Williams 2017); the vertebral column of kangaroos, however, continues to grow throughout life, albeit at a lower rate following adulthood. By having continuous growth, the sensors of the body must be under constant adjustment so that the produced movements over an animal’s lifespan are re-aligned with the sensors (this is why animals need frequent efference-copy updates, especially animals that are always growing: not surprisingly, all vertebrates have a cerebellum to recalibrate the relationship between the senses and the motor systems). The maximal growth of fishes, amphibians, and reptiles is restricted by predation, disease, or natural disaster, whereas the maximal longevity of birds and mammals is restricted by these factors.
[7] The direct innervation of the brain stem nuclei by the telencephalon assures that the telencephalon has maximal control over the muscles (Simonyan and Horwitz 2011; Vanderwolf 2007), which is a requirement for any motor system requiring precision.
[8] Frogs exhibited the earliest vocalization 250 million years ago and the larynx (which contains the vocal cords in humans) existed in the lung fish 400 million years ago, even though they may not have been used for vocalization (Simonyan and Horwitz 2011).
[9] An amoeba unlike a multicellular organism can engage in unconstrained reproduction, since each cell is an independent unit, whose numbers are controlled by the resources of the environment (Rogerson 1980).
Figure 25. Energy utilization versus mass of an organism (i.e., for mammals, Kolokotrones et al. 2010; Moses et al. 2016). The larger the organism, the less energy consumed per unit mass. Is this what drives evolutionary bigness? Cost: the bigger you are, the harder you fall during periods of resource depletion, which can lead to extinction. Note that the slope of a log-log plot yields the exponent of a power function (slope = log10(y2/y1) / log10(x2/x1)).
Figure 26. An animal’s full weight varies with longevity (cf. the red bars with the blue bars). Obtained from Tyshkovskiy et al. 2023.
Figure 27. Information is plotted as a function of the number of neurons. Two cases are considered, connected (exp > 1) and disconnected neurons (exp = 1). Brains are interconnected which enhances information capacity in terms of storage and transmission. Figure modified from Tehovnik and Chen (2015).
Figure 28. The vertebrate brain is made up of the telencephalon (cerebrum that includes the hippocampus), the cerebellum, the optic tectum, and the olfactory bulb. Not labelled is the brain stem, and not shown is the spinal cord. The cerebellum in the lamprey and amphibian is small and therefore not marked; it sits on top of the brain stem. The telencephalon co-evolved with the cerebellum, since the two structures work in tandem for regulating sensation and movement and they are connected anatomically in all vertebrates (Cheng et al. 2014; Murakami 2017; Murray et al. 2017; Nieuwenhuys 1977; Xu et al. 2016). The sizes of the brains are not to scale.
Figure 29. Number of neocortical neurons is plotted as a function of developmental duration in years. Data on the various mammals was obtained from the following papers: Herculano-Houzel 2011 (mouse, rat, baboon, monkey, human), Herculano-Houzel et al. 2014 (elephant), and Ridgway et al. 2019 (killer whale). The data are fitted to an exponential equation (dashed curve).
How do the brain’s objective, physical processes - such as neural activity and biochemical reactions - give rise to the deeply subjective and personal experiences we call consciousness? In other words, how does the brain, as a physical system, create or transform these electrical and chemical signals into subjective phenomena like thoughts, emotions, and sensations?
Artificial Intelligence (AI) was seen as the new revolution (and it still is) some years back. Nevertheless, recent discussions seem to present an aspect of AI that deviates from what humans anticipated, including potential “takeovers”.
According to a post by Professor Philip Goff (2019) on The Conversation, consciousness is unobservable. It is nearly impossible to discern someone’s feelings merely by looking at them, and since you cannot look inside their head to judge them either, we resort to making inferences. When it comes to unmeasurable parameters, correlation analysis dominates the discussion.
In the digital domain, about 90% of these inferences are data driven. But can these data always be right, and can they define consciousness (if it does arise)? Obviously not. Either way, should we be worried that AI will someday gain consciousness (to surprise humanity, as discussed in recent debates), especially when we rely almost entirely on data that are only partly understood from these “machines”?
If physics adjusted for the law of identity, could we exactly quantify the afterlife? How?
Maybe so:
1) On Physics:
Presentation: Critical Rationalist Physics
2) Deductive reasoning:
Conventional physics emphasizes experiments verifying objective reality, but both quantum mechanics (QM) and Hugh Everett, the originator of the multiverse hypothesis, suggest that there is no such thing as objective reality.
Regarding QM - if quantum superposition is taken to its logical extreme, everything in the universe would affect everything else. Regarding Everett - his idea of the universal wavefunction says the observed and observer are all mixed together. These two references mean an experimenter's consciousness can never avoid influencing (technically, biasing) an experiment.
Physicists are aware of these points about QM and Everett, but they seem to be unconsciously reverting to a classical view in which objective reality exists in all space-time, and not just in the limited perceptions of humans or animals. Our restricted senses (along with the limited technology and mathematics developed by humans to date) might view a quantum superposition in which everything, including consciousness, fills all space and time very differently. For example, instead of occupying the whole of spacetime, a subatomic particle could be interpreted as being in more than one place simultaneously (this is what quantum physics says today).
If the existence of our science were separate from the existence of the universe, there would indeed be an objective reality for scientific theories to investigate. Suppose the accepted concept of time needs an additional component: not being purely linear, but curvilinear. Future warping of space-time could modify that curve and form a circular time in which far-distant centuries and millennia could directly connect with the remote past.
If a civilization is sufficiently advanced (advanced beyond our comprehension), the universe's origin is potentially artificial - without referring to religious beliefs. When circular time entangles science and technology from the distant future (observers) with a created universe (the observed), it'd be perfectly acceptable for objective reality not to exist and for experimenters to unavoidably influence experiments.
This is because studying the universe would have taught the observers how to do the wonders they perform. One of those wonders would be creating the observed cosmos whose fine-tunings would once again ultimately produce observers who'd produce the requisite cosmos. The cyclical nature of space-time would be transferred from current cosmology's repeated Big Bangs and Big Crunches to observers' future science constantly producing the observed universe.
RESPECTFULLY, pan-dualism is more plausible than pan-deism. All entities are either unique or too different for perfect prediction. Moreover, humans may be bound by some rules (genetics, environment, circumstances, etc.), but without the fundamental choice to focus on life, human reason would be impossible. In addition, humans can lose all their cells yet survive and retain their individual identities. So, at least humans have some immaterial tracker (maybe souls). Pan-deism depends on the unlikely premise that a creator destroyed itself (thus, all of existence is dubiously the creator’s debris). We do not know who created us or how. Thus, pan-dualism has the most evidence while making the fewest assumptions.
A major goal of learning and consciousness is to automate behavior--i.e., to transition from ‘thinking slow’ to ‘thinking fast’ (Kahneman 2011)--so that when an organism is subjected to a specific context, an automatic response is executed with minimal participation from volitional circuits (i.e., in the neocortex). When one needs to enter a secure area, it is common to be confronted with a keypad upon which one must punch out the code to gain entry. At the beginning of learning the code, one is given a number, e.g., ‘3897’, which must be committed to declarative memory. After having entered the facility on numerous occasions, one no longer needs to remember the number, but just the spatial sequence of the finger presses. Thus, the code has been automated by the brain. In fact, often the number is no longer required, since the nervous system automatically punches out the number using implicit memory (something like never needing to recall the rules of grammar to write correct sentences).
So, how does the brain automate behavior? The first clue to this question comes from studies on express, saccadic eye movements (Schiller and Tehovnik 2015). Express saccades are eye movements generated briskly to single targets at latencies between 80 and 125 ms. In contrast, regular saccades are saccadic eye movements generated to a single or to multiple targets (as used in discrimination learning such as match-to-sample) whose latencies vary from 125 to 200 ms, or greater depending on task difficulty (see Figure 14). The behavioral context for the elicitation of express saccades is to have a gap between the termination of the fixation spot and the onset of a single punctate visual target (Fischer and Boch 1983). The distributions of express saccades and regular saccades are bimodal, suggesting that two very different neural processes are in play when these eye movements are being evoked. After carrying out lesions of different parts of the visual system (i.e., the lateral geniculate nucleus parvocellular, the lateral geniculate nucleus magnocellular, area V4, the middle temporal cortex, the frontal eye fields, the medial eye fields, or the superior colliculus), it was found that lesions of the superior colliculus abolished express saccades, whereas for all other lesion types the express saccades were spared. Thus, a posterior channel starting in V1 and passing through the superior colliculus mediates express saccades (Schiller and Tehovnik 2015). Furthermore, the minimal latency for express saccades (i.e., 80 ms) is accounted for by the summed signal latency between the retina and area V1 (i.e., 30 ms), the signal latency between area V1 and the superior colliculus (i.e., 25 ms), and the signal latency between the superior colliculus, the saccade generator, and the ocular muscles (i.e., 25 ms, Tehovnik et al. 2003)[1]. What this indicates is that express saccade behavior bypasses the frontal cortex and the posterior association areas of the neocortex (i.e., V4 and the medial temporal cortex), and is transmitted directly from V1 to the brain stem[2].
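The 80-ms floor for express saccades can be checked by summing the conduction delays quoted above; here is a minimal Python sketch (all values are the ones given in the text).

```python
# Latency budget for an express saccade, using the delays quoted in the
# text (Tehovnik et al. 2003): retina -> V1, V1 -> superior colliculus,
# and superior colliculus -> saccade generator -> ocular muscles.

delays_ms = {
    "retina -> V1": 30,
    "V1 -> superior colliculus": 25,
    "superior colliculus -> saccade generator -> muscles": 25,
}

total_ms = sum(delays_ms.values())
print(f"summed pathway latency: {total_ms} ms")                     # 80 ms, the express-saccade minimum
print(f"gap to regular saccades (>= 125 ms): {125 - total_ms} ms")  # 45 ms, as discussed below
```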
For oculomotor control, parallel pathways occur between (1) the posterior and the anterior regions of the neocortex (i.e., including, respectively, V1 and the frontal eye fields[3]) and (2) the brain stem ocular generator, which mediates ocular responses in mammals (Figure 15, Tehovnik et al. 2021). The idea that parallel pathways between the neocortex and brain stem mediate specific responses, such as the V1-collicular pathway subserving ocular automaticity, is not new. Ojemann (1983, 1991) has proposed that a multitude of parallel pathways subserves language, since once a language is mastered, it becomes a highly automated act, and electrical perturbation of a focal neocortical site affects a specific component of a language, but not an entire language string, as long as the remaining parallel pathways are intact. Global aphasia occurs when all the parallel pathways of Wernicke’s and Broca’s areas are damaged (Kimura 1993; Ojemann 1991; Penfield and Roberts 1966).
Why is it that express saccades and regular saccades alternate across trials in a quasi-random order (Schiller and Tehovnik 2015)? Lisberger (1984) has studied latency oscillations across trials for the vestibuloocular reflex by measuring the onset of an eye movement after the beginning of a head displacement. He found latency values as low as 12 ms and as high as 20 ms (Lisberger 1984; Miles and Lisberger 1981). At a 12-ms latency, the signal would need to bypass the cerebellar cortex and be transmitted from the vestibular nerve through the vestibular nucleus (which is a cerebellar nucleus) to the abducens (oculomotor) nucleus to contract the eye muscles within 12 ms (Lisberger 1984). At a 20-ms latency, the signal would pass from the vestibular nerve to the cerebellar cortex by way of the granular-Purkinje synapses and then to the vestibular and abducens nuclei to arrive at the muscles within 20 ms. The difference between the fast and slow pathway is 8 ms, and it is the additional 8 ms through the cerebellar cortex that allows for any corrections to be made to the efference-copy code[4].
In the case of regular versus express saccades, the minimal latency difference is 45 ms (i.e., 125 ms – 80 ms = 45 ms, Schiller and Tehovnik 2015). So, what could explain this difference? Regular saccades utilize both posterior and anterior channels in the neocortex, for paired lesions of the superior colliculus and the frontal eye fields are required to abolish all visually guided saccades (Schiller et al. 1980). Perhaps, the longer latency of regular saccades as compared to express saccades is due to transmission by way of the frontal eye fields for regular saccades, as well as having the signal sent through the cerebellar cortex via the pontine nuclei and inferior olive to update any changes to the efference-copy code. Express saccades, on the other hand, utilize a direct pathway between V1 and the saccade generator, with access to the cerebellar nuclei (i.e., the fastigial nuclei[5], Noda et al. 1991; Ohtsuka and Noda 1991) for completion of a response at a latency approaching 80 ms—a latency that is too short for frontal lobe/temporal lobe participation and the conscious evaluation of the stimulus (at least 125 ms is required for a frontal/temporal lobe signal to arrive in V1, Ito, Maldonado et al. 2023)[6]. Utilizing the fast pathway would not permit any changes to the efference-copy code and furthermore there would be no time for the conscious evaluation of the stimulus conditions. This general scheme for slow versus fast ‘thinking’ (Kahneman 2011) can be applied to any behavior, as the behavior changes from a state of learning and consciousness to a state of automaticity and unconsciousness[7].
While thinking slow, the human cerebellum can update as many as 50,000 independent efference-copy representations (Heck and Sultan 2002; Sultan and Heck 2003). And we know that during task execution the entire cerebellar cortex is engaged including circuits not necessary for task execution (Hasanbegović 2024). This global reach assures that all aspects of a behavior are perfected through continuous sensory feedback; hence, evolution left nothing to chance.
The number of neurons dedicated to a behavioral response decreases as a function of automaticity. This translates into a reduction in energy expenditure per response for the neurons as well as for the muscles[8]. The first evidence for this idea came from the work of Chen and Wise (1995ab) on their studies of neurons in the medial and frontal eye fields of primates (see Figure 15, monkey). Monkeys were trained on a trial-and-error association task, whereby an animal fixated a central spot on a TV monitor and arbitrarily associated a visual object with a specific saccade direction by evoking a saccadic eye movement to one of four potential targets (up, down, left, or right) to get a reward (see Figure 16, left-top panel, the inset). An association was learned to over 95% correctness within 20 trials; unit recordings were made of the neurons in the medial and frontal eye fields during this time. As the performance of an animal improved on a novel object-saccade association, the neurons exhibited either an increase in unit spike rate with the increase in the proportion of correct trials (Figure 16, novel, top panel), or an increase followed by a decrease in unit spike rate as the proportion of correct trials increased (Figures 16, novel, bottom panel, and Figure 17, novel, top panel). When the neurons were subjected to a familiar association, the discharge often assumed the same level of firing achieved following the asymptotic performance on novel associations: namely, high discharge and modulated (Figure 16, familiar, top panel) or low discharge and unmodulated (Figure 16, familiar, bottom panel; Figure 17, familiar, top panel). Accordingly, many neurons studied exhibited a decline in activity when subjected to familiar objects[9]. Although 33% of the neurons (33 of 101 classified as having learning-related activity) exhibited a decline and a de-modulation in activity during the presentation of a familiar object (e.g., Figure 17, familiar, top), this proportion is likely an underestimation, since many such neurons may have been missed given that unit recording is biased in favor of identifying responsive neurons. For example, a neuron that exhibited a burst of activity on just one trial could have been missed due to data averaging of adjacent trials, using a 3-point averaging method (Chen and Wise 1995ab).
For cells that had the properties shown in Figure 16 (novel, top panel) for novel objects—i.e., showing an increase in activity with an increase in task performance—there was no delay in trials between the change in neural firing and the change in performance, as indicated by the downward arrow in the figure representing ‘0’ trials between the curves; this suggests that these cells were tracking the performance. Also, there was a group of cells that exhibited an increase and a decrease in unit firing such that their response to novel and familiar objects declined with the number of trials as well (Figure 16, bottom panels, novel and familiar). This indicates that the decline in activity was being replayed when the object became familiar. Finally, for neurons that exhibited an increase and decrease in spike activity over trials, the declining portion of the neural response (at 50% decline) always followed the increase in task performance by more than half a dozen trials, as indicated by the gap between the downward arrows of Figure 16 (novel, bottom) and Figure 17 (novel, top), illustrating that these neurons anticipated peak performance. Some have suggested that the short-term modulation in the frontal lobes is channeled to the caudate nucleus for long-term storage (Hikosaka et al. 2014; Kim and Hikosaka 2013). More will be said about this in the next chapter.
Imaging experiments (using fMRI) have shown that as one learns a new task, the number of neurons modulated by the task declines. Human subjects were required to perform a novel association task (associate novel visual images with a particular finger response) and to perform a familiar association task (associate familiar visual images with a particular finger response) (Toni et al. 2001). It was found that as compared to the novel association task, the familiar association task activated less tissue in the following regions: the medial frontal cortex and anterior cingulate, the prefrontal cortex, the orbital cortex, the temporal cortex and hippocampal formation, and the caudate nucleus. Furthermore, the over-learning of a finger sequencing task by human subjects from training day 1 to training day 28 was associated with a decline in fMRI activity in the following subcortical areas: the substantia nigra, the caudate nucleus, and the cerebellar cortex and dentate nucleus (Lehericy et al. 2005). Also, there was a decrease in activity in the prefrontal and premotor cortices, as well as in the anterior cingulate.
Finally, it is well-known that a primary language as compared to a secondary language is more resistant to the effects of brain damage of the neocortex and cerebellum, and a primary language, unlike a secondary language, is more difficult to interrupt by focal electrical stimulation of the neocortex (Mariën et al. 2017; Ojemann 1983, 1991; Penfield and Roberts 1966). Accordingly, the more consolidated a behavior, the fewer essential neurons dedicated to that behavior. Once a behavior is automated, there is no need to recall the details: e.g., punching out a code on a keypad no longer requires an explicit recollection of the numbers. This is why a good scientist is also a good record keeper, which further minimizes the amount of information stored in the brain (Clark 1998). By freeing up neural space, the brain is free to learn about and be conscious of new things (Hebb 1949, 1968).
Summary:
1. Automaticity is mediated by parallel channels originating from the neocortex and passing to the motor generators in the brain stem; behaviors triggered by this process are context dependent and established through learning and consciousness.
2. Express saccades are an example of an automated response that depends on a pathway passing through V1 and the superior colliculus to access the saccade generator in the brain stem. The context for triggering this behavior is a single visual target presented with a gap between the termination of the fixation spot and the presentation of the target.
3. The rhythmic alternation between express and non-express behavior across trials indicates that express behavior bypasses the cerebellar cortex, whereas non-express behavior utilizes the cerebellar cortex to adjust the efference-copy code.
4. Express saccades or express fixations are too short in duration (< 125 ms) for a target to be consciously identified. It takes at least 125 ms for a signal to be transmitted between the frontal/temporal lobes and area V1 to facilitate identification.
5. Automaticity reduces the number of neurons participating in the execution of a behavioral response; this frees up central nervous system neurons for new learning and consciousness.
Footnotes:
[1] The long delay of 25 ms between V1 and the superior colliculus is partly due to the tonic inhibition of the colliculus by the substantia nigra reticulata, which originates from the frontal cortex (Schiller and Tehovnik 2015).
[2] Cooling area V1 of monkeys disables the deepest layers of the superior colliculus, thereby making it impossible for signals to be transmitted between V1 and the saccade generator in the brain stem (see figure 15-11 of Schiller and Tehovnik 2015).
[3] In rodents, the frontal eye field homologue is the anteromedial cortex, and the neurons in this region elicit ocular responses using eye and head movements (Tehovnik et al. 2021). In primates, the frontal eye fields control eye movements independently of head movements hence the name ‘frontal eye field’ (Chen and Tehovnik 2007).
[4] These short latencies are for highly automated vestibular responses. Astronauts returning from space have severe vestibular (and other) problems, just as full adaptation to zero-G conditions takes about a week upon reaching orbit (Carriot et al. 2021; Demontis et al. 2017; Lawson et al. 2016). The latencies would be expected to far surpass 20 ms, since vestibular centers of the neocortex (to engage learning and consciousness) would now be recruited in the adaptation process (Gogolla 2017; Guldin and Grüsser 1998; Kahane, Berthoz et al. 2003). Patients suffering from vestibular agnosia would be unaware of the adaptation process experienced by astronauts (Calzolari et al. 2020; Hadi et al. 2022).
[5] Monkey fastigial neurons begin to fire 7.7 ms before the execution of a saccadic eye movement (Fuchs and Straube 1993). This nucleus is two synapses away from the ocular muscles.
[6] An unfamiliar object presented during an express fixation (i.e., a fixation of less than 125 ms; fixations between electrically evoked staircase saccades from the superior colliculus last about 90 ms, Schiller and Tehovnik 2015) should fail to be identified consciously by a primate; on the other hand, the identification of a familiar object during an express fixation will only occur via ‘subconscious’ pathways, namely, pathways at and below the superior colliculus/pretectum and the cerebellum (see: De Haan et al. 2020; Tehovnik et al. 2021).
[7] The conscious and unconscious states can never be totally independent, since the neocortex constantly monitors the behavior of an animal looking for ways to optimize a response in terms of accuracy and latency (Schiller and Tehovnik 2015), and this interaction explains the variability of response latency across a succession of trials.
[8] Lots of aimless movements are generated when learning a new task (Skinner 1938), and when building knowledge, one must dissociate the nonsense from facts to better solve problems. This initially takes energy but in time automaticity saves energy.
[9] When we (Edward J. Tehovnik and Peter H. Schiller) first reviewed this result for publication, we were mystified by the decline of neural responsivity with object familiarity, even though we accepted the paper based on its behavioral sophistication and the challenges of recording from such a large number of neurons (i.e., 476) using a single electrode.
Figure 14. (A) The bimodal distribution of express saccades and regular saccades made to a single target by a rhesus monkey. (B) Before and after a unilateral lesion of the superior colliculus for saccades generated to a target located contralateral to the lesion. (C) Before and after a unilateral lesion of the frontal and medial eye fields for saccades generated to a target located contralateral to the lesion. Data from figure 15-12 of Schiller and Tehovnik (2015).
Figure 15. Parallel oculomotor pathways in the monkey and the mouse. Posterior regions of the neocortex innervate the brain stem oculomotor generator by way of the superior colliculus, and anterior regions of the neocortex innervate the brain stem oculomotor generator directly. For the monkey the following regions are defined: V1, V2, V3, V4, LIP (lateral intraparietal area), MT (middle temporal cortex), MST (medial superior temporal cortex), sts (superior temporal sulcus), IT (inferotemporal cortex), Cs (central sulcus), M1, M2, FEF (frontal eye field), MEF (medial eye field), OF (olfactory bulb), SC (superior colliculus), and brain stem, which houses the ocular generator. For the mouse: V1, PM (area posteromedial), AM (area anteromedial), A (area anterior), RL (area rostrolateral), AL (area anterolateral), LM (area lateromedial), LI (area lateral intermediate), PR (area postrhinal), P (area posterior), M1, M2, AMC (anteromedial cortex), OB (olfactory bulb), SC (superior colliculus), and brain stem containing the ocular generator. The posterior neocortex mediates ‘what’ functions, and the superior colliculus mediates ‘where’ functions.
Figure 16. Performance (percent correct) is plotted (solid black curve) as a function of the number of correct trials on a trial-and-error object-saccade-direction association task. A monkey was required to fixate a spot on a monitor for 0.6 seconds, which was followed by a 0.6-second presentation of an object at the fixation location. Afterwards, there was an imposed 2-3 second delay, followed by a trigger signal to generate a response to one of the four target locations to obtain a juice reward; the termination of the fixation spot was the trigger signal (see the inset in the top-right panel: OB represents the object and the four squares indicate the target locations of the task; Figure 17, bottom, summarizes the events of the task). Chance performance was 25% correctness, and the maximal performance was always greater than 95% correctness, established within 20 correct trials. The performance shown is the aggregate performance. In each panel, the normalized (aggregate) unit response is represented by a dashed line. The representations are based on figures 10 and 11 of Chen and Wise (1995a) for the medial eye field, and the neurons were modulated by learning novel object-saccade associations (N = 101 of 476 neurons classified). Some cells modulated by learning were also found in the frontal eye fields (N = 14 of 221 neurons classified, Chen and Wise 1995b). In the lower right panel, the familiar objects induced a decline in the neural response over the 20 trials. The illustrations are based on data from figures 11 and 12 of Chen and Wise (1995a).
Figure 17. Performance (percent correct) is plotted (solid black curve) as a function of number of correct trials on a trial-and-error object-saccade-direction association task carried out by a monkey. The dashed curves represent normalized aggregate unit responses. The inset in the right panel shows the task. For other details see the caption of figure 16. The bottom panel summarizes the events of the task. The illustrations are based on data from figures 3C, 4C, 5C, and 10D of Chen and Wise (1995a).
How is the Penrose-Hameroff theory crucial to answering this question?
Although the neurons of the hippocampal formation have been studied in detail within the context of information storage, a better way of thinking about this subcortical region is that it is a pathway that transmits information between the outside world and the neocortex for long-term storage; this information, once consolidated in the neocortex, can be retrieved and combined with the real-time sensory information of an animal to execute optimal behavioral responses (Clark et al. 2002; Marr 1971; Mölle and Born 2009; Qin et al. 1997; Squire 1992, 2009). In adult humans, the volume of the grey matter (bilaterally) of the hippocampus ranges from ~6.0 to 7.0 cm³ and the volume of the grey matter of the neocortex ranges from ~600 to 860 cm³, a roughly 100-fold difference (Suzuki et al. 2005). As well, the hippocampus has 0.1 billion neurons and the neocortex has 16 billion neurons, a 160-fold difference (Andrade-Moraes et al. 2013; Herculano-Houzel 2009; Simić et al. 1996). These differences indicate that evolution has devoted much less neural space to the consolidation and retrieval of information by the hippocampus than to the long-term storage of information within the neocortex. A comparable difference is expected for other vertebrates (Footnote 1, Murray and Wise 2011; Murray et al. 2017), and this difference has likely existed since the Cambrian explosion half a billion years ago, an event that included the emergence of fish (Ovsepian and Vesselkin 2014), at a time when the ‘continents’ presumably had little terrestrial life.
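As a quick back-of-the-envelope check of these ratios (using mid-range values from the figures quoted above):

```python
# Mid-range figures quoted above (approximate values from the cited studies).
hippocampus_cm3, neocortex_cm3 = 6.5, 730.0    # grey-matter volumes
hippocampus_neurons, neocortex_neurons = 0.1e9, 16e9

print(f"volume ratio : {neocortex_cm3 / hippocampus_cm3:.0f}-fold")          # ~112-fold
print(f"neuron ratio : {neocortex_neurons / hippocampus_neurons:.0f}-fold")  # 160-fold
```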
Before the advent of GPS, taxi drivers who were required to memorize the road maps of a large metropolitan area, e.g., London, were found to have an enlarged grey-matter volume of the hippocampal formation (Maguire et al. 2000, 2009). The hippocampus transfers declarative information (e.g., sensory impressions, visual objects, read or written text, etc.) over a period of weeks and months of training before finalizing the consolidation process in the neocortex (Kim, Thompson et al. 1995; Maviel, Bontempi et al. 2004; Marr 1971; Squire 1992; Takahara et al. 2003; Wang, Frankland et al. 2009). Also, the extent of retrograde amnesia following hippocampal damage varies as a function of how much tissue is removed, such that removal of the entire hippocampus along with the temporal lobes can produce a 40-to-50-year retrograde amnesia in humans (Squire, Knowlton et al. 2001). The idea that the hippocampus mediates the consolidation of declarative information is well-accepted, and it is understood that part of this process occurs during sleep and during quiet immobility (Berger and Thompson 1978; Clark et al. 2009; Corkin 2002; Chrobak and Buzsaki 1994, 1996; Girardeau et al. 2009; Girardeau and Zugaro 2011, 2014; Hoffman and Berry 2009; Kim et al. 1995; Kudrimoti and McNaughton 1999; Logothetis et al. 2012; Marr 1971; Maviel et al. 2004; O’Keefe and Nadel 1978; Ólafsdóttir et al. 2017; Pavlides and Winson 1989; Rolls 2004; Roy et al. 2017; Schacter et al. 2008; Scoville and Milner 1957; Siapas and Wilson 1998; Squire and Knowlton 2000; van Vugt et al. 2018; Wilson and McNaughton 1994). During both slow-wave sleep and REM sleep, learned events are replayed in sequence, such that during slow-wave sleep they are replayed in compressed time and during REM sleep they are replayed in real-time (Boyce et al. 2016; Girardeau et al. 2009; Louie and Wilson 2001; Wilson and McNaughton 1994), and this memory replay occurs throughout the neocortex (of humans, Dickey et al. 2022; Huber, Tononi et al. 2004), thereby globalizing the stored information. And not only is declarative information consolidated during sleep (and quiet wakefulness), but motor routines typically associated with cerebellar function are also consolidated (Schiller, Phelps et al. 2010; Søgård, Monoach et al. 2024; Vorster and Born 2015). This should not be surprising since, under normal circumstances, sensation and movement occur together (Schiller and Tehovnik 2015). During learning (which occurs during wakefulness), the neural activity of the hippocampus, neocortex, and cerebellum is rhythmically coordinated in animals, including humans (i.e., theta activity is synchronized but not always in phase between the regions, Bush, Burgess et al. 2017; Hoffman and Berry 2009; Wikgren et al. 2010). During periods of memory replay, the hippocampus and neocortex of primates are very active, while the cerebellum is suppressed (Logothetis et al. 2012). Finally, the genes that mediate neural plasticity for myelination and synaptogenesis are turned on during sleep, but this activity occurs in both the neocortex and the cerebellum (as illustrated in rodents, Cirelli, Tononi et al. 2004). A schematic of the hippocampal formation and its connections to the neocortex and subcortex is summarized in Figure 1.
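A minimal sketch of what time-compressed versus real-time replay means for a stored sequence (the event times and the ~20x compression factor are illustrative assumptions, not measurements from the cited studies):

```python
# Times (in seconds) at which an ordered sequence of events was experienced while awake.
awake_sequence = [0.0, 1.5, 3.0, 4.5, 6.0]

def replay(times, compression):
    """Replay the same ordered sequence, compressed in time by `compression`."""
    start = times[0]
    return [round((t - start) / compression, 3) for t in times]

# Slow-wave-sleep-like replay: same order, ~20x faster (illustrative factor).
print("SWS replay (s):", replay(awake_sequence, compression=20))
# REM-like replay: same order, roughly in real-time.
print("REM replay (s):", replay(awake_sequence, compression=1))
```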
So, just how does the hippocampus mediate consciousness? Patient HM, who had severe damage of his hippocampal formation, was conscious in terms of being able to perceive the outside world in its entirety using all his senses, and in being able to engage in continuous dialogue with another person (Corkin 2002). But when he was required to narrate his childhood story, he at best delivered incomplete fragments of information, as though he had just a ‘photo-album’ impression of his parents. This conclusion has now been bolstered by observations of many other patients with hippocampal damage. Hassabis et al. (2007b) had hippocampal patients construct an imagined experience as triggered by a visual cue. For example, when asked to imagine being on a beach, patient PO3 said, “Really, all I can see is the colour of the blue sky and the white sand, the rest of it, the sounds and things, obviously I’m just hearing.” By comparison, a hippocampal-intact individual said, “It’s very hot and the sun is beating down on me. … I can hear the sounds of small wavelets lapping on the beach. … Behind me is a row of palm trees.” This difference is reminiscent of the types of experiences evoked by electrical stimulation of the temporal lobes in human subjects, whereby the report is also very fragmented, devoid of detail and, most importantly, always the same between different bouts of stimulation (Penfield 1975). Moreover, when subjects narrate stories, the hippocampal formation becomes very active, as verified with fMRI, suggesting that the hippocampus is engaged in the retrieval of information, namely, in the spatial reconstruction of scenes for storytelling (Hassabis et al. 2007a).
Much has been made by brain-computer interface enthusiasts about hooking up two neocortical hemispheres for the purpose of communicating information between two individuals (Pais-Vieira, Nicolelis et al. 2013; but also see Tehovnik, Teixeira-e-Silva 2014). What is clear is that, for such communication to be consciously coherent rather than fragmented, the enthusiasts would first need to pass all the information through a hippocampal homologue to string together the information that is stored willy-nilly in each person’s neocortex. In fact, attempts have been made to produce a hippocampal interface in both rodents and primates that could serve such a function if extended to long-term information storage (see: Berger, Deadwyler et al. 2011; Deadwyler, Berger et al. 2016).
Summary:
1. The hippocampal formation is a pathway that consolidates and retrieves declarative information in the neocortex by linking the sensory world to the cognitive world as an animal learns.
2. The consolidation process occurs during sleep and during alert immobility.
3. Declarative information is consolidated at the same time as is motor information, which is consistent with the idea that sensation and movement occur together and therefore should not be partitioned.
4. The hippocampal formation’s role in consciousness is to string together the information housed in the neocortex, so that it can be communicated (in logical order) to the outside world.
5. Brain computer interfaces of the future will need to string together the information that is scattered through the neocortex, before it can be meaningfully interpreted by a receiver.
Footnote 1: In nonmammalian vertebrates, such as fish, amphibians, reptiles, and birds, the telencephalon contains a neocortical homologue and the hippocampus (Murray et al. 2017).
Figure 1: (A) The innervation pattern (external loops) of the hippocampal formation (of one side) is illustrated. The base of the hippocampus is composed of the CA1 pyramidal fibres and the subiculum. This region sends projections to the neocortex via the cingulate cortex and to the basal forebrain via the septum. From the septum, fibres are sent toward the prefrontal cortex, basal forebrain, and brain stem. The source of the hippocampal information originates from the neocortex, basal forebrain, and the brain stem carrying sensory (including interoceptive) information. (B) The internal loop of the hippocampal formation is composed of the entorhinal cortex (ER), the dentate gyrus (DG), CA3 pyramidal fibres, CA1 pyramidal fibres, and the subiculum (Sub). Afferent input and pyramidal output are indicated.
In human psychology, time is a conscious experience—a construct reflecting the sequence of existence and events. In cosmology and physical sciences, time is often defined as the indefinite, continuous progression of existence and events in a uniform and irreversible succession, extending from the past, through the present, and into the future. This progression is conceptualized as a fourth dimension that exists above the three spatial dimensions.
Time is fundamentally a measurement to quantify changes in material reality. The SI unit of time, the second, is defined by measuring the electronic transition frequency of caesium atoms. Time is also recognized as one of the seven fundamental physical quantities in both the International System of Units (SI) and the International System of Quantities.
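Since 1967 (and in its current formulation since 2019), the SI second is fixed by taking the caesium-133 ground-state hyperfine transition frequency to be exactly 9,192,631,770 Hz; a clock is then, in effect, a cycle counter. A minimal sketch:

```python
# The SI second is defined by fixing the caesium-133 hyperfine transition
# frequency at exactly 9,192,631,770 Hz.
CS_HZ = 9_192_631_770

def elapsed_seconds(cycles_counted: int) -> float:
    """Convert a count of caesium oscillation cycles into elapsed SI seconds."""
    return cycles_counted / CS_HZ

# One minute of counting on an idealized (unperturbed) caesium standard:
print(elapsed_seconds(60 * CS_HZ))  # -> 60.0
```

Real standards are, of course, perturbed by their environment (temperature, fields, gravity), which is why the corrections discussed below become necessary.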
In physics, time is commonly defined by its measurement—essentially, "what a clock reads."
This description suggests that time, in its conventional understanding across various scientific disciplines and human experience, is an abstract concept, not a real, tangible entity. While time provides a framework for understanding the succession of events, it does not have a direct physical existence as space does in three dimensions. Time is often viewed as a hyper-dimensional abstraction—imperceptible and unreachable beyond the three-dimensional spatial realm.
However, relativity challenges this interpretation by treating time as a real entity—integrated with space to form a four-dimensional space-time continuum where time becomes subject to physical modifications, such as time dilation. This relativistic concept implies that time is not only concrete but also malleable under the influence of velocity and gravity, leading to discrepancies with other scientific interpretations that consider time an abstract or imaginary concept.
One of the main contentions is that time dilation, a cornerstone of relativity, effectively violates the standardization of time by presenting it as something dilatable, thereby questioning the uniformity and constancy of time itself. The traditional time scale based on a 360-degree cycle—representing a consistent progression—is disrupted by the relativistic notion of time dilation, which converts abstract time into something perceived as "real" or "natural." This treatment of time also seems to ignore the conscious human experience, which understands time as a subjective, psychological construct.
Furthermore, if time is not directly reachable—being an abstract hyper-dimensional concept—what then is the "time" that a clock measures? Clocks are designed to provide a standardized approximation of cosmic time through calibrated frequency counts, such as the electronic transitions of caesium atoms. However, the physical manifestation of time in clocks is inherently subject to distortions, primarily due to gravitational effects. Gravity affects mass and energy, altering the oscillation rates of clocks and resulting in time distortions. Consequently, even the most accurate atomic clocks require periodic adjustments to compensate for these external influences.
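To make the size of such clock corrections concrete, here is a rough weak-field estimate for a GPS-like satellite clock; the orbital parameters are typical textbook values, not figures taken from this discussion:

```python
# Rough estimate of relativistic rate offsets for a GPS-like satellite clock.
# Weak-field approximations: gravitational term GM/c^2 * (1/R_earth - 1/r_orbit),
# velocity term v^2 / (2 c^2). Parameters are typical published values.
C = 299_792_458.0          # speed of light, m/s
GM = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6          # mean Earth radius, m
R_ORBIT = 2.6561e7         # GPS orbital radius (~20,200 km altitude), m

grav = GM / C**2 * (1.0 / R_EARTH - 1.0 / R_ORBIT)   # satellite clock runs fast
v = (GM / R_ORBIT) ** 0.5                            # circular orbital speed
vel = v**2 / (2.0 * C**2)                            # satellite clock runs slow

day = 86400.0
print(f"gravitational: +{grav * day * 1e6:.1f} microseconds/day")         # ~ +45.7
print(f"velocity     : -{vel * day * 1e6:.1f} microseconds/day")          # ~ -7.2
print(f"net          : {(grav - vel) * day * 1e6:.1f} microseconds/day")  # ~ +38.5
```

The resulting tens of microseconds per day is the order of offset that satellite clock designs deliberately compensate for before launch.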
The discrepancy between the "real time" measured by clocks and the "conceptual time" of cosmic progression raises further questions about the nature of time. Clocks, intended to represent a uniform progression of time, must contend with gravitational influences that disrupt this uniformity, necessitating ongoing corrections. This challenges the idea that time is a tangible, concrete entity and supports the view that it remains fundamentally an abstract concept—a conceptual framework through which we interpret the order of existence and events.
In short, while relativistic physics proposes that time is a real entity susceptible to physical modifications like time dilation, this interpretation remains contentious when viewed through the lens of broader scientific understanding. Time appears more consistent with an abstract or imaginary concept, a near-approximate representation that is susceptible to external influences, yet ultimately remains beyond the realm of tangible existence.
It is basically a matter of whether we move through space or the world moves around us. Philosophical matters, maybe. Thanks
A lesson from Terry Laughlin on training to be an Olympic swimmer:
“The surest sign of failure (in swimming), Terry Laughlin reckoned, was bubbles, though he found the froth created by swimmers a useful guide when overtaking them.” When learning to swim, he surmised that “the starting point is unconscious incompetence—when a swimmer does not even realise what is amiss. Next comes conscious incompetence, when you spot what is wrong and try to stop it. Then comes conscious competence, when you do the right thing but only with effort, and finally unconscious competence: the equivalent of automatic pilot.” (Obituary: Terry Laughlin, The Economist, Nov 18, 2017)
Clearly, Terry Laughlin as a swim coach appreciated the importance of transitioning from Consciousness to Unconsciousness when training athletes, understanding that the two states are intimately connected, as are the neocortex and cerebellum for controlling the mind and the body, which together contain over 95% of the neurons in the brain (Tehovnik, Hasanbegović, Chen 2024).
On Terry Laughlin: an American swim coach who trained 24 national champions and authored many books on the psychophysics of swimming, especially for those interested in triathlons (see: Laughlin, T., 2004. Triathlon Swimming Made Easy: The Total Immersion Way for Anyone to Master Open-Water Swimming. Total Immersion Inc., USA).
Since the time of William James (1890), consciousness has been described as occurring in a stream. In 2013, Chomsky reminded an audience that consciousness is continuous and that it never seems to stop during wakefulness (Chomsky 2013). In all his years of giving lectures on cognition and language, he has never provided an explanation as to why this might be so. When declarative information is consolidated via the hippocampus in the neocortex, the neurons in the hippocampus transfer the information according to a stream or a replay of the same sequence that occurred during learning (Boyce et al. 2016; Girardeau et al. 2009; Louie and Wilson 2001; Wilson and McNaughton 1994). Once stored, the declarative information in the association areas of the neocortex is partitioned according to faces, places, and other sensory fragments, all embedded within the context at the time of learning (Corkin 2002; Brecht and Freiwald 2012; Bruce et al. 1981; Freiwald and Tsao 2010; Lu and Golomb 2023; Ojemann 1991; Penfield and Roberts 1966; Pereira, Fedorenko et al. 2018; Rolls 2004; Schwarzlose, Kanwisher et al. 2005; Schwiedrzik, Freiwald et al. 2015; Scoville and Milner 1957; Squire et al. 2001). The order of streams of consciousness—which can lead to the execution of behaviors in a stream—is determined by the order imposed on the declarative information at the time of learning. Anyone who has prepared a university lecture will agree with this statement. And every time you deliver the lecture, it will be modified according to feedback from the students to enhance communication. In short, the minute details of a stream of consciousness are updated continuously, but the basic structure of the lecture will remain the same.
We would suggest that as one learns a new sequence of declarative information, declarative-conscious units (neurons) are activated in the order of memorization by the neocortex. Each declarative-conscious unit that is active during a stream of consciousness is connected to a global neocortical network that contains all the information pertaining to a stream. Indeed, every learned language (e.g., English, Portuguese, Hebrew, and so on) remains in a separate neocortical network (Ojemann 1983, 1991). If we could resurrect Wilder Penfield to continue his electrical stimulation experiments on the human neocortex (Penfield 1975; Penfield and Rasmussen 1952), we would have him stimulate a string of declarative-conscious units representing a particular stream of consciousness to see if subjects can accurately report on the stream. Two points need to be made here. First, each subject will have a unique collection of neurons found in different locations of the neocortex defining a specific stream, and once activated, a subject will know that he/she did not generate the stream, but that it was produced outside his/her volition (Penfield 1975; see Footnote 1).
Now to address Chomsky’s comment on why consciousness is continuous and that it never seems to stop during wakefulness (Chomsky 2013). The neocortex has stored within it a lifetime of conscious streaming that is spontaneously generated throughout one’s life, and it is designed to be continuously active from a metabolic point of view, even when one is immobile (Herculano-Houzel 2011; see Footnote 2). The stream of consciousness produced must be related to what one is learning on a particular day (Hebb 1949, 1968), but two individuals confronted with the same problem will solve it differently, since each has a distinct neural constitution based on a differential history. That is why people like Einstein, Kasparov, Pelé, and Bolt are one of a kind, as are each of us.
Footnote 1: Rhesus monkeys discriminate between a phosphene generated by electrical activation of the visual cortex and the presentation of a visual target that similarly activates the neurons mediating the phosphene experience (Tehovnik and Slocum 2013). What this means is that when developing visual prosthetics for the blind, expect a period during which a patient will need to learn how to use the device. A patient with a new cochlear implant may require up to two years to re-learn a language, since the sounds induced by the electrical stimulation of the auditory nerve differ from the sound experienced by an intact auditory system.
Footnote 2: Motor neurons are only active when one is moving. Activity of the neocortex that is not directly related to motor activity (but related to conscious reflection) is a problem for brain machine interfaces as used to restore motor function to paralyzed patients. Nevertheless, during wakefulness (but not during sleep) there is a correlation between neural activity and muscle activity in M1 (Jackson et al. 2007), which explains why many brain-machine-interface studies implant M1 (Tehovnik et al. 2013).
First of all, the so-called algorithms in BI are algorithms in imitation of the AI algorithms. They belong properly to human consciousness, which is a complex of millions of mainly brain-based neurons (their sub-neurons, sub-sub-neurons, etc.) and their activities which, together, very much connect and coordinate the consciousness within the body as “embodied” and the world. We do not discuss the brain science of the neurons and their sub-sub-… parts. To a great extent, the activities of the BI and the consciousness that embodies BI as a minute part of it are connected and coordinated within the brain-body nexus and to some extent by the world. This coordination takes place in such a manner that the ontological, connotative, and denotative universals can be conceived only by consciousnesses and not even by BI, let alone AI. If BI may be isolated from consciousnesses, their algorithms and functions may be comparable to those of AI, but BI never exists in isolation from the other brain-and-body functions.
It may be claimed that advanced AI as in some robots and ChatGPT is not a result of memory but generalization. [Video: https://www.youtube.com/watch?v=OFS90-FX6pg] But here the claims of “generalization”, “sentiment neurons”, and “states of mind” are not enough. First of all, the neurons that AI specialists speak of are not living neurons. Secondly, it should be proved that the so-called procedures in neurons due to the results (‘state neurons’ and the unity of many of them called ‘state space’) of input memory (that is clearly learned as mechanically induced, not exactly learned in the manner in which consciousness learns) are themselves being termed generalization based on other imaginative names like sentiment neurons and states of mind, merely due to the generalizations involved in the very machine memory and the receptacles of such memory. Here, generalization is falsely being interpreted by claimants of fantastic AI as something done in consciousness by the intelligence alone.
The learning and recognition of patterns by decreasing entropy is also not a matter of generalization in the sense of what happens in animate objects. Even in children, the manner of learning is not merely a result of an intelligence exercise; instead, many other brain functions are involved in this in consciousnesses. Thus, BI is not a prediction machine for AI to be termed so. AI algorithms and strictly BI (i.e., only intelligence, and not the other functions of the brain) algorithms never go beyond the quantitatively processed, quantitatively defined, and quantitatively interpretable properties of any of the data fed into its procedural memory. It becomes procedural memory and happens to be termed generalization merely because of the volume of state neurons involved in what is termed a state space (of course, it is not a “space”). When intelligence is isolated from all other functions of consciousness, the demerits of BI begin exactly from there and indicate regions far beyond.
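For readers unfamiliar with the AI terminology at issue here, the following toy sketch shows what "learning by decreasing entropy" typically amounts to in practice: a cross-entropy loss that falls as a small classifier fits a pattern. It is entirely illustrative and makes no claim about consciousness or BI:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pattern: the label is 1 when the sum of two inputs exceeds 1 (illustrative only).
X = rng.random((200, 2))
y = (X.sum(axis=1) > 1.0).astype(float)

w, b, lr = np.zeros(2), 0.0, 0.5
for step in range(201):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))           # predicted probabilities
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    if step % 50 == 0:
        print(f"step {step:3d}  cross-entropy {loss:.3f}")
    grad = p - y                                     # gradient of the loss
    w -= lr * (X.T @ grad) / len(y)                  # update the 'state' (weights)
    b -= lr * grad.mean()
# The falling cross-entropy is the 'decreasing entropy' of pattern recognition;
# nothing here goes beyond quantitative processing of the data that were fed in.
```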
For the above reasons, it is not acceptable to describe the demerits of BI in terms merely of the absence of willing, emotions, intentionality, intentions, love, social life, morality, etc. of persons. Unlike in the case of AI, BI has at least some direct organismically based connections to these and to the three theoretical functions of consciousness mentioned above: (1) to discover the foundational Categories and ontological universals behind the objects, phenomena, and data, (2) to find out the social or genetic causes of the abiding emotions, and (3) to imagine the possible non-answers or non-solutions for the problems. It is the mechanistic-scientistic thinking of the experts that delinks from BI these and other non-BI functions of consciousnesses.
WHY EXACTLY WAVE-PARTICLE DUALITY: Phenomenal Ontological Commitment (POC) as the Solution
Raphael Neelamkavil, Ph.D. (Quantum Causality), Dr. phil. (Gravitational Coalescence Cosmology)
The question of the connection between Reality-in-total and language is a question of justification for what we theorize. Justification is possible only via theory. The fact that theory attains only grades of adequacy with Reality means also that no theoretical attainment of justification is absolutely adequate – and the inadequate aspect is a mere virtuality without foundation in Reality. Whatever is without foundation in Reality and its parts is a pure virtual world (PVW), and theory and related constructs based on Reality and its parts form a tenable virtual world (TVW).
Moreover, the workings and results of both scientific and philosophical theory are always intertwined with (1) directly observable existents, (2) existents that are not directly observable, termed unobservables, and (3) even indirectly non-observable non-existents. Virtual worlds about (1) and (2) are TVWs and those about (3) are PVWs. How can the said foundation be established so as to distinguish between the two sorts of virtuality and to discern between (2) and (3)? I suggest some simple ways here. The general motive of all discourse being the best possible statement of truths of all that are the case and are possible in the future, there is nothing wrong here in evaluating the extent of attainment of adequacy in terms of PVW and TVW in quantum physics, cosmology, etc. in order to discover whether any theory is a PVW or a TVW. This will help establish the criteria of objectually tenable and intersubjectively accepted objectivity in science and all other sorts of discourse.
The discussion on virtual constructs and unobservables begins with a short rational introduction as to why science and its paradoxical or non-paradoxical postulations need an overhaul based on the concept of phenomena that they use. Thereafter it will be shown why science has some successes even though science, with its methods, procedures, and conclusions, is never perfect enough and will constantly be revised. I suggest why there are successes in the quantum-physical system even though there are misplaced identifications of concreteness in quantum physics. Thereafter I proceed to define the concepts of virtual constructs (TVW and PVW) and unobservables (existent and non-existent) in terms of ontological, connotative, and denotative universals.
Just as in all thinking in general and in linguistically or symbolically formulated logical and mathematical expressions, so also in physics (and in other sciences in their own manner), there is a constant recourse to conceptual reification of modes of conceiving existent processes and their phenomena. The modes of conceiving and reification change both epochally and intra-epochally: This is a continuous process.
At times the sciences forget that, at any moment of data collection, conceptualization, hypothesis formation, experiment, and theorization, the “phenomena” in respect of sensation, data, thought, etc. are the showing-themselves of existent processes from within some – and not all – layers, parts, and aspects within the existent processes. It is totally out of place to substitute the realities with the phenomena, although the phenomena, insofar as they are not nothing, are also existent processes. The phenomena are just a few selections from a few select layers of the reality considered. The said conceptual reification of phenomena into the whole object behind the phenomena happens by conclusions like the false identification of many statistically (or even imaginatively) constructed physical concepts of ways of explanation of phenomena, e.g., the wave function as representing at times mathematical waves and at times mathematical particles and then as the very reality of the external processes (not even of the layers of the processes) behind the actual phenomena.
Even today, many physicists and other scientists conveniently speak of observing phenomena and take phenomena variously as the objects observed or as the factual states of affairs behind the phenomena. They are not too much at fault, because they need not be in a position to distinguish between the ontological and epistemological aspects of reality, layers of reality, phenomena, data, sensation, perception, etc. For this reason, naturally, some philosophers and philosophers of science take phenomena as the objects observed. At times they commit a similar mistake by taking phenomena as reproducible factual features. If they are factual features, they should be either hypotheses or results from previous theory. If none of these, it would look as if the phenomena were closer to the theory to be produced than its data are:
• phenomena are stable, reproducible, factual features of the world, for example:
– lead melts at 327.5°C, or
– pressure increases with temperature for most fixed-volume gases;
• data are records produced by measurement, that are intended to represent the target phenomena, for example:
– a series of temperature readings as a piece of lead is heated up, or
– a series of pressure readings as a gas confined in a container is heated up. [Le Bihan 2017: 113]
If factual features are phenomena, these are some of the conclusions or interim conclusions within some theories. That is, Soazig Le Bihan, who declares that she follows James Woodward and James Bogen and adopts their view as straightforward and uncontroversial, puts the data epistemologically prior to the phenomena.
Armond Duwell, for example, speaks of “understanding phenomena”, “gravitational phenomena”, “quantum phenomena”, “investigating phenomena”, “phenomena associated with the two-slit experiment”, “very different representations of those phenomena”, “how these representations represent the two-slit phenomena”, “correlation phenomena associated with the EPR situations”, “one wants to represent quantum phenomena well”, “could use Bell’s theorem to explain how the phenomena and the adequacy conditions bound the possibility space”, “the modal view of understanding that no-go theorem generally increase our understanding of phenomena”, “the general theory of relativity (GTR) affording better understanding of gravitational phenomena than Newtonian theory [of, sic.] gravitation”, “an account of gravitational phenomena”, “representations of phenomena”, “false theories can afford modal understanding of phenomena insofar as they meet the adequacy conditions under consideration”, etc. [Duwell 2018: 1-4] without first admitting that the phenomena are just a few showings that arise not from the whole of the object but only from some layers of it. We do not need any extra experiment to know this as the universal case.
Later, referring to Le Bihan [2017], Duwell says: “One might distinguish between understanding phenomena and understanding the world. Understanding the world entails understanding the corresponding phenomena, but not vice versa.” [Duwell 2018: 4]
Yet another fact: It is also a common tendency among scientists and philosophers to refer to any fact that is the case in nature, any event, as a phenomenon. That is, for some, it is not existent “things” that are phenomena but, instead, certain facts, features, and events. This too is a highly imprecise use of the notion. Take, for example, a report about the work of Erwin Schrödinger and Ludwig Boltzmann: “To our knowledge, Schrödinger’s private notes from the late 1910s […]. He noted that fluctuation phenomena could provide “a new proof of the relative validity of Boltzmann’s conception as opposed to [general] thermodynamics. Absolutely valid theories do not exist.” (transcribed in Hanle, 1975, p. 268)” [footnote 5 in: Joas and Katzir 2011: 44] Strictly speaking, fluctuation is not a phenomenon. From some layers / parts / aspects of the fluctuation there emerge some phenomena, “showings”, which create and continue to affect sensation and perception by embodied consciousnesses; the same happens via sensation and perception aided by apparatuses. This works as feed for further feeling, thought, action, and theory. In the absence of such a notion of phenomena, the various haphazard notions of phenomena will naturally make it difficult for theories to attain their desired results.
Now arise the questions: How can one observe phenomena before, and instead of being subject to or exposed to, the phenomena from the various layers of objects or events – directly or by use of apparatuses? What are observable and to what extent are they observable: objects and events or the phenomena that are just the showings of the objects? I hold that observation of anything existent is fully through the phenomena proper and to the extent that the phenomena permit sensations and understanding. The understanding over and above this is via theoretical apparatuses and methods. According to van Fraassen, observability does not have anything to do with existence. [van Fraassen 1980: 19] He may have meant it (1) broadly: that it is impossible to contain the existent thing or event within us in the name of observing an object or event, because the phenomena are both the objects or events mixed with what are already within us in relation to the objects or events (“is, indeed, too anthropomorphic for that”), or (2) narrowly: that some or a lot of what a thing is, could be captured in observation (“it may still have much to do with the proper epistemic attitude to science”). [van Fraassen 1980: 19]
With respect to the observational detection of theoretically predictable unobservables, beyond the objectivity of the objects derived from sufficient intersubjective acceptance, the objectual aspect should be sought, which should obtain in terms of the Extension-Change Categories of all existents. This is what is most necessary in order to navigate through the phenomena in the narrow sense and avoid the confusions that will certainly be caused by following the broad sense, which involves mistaking factual features, events, etc. for phenomena. In the very phenomena related to (behind) the theory there should be something existent, without which no theory and experiment can be realistic and sensibly differentiable from the theories engendered from within the broad sense of phenomena. This is a minimum condition for the ontological commitment that a theory can hold, because even after respecting this Phenomenal Ontological Commitment (POC), the theory should follow other logical and methodological guarantees for success. This is a clear first objectual condition for the possibility of tenability in theory. With the objectual conditions of Extension and Change, which should be present in the phenomena themselves, as given by POC, we discuss the quantum-physical case below.
If in two different experimental contexts the electromagnetic propagation exhibits either the wave nature or the particle nature, these two natures cannot be termed the phenomena; they are, instead, some finalized or interim conclusions within the theory. One cannot call these conclusions objective by presuming that “objective” conclusions indicate existence. For existence to be accessed, the objectual criteria should be fulfilled in the phenomena. It should be possible to objectually imagine the existence of the phenomena. This is possible in the given case only in the wave-like motion of the phenomena by elongated particles. Only such can exist within the phenomena. This too is a conclusion about the manner of existence of the phenomena, without which the phenomena cannot exist.
No physicist needs to take this as a violation of experimental results, but instead, as a confirmation of the fact that objectually no absolute wave or particle (as mathematical objects) can exist in nature, and that any wave-like or particle-like motion must only be partially a wave or point. No wave or point can exist partially in an existent phenomenon. Hence, let us term the real electromagnetic unit as a really existent wavicle. A wavicle can hit the sides of the double slit at one of the various stages of motion of the wave shape or form of the motion, if the aspect of motion alone is taken into consideration. At the hitting it can exhibit more of the particle shape. The wave nature will be exhibited at other times represented by the more elongated aspects of the wave form of motion of the energy-carrier. Instead, at all times it should be almost in wave form, because even when there can be alterations of thickness within it, it is already thick enough not to be termed an absolute wave.
The concepts of wave and point-particle in physics are purely mathematical concepts as they are treated in the quantum-physical context of mathematical representation and calculation. In one context we posit the mathematical, absolutely non-extended, wave nature and in another the purely point-like defined, non-extended, particle nature. The fight and the resulting dilemma therefrom continue to mislead even after more than a century and will continue so into yet another. The conclusion that some physicists settle for is that energy propagations have both the natures, or at times only the one nature and at times the other. This has always served to mystify fundamental physics. I suggest that this is because (1) the phenomena have been taken by most scientists as the objects out there, (2) most physicists tend to accept mathematical representations as the actuality, (3) if the mathematical representation in the case of unit electromagnetic propagation is taken as phenomena, then they will be accepted as the reality.
One thing is common among physicists: almost everyone forgets that the concepts of wave and point-instant particle are purely geometrical in the physical use of mathematical methods; and practically none of them asks to what extent the mathematics proper applies to the physical wavicles, fields, and matter-energy processes in question in each given context. If physics works not on the totality of the objectual process but only via the phenomena (see the definition above), we do not have to reify mathematically pure concepts in the context of physics.
Using a variation of the classical conditioning paradigm with electrical stimulation of neural tissue in behaving primates, Robert Doty (1965, 1969) was able to deduce the ‘sensational’ coding operations of the sensory maps of the neocortex by converting a classical conditioning task into an operant task (also see: Bartlett, Doty et al. 2005; Bartlett and Doty 1980; Doty et al. 1980). Monkeys were trained to depress or release a lever for reward, to signal the detection of electricity delivered to the neocortex. For sensory maps such as area V1, for example, if a monkey was trained to detect electricity delivered to one site and then the electrode was moved to another location within the map (whether ipsilateral or contralateral), the detection response was transferred immediately, much like what happens when a monkey is trained to detect a visual stimulus in one part of the visual field: it can afterwards generalize the response to any location within the visual field immediately (Schiller and Tehovnik 2015). But if the electrode is moved to extrastriate area V4, for example, the detection response acquired by stimulation of V1 is not transferred to V4. New training is required to associate the percept generated by electrical stimulation of V4 and the motor response to obtain a reward. This suggests that the percepts generated by neocortical stimulation are bound per map (Bartlett, Doty et al. 2005). This result concurs with the work of Penfield and colleagues who found that common sensations—i.e., qualitatively similar phosphenes—were evoked from a cortical topographic map (Penfield 1975; Penfield and Rasmussen 1952). Hence, individual maps of the neocortex define sensation or conscious experience, and this sensation depends on the connectivity between the neurons of a map for the immediate transfer of information.
Most significantly, when the foregoing experiment was done in the hippocampal formation, there was never any transfer of the detection response between the different stimulation sites (Knight 1964). This suggests that the hippocampal fibres transmit information independently to and from the neocortex, which is what one would want of a hippocampal pathway mediating the consolidation and retrieval of information vis-à-vis the neocortex (Corkin 2002; Rolls 2004; Penfield and Roberts 1967; Schwarzlose, Kanwisher et al. 2005; Scoville and Milner 1957; Squire et al. 2001). The neocortex contains information that is highly distributed, and this information must be recomposed to drive a behavior or a conscious state volitionally, e.g., thinking about biology and consciousness (Corkin 2002; Hebb 1949; Ibayashi et al. 2018; Kimura 1993; Ojemann 1991; Penfield 1975; Sacks 2012; Sereno et al. 2022; Squire et al. 2001; Vanderwolf 2007). The stream of consciousness, as introduced by James (1890), depends on recomposing (or unifying) the cortical information so that the outputs make sense. Schizophrenia is a condition whereby the outputs make no sense.
Summary:
(1) Individual maps of the neocortex define sensation or conscious experience, and this sensation depends on the connectivity between the neurons of a map.
(2) Pathways transmitting information to and from the neocortex to consolidate and retrieve information are composed of neurons that are independent, so that information is stored flexibly throughout the neocortex.
(3) Neocortical information must be recomposed correctly for one not to be diagnosed as a schizophrenic.
Damage of the neocortex that disconnects this structure from subcortical networks creates a condition in which behavioral routines that depend on the neocortex can no longer be modified. For example, paired lesions of the anterior and posterior ocular pathways of the neocortex by damage of the frontal eye fields and superior colliculi eliminate all visually guided saccadic eye movements (Figure 1), while sparing the vestibuloocular reflex and optokinetic nystagmus, two reflexes mediated by subcortical networks (Schiller and Tehovnik 2015). Significantly, following such damage these reflexes can no longer be modified, even though saccadic eye movements can still be generated, while performing the reflexes. This underscores how dependent subcortical mechanisms are on the neocortex for altering behavior (Hebb 1949), even though it has been found that reflexes based on eye blink conditioning that utilize robust but simplistic stimuli (electric shock, loud tones, or bright visual stimuli) can still be associated in the absence of the neocortex (Swain, Thompson et al. 2011), which could be referred to as ‘blind’ perception or sensation subthreshold to consciousness (Graziano et al. 2016; Tehovnik et al. 2021). Nevertheless, Pavlov (1929) observed that most classically conditioned reflexes in his dogs were abolished following neocortical removal. In short, any behavior that depends on the high-resolution computations of the neocortex—such as language or complex movement sequences—can never be modified following neocortical ablation (Kimura 1993; Vanderwolf 2006).
Hence, the neocortex is the command and control center of the brain by way of learning, and it makes sense that when the neocortex is disconnected from subcortical networks by damage of the pons and midbrain all consciousness is extinguished (Levy et al. 1987; Monti et al. 2010; Owen 2008; Owen et al. 2006; Plum and Posner 1980; Schiff, Llinas et al 2002; also see Arnts et al. 2020 on hydrocephalic patients).
Figure 1: Head-fixed rhesus monkeys were required to grasp food items positioned in a board spanning 60 by 60 degrees of visual angle (panel A), as their saccadic eye movements were measured. Normal subjects had no difficulty obtaining the food items and generating saccadic eye movements toward the targets (fixation location specified by the distribution of the dots, panel B). Following bilateral lesions of either the frontal eye fields or superior colliculus, subjects still grasped the food items and made saccades toward the targets (panels C, D, E, and F). In the absence of both the frontal eye fields and superior colliculi, the subjects could still grasp the food items, but they failed to generate visually guided saccades thereby fixing the eyes in central orbit (panels G and H). In primates, the frontal eye fields are located anterior to the forelimb representation of the motor cortex, and the superior colliculi receive projections from the entire neocortex but especially from the striate and extrastriate visual areas. The frontal eye field and superior colliculi represent the two neocortical channels that interconnect the neocortex and brain stem for the mediation of visually guided saccades (Schiller and Tehovnik 2015). From figure 15-14 of Schiller and Tehovnik (2015).
The idea that time is an illusion is a philosophical and scientific concept that has been discussed by various thinkers and physicists. Here are some key points to understand this perspective:
### 1. **Perception vs. Reality**
- **Subjective Experience**: We experience time as a sequence of events, but this perception may not reflect the true nature of reality. Our sense of time is shaped by our consciousness and cognitive processes.
- **Relativity of Time**: According to Einstein's theory of relativity, time is not absolute but relative. It can stretch or contract depending on the speed at which an object is moving and the gravitational field it is in. This challenges the notion of a single, universal time.
### 2. **Block Universe Theory**
- **Past, Present, and Future Coexist**: In the block universe theory, also known as eternalism, all points in time (past, present, and future) exist simultaneously. From this perspective, time does not "flow"; rather, all events are equally real, and our experience of moving through time is an illusion.
### 3. **Quantum Mechanics**
- **Uncertainty and Time**: In quantum mechanics, certain interpretations suggest that time at the quantum level behaves differently than we perceive it. The concept of superposition, where particles can exist in multiple states at once, challenges the traditional, linear view of time.
### 4. **Psychological Time**
- **Mental Construct**: Some philosophers argue that time is a mental construct, a way for our brains to organize experiences and make sense of the world. In this view, time is not a fundamental aspect of reality but a useful tool for humans.
### 5. **Timeless Physical Theories**
- **Fundamental Laws Without Time**: Some physical theories suggest that the fundamental laws of the universe do not actually require time as a variable. These theories describe the universe in a "timeless" state, where time emerges only as an apparent phenomenon at larger scales or under certain conditions.
### 6. **Philosophical Perspectives**
- **Idealism**: Some philosophical traditions, like certain forms of idealism, argue that time, like space, is a feature of our minds rather than an objective reality. According to this view, time is part of the way we structure our experiences rather than something that exists independently of us.
### Summary
The concept that "time is an illusion" is rooted in the idea that our common understanding of time as a linear progression of events does not fully capture the underlying reality. Time, as we experience it, may be a construct of our perception, and at a deeper level, the nature of time could be very different from how it appears in everyday life.
Could all matter be conscious on some level, and yet, to become living, must an entity be able to reproduce in some way? How?
If panpsychism is true, then all matter is conscious, at least on some level. Of course, some living organisms are infertile, but they can still potentially be cloned, and they may heal (i.e., somewhat regenerate) after injury.
Britannica, The Editors of Encyclopaedia. "panpsychism". Encyclopedia Britannica, 20 Aug. 2020, https://www.britannica.com/topic/panpsychism. Accessed 18 August 2024.
"All living things are capable of reproduction. Reproduction is the process by which living things give rise to offspring. Reproduction may be as simple as a single cell dividing into two cells.
“Here is a thought experiment. Let's place Rodolpho Llinas's jarred-brain on top of a body (Fig. 1). I bet Llinas would argue that his jarred-brain retains its own consciousness, and the android body is a simple machine. In other words, placing the same brain over different bodies will NOT change a thing in the exhibited android’s personality. Based on the importance of feedback, my prediction is the brain should change its ‘consciousness’ when its body is changed. That is, a brain will have to change to behave and think like an android because of the feedback through the body.” (Lewis L. Chen, Aug. 20, 2017)
Ergo, as argued by Geoffrey Hinton, often called the father of AI, the brain is not a computer (Hinton 2024/YouTube). This idea has received overwhelming support from work done on brain-computer interfaces (Birbaumer 2006; Fetz 1969; Tehovnik and Chen 2015). Feedback (through learning) shapes all organisms. This is why even identical twins are not identical (also see Footnote 1).
Footnote 1: “Yerkes (1912) trained an earthworm to choose one arm of a T-maze, using electric shock as punishment for error and the moist burrow as reward for correct choice. The habit was acquired in twenty trials, 2 days at ten trials per day, about what might be necessary for the laboratory rat. No errors were made on the third day, though the behaviour was somewhat inconsistent in the following week as between good days and bad days (even worms have them). Yerkes then removed the brain, or principal ganglia, by cutting off the head—the anterior four and a half segments. The animal continued to respond correctly, showing that there were sufficient synaptic modifications in the remaining ganglia to mediate the response—until the new head regenerated, at which time the habit was lost. The noise generated by the new ganglia, the irrelevant neural activity of the uneducated brain, was sufficient to disrupt learning completely.” (Hebb 1961, p. 78) In the foregoing, the new brain cells of the worm had to be reprogrammed by the new feedback coming through the body.
Figure 1: A brain that can be plugged into any body such that the body will assume the personality of the brain automatically. Many in the AI community (but not Geoffrey Hinton; but see Nicolelis 2011, p. 61) believe that the brain works this way.
After writing a piece in which I suggest that the information transfer rate of consolidation in children and adults is similar, based on my back-of-the-envelope calculation for the consolidation of English words (about 0.0006 bits per second in both children and adults), I spent several hours reviewing past lectures by Noam Chomsky. I listened to his 1.5-hour lecture on language as posted on YouTube and delivered at MIT in 2019 before a group of students and colleagues. Once his lecture was over, he was asked a question by a student about how neurons carry out the “genetically endowed” language computations [by the Language Acquisition Device] in children/adults (since such details were largely absent from his lecture). He suggested that the neurons of the brain are much too slow to execute the computation (called Merge) that he was talking about and that the computations must be carried out at the subcellular/molecular level, as described by Roger Penrose (and studied by Eric Kandel 2006).
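For transparency, here is the general form such a back-of-the-envelope calculation can take. The vocabulary size and time window below are illustrative assumptions of my own, not the original inputs; they are chosen only to show that the result lands near the quoted order of magnitude:

```python
import math

# Illustrative assumptions (not the original calculation's exact inputs)
vocabulary_size = 30_000                     # words consolidated by early adulthood
years_of_learning = 20                       # consolidation window
bits_per_word = math.log2(vocabulary_size)   # ~14.9 bits to specify one word

total_bits = vocabulary_size * bits_per_word
total_seconds = years_of_learning * 365 * 24 * 3600

print(total_bits / total_seconds)            # ~0.0007 bits/s, same order as ~0.0006 bits/s
```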
On the point of speed of processing, we understand that the transmission duration of a chemical synapse is about one millisecond and that it can take tens of milliseconds for signals to be transmitted from caudal to rostral parts of the neocortex (Schiller and Tehovnik 2015; Yeomans and Tehovnik 1988). Therefore, Chomsky must be thinking in the microsecond range. Whether a microsecond-scale computation time would render the linguistic signal unconscious is unclear; note that the duration of consciousness/thinking is estimated to be in the millisecond-to-second range (Varela 1999ab; Dwarakanath, Logothetis 2023).
If a neocortical hemisphere is damaged in children, language ability can be assumed by the undamaged hemisphere (Olulade et al. 2020), substantiating that there is enhanced ‘linguistic’ neuroplasticity in children that is lost in adults (Kimura 1993; Penfield and Roberts 1966). Also, many faculties of the brain need to be tuned by the environment during a critical period, and if this period is missed the faculty will not develop (Fine et al. 2003; Hubel and Wiesel 1977). In the case of language, the first year of life is critical for the establishment of syntax, and other language attributes develop before puberty (Friedmann and Rusou 2015).
Whether the development of language faculties is largely an unconscious affair is now addressed. In all mammals, the neocortex mediates the storage of declarative information, which depends on consciousness (or thinking) to guide the learning process (Hebb 1949, 1968). Every day while learning to ride a bike my son would ask: “Daddy, are we going to train today?” This question was repeated for months until one day my son’s vestibular system was finally programmed, allowing him to race around the neighborhood track; years after learning to ride a bike, my son had no recollection of how this occurred, which makes one assume that consciousness had little to do with the learning process.
The establishment of an automated state (or an unconscious execution) requires extensive training, even for language. Formal, global education (since the 1950s) has been central in elevating humankind out of poverty (the proof: travel to Brazil, India, or China today and compare your social experience with what it was like many decades ago, e.g., the 1980s). Automating a behavior requires daily training so that circuits between the neocortex and cerebellum can be programmed (which is what putatively happens while learning to ride a bike, Tehovnik, Hasanbegović, Chen 2024; also see Miles and Lisberger 1981). Unlike the neocortex, the cerebellum is responsible for developing the executable motor code for all behaviors, including language. Once the neocortex and cerebellum are tuned for a given behavior, a reduced amount of neural tissue is used to summon a correct response, as when engaged in dialogue with a fellow interlocutor (Lehericy et al. 2005; Ojemann 1983). The exchange is so rapid that tens of milliseconds before the completion of a string of utterances, one is prepared to deliver a reply (Levinson and Torreira 2015). Interestingly, patient HM, whose hippocampus was destroyed, could still engage in rapid conversation, but in the absence of his hippocampus he was unable to update his declarative memories, i.e., he could not learn any new words or new facts, and if asked to recall what his mother or father were like, he could not narrate the history (Corkin 2002). In short, the unconscious execution of speech entails using a reduced amount of neocortical tissue, with the remaining tissue activating cortico-cerebellar loops (Hasanbegović 2024) to maintain a particular state of automaticity. Any upgrades to the cortico-cerebellar loops, however, would require a modification of the efference-copy code at the level of the cerebellum (Bell et al. 1997; Chen 2019; Cullen 2015; De Zeeuw 2021; Fukutomi and Carlson 2020; Loyola et al. 2019; Miles and Lisberger 1981; Noda et al. 1991; Shadmehr 2020; Tehovnik, Patel, Tolias et al. 2021; Wang et al. 2023).
As for Chomsky’s ‘Merge’ to be expressed automatically in children (Chomsky 1965), it would require that the cortico-cerebellar loops be programmed genetically such that all the gains at the Purkinje neurons are able to anticipate a linguistic world, with minimal adjustments of the gains once a child begins hearing and making sounds. That all this is finalized syntactically by the age of one will need to be verified quantitatively using information theory (Tehovnik and Chen 2015).
Five physics problems about which you have never thought, read, or heard, and which nobody has asked you to solve!
Exotic Matter Interactions in High Gravity: Imagine a world with gravity so strong that it fundamentally alters the behavior of subatomic particles. What new forces or interactions might emerge under such extreme conditions? Could entirely new forms of matter exist, defying our current understanding of physics?
Time Dilation at the Quantum Level: We know that time slows down for objects traveling near the speed of light according to Einstein's theory of relativity. But what happens at the quantum level, where particles exhibit wave-like behavior? Does the concept of time dilation translate to the quantum realm, or do entirely new phenomena arise?
The Physics of Consciousness: How does the physical activity of the brain give rise to subjective experience, consciousness? Is consciousness a fundamental property of the universe, or an emergent phenomenon arising from complex neural interactions? Large language models deal with information processing, but can this ever truly replicate consciousness?
The Arrow of Time: The fundamental laws of physics seem to be time-symmetric, meaning they work the same forwards and backward in time. Yet, our universe clearly has a preferred direction of time, with a past, present, and future. What physical principle or mechanism underlies this "arrow of time"?
The Unification of Dark Matter and Dark Energy: Dark matter and dark energy are mysterious entities that make up most of the universe's mass and energy, yet we know very little about them. Could these phenomena be somehow connected, or even different aspects of the same underlying reality? Perhaps a new theory could unify our understanding of dark matter and dark energy.
These are just a few hypothetical problems that haven't been explicitly posed before. It's important to note that these are highly speculative and may not even have answers within our current framework of physics. However, they represent the kind of "out-of-the-box" thinking that can lead to groundbreaking discoveries.
A decrease in sugar (glucose) levels in the brain can produce symptoms ranging from loss of consciousness to coma. Increased sugar levels in the brain also cause damage.
How to control the sugar level in the brain?
Is there a practical way to do this?
How do we measure?
maybe in the afterlife I've failed at... Uniqueness may be the arche...
Everyone is special:
What is consciousness? Are there stages of consciousness? Do dead organisms have consciousness?
Gestalt psychologists have proposed many ways in which items are concatenated within a single block of time so that perceptual elements can be combined into complete objects as defined in space (Kohler 1929; see Mollard, Roelfsema et al. 2024 for a modern view). Schiller and Carvey (2006) have demonstrated that perceptual grouping is abolished once the items within a group deviate beyond some size threshold. In this case, individual items may be grouped according to their common size as specified by the spatial resolution of visuotopic maps of V1 through V5, as defined by the retino-cortical magnification factor (Tehovnik, Hasanbegović, Chen 2024). Here, it is assumed that the neurophysiology of the brain (i.e., the retino-cortical magnification factor) imposes limits on perception.
We know that consciousness occurs in a stream of concatenated items linked over both space and time (James 1890; Dwarakanath, Logothetis et al. 2023; Sacks 1976, 2012). The linkage across space is facilitated by the sensory topographic maps before being consolidated into a complete form (with context) in the association areas of the neocortex (Amita, Kunimatsu et al. 2024; Kimura 1993; Lu and Golomb 2023; Penfield and Roberts 1966; Ojemann 1991; Schiller and Tehovnik 2015). A percept that requires the linkage of events over time is visual motion perception (Schiller and Tehovnik 2015). The neurophysiology of motion perception as conceptualized by Reichardt and Macginitie (1962) provides us with a clue as to how different frames of consciousness might be concatenated (see Fig. 1). That the neocortex is inundated with GABAergic circuits is well established (see Fig. 2). Once a stream of consciousness is consolidated, as for example the memorization of a lecture, the retrieval of the lecture is done such that a string of activity is executed via the neocortex and cerebellum, whose order and direction of activity are mediated through the disinhibition of GABAergic circuits. The perception of motion entails an object moving externally in one direction over time being registered by neocortical neurons with the same direction tuning; the representation of frames of consciousness, however, requires an internal serialization of neural pools whose order is defined by GABAergic disinhibition during memory consolidation. The retrieval of this information necessitates a replay of the consolidation process (as has been found to occur for the hippocampus during memory consolidation, Wilson and McNaughton 1994). The foregoing supposition will need to be verified empirically for the neocortex and cerebellum.
Figure 1: A neural model for the creation of direction selectivity as conceptualized by Reichardt and Macginitie (1962). Inhibitory connections are added to the circuit (shown in black) such that only one direction of motion activates the neurons. This model has been verified neurophysiologically using GABAergic agents. For details see pp. 194-195 of Schiller and Tehovnik (2015). Illustration from figure 11-4 of Schiller and Tehovnik (2015).
Figure 2: GABAergic circuits are ubiquitous throughout the neocortex. The illustration is for GABAergic connectivity generic to the visual cortex of mammals. Based on Froudarakis et al. (2019).
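For readers who prefer code to circuit diagrams, a minimal discrete-time sketch of a correlation-type (Reichardt-style) direction detector is given below. It is the generic delay-and-multiply scheme followed by subtraction, not a reconstruction of the specific inhibitory circuit shown in Figure 1:

```python
import numpy as np

def reichardt_output(left: np.ndarray, right: np.ndarray, delay: int = 1) -> np.ndarray:
    """Correlation-type (Reichardt-style) motion detector.

    left and right are luminance signals sampled at two neighbouring points.
    Each input is delayed and multiplied with the undelayed signal from the
    other point; subtracting the two products yields an output whose sign
    indicates the direction of motion across the pair of points.
    """
    delayed_left = np.roll(left, delay)
    delayed_right = np.roll(right, delay)
    delayed_left[:delay] = 0.0   # discard the wrap-around samples
    delayed_right[:delay] = 0.0
    return delayed_left * right - delayed_right * left

# A bright spot passes the left point one time step before the right point
left = np.array([0, 1, 0, 0, 0], dtype=float)
right = np.array([0, 0, 1, 0, 0], dtype=float)
print(reichardt_output(left, right).sum())   # positive: left-to-right motion
print(reichardt_output(right, left).sum())   # negative: right-to-left motion
```

In the biological version described above, the subtraction is implemented by inhibitory (GABAergic) connections rather than an arithmetic minus sign, which is why only one direction of motion drives the neuron.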
Vestibular agnosia is a cognitive disability that prevents a patient from detecting body motion, including head motion, during periods of imbalance (Calzolari et al. 2020; Hadi et al. 2022), which can lead to falls that are denied by the patient. Consequently, this dysfunction is underreported (Hadi et al. 2024). It is suspected that this disorder is due to a degeneration of the inferior longitudinal fasciculus of the right hemisphere, as evidenced by diffusion tensor imaging (Hadi et al. 2024). This fasciculus interconnects the occipital cortex, along with the middle temporal area (MT) and medial superior temporal area (MST), with anterior regions of the temporal lobe including the insular cortex (see Fig. 1), all of which are part of the cortico-vestibular network (Gogolla 2017; Guldin and Grüsser 1998). Electrical stimulation of the inferior longitudinal fasciculus induces a plethora of vestibular sensations (Kahane, Berthoz et al. 2003): the sensation of head and body rotation with mainly a contralateral bias, the sensation of head and body translation either forward or backward, and the sensation of gravitational force, either of heaviness or lightness. Returning astronauts are very aware of these feelings when they must adapt to 1G (Carriot et al. 2021; Demontis et al. 2017; Lawson et al. 2016). See Figure 2 to understand how distance from the earth’s surface affects our experience of gravity.
Volitional behaviors such as reading, writing, speaking, walking, or running are accompanied by theta activity (i.e., 6 to 10 Hz; Tehovnik 2017; Vanderwolf 1969), which is omnipresent in the brain, including both cortical and subcortical regions (Tehovnik, Hasanbegović, Chen 2024). As well, for both human and non-human mammals, theta activity is potentiated as subjects learn and acquire new skills (Asaka, Berry et al. 2005; Berry and Thompson 1978; Griffin, Berry et al. 2004; Hoffman and Berry 2009; Pu, Johnson et al. 2017). Of late, it has been discovered that before human subjects are cued to rotate the head and body (whether real or virtual) there is a burst of theta activity that precedes the start of rotation in the frontal and parietal lobes (as measured at Fz, F3, F4, C3, Cz, C4, P3, and P4; Hadi et al. 2024). This concurs with the observation that when there are changes in the stream of consciousness, as assessed with binocular rivalry, there is a burst of theta activity in the neocortex that precedes a transition in perception (Dwarakanath, Logothetis et al. 2023).
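As a rough indication of how theta-band activity is commonly quantified in such recordings, here is a minimal sketch that band-pass filters a single trace in the 6-10 Hz range and takes its mean power. The sampling rate and the synthetic signal are hypothetical, and this is not the analysis pipeline of the studies cited above:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def theta_power(eeg: np.ndarray, fs: float, band=(6.0, 10.0)) -> float:
    """Mean power of a single-channel trace in the theta band (6-10 Hz here)."""
    nyquist = fs / 2.0
    b, a = butter(4, [band[0] / nyquist, band[1] / nyquist], btype="band")
    filtered = filtfilt(b, a, eeg)            # zero-phase band-pass filtering
    return float(np.mean(filtered ** 2))

# Synthetic 2-second trace at 250 Hz: an 8 Hz rhythm embedded in noise
fs = 250.0
t = np.arange(0.0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 8.0 * t) + 0.5 * rng.standard_normal(t.size)
print(theta_power(eeg, fs))                   # dominated by the 8 Hz component
```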
Figure 1: The inferior longitudinal fasciculus projecting between the occipital cortex and temporal cortex of a human subject. Image from figure 3 of Sobhani et al. (2015).
Figure 2: Gravitational acceleration plotted as a function of radial distance from the earth’s centre, expressed in units of the earth’s radius (surface = 1). Notice that at the center of the earth the gravitational force is zero, since the attractions from all directions cancel each other out. Plot derived from John Wo, an aircraft and spacecraft designer, as made available on Google on June 22, 2024.
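The curve in Figure 2 follows from an idealized uniform-density earth, for which gravitational acceleration grows linearly from zero at the centre to about 9.81 m/s² at the surface and then falls off with the inverse square of distance. A minimal sketch of that piecewise relation (the uniform-density assumption is a simplification of the real earth):

```python
G_SURFACE = 9.81  # m/s^2, gravitational acceleration at the earth's surface

def gravity(r_in_earth_radii: float) -> float:
    """Gravitational acceleration at radial distance r (in earth radii),
    assuming a uniform-density earth."""
    r = r_in_earth_radii
    if r <= 1.0:
        return G_SURFACE * r        # inside: only the mass below radius r attracts
    return G_SURFACE / r ** 2       # outside: inverse-square law

for r in (0.0, 0.5, 1.0, 2.0, 6.6):  # 6.6 earth radii is roughly geostationary orbit
    print(r, round(gravity(r), 2))
```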
The hippocampal formation is central to the consolidation and retrieval of long-term declarative memory, memories that are stored throughout the neocortex with putative subcortical participation (Berger et al. 2011; Corkin 2002; Deadwyler et al. 2016; Hikosaka et al. 2014; Kim and Hikosaka 2013; Scoville and Milner 1957; Squire and Knowlton 2000; Tehovnik, Hasanbegović, Chen 2024; Wilson and McNaughton 1994). Subjects that have hippocampal damage have great difficulty narrating stories (Hassabis et al. 2007ab), which can be viewed as a disruption of one’s stream of consciousness as it pertains to retrieving information. The retrieved stories, which are highly fragmented in hippocampal patients (Hassabis et al. 2007ab), are comparable to those evoked electrically by stimulating a single site in the parietal and temporal lobes (Penfield and Rasmussen 1952; Penfield 1958, 1959, 1975). Nevertheless, individuals with hippocampal damage can still engage others verbally, but the conversation is limited in that it is based on declarative memories that are not updated, making the hippocampectomized interlocutor seem out of touch (Corkin 2002; Knecht 2004). A rapid exchange of speech is dependent on an efference-copy representation, which is mediated through the cerebellum (Bell et al. 1997; Chen 2019; De Zeeuw 2021; Guell, Schmahmann et al. 2018; Loyola et al. 2019; Miles and Lisberger 1981; Noda et al. 1991; Shadmehr 2020; Tehovnik, Patel, Tolias et al. 2021; Wang et al. 2023).
Patient HM, who had bilateral damage of his hippocampal formation, had ‘blind memory’ (much like ‘blindsight’): when asked to name the president of the United States in the early 2000s he failed to recall the name, but when given a choice of three names, George Burns, George Brown, and George Bush, he was able to select George Bush (Corkin 2002). Therefore, his unconscious stores of information were intact (which is also true of blindsight for detecting high-contrast spots of light, Tehovnik, Patel, Tolias et al. 2021). As well, HM had memory traces of his childhood (a time well before his hippocampectomy), but the specifics were lost such that he could not describe even one event about his mother or father (Corkin 2002). Although many presume that HM had memories of his childhood, these memories were so fragmented and lacking in content that referring to his childhood recollections as ‘long-term memories’ is questionable.
The idea that the brain becomes less active once a new task has been acquired through learning can be traced back to the experiments of Chen and Wise (1995ab) that were done in the supplementary motor area, Brodmann’s Area 6. Monkeys were trained to associate a visual image with a particular direction of saccadic eye movement, which could be up, down, left, or right of a centrally-located fixation of the eyes. For a significant proportion of neurons studied it was found that the activity of the cells decreased with overlearning an association. At the time of publication this counter-intuitive result was greeted with much skepticism. After reading the paper, Peter Schiller did not know what to make of the result since his results (seven years before) suggested that the supplementary motor area becomes more active and engaged once new tasks are learned (Mann, Schiller et al. 1988).
Years later, Hikosaka and colleagues continued this line of work to show that the diminution of activity with learning was a real neural phenomenon and that the diminished information was channeled to the caudate nucleus (Hikosaka 2019; Hikosaka et al. 2014; Kim and Hikosaka 2013), which is connected anatomically with the entire neocortex such that the head of the caudate is interconnected with the frontal lobes whereas the tail of the caudate is interconnected with the temporal lobes (Selemon and Goldman-Rakic 1985). Hikosaka (2019) has proposed that the memories of learned tasks are archived in the caudate nucleus, whereby new tasks are stored in the head of the caudate and old tasks are stored in the tail of the caudate—perhaps for immediate use by the temporal lobes, which, if damaged, disrupt long-term memories even of one’s childhood (Corkin 2002; Squire et al. 2001).
That neurons throughout the brain (i.e., the cortex and subcortex) become less responsive to task execution once overlearned is a well-established fact (Lehericy et al. 2005). We have argued that this diminution of responsivity is the brain’s way of consolidating learned information efficiently, while reducing the energy expended for the evocation of a learned behavior (Tehovnik, Hasanbegović, Chen 2024). We and others (Lu and Golomb 2023) believe that all memories are stored according to the context of the memorization, which requires that a given site in the neocortex that contains a memory fragment such as a word or visual image be networked with other neurons to recreate the context, which we refer to as a declarative/conscious unit (Tehovnik, Hasanbegović, Chen 2024). When someone narrates a story, declarative/conscious units are concatenated in a string much like the serialization of the images of a film and this process involves both the neocortex and the cerebellum (Hasanbegović 2024).
Furthermore, a primary language (as compared to secondary languages) is stored in the neocortex and cerebellum in such a way that any damage to either structure often preserves the primary language while degrading the secondary languages (Mariën et al. 2017; Ojemann 1983, 1991; Penfield and Roberts 1966). All languages are networked separately in the brain (Ojemann 1991): a unique neocortical-cerebellar loop is summoned during the delivery of a speech in the chosen language (Tehovnik, Hasanbegović, Chen 2024). The language one thinks in (i.e., one’s counting language) is the language that is well archived and highly distributed (including areas of the brain that mediate mathematics), thus making the language more resistant to the effects of brain damage.
In conclusion, information stored in the brain is no different from information stored in a university library: the ancient texts are all housed in a special climate-controlled chamber, while the remaining texts including the most recent publications are made available to all students and professors. Indeed, it is our childhood memories that define us and therefore they deserve to be archived and protected in the brain. The details of how this happens will need to be disclosed.
Mammals (from rodents to primates) can detect and therefore become aware of visual stimuli exhibiting luminance-contrast levels as low as 1 to 2% (Schiller and Tehovnik 2015; Histed et al. 2012). Nevertheless, in the absence of the visual cortex the threshold to detect stimulus luminance-contrast surpasses 95%, which has been defined as blindsight in mammals such that visual awareness is lost under these conditions (Tehovnik, Patel, Tolias et al. 2021). Therefore, an intact neocortex is paramount for normal vision and for experiencing visual awareness with high sensory acuity; this also seems to be true of the other senses (Börnstein 1940; Exner 1881; Grüsser and Landis 1991; Guldin and Grüsser 1998; Heffner and Heffner 1986; Juenger et al. 2011; Paillard et al.1983; Zatorre and Jones-Gotman 1991).
Masking has been used as an effective tool to study conscious awareness. A target stimulus such as a spot or a face is presented first, followed by a mask that can be in the form of an annulus (to interfere with the spot) or a scrambled image (to interfere with the face) (Graziano 2019; Webb, Graziano et al. 2016). By shortening the delay between the target stimulus and the mask (e.g., to 50 ms), the target stimulus becomes silent to consciousness, but at longer delays (e.g., 100 ms) the target is restored to consciousness. Although large regions of the neocortex (including the parietal, temporal, and frontal cortices) are activated under both conditions, it has been reported that when the mask fails to obstruct consciousness the neurons are activated more robustly, as measured with fMRI (see Fig. 4A of Webb, Graziano et al. 2016).
Even though Graziano and colleagues (Graziano 2019; Webb, Graziano et al. 2016) have interpreted the neocortical enhancement as being due to consciousness, we would suggest that, along with consciousness, the brain has been doubly stimulated, since both the target and the mask activate the brain during conscious perception. Consciousness is no different from detecting a stimulus: it too has a threshold, related to the strength of synaptic connectivity, but in this case between the association areas (Tononi et al. 2008ab), which allows for the binding of perceptual elements (Kohler 1929; Singer 2001).
Furthermore, Kelly, Graziano et al. (2014) found that task difficulty (for binding perception) is central to how robustly the neocortex is activated during task performance. Human subjects were required to associate expressions on a face with an image representing danger or safety. For example, a burning building should be associated with a face exhibiting horror rather than happiness. The emotive elements within a face were manipulated to coincide or disagree with an image, and a subject was required to judge the degree of correspondence. It was found that the more difficult the correspondence (i.e., the more ambiguous), the larger the region of neocortex activated, as assessed with fMRI. This coincides with the finding that when one performs an under-learned behavioral routine (i.e., sequence learning with the fingers), more neocortical tissue is activated than when one performs an overlearned routine (Lehericy et al. 2005). This also applies to language: executing a secondary language utilizes more neocortical neurons than executing a primary language (Ojemann 1991).
Therefore, the purpose of declarative learning is to automate a behavior, so as to minimize the amount of consciousness/neocortical tissue/neural energy required for task execution; once a task is overlearned the neocortex is released (in a relative sense) to concentrate on new issues that require new learning (Tehovnik, Hasanbegović, Chen 2024). In short, consciousness/more neocortical tissue directed toward a task is an acknowledgement that the task needs further learning—further binding (Kohler 1929; Singer 2001).
Good conscience leads to heaven. Bad conscience leads to hell. https://www.researchgate.net/post/Parsimoniouslyleast_complicated_explanation_with_greatest_evidence_the_afterlife_completely_resembles_a_dream_BUT_is_REAL
Modern physics because afterlife prediction is new. More specifically, exact and concrete quantum mechanics.
The afterlife is so unpredictable, empiricism is more accurate than rationalism. https://www.researchgate.net/publication/381108355_Quantum_mechanicsmore_exact_would_predict_the_afterlife_more_accurately_than_relativity_more_theoretical
How can we determine if newborn infants possess consciousness, and if so, what methods and measures can we use to identify and assess this awareness within their developing minds?
If you find this research proposal intriguing and are interested in joining forces, I would be delighted to discuss the project's finer details at your earliest convenience.
I have delved deeply into the realms of cognitive science and the profound teachings of Buddhist philosophy, based on the "Abidhamma Marrgaya" (අභිධර්ම මාර්ගය.pdf, attached for your reference). Inspired by these two fields, I am embarking on a unique research project that seeks to model the mind through the lens of Buddhist principles.
I am eager to invite you to join me in this research endeavor. Your valuable perspectives and academic expertise would greatly enrich the project, making it a truly collaborative and rewarding experience. The primary aim of this research is to explore the nature of consciousness and cognitive processes, drawing from the wisdom and contemplative practices found in Buddhist philosophy.
I believe that combining our respective strengths in these areas can lead to novel insights and foster a deeper understanding of the mind.
In the mid-1980s at the University of Toronto, John Eccles delivered a lecture on consciousness. Eccles, being a dualist, declared that the interface to consciousness is in the supplementary motor area (Brodmann area 6) of the cerebral cortex. This hypothesis compelled Peter Schiller to determine whether this is so (Mann et al. 1988; Tehovnik and Slocum 2000). Looking for the inner man has a long history. As early as 1927, Ivan Pavlov, who worked on dogs, lesioned large chunks of the neocortex to identify the center of consciousness, defined as the site that mediates classical conditioning (Pavlov 1927). Pavlov concluded that the center was located somewhere in the neocortex and that the location was dependent on the sensory system being utilized to perform a conditioning task. Many years later Thompson revisited this question in lower mammals, showing that eye-blink conditioning is dependent on neurons below the midbrain including the cerebellum (Swain, Thompson et al. 2011; also see Gallistel et al. 2022). Thus, classical conditioning was deemed an unconscious process with no relevance to the neocortex, thereby diminishing the significance of Pavlov's conclusions.
When both the frontal eye fields and superior colliculi are destroyed in monkeys, they can no longer generate visually guided saccadic eye movements (because the neocortex has been disconnected from the brain stem), but reflexive behaviors such as optokinetic nystagmus are spared (Schiller et al. 1980). Liu et al. (2016) have found that when they optogenetically inhibited visual corticofugal neurons that project to the terminal nuclei in the brainstem that mediate optokinetic nystagmus, the ability to change the gain was reduced and in some cases abolished; therefore, the neocortex, ‘the putative conscious organ’, and the brainstem cannot be studied in isolation even for reflexive behaviors [also see: Hasan et al. (2013), who disrupted (trace) eye-blink conditioning by disabling the glutamate receptors of the motor cortex].
Accordingly, there is no neural center to consciousness, as already concluded by Wilder Penfield (1975), who observed that his patients always realized that electrical stimulation (and not them) was causing the evoked perception or movement [a similar tendency has been observed in monkeys subjected to frontal lobe electrical stimulation: the animals would actively avoid making eye movements in specific parts of the visual field to avoid being stimulated, particularly when the stimulation fixed their eyes in one orbital position against their will, Tehovnik et al. 1993]. Furthermore, consciousness is a network phenomenon that depends on both the neocortex and the cerebellum, even though declarative memories are stored in the neocortex and these memories are linked to the motor system by way of the cerebellum (Corkin 2002; Hasanbegović 2024; Mariën et al. 2017; Squire et al. 2001; Tehovnik, Patel, Tolias et al. 2021).
Ablation of the cerebellum does not abolish locomotion in mammals (Ioffe 2013); it merely induces atonia: body movements become clumsy with postural and vestibular deficits, which is related to the negation of both proprioceptive and vestibular input to the cerebellum, which encodes where the body is with respect to itself and the outside world, i.e., with respect to the gravitational axis (Carriot et al. 2021; Demontis et al. 2017; Fuchs and Kornhuber 1969; Lawson et al. 2016; Miles and Lisberger 1981). Animals have difficulty crossing a balance beam following complete cerebellar damage and the righting reflex is interrupted. Consciousness, which is a declarative attribute, is not affected following cerebellar damage (D’Angelo and Casali 2013; Petrosini et al. 1998; Tononi and Edelman 1998). As with cerebellar impairment, following neocortical ablation, locomotion is not eliminated but the sequencing of movement is severely affected (Vanderwolf 2007; Vanderwolf et al. 1978). Stepping responses can be evoked in spinal animals, but with a total loss of balance and muscular coordination since both cerebellar and neocortical support is now absent (Audet et al. 2022; Grillner 2003; Sherrington 1910).
Following a stroke that affected the left mediolateral and posterior lobes of the cerebellar cortex (including the left dentate nucleus), it was found that the subject (aged 72), a right-handed war correspondent who had been versed in seven languages, could no longer communicate in his non-primary languages (see Fig. 1, Mariën et al. 2017): French, German, Slovenian, Serbo-Croatian, Hebrew, and Dutch (in the order of having learned the languages before the age of 40). Before the stroke, the subject used Dutch, French, and English regularly. After the stroke his primary language, English, remained intact. Most significantly, on the day of the stroke, all thinking in the second languages was abolished (see Footnote 1). One day following the stroke, however, the French language returned. Nevertheless, the remaining secondary languages were abnormal. Reading was better preserved than oral and written language, likely because reading depends mainly on scanning a page with the eyes and having an intact neocortex for word comprehension (fMRI revealed language activations in the neocortex and in the intact right cerebellar hemisphere, Mariën et al. 2017). Speaking and writing, on the other hand, are more dependent on the sequencing of multiple muscle groups, a task of the cerebellum (Heck and Sultan 2002; Sultan and Heck 2003; Thach et al. 1992). When speaking or writing in a non-primary language, English words would intrude. The naming of objects and actions verbally was impaired, and writing was severely disrupted. When high-frequency visual stimuli (objects, animals, etc.) were presented visually (1 month after the stroke), identifying an object with the correct word surpassed 80% correctness for English, French, and Dutch, whereas it remained under 20% correctness for German, Slovenian, Serbo-Croatian, and Hebrew. Since the execution of behavior depends on loop integrity between the neocortex and cerebellum (Hasanbegović 2024), it is highly likely that damage to the cerebellum undermined this integrity such that the least overlearned routines—German, Slovenian, Serbo-Croatian, and Hebrew—were disturbed. Note that a functional left neocortex (of the right-handed subject) with a preserved right cerebellum was sufficient to execute the overlearned languages—English, French, and Dutch.
Based on our understanding of cerebellar function, if the entire cerebellum (including the subjacent nuclei) were damaged in the subject, we would expect that even English, the primary language, would be compromised, and most importantly, the learning of a new language would be rendered impossible, given the dependence of behavioral executions (and learning) on intact neocortical-cerebellar loops (Hasanbegović 2024; also see: Sendhilnathan and Goldberg 2000b; Thach et al. 1992). Thus, thinking is affected by damage to neocortical-cerebellar loops, which concurs with the behavioral findings of Hasanbegović (2024).
Footnote 1: Self-report by the patient about the day of the cerebellar stroke: “I was watching television at my apartment in Antwerp when suddenly the room seemed to spin around violently. I tried to stand but was unable to do so. I felt a need to vomit and managed to crawl to the bathroom to take a plastic bowl. My next instinct was to call the emergency service, but the leaflet I have outlining the services was in Dutch and for some reason, I was unable to think (or speak) in any language other than my native English. I have lived in Antwerp for many years and use Dutch (Flemish) on a day-to-day basis. I called my son-in-law, who speaks fluent English and he drove me to Middelheim Hospital. We normally speak English when together. I understood none of the questions asked to me in Dutch by hospital staff and they had to be translated back to me in English. My speech was slurred. I had lost some words, I was aware of that, but I cannot recall which words. I made no attempt to speak any of the other languages I know, and in the first hours of my mishap happening, I do not think I realized that I had other languages.” (Mariën et al. 2017, p. 19)
Figure 1. Human cerebellar cortex. The mediolateral and posterior lobes are indicated. The mediolateral lobe of the cerebellum (right and left) is part of the cortico-frontal-cerebellar language loop (Stoodley and Schmahmann 2009), and cerebellar grey matter density in bilingual speakers is correlated with language proficiency (Pliatsikas et al. 2014). Typically, the innervation of the left neocortical language areas is strongest to the right cerebellum in right-handed subjects (Van Overwalle et al. 2023). Illustration from figure 8 of Tehovnik, Patel, Tolias et al. (2021).
The hippocampus is necessary for the consolidation of recent memories, but the neocortex is important for the archival storage of these memories. This is borne out by studies in both humans and other animals (Frankland and Bontempi 2005). Replay during sleep, particularly during slow-wave sleep, is believed to be involved in the consolidation process at the neocortex (Dickey et al. 2022; Wilson and McNaughton 1994). Recall by mice of remote memories activates the prefrontal cortex, the frontal cortex, the anterior cingulate, the retrosplenial cortex, and the temporal cortex as evidenced with 14C-2-deoxyglucose (Bontempi et al. 1999), and expression of activity by the genes, c-Fos and Zif268 (Maviel et al. 2004). Initially, the hippocampus is thought to integrate information from distributed and independent cortical modules that represent various features of a task and then these features are fused to produce a coherent memory trace by strengthening connections between areas (Frankland and Bontempi 2005).
Imaging studies in animals have shown that hippocampal activity is suppressed when spatial and contextual memories are being recalled (Bontempi et al. 1999; Maviel et al. 2004), i.e., when stored memories are being combined with ongoing behavior. As well, the retrieval process is believed to utilize non-NMDA/glutamatergic circuits in the hippocampus (Nakazawa and Tonegawa 2004; Riedel et al. 1999). When neocortical memory is inconsistent with a new hippocampal memory, the hippocampus is re-activated to upgrade the stored neocortical memory (Frankland and Bontempi 2005), much like what happens for memory at the cerebellum: when there is a mismatch between the sensory inputs flowing via the mossy fibres and the inputs from the inferior olive, the learning process is reinstated [i.e., the complex spike firing is put outside of the 0.5-2 Hz window to alter simple-spike discharge, Loyola, De Zeeuw et al. 2019; Mukamel, Schnitzer et al. 2009; Tehovnik, Patel, Tolias et al. 2021].
For the sake of simplicity, it is best to think of neocortical memory as ‘sensory’ and cerebellar memory as ‘motor’, with the former having a sensory bias (i.e., visual, auditory, somatosensory, vestibular, gustatory, olfactory, interoceptive) and the latter having a motor bias, represented by proprioception, which is the sensory counterpart of ocular and skeletal movement (Chen 2019; Gibson et al. 2004; Tehovnik and Chen 2015). Cerebellar parallel fibres (from the granular neurons) interconnect the neocortex with the cerebellum via descending information from the neocortex through the pons and ascending information to the neocortex through the thalamus. Such loops of information are engaged for the execution of all behavior (Tehovnik, Hasanbegović, Chen 2024), even though reductionists have disconnected various components of the cerebellum to suggest that the cerebellar nuclei and the brain stem can operate independently of the cerebellar cortex once learning has been finalized (e.g., Miles and Lisberger 1981; Sendhilnathan and Goldberg 2000b). Such a contradiction (not that different from the ‘wave’ vs. ‘particle’ contradiction in physics) can best be appreciated by what happens during the generation of normal behavior: learning is never finalized, since the minute you wake up in the morning the brain is being subjected to new adaptive forces. And if you don’t believe me, try going to work with the exact same thoughts (Hebb 1949, 1961, 1968) as you had yesterday. We can all agree that on each day our brain experiences a newness that alters consciousness while preserving the historical record of self.
Has anyone perhaps got any information on how environmentally conscious consumers perceive the product innovation strategies of plastic manufacturers?