Automatic Control - Science topic
Questions related to Automatic Control
Dear colleagues, which do you think are the best books for teaching Automatic Control Systems as an undergraduate course? I believe it depends on the Department you are teaching in, of course. My views are in the video below. Please add more suggestions. The inclusion of MATLAB material in the syllabus is also important. https://www.youtube.com/watch?v=zmvuRZCx_XA
The 2025 4th International Conference on Computer, Artificial Intelligence and Control Engineering (CAICE 2025) will be held in Hefei, China on January 10-12, 2025.
Conference Website: https://ais.cn/u/YNfu22
---Call for papers---
The topics of interest for submission include, but are not limited to:
(I) Computer
· Edge Computing and Distributed Cloud
· Architecture, Storage and Virtualization
· Cloud Computing Technologies
· Deep Learning and Big Data
· Computer networks
......
(II) Artificial Intelligence
· Artificial Intelligence Applications
· Pattern Recognition and Machine Learning
· AI Languages and Programming Techniques
· Cybersecurity and AI
· Artificial Intelligence and Evolutionary Algorithms
......
(III) Control Engineering
· Automatic Control Principles and Technology
· Design, modeling and control of precision motion systems
· Vibration analysis and control
· Fuzzy control and its applications
· Fractional order system and control
· Flexible robotics, soft robotics
· Smart automation
---Publication---
All papers will be reviewed by two or three expert reviewers from the conference committees. After a careful reviewing process, all accepted papers will be published in the ACM conference proceedings and submitted to EI Compendex and Scopus for indexing.
---Important Dates---
Full Paper Submission Date: December 19, 2024
Notification Date: December 25, 2024
Registration Date: December 29, 2024
Conference Dates: January 10-12, 2025
--- Paper Submission---
Please send the full paper (Word + PDF) via the Submission System:

The IEEE 2024 2nd International Conference on Artificial Intelligence and Automation Control (AIAC 2024) will be held on October 25-27, 2024 in Guangzhou, China.
Conference Website: https://ais.cn/u/NJbuEn
---Call for papers---
The topics of interest for submission include, but are not limited to:
1. Artificial Intelligence
▨ Adaptive Control
▨ Agent and Multi-Agent Systems
▨ AI Algorithms
▨ Artificial Intelligence Tools and Applications
▨ Artificial Neural Networks
▨ Automatic Control
▨ Automatic Programming
▨ Bayesian Networks and Bayesian Reasoning
......
2. Control Science Engineering
▨ Operating Systems
▨ Modeling and Simulation of Emerging Technologies
▨ Nonlinear System Control
▨ Progress of Engineering Software Engineering
▨ Circuits, Electronics and Microelectronics
▨ Nonlinear Theory and Application
▨ Renewable Energy Conversion
▨ Fault Tolerant Control System
......
---Publication---
All accepted papers of AIAC 2024 will be published by IEEE and submitted to EI Compendex and Scopus for indexing. Papers for the conference proceedings must be at least 4 pages long.
---Important Dates---
Full Paper Submission Date: September 16, 2024
Registration Deadline: September 20, 2024
Final Paper Submission Date: October 21, 2024
Conference Dates: October 25-27, 2024
--- Paper Submission---
Please send the full paper (Word + PDF) via the Submission System:

Aficionados of brain machine interfaces (BMI) have the goal of hooking up the neocortical neurons of an individual with spinal cord or brain stem damage to have him or her control a device with the neurons to restore walking or communication (Birbaumer et al. 1999; Hochberg et al. 2006; Shenoy et al. 2003; Taylor et al. 2002; Wessberg et al. 2000). Since single-cell organisms can be conditioned (Saigusa et al. 2008), it should not be surprising that a single cell of the neocortex can also be conditioned for BMI development. In the study of Prsa et al. (2017), a single neuron of a head-fixed mouse was conditioned in the motor cortex (as measured with two-photon imaging), and feedback of successful conditioning was achieved by optogenetic activation of cells in the somatosensory cortex. The mouse was rewarded with a drop of water following the volitional discharge of a motor cell using the method of Fetz (1969). The conditioning was achieved after 5 minutes of practice, which highlights that the neocortex has a tremendous capacity for making associations (as already discussed), and this is why the neocortex has been the focus of BMI development (Tehovnik et al. 2013).
For an amoeba to learn it must be able to transmit information through its cell membrane so that the internal state of the cell can be modified and have the information stored for long-term use (Nakagaki et al. 2000; Saigusa et al. 2008). As mentioned, like single-cell organisms, multicellular organisms must also internalize and store changes to the environment for learning. The success of BMI, therefore, depends on the extent of feedback during learning (Birbaumer 2006). In the study of Prsa et al. (2017) the feedback came from two sources: from the activation of a population of neurons in the somatosensory cortex and from the delivery of reward which would have engaged the reward circuits of the brain (Olds and Milner 1954; Olds 1958; Pallikaras and Shizgal 2022; Yeomans et al. 1988). In the study of Fetz (1969), monkeys were conditioned by associating neural responses in the motor cortex with the delivery of a reward. It was found that this association could be abolished by cutting the proprioceptive input (Wyler and Burchiel 1978; Wyler et al. 1979), given that when learning the association monkeys often moved their limbs to drive the neurons in the motor cortex to facilitate reward delivery (Fetz 1969; Fetz and Baker 1973; Fetz and Finocchio 1971, 1972). It is noteworthy that following transection of the spinal cord to abolish proprioceptive input, a monkey was observed moving its flaccid arm by the functional arm to try to drive the cells in the motor cortex to obtain a reward (Wyler et al. 1979).
As the sensory feedback of a monkey is reduced on a BMI task, the extent of modulation of the neocortical neurons during task performance is reduced (Tehovnik et al 2013). A monkey was trained to use a manipulandum to move a cursor from the center of a computer monitor to acquire a peripherally located visual target in exchange for a reward (O’Doherty et al. 2011). Three conditions were considered as neurons in the neocortex were activated to move the cursor: (1) moving the hand-held manipulandum to acquire the target, (2) having the hand-held manipulandum fixed in place as the target was being acquired, and (3) having no manipulandum and allowing the animal to free-view the monitor to acquire the target. Going from condition 1 to condition 3, it was found that the modulation of the neocortical neurons dropped by 80%. Thus, as one reduces the number of feedback channels for BMI, expect the firing of the neocortical neurons to decline. This has direct implications for patients who are paralyzed and must therefore rely on the non-tactile and non-proprioceptive senses to engage the neurons of the neocortex.
Another factor that affects the BMI signal is that once the number of recorded neocortical neurons surpasses 40 using an electrode array, the information transmitted to drive an external device begins to saturate (Figure 32, Tehovnik and Chen 2015). The best neocortical area for obtaining an optimal BMI signal is the motor cortex, for example when driving an external device based on visually guided forelimb movements (Tehovnik et al. 2013). Also, primary cortical areas are superior to association areas for electrode implantation, since the best signals for BMI are found in areas M1, S1, and A1 (Lorach et al. 2023; Martin et al. 2014; Metzger et al. 2023; Tehovnik and Chen 2015; Tehovnik et al. 2013; Willett et al. 2021, 2023).
Furthermore, it has been known for some time that for a human subject to operate a BMI device using the neocortex, tremendous concentration is necessary, and it seems that even with practice the amount of concentration is never reduced (Bublitz et al. 2018). As discussed, a central feature of learning via the neocortex is that with the learning of a task the behavior becomes automated, thereby reducing the number of CNS (central nervous system) neurons needed to perform the task. There is no evidence that neocortically-based BMIs can be automated, since devices need to be calibrated daily (Ganguly and Carmena 2009). Finally, it is known that the neurons in the neocortex, e.g., in area M1, do not follow every behavior faithfully, given that the signals are highly variable and prone to wandering when examined across days and months (Gallego et al. 2020; Rokni et al. 2007; Schaeffer and Aksenova 2018). If the neocortex is indeed the center of consciousness (as is presumed here), then one should expect that the neurons in this part of the brain do not discharge lawfully to every motor response as do the motor neurons in the brain stem and spinal cord (Schiller and Tehovnik 2015; Sherrington 1906; Vanderwolf 2007). Whether implanting electrodes in the cerebellum might overcome some of the shortcomings found for the neocortex should be considered.
So, how much information is transmitted by a BMI device in bits per second when recording from the neocortex? In 2013, the amount of information transmitted averaged 0.2 bits per second, which was based on work done on behaving primates as well as human subjects (Tehovnik et al. 2013). This value is comparable to the amount of information transmitted by Stephen Hawking (who suffered from amyotrophic lateral sclerosis, ALS) using his cheek muscle at 0.1 bits per second (corrected for information redundancy and based on data from De Lange 2011).[1] This means that at this time there would not have been any advantage for Hawking to use a BMI.
Several studies in recent years have increased the information transfer rate of BMIs above 1 bit per second. Metzger et al. (2023) developed a BMI to recover language in a patient who had experienced a brain stem stroke that abolished speaking and eliminated the ability to type. A 253-channel electrocortical array was placed on the surface of the sensorimotor cortex over areas that mediate facial movements. It was found that as the subject engaged in the silent reading of sentences, signals could be extracted from the neocortex (with the assistance of artificial intelligence) to generate text at a rate of 78 words per minute with 75% correctness. This translates into 2.5 bits of information per second, or 5.7 possibilities per second [to derive the bit-rate, corrections were made for information redundancy, Reed and Durlach (1998); see Tehovnik et al. (2013) for other details]. This value is consistent with what has been reported by others using depth electrodes implanted in the motor cortex of the face and hand area for silent reading and imagined writing (i.e., ranging from 1.2 to 2.1 bits per second with the assistance of artificial intelligence, Willett et al. 2021, 2023; also see Metzger et al. 2022).[2] Overall, 1.2 to 2.5 bits per second corresponds to predicting 2 to 6 possibilities per second, which is far short of the performance of a cochlear implant (which can predict over 1,000 possibilities per second, Baranauskas 2014), and way short of normal language (which can predict over a trillion possibilities per second, Reed and Durlach 1998).
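To make the unit conversion explicit: n bits per second corresponds to resolving 2^n equally likely possibilities per second. A minimal Python sketch of this arithmetic (an illustration of the figures quoted above, not code from the cited studies):

# Convert an information transfer rate (bits/s) into 'possibilities per second',
# i.e., the number of equally likely alternatives resolved each second.
def possibilities_per_second(bits_per_second):
    return 2.0 ** bits_per_second

for rate in (0.2, 1.2, 2.1, 2.5):
    print(rate, round(possibilities_per_second(rate), 1))
# 2.5 bits/s -> about 5.7 possibilities/s; 1.2-2.5 bits/s -> roughly 2-6 possibilities/s.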
As for restoring locomotion to spinal cord patients, a major effort was made by Miguel Nicolelis to fit a paralyzed patient in an exoskeleton such that signals were collected from the subject’s neocortex to have him kick a soccer ball with the exoskeleton, which was used to open the 2014 FIFA World Cup (Nicolelis 2019). Realizing that the demonstration was not working, FIFA and the media networks cancelled the broadcast before having the failure transmitted throughout the world (Tehovnik 2017b). Nevertheless, since this time investigators have not given up on the idea of restoring locomotor functions to patients with spinal cord damage. Similar to the study of Ethier et al. (2012), who found that activity from the neocortex could be used to contract the skeletal muscles by discharging the cells in the spinal cord of a monkey, Lorach et al. (2023) found that signals from the sensorimotor cortex of a paralyzed patient could drive the muscles in the legs by having the cortical signals transmitted to the lumbar spinal cord. Recordings were made from each hemisphere using an array of 64 epidural electrodes positioned over each somatosensory cortex. When the patient thought about moving his legs the signal generated in the neocortex stimulated an array of 16 electrodes positioned over the dorsal lumbar spinal cord, such that some combination of 8 electrodes was activated over the dorsal roots of the left spinal cord and some combination of the remaining electrodes was activated over the dorsal roots of the right spinal cord. Consistent with the anatomy, the right neocortex (upon thinking to move) engaged the left spinal cord and the left neocortex engaged the right spinal cord, which elicited a stepping response at a latency of ~100 ms following the discharge of the neocortical neurons, which matches the normal latency.
The walking induced by the implants was slower than that found for an intact system, and the patient typically had to walk with the assistance of crutches since postural support was impaired. The minimal number of muscles utilized to walk is eight per leg (including gluteus maximus, gluteus medius, vasti, rectus femoris, hamstrings, gastrocnemius, soleus, and dorsiflexors) for a total of sixteen muscles required (Lorach et al. 2023; Liu et al. 2008). Accordingly, if a ‘0’ and ‘1’ are assigned to the absence and presence of a muscle contraction, then a minimum of 16 bits of information is needed to perform a stepping response. It took the paralyzed patient 4.6 seconds to complete a step (derived from Figure 4f of Lorach et al. 2023), whereas a normal subject takes a tenth of this time to complete a step (based on the step duration of one of the authors). Therefore, the information transferred by the patient was 3.5 bits per second (16 bits/4.6 sec) and that transferred by a normal subject would be 35 bits per second (16 bits/0.46 sec), namely, one order of magnitude less for the patient. Finally, it was found that in the absence of the neocortical implant but with stimulation delivered to the spinal cord implant, the patient could still walk, but at a bit-rate of 3 bits per second (derived from Figure 4f of Lorach et al. 2023). Thus, the neocortex added 0.5 bits per second to the information throughput.
Following from the foregoing ‘minimal’ analysis, if each skeletal muscle in the body represents 1 bit of information, then the entire collection of muscles in the body (totaling 700, Tortora and Grabowski 1996) represents 700 bits. We know that language generation requires a minimum of 20 or so skeletal muscles (Simonyan and Horwitz 2011), or 20 bits of information.[3] Generating a muscle contraction every 500 ms would put the bit-rate for language at 40 bits per second. Accordingly, the skeleto-motor throughput by itself falls well short of the trillion bits per second estimated for the neocortex or the cerebellum, suggesting further that the information transfer capacity of these structures is mainly for internal use.
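The ‘one bit per muscle’ estimates in the last two paragraphs amount to dividing the bits assigned to a movement by its duration. A short Python sketch reproducing those numbers (illustrative only; values are taken from the text):

# Bit-rate = bits per action / seconds per action.
def bit_rate(bits_per_action, seconds_per_action):
    return bits_per_action / seconds_per_action

print(round(bit_rate(16, 4.6), 1))   # stepping, paralyzed patient -> ~3.5 bits/s
print(round(bit_rate(16, 0.46), 1))  # stepping, normal subject    -> ~35 bits/s
print(round(bit_rate(20, 0.5), 1))   # speech, one contraction per 500 ms -> 40 bits/s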
Summary
1. A neocortically-based BMI—like a functional brain—is dependent on feedback from the senses to remain operative. The more feedback channels available, the better the signal.
2. An information ceiling occurs when recording from more than 40 neurons in the neocortex using implanted electrode arrays.
3. Neural signals derived from the neocortex are not good for long-term use, and therefore a device would need to be recalibrated daily. Furthermore, to operate such a device requires much concentration on the part of a patient, since the signal does not seem amenable to automation for long-term functionality.
4. In 2013, the amount of information transmitted by a BMI averaged 0.2 bits per second. At this time, it would not have made sense for Stephen Hawking to use such a device to overcome ALS.
5. When electrodes are centered on the writing or speech areas of the motor cortex the amount of information transmitted by the neurons ranges from 1.2 to 2.5 bits per second. This translates into accurately predicting 2 to 6 possibilities per second, which is far short of the performance of a cochlear implant—which can predict over 1,000 possibilities per second—and way short of normal language—which can predict over a trillion possibilities per second.
6. To elicit a stepping response with a BMI, a throughput of 3.5 bits per second has been achieved. This rate is one order of magnitude below the required rate of 35 bits per second to produce a stepping response.
Footnotes:
[1] To determine the information transfer rate from behavioral performance data, see Tehovnik and Chen (2015) and Tehovnik et al. (2013).
[2] For the imagined writing, the hand contralateral to the implant was somewhat functional through movement, which could have contributed to the imagined writing (Willett et al. 2021).
[3] A total of 100 muscles are used for speech which control the voice, swallowing, and breathing (Simonyan and Horwitz 2011).
Figure 32. Normalized BMI signal is plotted as a function of neurons. See Tehovnik and Chen (2015) for details. (file: auto_003.gif)

[CFP] 2024 2nd International Conference on Artificial Intelligence, Systems and Network Security (AISNS 2024) - December
AISNS 2024 aims to bring together innovative academics and industrial experts in the field of Artificial Intelligence, Systems and Cyber Security in a common forum. The primary goal of the conference is to promote research and developmental activities in computer information science and application technology; a further goal is to promote the interchange of scientific information between researchers, developers, engineers, students, and practitioners working around the world. The conference will be held every year, making it an ideal platform for people to share views and experiences in computer information science, application technology, and related areas.
Conference Link:
Topics of interest include, but are not limited to:
◕Artificial Intelligence
· AI Algorithms
· Natural Language Processing
· Fuzzy Logic
· Computer Vision and Image Understanding
· Signal and Image Processing
......
◕Network Security
· Active Defense Systems
· Adaptive Defense Systems
· Analysis, Benchmark of Security Systems
· Applied Cryptography
· Authentication
· Biometric Security
......
◕Computer Systems
· Operating Systems
· Distributed Systems
· Database Systems
Important dates:
Full Paper Submission Date: October 10, 2024
Registration Deadline: November 29, 2024
Conference Dates: December 20-22, 2024
Submission Link:

A major goal of learning and consciousness is to automate behavior--i.e., to transition from ‘thinking slow’ to ‘thinking fast’ (Kahneman 2011)--so that when an organism is subjected to a specific context, an automatic response is executed with minimal participation from volitional circuits (i.e., in the neocortex). When one needs to enter a secure area, it is common to be confronted with a keypad upon which one must punch out a code to gain entry. At the beginning of learning the code, one is given a number, e.g., ‘3897’, which must be committed to declarative memory. After having entered the facility on numerous occasions, one no longer needs to remember the number, but just the spatial sequence of the finger presses. Thus, the code has been automated by the brain. In fact, often the number is no longer required, since the nervous system automatically punches out the number using implicit memory (something like never needing to recall the rules of grammar to write correct sentences).
So, how does the brain automate behavior? The first clue to this question comes from studies on express saccadic eye movements (Schiller and Tehovnik 2015). Express saccades are eye movements generated briskly to single targets at latencies between 80 and 125 ms. In contrast, regular saccades are saccadic eye movements generated to a single target or to multiple targets (as used in discrimination learning such as match-to-sample) whose latencies vary from 125 to 200 ms, or greater depending on task difficulty (see Figure 14). The behavioral context for the elicitation of express saccades is to have a gap between the termination of the fixation spot and the onset of a single punctate visual target (Fischer and Boch 1983). The distributions of express saccades and regular saccades are bimodal, suggesting that two very different neural processes are in play when these eye movements are being evoked. After carrying out lesions of different parts of the visual system (i.e., the lateral geniculate nucleus parvocellular division, the lateral geniculate nucleus magnocellular division, area V4, the middle temporal cortex, the frontal eye fields, the medial eye fields, or the superior colliculus), it was found that lesions of the superior colliculus abolished express saccades, whereas for all other lesion types the express saccades were spared. Thus, a posterior channel starting in V1 and passing through the superior colliculus mediates express saccades (Schiller and Tehovnik 2015). Furthermore, the minimal latency for express saccades (i.e., 80 ms) is accounted for by the summed signal latencies between the retina and area V1 (i.e., 30 ms), between area V1 and the superior colliculus (i.e., 25 ms), and between the superior colliculus, the saccade generator, and the ocular muscles (i.e., 25 ms, Tehovnik et al. 2003)[1]. What this indicates is that express saccade behavior bypasses the frontal cortex and the posterior association areas of the neocortex (i.e., V4 and the medial temporal cortex), and is transmitted directly from V1 to the brain stem[2].
For oculomotor control, parallel pathways occur between (1) the posterior and the anterior regions of the neocortex (i.e., including, respectively, V1 and the frontal eye fields[3]) and (2) the brain stem ocular generator, which mediates ocular responses in mammals (Figure 15, Tehovnik et al. 2021). The idea that parallel pathways between the neocortex and brain stem mediate specific responses, such as the V1-collicular pathway subserving ocular automaticity, is not new. Ojemann (1983, 1991) has proposed that a multitude of parallel pathways subserves language, since once a language is mastered, it becomes a highly automated act, and electrical perturbation of a focal neocortical site affects a specific component of a language, but not an entire language string, as long as the remaining parallel pathways are intact. Global aphasia occurs when all the parallel pathways of Wernicke’s and Broca’s areas are damaged (Kimura 1993; Ojemann 1991; Penfield and Roberts 1966).
Why is it that express saccades and regular saccades alternate across trials in a quasi-random order (Schiller and Tehovnik 2015)? Lisberger (1984) has studied latency oscillations across trials for the vestibuloocular reflex by measuring the onset of an eye movement after the beginning of a head displacement. He found latency values as low as 12 ms and as high as 20 ms (Lisberger 1984; Miles and Lisberger 1981). At a 12-ms latency, the signal would need to bypass the cerebellar cortex and be transmitted from the vestibular nerve through the vestibular nucleus (which is a cerebellar nucleus) to the abducens (oculomotor) nucleus to contract the eye muscles within 12 ms (Lisberger 1984). At a 20-ms latency, the signal would pass from the vestibular nerve to the cerebellar cortex by way of the granular-Purkinje synapses and then to the vestibular and abducens nuclei to arrive at the muscles within 20 ms. The difference between the fast and slow pathway is 8 ms, and it is the additional 8 ms through the cerebellar cortex that allows for any corrections to be made to the efference-copy code[4].
In the case of regular versus express saccades, the minimal latency difference is 45 ms (i.e., 125 ms – 80 ms = 45 ms, Schiller and Tehovnik 2015). So, what could explain this difference? Regular saccades utilize both posterior and anterior channels in the neocortex, for paired lesions of the superior colliculus and the frontal eye fields are required to abolish all visually guided saccades (Schiller et al. 1980). Perhaps, the longer latency of regular saccades as compared to express saccades is due to transmission by way of the frontal eye fields for regular saccades, as well as having the signal sent through the cerebellar cortex via the pontine nuclei and inferior olive to update any changes to the efference-copy code. Express saccades, on the other hand, utilize a direct pathway between V1 and the saccade generator, with access to the cerebellar nuclei (i.e., the fastigial nuclei[5], Noda et al. 1991; Ohtsuka and Noda 1991) for completion of a response at a latency approaching 80 ms—a latency that is too short for frontal lobe/temporal lobe participation and the conscious evaluation of the stimulus (at least 125 ms is required for a frontal/temporal lobe signal to arrive in V1, Ito, Maldonado et al. 2023)[6]. Utilizing the fast pathway would not permit any changes to the efference-copy code and furthermore there would be no time for the conscious evaluation of the stimulus conditions. This general scheme for slow versus fast ‘thinking’ (Kahneman 2011) can be applied to any behavior, as the behavior changes from a state of learning and consciousness to a state of automaticity and unconsciousness[7].
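The latency bookkeeping of the last few paragraphs can be summarized in a few lines of Python (the values are those given in the text; this is an accounting aid, not a model):

# Express-saccade pathway: retina -> V1 (30 ms), V1 -> superior colliculus (25 ms),
# colliculus -> saccade generator -> ocular muscles (25 ms).
express_saccade_min_ms = 30 + 25 + 25        # 80 ms minimal latency

# Regular vs express saccades: extra time attributed to frontal-lobe and
# cerebellar-cortex routing.
regular_vs_express_ms = 125 - 80             # 45 ms difference

# VOR: slow pathway through the cerebellar cortex vs fast pathway that bypasses it.
vor_slow_vs_fast_ms = 20 - 12                # 8 ms difference

print(express_saccade_min_ms, regular_vs_express_ms, vor_slow_vs_fast_ms)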
While thinking slow, the human cerebellum can update as many as 50,000 independent efference-copy representations (Heck and Sultan 2002; Sultan and Heck 2003). And we know that during task execution the entire cerebellar cortex is engaged, including circuits not necessary for task execution (Hasanbegović 2024). This global reach assures that all aspects of a behavior are perfected through continuous sensory feedback; hence, evolution left nothing to chance.
The number of neurons dedicated to a behavioral response decreases as a function of automaticity. This translates into a reduction in energy expenditure per response for the neurons as well as for the muscles[8]. The first evidence for this idea came from the work of Chen and Wise (1995ab) in their studies of neurons in the medial and frontal eye fields of primates (see Figure 15, monkey). Monkeys were trained on a trial-and-error association task, whereby an animal fixated a central spot on a TV monitor and arbitrarily associated a visual object with a specific saccade direction by evoking a saccadic eye movement to one of four potential targets (up, down, left, or right) to get a reward (see Figure 16, left-top panel, the inset). An association was learned to over 95% correctness within 20 trials; unit recordings were made of the neurons in the medial and frontal eye fields during this time. The performance of an animal improved on a novel object-saccade association, such that the neurons exhibited either an increase in unit spike rate with an increase in the proportion of correct trials (Figure 16, novel, top panel), or an increase followed by a decrease in unit spike rate as the proportion of correct trials increased (Figure 16, novel, bottom panel, and Figure 17, novel, top panel). When the neurons were subjected to a familiar association, the discharge often assumed the same level of firing achieved following the asymptotic performance on novel associations: namely, high discharge and modulated (Figure 16, familiar, top panel) or low discharge and unmodulated (Figure 16, familiar, bottom panel; Figure 17, familiar, top panel). Accordingly, many neurons studied exhibited a decline in activity when subjected to familiar objects[9]. Although 33% of the neurons (33 of 101 classified as having learning-related activity) exhibited a decline and a de-modulation in activity during the presentation of a familiar object (e.g., Figure 17, familiar, top), this proportion is likely an underestimation, since many such neurons may have been missed given that unit recording is biased in favor of identifying responsive neurons. For example, a neuron that exhibited a burst of activity on just one trial could have been missed due to the averaging of adjacent trials, using a 3-point averaging method (Chen and Wise 1995ab).
For cells that had the properties shown in Figure 16 (novel, top panel) for novel objects—i.e., showing an increase in activity with an increase in task performance—there was no delay in trials between the change in neural firing and the change in performance, as indicated by the downward arrow in the figure representing ‘0’ trials between the curves; this suggests that these cells were tracking the performance. Also, there was a group of cells that exhibited an increase and a decrease in unit firing such that their response to novel and familiar objects declined with the number of trials as well (Figure 16, bottom panels, novel and familiar). This indicates that the decline in activity was being replayed when the object became familiar. Finally, for neurons that exhibited an increase and decrease in spike activity over trials, the declining portion of the neural response (at 50% decline) always followed the increase in task performance by more than half a dozen trials, as indicated by the gap between the downward arrows of Figure 16 (novel, bottom) and Figure 17 (novel, top), illustrating that these neurons anticipated peak performance. Some have suggested that the short-term modulation in the frontal lobes is channeled to the caudate nucleus for long-term storage (Hikosaka et al. 2014; Kim and Hikosaka 2013). More will be said about this in the next chapter.
Imaging experiments (using fMRI) have shown that as one learns a new task, the number of neurons modulated by the task declines. Human subjects were required to perform a novel association task (associate novel visual images with a particular finger response) and to perform a familiar association task (associate familiar visual images with a particular finger response) (Toni et al. 2001). It was found that as compared to the novel association task, the familiar association task activated less tissue in the following regions: the medial frontal cortex and anterior cingulate, the prefrontal cortex, the orbital cortex, the temporal cortex and hippocampal formation, and the caudate nucleus. Furthermore, the over-learning of a finger sequencing task by human subjects from training day 1 to training day 28 was associated with a decline in fMRI activity in the following subcortical areas: the substantia nigra, the caudate nucleus, and the cerebellar cortex and dentate nucleus (Lehericy et al. 2005). Also, there was a decrease in activity in the prefrontal and premotor cortices, as well as in the anterior cingulate.
Finally, it is well-known that a primary language as compared to a secondary language is more resistant to the effects of brain damage of the neocortex and cerebellum, and a primary language, unlike a secondary language, is more difficult to interrupt by focal electrical stimulation of the neocortex (Mariën et al. 2017; Ojemann 1983, 1991; Penfield and Roberts 1966). Accordingly, the more consolidated a behavior, the fewer essential neurons dedicated to that behavior. Once a behavior is automated, there is no need to recall the details: e.g., punching out a code on a keypad no longer requires an explicit recollection of the numbers. This is why a good scientist is also a good record keeper, which further minimizes the amount of information stored in the brain (Clark 1998). By freeing up neural space, the brain is free to learn about and be conscious of new things (Hebb 1949, 1968).
Summary:
1. Automaticity is mediated by parallel channels originating from the neocortex and passing to the motor generators in the brain stem; behaviors triggered by this process are context dependent and established through learning and consciousness.
2. Express saccades are an example of an automated response that depends on a pathway passing through V1 and the superior colliculus to access the saccade generator in the brain stem. The context for triggering this behavior is a single visual target presented with a gap between the termination of the fixation spot and the presentation of the target.
3. The alternation between express and non-express behavior across trials is indicative of express behavior bypassing the cerebellar cortex and non-express behavior utilizing the cerebellar cortex to adjust the efference-copy code.
4. Express saccades or express fixations are too short in duration (< 125 ms) for a target to be consciously identified. It takes at least 125 ms for a signal to be transmitted between the frontal/temporal lobes and area V1 to facilitate identification.
5. Automaticity reduces the number of neurons participating in the execution of a behavioral response; this frees up central nervous system neurons for new learning and consciousness.
Footnotes:
[1] The long delay of 25 ms between V1 and the superior colliculus is partly due to the tonic inhibition of the colliculus by the substantia nigra reticulata, which originates from the frontal cortex (Schiller and Tehovnik 2015).
[2] Cooling area V1 of monkeys disables the deepest layers of the superior colliculus, thereby making it impossible for signals to be transmitted between V1 and the saccade generator in the brain stem (see figure 15-11 of Schiller and Tehovnik 2015).
[3] In rodents, the frontal eye field homologue is the anteromedial cortex, and the neurons in this region elicit ocular responses using eye and head movements (Tehovnik et al. 2021). In primates, the frontal eye fields control eye movements independently of head movements, hence the name ‘frontal eye field’ (Chen and Tehovnik 2007).
[4] These short latencies are for highly automated vestibular responses. Astronauts returning from space have severe vestibular (and other) problems, and it takes about a week for full adaptation to zero-G conditions (Carriot et al. 2021; Demontis et al. 2017; Lawson et al. 2016). It would be expected that the latencies would far surpass 20 ms, since now vestibular centers of the neocortex (to engage learning and consciousness) would be recruited in the adaptation process (Gogolla 2017; Guldin and Grüsser 1998; Kahane, Berthoz et al. 2003). Patients suffering from vestibular agnosia would be unaware of the adaptation process, as experienced by astronauts (Calzolari et al. 2020; Hadi et al. 2022).
[5] Monkey fastigial neurons begin to fire 7.7 ms before the execution of a saccadic eye movement (Fuchs and Straube 1993). This nucleus is two synapses away from the ocular muscles.
[6] An unfamiliar object presented during an express fixation (i.e., a fixation of less than 125 ms; fixations between electrically-evoked staircase saccades elicited from the superior colliculus are about 90 ms, Schiller and Tehovnik 2015) should fail to be identified consciously by a primate; on the other hand, the identification of a familiar object during an express fixation will only occur via ‘subconscious’ pathways, which are pathways at and below the superior colliculus/pretectum and the cerebellum (see: De Haan et al. 2020; Tehovnik et al. 2021).
[7] The conscious and unconscious states can never be totally independent, since the neocortex constantly monitors the behavior of an animal, looking for ways to optimize a response in terms of accuracy and latency (Schiller and Tehovnik 2015), and this interaction explains the variability of response latency across a succession of trials.
[8] Lots of aimless movements are generated when learning a new task (Skinner 1938), and when building knowledge, one must dissociate the nonsense from facts to better solve problems. This initially takes energy but in time automaticity saves energy.
[9] When we (Edward J. Tehovnik and Peter H. Schiller) first reviewed this result for publication, we were mystified by the decline of neural responsivity with object familiarity, even though we accepted the paper based on its behavioral sophistication and the challenges of recording from such a large number of neurons (i.e., 476) using a single electrode.
Figure 14. (A) The bimodal distribution of express saccades and regular saccades made to a single target by a rhesus monkey. (B) Before and after a unilateral lesion of the superior colliculus for saccades generated to a target located contralateral to the lesion. (C) Before and after a unilateral lesion of the frontal and medial eye fields for saccades generated to a target located contralateral to the lesion. Data from figure 15-12 of Schiller and Tehovnik (2015).
Figure 15. Parallel oculomotor pathways in the monkey and the mouse. Posterior regions of the neocortex innervate the brain stem oculomotor generator by way of the superior colliculus, and anterior regions of the neocortex innervate the brain stem oculomotor generator directly. For the monkey the following regions are defined: V1, V2, V3, V4, LIP (lateral intraparietal area), MT (medial temporal cortex), MST (medial superior temporal cortex), sts (superior temporal sulcus), IT (infratemporal cortex), Cs (central sulcus), M1, M2, FEF (frontal eye field), MEF (medial eye field), OF (olfactory bulb), SC (superior colliculus), and brain stem, which houses the ocular generator. For the mouse: V1, PM (area posteromedial), AM (area anteromedial), A (area anterior), RL (area rostrolateral), AL (area anterolateral), LM (area lateromedial), LI (area lateral intermediate), PR (area postrhinal), P (area posterior), M1, M2, AMC (anteromedial cortex), OB (olfactory bulb), SC (superior colliculus), and brain stem containing the ocular generator. The posterior neocortex mediates ‘what’ function, and the superior colliculus mediates ‘where’ functions.
Figure 16. Performance (percent correct) is plotted (solid black curve) as a function of number of correct trials on a trial-and-error object-saccade-direction association task. A monkey was required to fixate a spot on a monitor for 0.6 seconds, which was followed by a 0.6 second presentation of an object at the fixation location. Afterwards, there was an imposed 2-3 second delay, followed by a trigger signal to generate a response to one of the four target locations to obtain a juice reward; the termination of the fixation spot was the trigger signal (see inset in top-right panel: OB represents object, and the four squares indicate the target locations of the task, and Figure 17, bottom summarizes the events of the task). Chance performance was 25% correctness, and the maximal performance was always greater than 95% correctness established within 20 correct trials. The performance shown is the aggregate performance. In each panel, the normalized (aggregate) unit response is represented by a dashed line. The representations are based on figures 10 and 11 of Chen and Wise (1995a) for the medial eye field, and the neurons were modulated by learning novel object-saccade associations (N = 101 of 476 neurons classified). Some cells modulated by learning were also found in the frontal eye fields (N = 14 of 221 neurons classified, Chen and Wise 1995b). In the lower right panel, the familiar objects induced a decline in the neural response over the 20 trials. The illustrations are based on data from figures 11 and 12 of Chen and Wise (1995a).
Figure 17. Performance (percent correct) is plotted (solid black curve) as a function of number of correct trials on a trial-and-error object-saccade-direction association task carried out by a monkey. The dashed curves represent normalized aggregate unit responses. The inset in the right panel shows the task. For other details see the caption of figure 16. The bottom panel summarizes the events of the task. The illustrations are based on data from figures 3C, 4C, 5C, and 10D of Chen and Wise (1995a).




Tononi and associates (2016) believe that different neurons control consciousness versus unconsciousness, subjecting the brain to a dualism that can be traced back to René Descartes of the 17th century. This idea conflicts with the observations of Oliver Sacks, who found that bilateral damage of the dopaminergic fibres that innervate the neocortex disrupts both the flow of movements (which can be executed consciously as well as unconsciously) and the flow of thinking (a very conscious process):
Parkinson’s patients, while immobile and comatose, are unable to schedule their movements and thoughts. As described by Parkinson’s patient Miss D: “…my essential symptom is that I cannot start and I cannot stop. Either I am held still or I am forced to accelerate.” [Sacks 2012, p. 40] As well, perceptions, words, phrases, or thoughts can be locked, either brought to a standstill or continuously repeated [Sacks 2012, pp. 15-16]. All volitional, introspective, and automatic states are interrupted in Parkinson’s patients, suggesting that dopamine must mediate the smooth transition of events for these states; in the absence of dopamine, subjects are put into a perpetual ‘sleep’, as evidenced by their EEG showing delta activity, which is prevalent during slow-wave sleep.
Furthermore, when considering the preparatory activity preceding a movement (which can be thought of as ‘thinking to move’, James 1890), the preparatory activity has the same predictive power for a future movement irrespective of whether there is a movement or not (Darlington and Lisberger 2020; also see Nasibullina, Lebedev et al. 2023), and the preparatory activity is present throughout the neocortex as well as the subcortex, including the thalamus, the pons, and the cerebellar cortex and nuclei, for instance (Darlington and Lisberger 2020; Hasanbegović 2024). Accordingly, the same neurons in the neocortex mediate both consciousness and unconsciousness, with the difference being in the nature of the pathways utilized to accomplish each: e.g., visual consciousness (which is for visual learning, Hebb 1949, 1961, 1968) would depend on both posterior and anterior neocortical sites, whereas visual unconsciousness would depend mainly on posterior neocortical sites since the learning of new routines has been finalized via the frontal lobes (Chen and Wise 1995ab; Schiller and Tehovnik 2001, 2005, 2015; Tehovnik 2024; Tehovnik, Hasanbegović, Chen 2024).
When mammals including humans are involved in volitional behaviors such as walking, running, or swimming, the neocortex assumes a low-voltage fast EEG activity, which characterizes the waking state of the brain (Vanderwolf 1969). When one swims lengths in a pool, one is very aware of two states of consciousness: a first state that is anchored to current sensations, especially when approaching the end of a pool length, which requires a flip turn initiated by vision, touch, sound, proprioception, and a change in vestibular head-orientation. To enhance one’s linkage to current sensations while swimming, one must swim with determination to reach the end of the pool as fast as possible, as would be the case by someone engaged in a swimming competition.
A second state of consciousness assumed during swimming is to be disconnected from one’s sensations, and instead be thinking about events of the day which depends both on information stored in the neocortex (e.g., in the parietal, temporal, and orbital cortices) and on the unconscious rhythmicity of swimming via subcortical circuits (e.g., as mediated by the cerebellum, Hasanbegović 2024). The unconscious rhythmicity is triggered by a visual impression at the end of a pool length to induce a flip turn; this is transmitted via the neocortex to subcortical channels (Tehovnik, Hasanbegović, Chen 2024). Once a flip turn is completed and swimming resumed, one can continue to contemplate the events of the day, consciously.
It is noteworthy that when the dopaminergic system of Parkinson’s patients, whose dopamine levels have been reduced by 99% (Sacks 2012, p. 335), is recovered using amantadine (a dopaminergic agonist), their neocortical EEG resembles low-voltage fast activity [Figs. 2 and 3 of Sacks 2012, pp. 329, 331], which is evident during the waking state, during volitional and automatic movements, and during introspective thinking (Sacks 2012). And recall that having one’s movements and thinking locked due to dopamine depletion is accompanied by neocortical slow-wave activity, which also occurs during sleep. It is for this reason that Parkinsonism has often been referred to as a sleeping sickness (Sacks 1976).
So, EEG monitoring of athletes during swimming is a fast way to test whether neocortical low-voltage fast activity changes depending on whether one is swimming volitionally as during a competition (which means all consciousness is dedicated to current sensations and the motor act) or swimming contemplatively (thinking about events of the day while executing an automated act). According to Tononi et al. (2016), these two states should generate different forms of activity over the neocortex if different neurons are engaged in the performance of each. According to our scheme (detailed in Tehovnik, Hasanbegović, Chen 2024), the activity of the posterior neocortex should remain unchanged for both conditions, while the frontal lobes will only be engaged when new routines are being learned (or contemplated), which requires consciousness (or thinking, Hebb 1949, 1961, 1968).
** I would suggest that the bet between Christof Koch and David Chalmers [In: A 25-year-old bet about Consciousness has finally been settled, 2023] be extended for one or two years so that Christof can finally collect his reward for being correct about consciousness. But not to be too hasty, maybe we should wait for the empirical results to roll in based on our new conceptualization of consciousness as a neurophysiological/behavioral (rather than a philosophical/computational, Tononi et al. 2016) problem. The latter is the same error made by supporters of the Blue Brain Project, spearheaded by Henry Markram, which cost Europe over a billion dollars. **
Is synergetic control a model-free or model-based approach? Please tell me the reasons.
How about PID control?
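For what it is worth: a PID controller computes its output from the measured error alone, with no explicit plant model appearing in the control law, which is why it is commonly classed as model-free (a plant model may still be used for tuning); synergetic control laws, by contrast, are usually derived from the system's state-space model and are therefore generally regarded as model-based. A minimal discrete-time PID sketch in Python (illustrative only; the gains, plant, and time step are arbitrary assumptions):

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        # The control law uses only the error; no plant model appears here.
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# The plant below (x' = -x + u) is used only to simulate the closed loop,
# not by the controller itself.
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
x = 0.0
for _ in range(1000):
    u = pid.update(setpoint=1.0, measurement=x)
    x += (-x + u) * 0.01
print(round(x, 3))  # settles near the setpoint of 1.0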
Unlike structures in the brain that control body movements, such as the brain stem, cerebellum, and spinal cord, whose energy consumption is related to movement execution and the maintenance of posture, the neocortex consumes energy at a high level all the time, irrespective of whether there are body movements or not (Herculano-Houzel 2011). Indeed, consciousness via the neocortex cannot be turned off during one’s waking hours (Chomsky 2023); during this time sensory-motor information is constantly being updated (i.e., through declarative learning: Hebb 1949, 1961, 1968) and consolidated during immobility and sleep (Wilson and McNaughton 1994). This process assures that the information can be used to trigger appropriate responses automatically based on environmental demands, such as the need to get up in the morning, to shower and have a coffee, to perform one’s duties at work, to take a lunch break, to complete the remaining tasks of the day, and to finish up the day over dinner with family before going to bed to start the process once again.
It is most significant that the information transferred by neocortical brain-machine interfaces saturates after collecting signals from more than 40 neurons situated in the parietal, motor, and/or premotor cortices, as monkeys perform a forelimb task guided by vision (see Fig. 1). This explains, in part, why the information transfer rate of brain-machine interfaces is so low [i.e., typically less than 2.5 bits per second (under 8 possibilities per second); language, on the other hand, can deliver 40 bits per second (some trillion possibilities per second) using the entire brain, Tehovnik, Hasanbegović, Chen 2024]. Furthermore, unlike alpha motor neurons, which are engaged reliably (and without exhaustion) for the execution of repetitive movements, neocortical neurons exhibit a high degree of discharge variability during movement repetition (Rokni et al. 2007; Sartori et al. 2017; Schaeffer and Aksenova 2018). This suggests that something else, in addition to commanding movements (Evarts 1966, 1968; Robinson and Fuchs 1969), is going on in the neocortex. This something else is consciousness, which updates the information stores of the neocortex by way of thinking (Chomsky 1965; also see Darlington and Lisberger 2020). In fact, once automaticity sets in, the business of inducing movements is relegated to as few neurons as possible at the level of the neocortex and subcortex (Lehericy et al. 2005). The purpose of this reduction in neural participation is so that maximal neural effort can be dedicated to consciousness/learning to promote the storage of new information (e.g., Chen and Wise 1995ab), which is the main reason for having a brain in the first place (Hebb 1949, 1961, 1968).
Figure 1: The brain signals for the transfer of information to drive a brain-machine interface begin to saturate after collecting signals from more than 40 neurons using implanted electrode arrays in the neocortex of behaving monkeys performing, for example, a center-out task using the forelimbs. Single or multiple arrays surpassing 100 microwires in total were implanted throughout the neocortex often including the parietal, the motor, and the premotor cortices, all areas of the brain that contain neurons that respond to visually-guided forelimb movements. Data from figure 4 of Tehovnik and Chen (2015).
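As a toy illustration of this saturation (the functional form and constants below are assumptions chosen for illustration; the empirical curve is figure 4 of Tehovnik and Chen 2015), the decoded information can be modeled as rising toward a ceiling as neurons are added, because additional neurons contribute largely redundant signals:

import math

# Assumed saturating-exponential model of decoded BMI information (bits/s)
# as a function of the number of recorded neurons. Constants are illustrative.
def decoded_info_bits_per_s(n_neurons, ceiling=0.25, scale=15.0):
    return ceiling * (1.0 - math.exp(-n_neurons / scale))

for n in (10, 20, 40, 80, 160):
    print(n, round(decoded_info_bits_per_s(n), 3))
# Beyond roughly 40 neurons the curve flattens: extra neurons add little
# non-redundant information.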

2024 3rd International Conference on Aerospace, Aerodynamics and Mechatronics Engineering (AAME 2024) will be held in Nanjing, China from April 12 to 14, 2024.
AAME is an annual conference providing a yearly platform for delegates and members to present and discuss the latest research, and our delegates and members will have many opportunities to engage in dialogues about Materials Science and Intelligent Manufacturing. It also provides new insights and brings together scholars, scientists, engineers and students from universities and industry all over the world under one roof.
We warmly invite you to participate in AAME 2024 and look forward to seeing you in AAME 2024!
---Call For Papers---
The topics of interest for submission include, but are not limited to:
- Rocket Theory and Design
- Avionics Engineering
- Communication Systems and Technologies
- New applications
- Higher frequencies and bandwidths
- Navigation and Precise Positioning
- UAV and MAV
- Aircraft navigation and positioning technology
- Radar detection and imaging technology
- Aviation navigation systems and new technologies
- Synthetic aperture radar technology
- Navigation guidance and control
- Analog and digital circuits
- Microelectronics Manufacturing Engineering
- Signal Processing
- Circuits and Systems
- Vacuum electronic technology
- Automatic Control Systems
- Sensors and Sensor Systems
- Aerospace Science and Technology
- Mechatronics Systems
- Electrical and electronic technology
- Microelectronic Technology
- Circuit Analysis
All accepted full papers will be published in the conference proceedings by Journal of Physics: Conference Series (JPCS) (ISSN:1742-6596) and will be submitted to EI Compendex / Scopus for indexing.
Important Dates:
Registration Deadline: April 10, 2024
Final Paper Submission Date: April 08, 2024
Conference Dates: April 12-14, 2024
For More Details please visit:

The neural segregation of declarative and procedural memory is based on an outdated idea that the mind is separate from the body [but for a recent example see Fig. 3 of Sendhilnathan, Goldberg et al. 2020a]. Both the neocortex and cerebellum participate in the creation of declarative and procedural memories during learning (Tehovnik, Hasanbegović, Chen 2024): the neocortex, via the hippocampus, stores the declarative/procedural memories (as conscious memories) in the association areas (e.g., the retrosplenial, parietal, infratemporal, and orbito-frontal cortices), and the association areas transfer the declarative/procedural information to the cerebellum for storage in the form of executable code (as unconscious memories) for the rapid execution of a learned behavior using a rate code, which is the language of the muscles (Tehovnik, Patel, Tolias et al. 2021). The neocortex and the cerebellum have an extraordinary capacity for the storage of information (either declaratively or procedurally), estimated to be 1.6 x 10^14 bits (2^(1.6 x 10^14) possibilities) and 2.8 x 10^14 bits (2^(2.8 x 10^14) possibilities), respectively, and the individual elements of a memory are concatenated (both in the neocortex and cerebellum) by linking the elements to generate a sequence of movements or words or a string of notes (Tehovnik, Hasanbegović, Chen 2024).
The evidence that declarative and procedural memories are processed together comes from the following experiments. As early as 1927, Pavlov showed in dogs that if he ablated the neocortex, the animals failed to learn new associations using the classical conditioning paradigm, which has been associated with procedural learning. This at first might appear to contradict the findings of Mauk and Thompson (1987), but in their conditioning experiments (on rabbits) transection of the cerebellum from the neocortex abolished previously learned associations only. They never tested the acquisition of new associations.
Indeed, the conditioning studies of Takahara et al. (2003) are instructive in this regard. They tested rats on a conditioning task in which ablations were done early or late in training (i.e., on day one after conditioning or on day 30 after conditioning). It was found that lesions of the cerebellum that included the cerebellar nuclei abolished an animal’s ability to learn the task early as well as late in training. When the same test was done with hippocampal or neocortical ablations it was found that hippocampal ablations abolished early learning but not late learning, and that neocortical ablations spared early learning but abolished late learning. The hippocampal/neocortical findings concur with the accepted view today.
Some might argue that classical conditioning is a highly non-cognitive task and therefore cannot be generalized to cognitive and other task types. It has been found that an object association task (by monkeys) was affected early in learning but not late in learning if the cerebellar cortex was lesioned while sparing the cerebellar nuclei (Sendhilnathan and Goldberg 2020b; also see Ignashchenkova, Thier et al. 2009). A similar result was found for adapting VOR (the vestibulo-ocular reflex) to a prism (by cats, Kassardjian et al. 2005): early learning but not late learning was affected by a cerebellar cortex lesion while sparing the cerebellar nuclei. It is speculated that late learning is mediated by the cerebellar nuclei (Kassardjian et al. 2005), which is supported by the findings of Takahara et al. (2003).
Finally, there is overwhelming evidence that declarative learning, as well as procedural learning (as just mentioned: Takahara et al. 2003), is abolished in the short-term once the hippocampus is lesioned, which prevents information from being consolidated at the level of the neocortex for long-term storage (Corkin 2002; Knecht 2004; Morrison and Hof 1997; Munoz-Lopez et al. 2010; Roux et al. 2021; Scoville and Milner 1957; Squire et al. 2001; Squire and Zola-Morgan 1991; Xu et al. 2016).
Just how far the foregoing scheme can be generalized across different vertebrates, all of which contain a neocortex (or its homologue) and a cerebellum is not known. Nevertheless, we now expect that these structures subserve both declarative and procedural learning (with no mind-body duality); the neocortex manages the storage of explicit information (i.e., of objects, words, odors, tastes, and so on) and the cerebellum manages the storage of implicit information for the grasping of objects, pronunciation of words, and for responding to odors and tastes (i.e., during eating, drinking, fleeing, fighting, or mating).
We know that the human neocortex has a tremendous capacity to store information, at some 1.6 x 10^14 bits—or 2^(1.6 x 10^14) possibilities (Tehovnik, Hasanbegović, Chen 2024). What is unclear is where this information is specifically stored and in what form, since the foregoing estimate is based on all the neurons and synapses in the human neocortex. We have speculated that the temporal and orbital cortices store objects, from images of faces, to sounds of words, to the touch of a fresh breeze, and to the smells and tastes of one’s favorite coffee or tea (Tehovnik, Hasanbegović, Chen 2024).
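To convey the magnitude of these numbers, the short sketch below converts the quoted bit capacities into the number of decimal digits of the corresponding count of possibilities (plain arithmetic, not a model; the digit count of 2^b is about b * log10(2)):

import math

for label, bits in (("neocortex", 1.6e14), ("cerebellum", 2.8e14)):
    digits = bits * math.log10(2)  # number of decimal digits of 2^bits
    print(label, f"{digits:.2e} decimal digits")
# i.e., counts of possibilities with tens of trillions of digits each.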
A recent study conducted by Shan et al. (2022), which used EEG recordings from the entire neocortex, suggests that when unconscious and conscious states are compared (as established using a continuous-flash-suppression paradigm), it is the orbital and temporal cortices that mediate consciousness (see Fig. 1). An important question remains, however: how do the remaining association areas of the neocortex that mediate spatial processing, such as the retrosplenial, lateral intraparietal, supplementary frontal, and prefrontal cortices, contribute to the conscious process? In the experiments of Shan et al. (2022), the behavior examined was the presentation of stationary visual objects followed by a keypress to signal detection, which activated both V1 and the motor cortex. If the behavior had instead emphasized spatial vision (e.g., a rotating three-dimensional sphere that activates neurons in the medial superior temporal cortex, MST; see Fig. 2), would the area declared to be 'conscious' now shift toward the parietal and medial frontal cortices?
Figure 1: The main point of this figure is that consciousness was attributed to the temporal and orbital cortices and not to the parietal and medial frontal cortices. Note that an object-detection paradigm was used while EEG recordings were made from a total of 662 sites over the left neocortex of seven subjects, with probe object stimuli presented in the right visual field. For further details see Shan et al. (2022).
Figure 2: When the dots are set in motion, a sphere 'appears'.


The neurophysiology of express oculomotor and skeletomotor behavior in primates has received renewed interest in systems neuroscience (e.g., Cecala, Corneil et al. 2023; Ito, Maldonado et al. 2022; Mekhaiel, Goodale et al. 2023). This interest is driven by the need to determine the critical brain circuits involved in establishing automaticity. The notion that a subcortical structure, the superior colliculus, is necessary for this process and that it relays sensory-cortical information from posterior regions of the neocortex to the brain stem is not in dispute (Schiller and Tehovnik 2015; for a recent review see Tehovnik 2024). The subcortical circuits that finalize all automated behavior await clarification, however.
The cerebellar Purkinje neurons establish an efference-copy representation of automated motor responses for such behaviors as movement of the head to trigger the VOR or presentation of a visual stimulus to trigger express saccades (Lisberger 1984; Miles and Lisberger 1981; Tehovnik 2024; Tehovnik, Hasanbegović, Chen 2024; Tehovnik, Patel, Tolias et al. 2021). Indeed, the Purkinje channel is important for the mediation of automated motor responses. The best evidence for this comes from the establishment of associations using classical conditioning, once an animal is overtrained on the conditioning task (Gallistel et al. 2022).
A brief review of the circuit mediating classical conditioning is required (see Fig. 1, from Fig. 1B of Gallistel et al. 2022). Once conditioning is established between a conditioned stimulus (CS) and an unconditioned stimulus (UCS), there is an immediate drop in discharge by the Purkinje neurons; this drop evokes an eyeblink well before the delivery of the unconditioned stimulus. Figure 2 shows the pattern of Purkinje firing once conditioning has been well programmed, such that the conditioned stimulus alone evokes an eyeblink (Fig. 2, from Fig. 2AB of Gallistel et al. 2022). Across the 20 trials illustrated, notice that the latency of the drop in firing oscillates between a low value of 30 ms and a high value of about 100 ms. The figure shows at least three cycles of oscillation, such that roughly every 8 trials the full range of latencies is exhibited by the neuron. The reason this oscillatory pattern is important (a pattern exhibited by overtrained, conditioned Purkinje neurons) is that it maps onto the latency of the behavioral response. Likewise, when subjects are overtrained on either the VOR or express saccades, a similar oscillation of behavioral latencies occurs across trials, which can be grouped in blocks of about 8 trials (Lisberger 1984; Tehovnik, Hasanbegović, Chen 2024). For example, in the case of express saccades (latencies < 125 ms) versus regular saccades (latencies > 125 ms), the two saccade types (which are bimodally distributed) are mixed across trials, thereby revealing an oscillatory pattern (Schiller and Tehovnik 2015).
We propose that the oscillatory pattern reflects the utilization of a minimal and a maximal circuit through the cerebellum used to reinforce automaticity. For the minimal-latency circuit (i.e., short-latency Purkinje discharge), there is a synchronization of transmission through the Purkinje-interpositus and direct-interpositus synapses to induce a combined EPSP at an interpositus neuron. For the maximal-latency circuit, there is no synchronization: the Purkinje-interpositus EPSP follows the direct-interpositus EPSP. The timing of the latter, the direct-interpositus EPSP, should remain invariant once an animal is overtrained. You might ask: why build a system with such checks and balances? It is to make sure that the Purkinje circuit is always up to date about the latest change to any behavioral program. The worst thing that can happen is to continue executing an automated act once that act has become behaviorally obsolete. This can lead to the death of an animal or a fellow animal, e.g., when Alec Baldwin discharged a gun that he believed was not loaded, which resulted in the death of his colleague.
The idea that a minimal circuit (i.e., a cerebellar nuclear circuit only) is sufficient for the execution of automated acts comes from three studies. Disruption of just the Purkinje neurons (while sparing the cerebellar nuclei) affects only the learning of a new VOR gain or a new visual-object-response association, while sparing the execution of all old associations (Kassardjian et al. 2005; Sendhilnathan and Goldberg 2020b). However, damage to both the Purkinje neurons and the cerebellar nuclei abolishes all learned behavior, new and old, in an eyeblink-conditioning paradigm (Takahara et al. 2003).
The foregoing needs to be verified beyond a doubt. This will require simultaneous recordings of Purkinje neurons and cerebellar nuclear neurons as an animal learns new routines and as it expresses overlearned routines. Furthermore, the effects of combined Purkinje and nuclear lesions will need to be compared to those of individual Purkinje or nuclear lesions across an assortment of behavioral paradigms.
Figure 1: The basic circuit for the conditioning of an eyeblink response in mammals (here, the ferret). The top inset illustrates what happens to a Purkinje neuron (Pc) before and after conditioning. After conditioning with a 300-ms conditioning pulse, a Purkinje neuron stops firing; this causes an anterior interpositus (AIN) neuron to increase its discharge, and the increase is conveyed polysynaptically to the eyelids to induce an eyeblink. Other labels in the figure: climbing fibres (cf), mossy fibres (mf), granule cells (Grc), parallel fibres (pf), and Golgi cell (Gc). From Fig. 1B of Gallistel et al. (2022).
Figure 2: Raster data are shown for one Purkinje neuron whose discharge is abruptly terminated following the conditioned stimulus (the delivery of an electric shock to the ipsilateral paw) presented in the absence of the unconditioned stimulus (i.e., stimulation of the climbing fibres—see Fig. 1, fibre in red—that innervate both Purkinje neurons and interpositus neurons). (A) A total of 20 trials are illustrated. The first blue bar indicates the onset of the conditioned stimulus, and the second blue bar indicates its offset. The green marker indicates the termination of discharge, and the red marker indicates the resumption of discharge. (B) Cumulative histogram of the discharge above for the 20 trials. From Fig. 2AB of Gallistel et al. (2022).

Over 50% of the human neocortex is devoted to three main sensory modalities—vision, audition, and somatosensation/proprioception—which are topographic senses (Sereno et al. 2022); olfaction and taste have a lesser representation and are largely non-topographic (Kandel et al. 2013). Thus, even though the neocortex contains 16 billion neurons with 1.6 x 10^14 synapses (Herculano-Houzel 2009; Tehovnik, Hasanbegović, Chen 2024), at least half of this circuitry is involved in the transmission of information, along with its eventual storage at a final destination. Neurons of the parietal, temporal, and fronto-orbital cortices house object information as conveyed by the senses (Brecht and Freiwald 2012; Bruce et al. 1981; Kimura 1993; Ojemann 1991; Penfield and Roberts 1966; Rolls 2004; Schwarzlose et al. 2005). The neurons in these areas are devoid of a topography, an attribute of the retrosplenial, lateral intraparietal, infratemporal, and orbital cortices, all of which are association areas (Sereno et al. 2022). These areas are important for the integration of information before it is sent to the cerebellum (for further storage and efference-copy updating) and to the motor nuclei for task execution (Schiller and Tehovnik 2015; Tehovnik, Hasanbegović, Chen 2024; Tehovnik, Patel, Tolias et al. 2021).
If the first cortical station for a given sense is ablated in humans, then all ability to work with that sense is lost as far as consciousness is concerned (Tehovnik, Hasanbegović, Chen 2024). For sensory information to be stored in the neocortex, the primary sensory areas must be intact; subcortical channels cannot replace this function. For example, when V1 is damaged in human subjects they exhibit blindsight, utilizing residual subcortical pathways through the superior colliculus, pretectum, and lateral geniculate nucleus to transfer information to extrastriate cortex. Under such conditions, human and non-human subjects, including rodents, respond only to high-contrast punctate targets or high-contrast barriers, and a human subject will declare that they are unaware of the visual stimuli, namely, they are only aware of their blindness (Tehovnik, Hasanbegović, Chen 2024; Tehovnik, Patel, Tolias et al. 2021). The same occurs for the other senses, though less work has been done in this regard; somatosensation has been investigated and found to exhibit properties akin to 'blindsight' when S1 and S2 are damaged in human subjects (pers. comm., Jeffry M. Yau, Baylor College of Medicine, 2021).
Based on the recent fMRI work of Vigotsky et al. (2022), consciousness is stored in the association (non-topographic) areas of the neocortex, such that lesions of just the association areas would be expected to abolish all consciousness, which is normally supported by a continuous flow of declarative information by way of the hippocampus (Corkin 2002). Furthermore, it is the association areas that have priority access to the cerebellum (as verified with resting-state fMRI, as reviewed in Tehovnik, Patel, Tolias et al. 2021) for the long-term storage of consciousness after it is converted into executable code. In this form, motor routines can be evoked at the shortest latencies, triggered by minimal signaling from the neocortex, which we believe is what happens in the generation of express saccades and other automated behaviors (Tehovnik, Hasanbegović, Chen 2024).
**The foregoing is an excerpt from a book we (Tehovnik, Hasanbegović, Chen 2024) are writing entitled 'Automaticity, Consciousness, and the Transfer of Information', which explores the relationship between the neocortex and cerebellum from fishes to mammals using Shannon's information theory. It is notable that at the level of the cerebellum a similar efference-copy mechanism is operative across vertebrates, so that the transition from consciousness to automaticity can be achieved using a common circuit with a long evolutionary history.**
Greetings researchers,
For successful research in the field of controlling DC-DC converters in PV systems and implementing MPPT: could any researchers (PhD, Dr, or Prof.) advise on the devices needed in the lab to carry out this work?
I would be thankful for any advice and support.
Regards
I am searching for new sensors used in cars for effective performance of the motor.
Can you recommend a book or paper, or point me to a website?
I am interested to know how the integral term in the PID control can be realized with mechanical components.
For example:
-- The proportional term corresponds to a spring.
-- The derivative term corresponds to a damper.
Is there a mechanical component that can simulate the integral term?
Dear colleagues in Systems Engineering and Automatic Control:
I used to apply classic controllers, such as PID controllers, to industrial processes.
Do you have an idea of the recent and modern techniques that scientists and researchers are using?
I want to learn the basics of automatic control, discrete-time systems, and cyber-physical (information-physical fusion) systems. I hope to find some courses to guide me.
A tank turret is to be controlled for angular positioning. A rotary actuation system is used whose internal dynamics can be neglected. The total inertia to be moved is J = 400 kg.m^2. The friction in its bearings can be modeled as viscous friction, b = 100 kg.m^2/s. The sensor selected for this system is a first-order lag with time constant tau_s = 0.1 s. (a) If an integral-type (I-type) controller is applied to control the output position of the system, plot the root locus of the system and discuss its stability (hint: Ki = 1/tau_i). (b) Change the controller to a proportional (P-type) controller. Using the Routh-Hurwitz criterion, find the values of K > 0 that guarantee stability.
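A minimal MATLAB sketch of how one might set this up, assuming the loop is controller x plant x sensor with unity feedback and that the plant from torque to angle is 1/(s(Js+b)):
J = 400; b = 100; tau_s = 0.1;
plant  = tf(1, [J b 0]);              % theta(s)/T(s) = 1/(s*(J*s + b))
sensor = tf(1, [tau_s 1]);            % first-order sensor lag
L_I = tf(1, [1 0]) * plant * sensor;  % I-type controller Ki/s; Ki acts as the root-locus gain
rlocus(L_I)   % two open-loop poles at the origin: branches move into the right half-plane,
              % and the Routh array of 40s^4 + 410s^3 + 100s^2 + Ki has a sign change for any Ki > 0
% (b) P-type controller: characteristic equation 40s^3 + 410s^2 + 100s + K = 0;
% Routh-Hurwitz gives stability for 0 < K < 410*100/40 = 1025. Numerical check:
for K = [1 100 1000 1100]
    disp(max(real(pole(feedback(K*plant, sensor)))))   % negative iff stable
end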

Hello everybody,
Is it possible to control (adjust) the charge delivered by a PC or laptop USB port?
I mean using, for instance, MATLAB code.
Does prescribed-time convergence have any superiority over fixed-time convergence, in light of the fact that fixed-time convergence can do so as well?
Given the 4th-order non-strict-feedback nonlinear dynamics (also including transcendental terms)
x_dot=f(x)+g(x).u(x), ..........................................(1)
where x=[x1,x2,x3,x4],
f(x)=[ f11(x2,x4) ; f22(x2,x3,x4); f33(x4); f44(x2,x3,x4) ],
where f22(x2,x3,x4)=atan(x2,x3,x4), and f44(x2,x3,x4)=atan(x2,x3,x4),
g(x)=[0;b1;0;b2].
I am trying to find the solution of the above differential equation using the following steps.
step 1: convert the dynamics (1) into linear strict-feedback form using feedback linearization:
z_dot=p(z)+m(z).v(z) ..........................................(2)
where z=[z1;z2;z3;z4], p(z)=[p11(z1,z2); p22(z1,z2,z3);p33(z1,z2,z3,z4); p44(z1,z2,z3,z4)],
m(z)=[0;0;0;v].
step 2: apply backstepping to design v(z), where v is a function of u(x).
Note: the elements (in order) in the closed-loop system consist of (a) the input; (b) the controller, v(z); (c) the original dynamics (1); (d) the converted dynamics (2).
step 3: But the result shows a steady-state error in the converted dynamics, especially for z2 and z4 (it is also very sensitive to initial conditions), whereas all states of the original dynamics converge well to the zero equilibrium for any initial conditions.
Please help me with the questions below.
(1) What should I do to remove this error?
(However, I am considering integral backstepping, adaptive backstepping, or Dynamic Surface Control (DSC).)
(2) Should I concentrate only on the original dynamics (since the original dynamics converge with the help of the converted dynamics)? My aim, however, is to make both dynamics converge.
(3) Would it be more useful to apply backstepping to the converted dynamics (2), or better to apply sliding mode directly to the original dynamics (1)?
I am in the first year of my Masters program, with an option in Automatic Control Systems. When I look up research topics, I see plenty of ambiguous, convoluted topics, and I wonder how these topics are formulated, or whether there is a pattern that people keep improving on.
I currently still cannot come up with a topic to work on.
My interest is in control systems engineering, and the topic should address industrial challenges.
I don't want to choose something that won't be of importance and impact.
Your input will go a long way toward helping me decide what to work on.
The problem is this: there is a model of a robot, created in Simscape.
The knee of this robot is made of a pin-slot joint, which allows one translational and one rotational degree of freedom.
On the translational motion, a stiffness and a damping factor are imposed, which also influence the rotational torque.
My aim is to write an optimization or control algorithm that provides a stiffness (in the linear direction) which reduces the rotational torque.
By the way, the rotational reference motion of the knee is provided in advance (as input), and the corresponding torque is computed inside the joint by inverse dynamics.
But to create such an algorithm I have no detailed information about the block dynamics, because the block is provided by Simscape and its source code and other details are hidden.
Having the signals of input stiffness, input motion, and output torque, I need to optimize the torque.
(I tried to obtain the equation using my knowledge of mechanics, but many details are needed, such as the mass of the joint actuator, its radius, the length of the spring, etc., and I do not have this information.)
If you can suggest something, I will be truly grateful (one possible approach is sketched below).
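Since the joint block is a black box, one pragmatic option is to treat the stiffness as a tunable parameter and wrap the whole simulation in a numerical optimizer. A hypothetical MATLAB sketch: the model name 'knee_model', the workspace variable 'Kspring', and the logged signal name 'tau' are placeholders for whatever your model actually uses, and signal logging must be enabled for the torque output:
kOpt = fminsearch(@torqueCost, 1e3);   % start from an initial stiffness guess

function J = torqueCost(k)
    in  = Simulink.SimulationInput('knee_model');     % hypothetical model name
    in  = in.setVariable('Kspring', max(k, 0));       % keep the stiffness non-negative
    out = sim(in);
    tau = out.logsout.getElement('tau').Values.Data;  % logged joint torque (assumed name)
    J   = sum(tau.^2);                                % cost: squared torque over the run
end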
I am designing a flight controller for a quadrotor.
At first I am designing a nested/cascaded controller consisting only of proportional gains Kp. Now, if I tune the rate controller for a 10 rad/s crossover frequency, what should the crossover frequencies be for the angle, velocity, and then the position loops? Also, what else do I need to know while designing a flight controller for practical implementation purposes?
Secondly, how do we implement our own flight controller, such as an observer-based one, via ArduCopter?
I am trying to discretize a continuous-time state-space model using the following code:
s=tf('s');
G=1/(Iyy*(s^2))
Gs=ss(G)
Gd=c2d(Gs,0.01,'zoh');
Now, when I use this discretized model ('Discrete State-Space' block) in Simulink, my closed-loop system goes unstable. The same happens with the observer: the discretized observer makes the closed-loop system unstable. Can someone help me here? (A sketch of an all-discrete design is given below.)
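A common cause of this is mixing a continuous-time controller/observer design with a discretized plant, or mismatched sample times between blocks. A minimal sketch of designing everything directly on the discretized model and checking that the closed-loop eigenvalues lie inside the unit circle (the Iyy value and pole locations are placeholders):
Iyy = 0.01;  Ts = 0.01;                       % placeholder inertia and sample time
Gd  = c2d(ss(tf(1, [Iyy 0 0])), Ts, 'zoh');   % discretize the double-integrator plant
[Ad, Bd, Cd, ~] = ssdata(Gd);
K = place(Ad, Bd, [0.90 0.92]);               % state feedback designed in discrete time
L = place(Ad', Cd', [0.50 0.55])';            % observer gain, also designed on the discrete model
disp(abs(eig(Ad - Bd*K)))                     % all magnitudes must be < 1 for stability
disp(abs(eig(Ad - L*Cd)))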
Hi
Is there any linearized model for an insulin injection pump?
The intention is to use this model for analyzing the dynamics of the pump and to use it in automatic control.
Dear All
There is a 1-DOF Series Elastic Actuator (the scheme and state equation are given in the image): A = [0 1; -K/m 0], B = [0; -1/m]*F,
where K is the stiffness constant of the spring, m is the mass of the joint, and x is the displacement of the spring.
Could you give me suggestions on how to write a controller that provides the proper value of the spring constant (K) according to the desired (reference) displacement of the spring? It is necessary to highlight that the control input force is constant and the displacement can be controlled only by modifying the value of the spring constant.
My previous idea was to apply an adaptive MPC controller (with a Kalman filter or system identification).
But the problem is that the stiffness constant appears in the A matrix, and it must be treated as an INPUT variable, not as a disturbance or a nonlinearity.
P.S. It also didn't work with a linear function.
Thank You in advance.

I need examples of nonlinear Lipschitz systems in electrical engineering. Please refer me to some links.
Dear colleagues, friends, and professors,
As we know, we have very strong analytical approaches in control theory. Any dynamic decision-making process whose variables change in time can be characterized by state-space and/or state-action representations. However, we see very few control viewpoints for solving electricity market problems. I would like to invite you to share your thoughts about the opportunities and limitations of such a viewpoint.
Thank you and kind regards,
Reza.
Is there any direct correspondence between delay and error?
If yes, how can we benefit from one to compensate for the other?
I am waiting for your viewpoints to discuss this subject.
Thanks.

I need plants with unknown terms in the mathematical model. The systems can be linear or nonlinear.
If we fabricate a bench-scale chemical process, will it give the same characteristics as the production-scale one?
We are planning a graduation project based on the application of advanced control (fuzzy, neural) in process engineering (specifically, ethanol fermentation from molasses). We assumed that fabricating a bench-scale process would give data, characteristics, and measurements similar to the production scale; please correct us if we are wrong.
I know that in the field of fluid mechanics we mostly deal with PDEs such as the Navier-Stokes equations.
But what I want to do is design a controller for flow control of a system consisting of a centrifugal pump and a pneumatic valve. So in this case I need the dynamical model of the system, and by dynamical I mean the ODE model of the system.
Is there any existing method, technique, or sensor to control the flow of water in a pipeline?
I want to automatically stop and open the water pipeline at fixed time intervals during an indoor experiment.
In other words, I want to control the pipeline valve automatically instead of manually.
Thank you
Haibat Ali
In most factories we have different systems and devices; for example, in a porcelain factory we have various furnaces, ball mills, presses, etc.
Also, in most factories the production procedure is not completely automated; some steps are done by human labor.
What I need to know is how we can design an MPC controller in order to improve the overall production of the company.
I am using a high-voltage DC generator with an output of up to 1 kV for my research (the output current is very low). The output voltage can be either +1 kV or -1 kV, so if I want to change the polarity I have to do it manually. I am looking for a commercial part that can be added to the circuit to automatically change the output voltage polarity and give +/-1 kV in an adjustable, timed manner (i.e., with adjustable frequency).
Smart campus came out of the concept of smart cities by applying the principles of smart cities to the operation of a campus. Smart campus implies that institutions will adopt advanced technologies to automatically control and monitor facilities on campus and provide high-quality services to the campus community. Any suggestions on how and what to do regarding applying smart campus applications? Thank you.
Many researchers suggested that I need at least 7000 positive samples and more than that for the negative samples. Any recommendations?
I am a student of Automatic Control and Robotics and I would like to take part in international competitions associated with automation, the design of laboratory stands, and, for example, competitions for young engineers and constructors, or competitions for engineering theses.
Does anyone know where I can find this kind of contest? If so, please provide information about locations and dates. Thank you for your reply.
The Phalanx CIWS is a computer-controlled cannon system. This system, which is deployed on many American warships, is designed, at the flick of a switch, to detect, track, engage, and confirm kills using self-contained radar. A human operator cannot match the performance of such a device, but the duration of its automaticity is regulated by a human operator. Will a day come when such automaticity is controlled by another automaton because of its superior performance? How many layers of automaticity should be tolerated when fighting a war? Some have suggested using blockchain computer code to better regulate autonomous systems (Husain 2017).
Reference
Husain A (2017) The Sentient Machine. The Coming Age of Artificial Intelligence. Simon & Schuster, New York.
I need references for induction motor control circuits which include push buttons for start and stop, contactors, timers, and relays. I searched a lot with different keywords (automation, induction motor control, automatic control); I found a lot of books, but not what I need.
A checking sequence is defined as follows: "Given a specification finite state machine, for which we have its transition diagram, and an implementation, which is a ‘‘black box’’ for which we can only observe its I/O behavior, we want to test whether the implementation conforms to the specification. This is called the conformance testing or fault detection problem and a test sequence that solves this problem is called a checking sequence."
What is the current state-of-the-art algorithm to find the shortest checking sequence, given a specification finite state machine? Are there some open-source implementations?
I have to design a biped robot and I need simulation software that is free or whose licensed copy can be obtained easily.
I am confused, as my background in mechanical engineering is not strong.
Hello,
I am trying to use a Kalman filter to smooth a noisy signal. My signal actually comprises two components, i.e., it has two dimensions, since it represents the coordinates of a device moving inside a room. By means of some algorithm, I can estimate the coordinates (x, y) of the device. However, I get some "jitter" that I want to remove in order to get smoother data.
I have employed a simple model for the motion of the device:
r(t2) = r(t1) + u(t1)*(t2-t1),
which in a discrete-time domain is equivalent to:
r[k] = r[k-1] + u[k-1],
where:
"r" is the current position (x, y),
"u" is the instantaneous velocity (ux, uy), measured as: u(k-1) = r(k-1) - r(k-2).
If the time step between consecutive samples is small enough, this model should be rather accurate, even if the motion is not uniform, except for a model noise: "w" with variance "Sw".
Starting from the state and measurement equations:
r[k] = Ar[k-1] + Bu[k-1] + w[k-1] {State equation}
z[k] = H r[k] + v[k] {Measurements}
where "v" is the measurement noise, with variance "Sv", I have derived the time and measurement update equations, from the theory of Kalman filter.
The results obtained so far are good enough, but I am not sure whether I have applied the Kalman filter theory correctly for this particular problem.
My questions are:
- Is this model appropriate?
- Is it correct to say that the velocity "u" is my "driving function" or "model input"?
- Since my measurements are already the current positions except for the noise, i.e., there are no transformations, is it correct to assume matrices A, B and H to be identity matrices (A = I, B = I, H = I)?
I am getting better results setting, for example, B = 0.7*I for some particular "Sw" and "Sv", which means that my model would actually be r[k] = r[k-1] + 0.7*u[k-1].
Please note that I am not looking for Kalman filter theory books or papers, since I have plenty of them; I need recommendations for this specific problem (a sketch of one alternative is given after this question).
Thank you very much in advance.
Best regards,
Luis M. Gato
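Not the poster's exact model, but a minimal MATLAB sketch of the closely related constant-velocity Kalman filter with state [x; y; vx; vy], which avoids feeding the differenced position back in as a known input; the noise covariances are placeholders to tune, and the synthetic data stand in for the real (x, y) estimates:
% synthetic noisy measurements standing in for the real position estimates
Z  = cumsum(0.1*randn(200, 2), 1) + 0.5*randn(200, 2);
dt = 1;                                              % sample period (placeholder)
A  = [1 0 dt 0; 0 1 0 dt; 0 0 1 0; 0 0 0 1];         % constant-velocity state transition
H  = [1 0 0 0; 0 1 0 0];                             % we measure position only
Q  = 0.01*eye(4);                                    % process-noise covariance (tune)
R  = 1.0*eye(2);                                     % measurement-noise covariance (tune)
xhat = [Z(1,:)'; 0; 0];  P = eye(4);                 % initial state and covariance
Xs = zeros(size(Z,1), 2);                            % smoothed positions
for k = 1:size(Z,1)
    xhat = A*xhat;             P = A*P*A' + Q;       % predict
    Kk   = P*H'/(H*P*H' + R);                        % Kalman gain
    xhat = xhat + Kk*(Z(k,:)' - H*xhat);             % update with the k-th measurement
    P    = (eye(4) - Kk*H)*P;
    Xs(k,:) = xhat(1:2)';
end
plot(Z(:,1), Z(:,2), '.', Xs(:,1), Xs(:,2), '-')     % raw vs. smoothed track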
Could anybody point me to a trustworthy, high-ranking journal paper discussing whether Multiple Model Control with hard switching is applicable to systems with fast dynamics?
Thanks.
Hi,
I have problems understanding underactuation and nonholonomy in the quadcopter system. I cannot find a clear and precise definition of what nonholonomic and underactuated systems are. Could anyone explain to me exactly whether a classical 4-rotor quadcopter is an underactuated and nonholonomic system?
Should I crop the positive images to contain only the objects to be detected, without any background?
For continuous-rotation servos you can control the speed and position.
When implementing a PID controller for a continuous-rotation servo, do you use a constant speed or a variable speed?
Thank you.
Question 1: Why does BIBO stability of an LTI system require that all the roots of the characteristic equation have negative real parts?
Y(s) = T(s)*R(s) = N(s)/D(s), where T(s) is the closed-loop transfer function and R(s) is the input to the system. For y(t) to be bounded, all the roots of D(s) should lie in the left half of the s-plane; there can be roots on the imaginary axis (but no repeated roots at the same point). According to BIBO stability, the output should be bounded for every bounded input, so the poles of the system transfer function should lie in the left half of the s-plane (excluding the imaginary axis, since a bounded input can itself have poles on the imaginary axis). Is the Laplace transform table the only proof of this?
Question 2:
It is the same thing extended to polar and Bode plots; they are both incomplete plots. For an unstable open-loop system, they cannot predict the stability of the closed-loop system. In books it is mentioned that polar and Bode plots can be applied to minimum-phase systems, but stable open-loop systems may have zeros in the right half of the plane. So, in polar and Bode analysis, is the open-loop transfer function restricted to minimum-phase systems, or to systems with poles only in the left half of the plane?
In Nyquist it is clear that it is an extension as it comes from Cauchy's argument principle.
For closed-loop systems having repeated roots on the imaginary axis, is the Nyquist theorem valid?
Please elaborate and correct me if I'm wrong somewhere. (A sketch of the standard time-domain argument for Question 1 is given below.)
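A sketch of the standard argument, independent of Laplace-transform tables: for a causal LTI system with impulse response h(t),
|y(t)| = | \int_0^t h(\tau) u(t-\tau) d\tau | <= sup_t |u(t)| * \int_0^\infty |h(\tau)| d\tau,
so every bounded input gives a bounded output if (and, by a sign-matching argument, only if) \int_0^\infty |h(\tau)| d\tau < \infty. For a rational, proper T(s), h(t) is a sum of terms of the form t^k e^{p_i t}, which is absolutely integrable exactly when every pole satisfies Re(p_i) < 0. A simple pole on the imaginary axis gives a bounded but non-integrable h(t), so such a system is only marginally stable: a bounded sinusoid at that frequency produces an unbounded output, which is why it is excluded from BIBO stability.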
It is well known that the controllability assumption is used for stabilizability reasons. Can we guarantee the stability of a feedback-controlled system without using the controllability assumption?
For example (see the image attached):
I have found a gain K such that the LMI is satisfied even though the pair (A, B) is not controllable, and the controlled system is asymptotically stable. For this case, is it necessary to require that the pair (A, B) be controllable? (A small example is given below.)
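For what it is worth, stabilizability (every uncontrollable mode already stable) is enough; full controllability is not necessary. A small MATLAB example with an uncontrollable but stabilizable pair:
A = [-2 0; 0 1];  B = [0; 1];     % the mode at -2 is uncontrollable but already stable
rank(ctrb(A, B))                  % = 1 < 2, so (A, B) is not controllable
K = [0 3];                        % only the unstable, controllable mode is moved
eig(A - B*K)                      % both eigenvalues at -2: the closed loop is asymptotically stable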

Hello all!
I would like to know: when we plot the frequency response of a given control system using a Bode or Nyquist plot, we plot the magnitude and phase against frequency. Can anybody explain for which signal the frequency response analysis is performed? Is it for the effect of the reference input signal or of the fed-back output signal on the given system? And what do we conclude in practice from the frequency response analysis when we say that the system is stable because the gain margin and phase margin are positive? Thank you. (A small example follows.)
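A small MATLAB illustration with a placeholder third-order plant. The analysis is of the open-loop (loop-gain) transfer function, and, via the Nyquist criterion, positive gain and phase margins indicate how much extra loop gain or phase lag can be tolerated before the unity-feedback closed loop becomes unstable:
G = tf(1, [1 3 3 1]);            % example open-loop transfer function, 1/(s+1)^3
margin(G)                        % Bode plot annotated with gain and phase margins
[Gm, Pm, Wcg, Wcp] = margin(G);  % numerical margins and the crossover frequencies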
In the literature, the PI (Proportional-Integral) controller is the most used controller for automatic control of open water systems (irrigation and/or water transfer systems) using distant downstream control. However, the canal that I study is a small, narrow, steep canal (b = 0.8 m, L = 3000 m, i ≈ 0.005), resulting in a large delay time compared to the time constant and making it practically impossible to control the water level with the distant downstream control concept. Can anyone give proper advice for designing the control system for this canal? Thanks in advance.
I was reading an article on Euler-Lagrange systems. It is stated there that since M(q) and C(q,q') depend on q, the system is not autonomous. As a result, we cannot use LaSalle's theorem. I have uploaded that page of the article and highlighted the sentence (ren.pdf).
Then I read Spong's book on robotics, and he had used LaSalle's theorem. I am confused (spong.pdf).
I did some research and found that an autonomous system is one that does not explicitly depend on the independent variable. Isn't the independent variable time in these systems?
I would like to gain knowledge of MODEL PREDICTIVE CONTROL for implementing it in a vehicle path-following problem. As I come from a mechanical background, is there any good material for understanding the crux of this control strategy? There is plenty of good material on the internet, but I am having difficulty finding the right/effective one that could act as a threshold for gaining confidence and getting started.
Control theory, Frequency domain, Laplace Transform
Thanks,
I'm trying to implement reactive compensation in distribution networks, but with an automatic control system that can make decisions based on the measured variables and act on a compensation device. I need to know which control scheme is the most relevant for this case, and its mathematical model.
I wrote MATLAB LMI Toolbox code for the algebraic Lyapunov inequality in the picture, where Phi, C and d are known. The program runs when d is an integer, but when d is a fraction it gives an error. My question is: is there any way to solve this equation when d is a fraction? Please help me if you know anything about this. Thanks a lot.

Hi everyone,
I am trying to apply MPC for voltage or reactive power control. I have a distribution system which is not dynamical, so I do not have a dynamic model for the MPC prediction; the system is static. If I have forecast data (a load forecast or forecast data for PV or wind), can I apply MPC with the forecast data, and how?
Thanks
Hello everyone
I am using a state space to represent the behavior of an MDOF system. Before that, I tried to convert a transfer function to a state space (SDOF), but I couldn't obtain the desired state space.
I read something about losing information, but it is still not clear to me.
My questions are:
- Is it possible to convert a state space to a transfer function without losing information?
- If yes, how? (A sketch is given after this question.)
Thank you
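A minimal MATLAB sketch with placeholder matrices. The conversion is exact for the input/output behaviour; what cannot be recovered from the transfer function are the internal state coordinates and any uncontrollable/unobservable (hidden) modes, which is the "loss of information" usually referred to:
A = [0 1; -4 -2];  B = [0; 1];  C = [1 0];  D = 0;   % placeholder SDOF-style example
sys  = ss(A, B, C, D);
Gtf  = tf(sys);        % state space -> transfer function (works for MIMO models too)
sys2 = ss(Gtf);        % back to some state-space realization, generally in different coordinates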
Suppose a simplified model of the roll dynamics of an aircraft, represented by xdot(t) = A x(t) + B u(t), where the states are (phi, p) and u is the aileron command (neglecting the Dutch-roll mode).
Because the controller contains an integral term, the system dynamics are augmented with the state corresponding to the integral of the error signal; let's call it Tau(t). Then we have [xdot; Tau_dot] = A_aug x_aug(t) + B_aug u(t).
So the control law is given by u(t) = -K x_aug + K(1) Phi(ref), where K is the static gain of the controller, which we can interpret as K = [Kp Kd Ki] = [Proportional, Derivative, Integral] (I have all states available for feedback).
As the output of the actuator is limited and the controller has an integral term, I should implement an anti-windup scheme to avoid windup of the integral term. So, what type of anti-windup do you suggest? (One option is sketched below.)
Thanks in advance.
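A minimal sketch of the simplest option, conditional integration (clamping), run on a toy double-integrator roll model (the dynamics, gains, limit and sample time are all placeholders); back-calculation, which feeds Kaw*(u_sat - u_unsat) into the integrator so that it tracks the saturated command, is the other common choice and usually behaves more smoothly:
dt = 0.01;  u_max = 0.3;  phi_ref = 1.0;
K  = [2.0 1.5 0.8];                    % placeholder [Kp Kd Ki]; Tau integrates (phi - phi_ref)
x  = [0; 0];  Tau = 0;  log_phi = zeros(1, 2000);
for k = 1:2000
    u_unsat = -K(1)*x(1) - K(2)*x(2) - K(3)*Tau + K(1)*phi_ref;   % same structure as -K*x_aug + K(1)*Phi_ref
    u = max(min(u_unsat, u_max), -u_max);   % saturated actuator command
    if u == u_unsat
        Tau = Tau + dt*(x(1) - phi_ref);    % integrate only while the actuator is unsaturated
    end                                     % (when saturated, the integrator is frozen: no windup)
    x = x + dt*[x(2); u];                   % toy roll dynamics: phi_dot = p, p_dot = u
    log_phi(k) = x(1);
end
plot((1:2000)*dt, log_phi)                  % step response with the limited actuator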
May professors or lecturers engaged in teaching automatic control at the undergraduate level advise me on the best up-to-date text/reference to use in preparing lectures and assignments? It is preferable that the reference uses MATLAB tools and provides an instructor's manual and a solutions manual.
These days I am doing a literature review on new trends in the control of nonlinear systems, but it looks like this is a very broad subject. What are the new research areas in the control of such systems?
Consider an r-integrator chain:
\dot z_1 = z_2
...
\dot z_r = \phi + \gamma u.
where \gamma is bounded and positive (to ensure controllability), and the TIME DERIVATIVES of \phi and \gamma are also bounded, contrary to the problem formulation of HOSM, where \phi itself is assumed to be bounded, not its time derivative.
I'm looking for a controller u which ensures an (r+1)-sliding mode and could be called an r-order super-twisting controller.
Does anyone have a bibliography of such controllers u, where u is continuous or quasi-continuous?
I am looking for references where I could find examples of the minimum-time problem but with bounded and NON-POSITIVE (or non-negative) inputs. I have a second-order LTI system (and the problem is of the regulation type, i.e., to bring the states from x0 to 0).
Hi,
I am currently exploring the design of APRBS (amplitude-modulated pseudo-random binary signal) sequences. At present the one I have developed was based on the literal meaning of the term, i.e., generating a PRBS and performing amplitude modulation in order to obtain an APRBS sequence.
However, I am hoping to find a more substantial methodology which allows the amplitude and frequency to be manipulated in different regions.
The purpose of the signal design is to create training data for a neuro-fuzzy model, i.e., to excite the system of interest at all realistically possible frequencies.
As I have failed to find appropriate and relevant resources, I was hoping someone could point me in the right direction (one approach is sketched after this question).
I appreciate any comments and help.
Kind Regards
Gaurav
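A hypothetical MATLAB sketch of the usual construction: generate a PRBS with idinput (System Identification Toolbox), then replace each constant interval by a random amplitude; the band setting and amplitude range are assumptions to adapt. To emphasize different frequency regions, one can generate several segments with different band settings and concatenate them:
N     = 2000;
band  = [0 0.05];                                % sets the minimum hold time of the PRBS
prbs  = idinput(N, 'prbs', band, [-1 1]);
edges = [1; find(diff(prbs) ~= 0) + 1; N + 1];   % start index of each constant interval
u_min = 0;  u_max = 5;                           % amplitude range of interest (placeholder)
aprbs = prbs;
for i = 1:numel(edges) - 1
    aprbs(edges(i):edges(i+1)-1) = u_min + (u_max - u_min)*rand;  % random level per interval
end
plot(aprbs)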
The voltage regulator in a power station is normally a composite feedforward and feedback regulator. This regulator has the best performance and stability for this application. Is there any theory for composite feedforward and feedback regulators? What is the difference between a pure feedforward and a pure feedback regulator?
Hi to all,
I have constructed a plant that includes a passage for air. I can measure the pressure at the start and end of the tube, the flow rate, and possibly the temperature. I intentionally add an extra artificial orifice as a resistance. How can I calculate the airway resistance of this combination from these measurements?
As you know, the relation between pressure, flow and airway resistance is R = (P_end - P_start) / Flow_av, where Flow_av = (Flow_end + Flow_start)/2.
1- I have done numerous experiments with different flow rates. At every flow rate the resistance is different, but I expect the airway resistance (based on the electrical analogy) to be constant; shouldn't it be?
2- I have also done numerous experiments with a constant flow rate and different artificial orifices. When I add an orifice with R = 5 cmH2O/L/s, the airway resistance (obtained from the above equation) was 6 cmH2O/L/s. Then I added another orifice with R = 5 cmH2O/L/s and repeated the experiment, but this time the airway resistance was 7 cmH2O/L/s. What is wrong?
I have a discrete model of a DC motor and I want to design a PID controller for it using the Ziegler-Nichols (ZN) method.
How can I get the PID parameters (Kp, Ki, Kd) using the ZN method? (A sketch is given below.)
The code is in the attached file
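One way to apply the closed-loop (ultimate-cycle) ZN rules without experimenting on the hardware is to read the ultimate gain and period off the model's frequency response. A minimal MATLAB sketch with a placeholder discrete motor model Gd (replace it with your own; this requires the loop phase to cross -180 degrees, which the ZOH delay of a sampled model normally ensures):
Gd = c2d(tf(1, [0.5 1 0]), 0.05, 'zoh');   % placeholder DC-motor model K/(s*(tau*s+1)), sampled
[Gm, ~, Wcg, ~] = margin(Gd);              % Gm = ultimate gain Ku, Wcg = phase-crossover frequency
Ku = Gm;
Pu = 2*pi/Wcg;                             % ultimate period
Kp = 0.6*Ku;  Ki = Kp/(Pu/2);  Kd = Kp*(Pu/8);   % classic ZN PID rules
C  = pid(Kp, Ki, Kd, 0, Gd.Ts);            % discrete PID at the model's sample time
step(feedback(C*Gd, 1))                    % sanity-check the closed-loop response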
Using a stereovision camera in UAV navigation: how can it create a navigation waypoint?
I have several large discrete-time Simulink models, built from fairly simple components with vector/matrix operations, and I would like to quantify and compare their arithmetic complexity (e.g., the number of ADD, SUB, MUL and DIV operations, and perhaps delays) per input sample. Is there a built-in analysis tool that will do this automatically for me?
Thanks,
Hugh
In one dimension, it tells whether the time series is anti-persistent, persistent, or uncorrelated. How do we extend this interpretation to two different time series when we perform 2D DFA?
for example system "A" has
rise time=0.0464 Sec, settling time=1.3609, overshoot= 0.332 , peak time=0.81, steady state error=0.1325
system "A" has: rise time=0.0248, settling time=1.6091, overshoot= 0.626, peak time=0.87, steady state error= 0.6231
which system will be faster and more stable.
How can the proper controller adjustments be quickly determined on any control application?
We are working on the incremental timetabling problem. We found a timetable that satisfies the hard constraints and optimizes the soft ones. After the timetable is accepted, new constraints appear.
Is there a method that suggests the minimum change needed to obtain a new timetable satisfying the incremental constraints without destroying the old one, in order to minimize the disturbance to the stakeholders?
Why, in the frequency-domain analysis of control systems, is bandwidth defined as 3 dB down from the zero-frequency value of the magnitude and not from the maximum value of the magnitude?
I'm struggling to find information on the use of the lambda norm; i.e. if I'm not mistaken
||x||_\lambda = ( \int_0^T e^{-\lambda t} ||x(t)||^2 dt )^{1/2},
where \lambda>0 and ||.|| is the 2 norm.
What advantage does it have over the L_p norm topology, and why is it preferred over L_p norms in certain contexts? For example, in [1] I have stumbled upon the following phrase, which confuses me: "However, it is well known that the lambda norm leads generally to low convergence rates."
I find this similar to exponential forgetting in identification/estimation schemes, except that in this case we are interested in the signal norm; the way it is used in [1] is to prove uniform convergence of a function sequence {f_0, f_1, f_2, ...} with bounded support ([0, T]), i.e., lim_{k \to \infty} ||f_k - f*||_\lambda = 0.
Why wouldn't we consider, say, L_2 convergence? Is it because it is easier to work with, or does it have some other physical meaning (like the energy interpretation of L_2) that makes it useful?
Thanks!
------------------------------------
Ref.
[1] A. Tayebi - Adaptive iterative learning control for robot manipulators, Automatica, 2014.
[2] W.-J. Cao and J.-X. Xu - On Functional Approximation of the Equivalent Control Using Learning Variable Structure Control, IEEE Trans. on Automatic Control, 2002.
I want to model a cart with wheels in MATLAB SimMechanics, and it is important for me to use wheels, not a prismatic joint instead of wheels.
Then I can apply torque to the wheels with a DC motor.
Latest trends show that many researchers favor self-tuning controller design. Is there any entry-level reference that explains the basics?
The order of the numerator polynomial is less than that of the denominator. Why?
Sensitivity of Multi Agent system.
A controller is an essential part of a microgrid for smooth transition between grid-connected and island modes. What is the difference between these two modes, and how can we decide the optimum parameters for the design of the controller so that it works seamlessly and there is no risk of blackout?
Suppose an unstable plant has the transfer function 1/(s-a) and we stabilize it by cascading a controller with transfer function (s-a)/(s+b), where b is positive, so the overall system is now stable. In this system there is no need for negative feedback to stabilize it, so why do we use negative feedback and unnecessarily increase the complexity of the system?
If a system is represented in state space form, what do the eigenvectors and eigenvalues of the system matrix indicate?
Suppose we have a second-order system. How do we relate the eigenvectors and eigenvalues to the state trajectory? (A small example follows.)
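Roughly, the eigenvalues give the decay rates and oscillation frequencies of the natural modes, and the eigenvectors give the directions (mode shapes) along which those modes evolve; a state started exactly on a real eigenvector stays on that line. A small MATLAB illustration with a placeholder second-order system:
A = [0 1; -2 -3];                 % eigenvalues -1 and -2
[V, D] = eig(A);                  % columns of V are eigenvectors, diag(D) the eigenvalues
x0 = V(:,1);                      % start on the first eigenvector
t  = 0:0.01:5;  x = zeros(2, numel(t));
for k = 1:numel(t)
    x(:,k) = expm(A*t(k))*x0;     % exact trajectory x(t) = e^{At} x0 = e^{lambda t} v
end
plot(x(1,:), x(2,:))              % a straight line into the origin along the eigenvector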
The MEMS accelerometer is mounted on a quadcopter. Because the quadcopter is very small and flies indoors, it is basically impossible to use GPS for sensor fusion. The position information is mainly used as the feedback signal for the autopilot mode.
I haven't tried a Kalman filter or other filtering technologies yet, and because of the thermo-mechanical error I understand the accuracy cannot be maintained for a long time. Currently my target is to maintain a relatively accurate position signal for one minute.
Thanks.
When we convert a transfer function of an LTI system into a state-space model, is there any loss of information due to pole-zero cancellation? A transfer function does not represent unobservable poles.
I'm looking for manufacturers similar to Faulhaber. They produce ADM0620-2R-V3-21 stepper motors with M1,2 x 0,25x15 lead screws. Similar products that I need for comparison should be stepper micro motors weighing around 1.5 grams and with similar precision.
The required design parameters of a control system are generally given in the time domain (e.g., maximum overshoot, response time and settling time), so analyzing a control system in the time domain is more convenient. But a problem occurs for higher-order systems, because the damping ratio and natural frequency are only defined for the second-order prototype system; therefore, to treat a higher-order system in the time domain we have to approximate it by an equivalent second-order system, and this approximation can sometimes turn a stable design into an unstable one if the loop gain is too high.
To alleviate this problem we can analyze the higher-order system in the frequency domain, but then the problem is correlating the time-domain specifications with the frequency-domain specifications. For example, how can one relate the gain margin and phase margin to the maximum overshoot, rise time and settling time? If the maximum overshoot is specified as 12%, how much gain margin and phase margin fulfill this requirement? (A sketch of the standard second-order correlation is given below.)
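There is no exact mapping for general higher-order systems, but the standard second-order correlation is often used as a starting point. A small MATLAB sketch for the 12% overshoot example (gain margin has no unique counterpart; a separate rule of thumb such as requiring roughly 6 dB or more is usually imposed):
Mp   = 0.12;                                              % specified maximum overshoot
zeta = -log(Mp)/sqrt(pi^2 + log(Mp)^2);                   % damping ratio, about 0.56 for 12%
PM   = atand(2*zeta/sqrt(sqrt(1 + 4*zeta^4) - 2*zeta^2)); % about 56 degrees for the 2nd-order prototype
% The frequently quoted approximation PM (in degrees) ~ 100*zeta gives a similar number;
% rise and settling times then scale inversely with the chosen gain-crossover frequency.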
I didn't find an exact difference between a compensator and a controller.
While designing a PID controller using active components (op-amps) and passive components like resistors and capacitors, how can one adjust the values of Kp, Ki and Kd by varying these passive elements? (One common topology is sketched below.)
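For one common single-op-amp topology (an inverting stage with input impedance R1 in parallel with C1 and feedback impedance R2 in series with C2), the transfer function works out to Vo/Vi = -[(R2/R1 + C1/C2) + R2*C1*s + 1/(R1*C2*s)]; under the assumption of that particular circuit, the gains map to the components as in this sketch (component values are placeholders):
R1 = 10e3;  R2 = 100e3;  C1 = 1e-6;  C2 = 10e-6;   % placeholder component values
Kp = R2/R1 + C1/C2;    % proportional gain: set mainly by the resistor ratio
Ki = 1/(R1*C2);        % integral gain: lower it by increasing R1 or C2
Kd = R2*C1;            % derivative gain: raise it by increasing R2 or C1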
I am currently developing a 14-week course for computer science undergraduates that focuses on the application of control systems and the design and implementation of simple controllers using microcontrollers. The ability to simulate such controllers in MATLAB is considered essential as well. Can you recommend the core topics needed to develop such a course? The assumptions are that students have basic knowledge of 1) computer science and engineering mathematics; 2) signals and systems; and 3) microcontroller system design.
Fuzzy logic, ANN, PSO, NF, ...
I am just curious to know how one would design a PID controller, or any other form of controller, for a system whose reference signal varies with time. The controllers I have known and designed (for many applications) work on the basis of a fixed reference value, but now I want to know the design approach for a controller whose system has a time-varying reference value.
One of the most commonly used performance metrics in controller synthesis is the H-inf norm. Suppose that the B and C matrices are identity (all the states are output variables and can also be controlled) and D is an all-zero matrix. Then the H-inf norm of the system tells you the worst-case amplitude gain of the transfer matrix H(s) = C(sI-A)^{-1}B. On the other hand, the damping ratios of the eigenvalues of A are usually considered a measure of how large the transients are, and are computed as -Re(\lambda_i)/|\lambda_i|. (A small numerical comparison follows.)
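The two metrics capture related but different things: the damping ratios describe how oscillatory the individual modes are, while the H-inf norm is the worst-case input-output amplification over all frequencies, and lightly damped modes typically show up as large resonant peaks in it. A small MATLAB check with B = C = I and D = 0 (placeholder A matrix):
A   = [-0.1 1; -1 -0.1];               % lightly damped eigenvalues at -0.1 +/- 1j
sys = ss(A, eye(2), eye(2), zeros(2));
norm(sys, inf)                         % H-infinity norm of (sI - A)^{-1}: large resonant peak
damp(sys)                              % natural frequencies and damping ratios of the modes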
In industrial applications, many mechanical systems do not have any direct feedthrough term, i.e., the D matrix in the state-space model is zero. Is this true? And why is that?
How can I find an efficiency comparison, and how can I achieve variable valve timing?