Toward a Frequency-based Theory of Neurofeedback
By Siegfried Othmer, Ph.D. and Susan F. Othmer
“Rhythmical oscillations are archetypes of time-dependent behavior in nature.”
J. A. Scott Kelso
This monograph is a slightly augmented version of our chapter in the book “Rhythmic
Stimulation Procedures in Neuromodulation,” James R. Evans and Robert A. Turner,
editors, Academic Press, San Diego (2017)
Abstract
This chapter is intended to serve as a counter-point to the other chapters of this book in
that it presents neurofeedback as both an alternative and a complement to stimulation-
based methods of neuro-rehabilitation. Neurofeedback is based on a learning or
training model, and generally relies on one of two basic approaches: the specific
targeting of dysfunction or the more general promotion of functional competence. Both
appeal to the frequency-based organization of cerebral network function. The novel
finding is that this frequency-based organization reaches deep into the infra-low
frequency (ILF) region.
Through ILF neurofeedback the existence of specific frequency relationships has been
established. These govern the inter-hemispheric and intra-hemispheric coordination in
the frequency domain, and they present a unified picture of the relationship of ILF and
EEG phenomenology. A regulatory hierarchy is implied in which the right hemisphere
bears the principal burden for core regulation and early development. ILF
neurofeedback presents an attractive option for the restoration of regulatory
competence and the enhancement of function. The chapter begins by laying the
foundation for the frequency basis of cerebral regulation.
Keywords
Neurofeedback, Self-regulation, Infra-Low Frequency, Synergetics, Self-Organized
Criticality, Scale-Free Distribution, Small-world Model, Optimum Response Frequency,
Resonance, Regulatory Hierarchy, Developmental Hierarchy, Frequency Hierarchy
On the authors
Siegfried Othmer has been involved in the development, research, and clinical practice
of neurofeedback since 1985, along with his wife Susan F. Othmer. Siegfried obtained
his Ph.D. in physics at Cornell University in 1970, and subsequently was active in
aerospace research until 1989. Susan obtained a BA in physics at Cornell and then
pursued a Ph.D. degree in neurobiology at Cornell and at the Brain Research Institute
at UCLA. The epilepsy of their first son motivated their involvement in neurofeedback,
starting in 1985. Their work is currently being conducted at the EEG Institute in Los
Angeles. Along with their son Kurt, the Othmers have to date inspired a world-wide network
of more than 5,000 practitioners of ILF training in some forty countries.
Introduction
This book is mainly concerned with techniques of low-level stimulation to effect
neuromodulation for therapeutic purposes. Similar objectives can be achieved through
the use of a learning paradigm, one that is typically based on a reinforcement strategy
although that is not essential. That general approach is now called neurofeedback,
and it is the topic of this chapter, offered as a complementary perspective. Neurofeedback
belongs in the domain of biofeedback, with the distinction that it utilizes the EEG as a
training variable. In fact, it used to be called EEG biofeedback in its early days. It will be
seen that it is just as strongly frequency-based as the stimulation technologies.
There is a great deal of methodological commonality among the different methods of
biofeedback, and there is a great deal of overlap in what they accomplish clinically. On
the other hand, there are also quite significant differences, and the two disciplines have
developed somewhat independently over the last half-century. Familiarity with
neurofeedback makes this unsurprising. That is because it is difficult to overestimate the
advantages conferred on neurofeedback, with respect to other biofeedback
techniques, by virtue of its appeal to the frequency basis of the organization of cerebral
communication.
It is the frequency basis of neurofeedback that gives it its extraordinary sensitivity, its
specificity, its flexibility, its breadth of application, and its ability to reveal aspects of
brain functional organization. This feature also gives neurofeedback a substantial
advantage vis-a-vis the neuroscientist in the brain laboratory. By putting the brain in the
feedback loop, the brain becomes an active agent in the process. It is both an
exquisitely sensitive detector of frequency-based signals and a strong responder to
brain-derived, spectrally specific information. What serves us well in the task of
remediation also serves us well when it comes to the scientific exploration of brain
function, and of its dysfunction.
Throughout history, we have learned about function by way of dysfunction. Sometimes
the dysfunction is even deliberately introduced, as in lesion studies. Sometimes we are
simply opportunistic. Surgery in cases of epilepsy provided opportunities for probing
cortical function in human beings that were not otherwise available. Hans Berger took
advantage of trepanation of head-injured patients to observe the EEG without
attenuation by the intervening skull. Sigmund Freud’s core interest was the functioning
organism, but it was revealed to him largely through dysfunction. Oliver Sacks relished
his odd-ball cases for the light they shed on how brain function must be organized. In
neurofeedback, by contrast, we get to observe the brain under the most favorable of
circumstances, the quiescent state, one that calls for mere engagement rather than
overt challenge.
The Frequency Basis of the Organization of Natural Systems
As for the frequency basis of brain functional organization, matters could hardly be
otherwise. Perhaps this signifies a lack of imagination, but once one obtains insight into
how the brain manages its affairs, one wonders how it could have been done differently.
Albert Einstein considered the comprehensibility of nature as non-obvious, which is
surprising, as many things were obvious to him that weren’t obvious to his
contemporaries. In any event, nature becomes comprehensible to us through the
language of mathematics. The laws of nature rest upon a mathematical foundation, one
that can lay the strongest claim to obviousness among all of the sciences.
Nature favors us in yet another respect, which is that the laws of nature tend to be
simple, even though the actual execution may be complex. Nature exploits the simplest
concepts to the fullest. Unsurprisingly, it was a physicist who thought nature’s laws were
simple, because physicists found their bearings with simple systems. A similar sense of
simple geometrical order in nature had already inspired the ancients, the Pythagoreans,
more than two millennia ago.
Our understanding of nature grew with our understanding of mathematics, and was
always constrained by it. Jointly, these understandings even impinged on social change.
We presently stand before such an age of rapid change, as we begin to exploit the
potential of recovery and enhancement of brain function, a notion that has only recently
come to be accepted. As we stand at this major threshold of our human experience, it is
worthwhile to recapitulate how we reached this point.
With Isaac Newton we got the law of inertia. A single body (assumed rigid,
homogeneous, and spherically symmetric) moving in an empty universe continues on its
path indefinitely without alteration. That gave us stasis, or at least immutability.
Introducing a second body into the mix gives us one of two situations. Either the two
pass each other in the night, never to meet again, or they go into orbit around their
mutual center of gravity. This orbit closes on itself and repeats endlessly and identically.
With only two bodies, we have periodicity, the endless repetition of the same pattern.
We have frequency.
The orbits obey a simple mathematical or geometrical description. They are ellipses, as
first shown by Johannes Kepler, and ellipses are conic sections. These are the
intersections of a plane with a cone. If the plane is parallel to the base of the cone, we
have a circular orbit. But for a given size there is only one way to get a circular orbit,
whereas there is an infinity of ways we can have an elliptic orbit with a given axis. So
that is what we actually get to see in nature: the infinite variety rather than the singular
exception. The periodicity, however, is there in either case, and it is maintained in
perpetuity.
The Newtonian view of space is the one bequeathed to us by Euclid. The universe is
flat. The orbit of a two-body system lies in a plane, and Descartes is given credit for
placing a coordinate system on such a plane, which brought together geometry and
algebra. In the orbital motion we see both movement and perpetual repetition, or stasis.
But until Newton’s lifetime, the defining feature of orbital motion was its stability, its
stasis, rather than its movement. And that was reflected strongly in the society of the
day, one that was deemed to reflect the natural order, with the emphasis on stasis.
It was the parallel but independent development of calculus by Leibniz and Newton,
following on the heels of Descartes, that first gave mathematicians the ability to handle
dynamical relationships. This development was massively resisted by the power
structure of the day because it loosened the moorings underpinning the static political,
social, and religious hierarchy as an ideal. The reality on the ground was already in
turmoil, with the Reformation and the Thirty Years’ War, and now theory became
accommodating. The geometers weren’t happy either. The story of the ‘dangerous
infinitesimal’ has recently been told by Amir Alexander (2014).
With this tool in hand, a circular orbit could now be described in terms of two sinusoidal
signals, the one a quarter cycle out of phase with respect to the other. A simple
description of a single-frequency waveform had been found. Two centuries later
electromagnetic radiation would be discovered that had precisely this form, the
sinusoidal fluctuation of orthogonal electric and magnetic components. It is in
electromagnetic radiation that the mathematical ideal of a purely sinusoidal signal came
closest to expression in nature.
The nineteenth century also saw the emergence of non-Euclidean geometry with
Gauss, although he didn’t publish because he feared backlash from the “blockheads”
that invariably beset the frontiers of human inquiry. The universe did not need to be flat.
This further undermined the classical model. And we are just now celebrating the
centenary of general relativity, which described our universe as non-Euclidean. Under
gravitational influence, even two-body orbits migrate over time, but they do remain
predictable and thus calculable.
However, with the introduction of just one more body into the mix, such a system
is perturbed and enters into a non-repeatable orbit. With a mere three bodies, we
already have a chaotic trajectory for all three! We have already lost long-term
predictability. The orbits are still bounded, but they no longer close on themselves. The
mathematical simplicity of nature gets fuzzy and unpredictable in its actualization. We
call these orbits limit cycles, and the whole system would be called a limit cycle
attractor. Although capable of occasional rapid and substantial alteration of orbits, such
systems are still short-term predictable. We are entering the domain of what is called
deterministic chaos.
With the interaction of many-body systems, we inevitably find ourselves in a world of
complexity that our traditional tools of analysis are not suited for. We were already in
trouble at three. The brain confronts us with the ultimate many-body problem, in the
sense of uncountable degrees of freedom. And we came to the task with analytical tools
that were suited to a very different set of problems.
Wherever we see many-body problems in nature, we tend to see clustering and the
emergence of patterns. Often these patterns are periodic or wave-like. We see
clustering at all spatial scales in the cosmos, even out to the largest. In galaxies we may
see both periodicity and the formation of density waves in the arms. We are only too
familiar with hurricanes, tornados, and closer to home, the vortex in the bathtub drain.
Living systems have more degrees of freedom, and they exhibit an even greater
propensity to organize into patterns. For example, we see the formation of toroidal
patterns in schooling fish. Even mitochondria have been observed to oscillate.
Once again progress in mathematics was needed to probe this new realm, and these
methods have come along just in the last fifty or so years. Collectively, they illuminate
the rules governing self-organizing systems; they find order within the general disorder;
they surface general organizing principles. This demanded a new theory of networks,
new ways of doing frequency-based analysis of transient states, and new thinking on
how living systems dynamically organize their states. Networks in living systems are
almost never actually random in their connection scheme, although that remains the
mathematician’s ideal. Our attachment to the Gaussian distribution is coming to a
messy dissolution, but our fondness lingers. The complex realm of self-organizing
systems is populated with broad, scale-free distributions, which are about as far
removed from the narrowly distributed, tightly constrained Gaussian as one can get.
The Fourier transform was suitable for stationary processes that are infinitely repeating,
not the lively dynamics that now must be characterized. The new frontier calls for
understanding sudden state transitions and macroscopic changes in large-scale state
configurations. These confound our predictive models and appear to contradict the
sense of continuity of our lived experience. But it is of such events that our life
experience is constructed.
Additionally, the neurosciences benefited significantly from the emergence of real-time
imagery of brain function with functional magnetic resonance imaging. First and foremost, this
brought the discussion back to the living brain. Many of the questions that could now be
fruitfully raised and answered, however, were also issues for us in neurofeedback
decades earlier, when we were still flying blind. It was in those days that the value of
placing the brain in a feedback loop was solidly established, and that made for the
breakthrough even in the face of our conceptual blindness.
In particular, neurofeedback has allowed us to explore the frequency basis of neural
organization in a way that could not readily be done through brain imagery. The brain’s
manifest competencies, as revealed through neurofeedback, take us to the very edge of
believability. That story will be told in this chapter.
Viewing the Brain as a Control System
What is the brain’s burden? The brain has to be organized as a control system, which
means that it has to obey all the rules that apply to such a system. The prime directive
for a self-regulating control system is to maintain its own stability unconditionally. In the
case of the brain, the concern is firstly with conditions such as seizures, migraines,
panic, narcolepsy, cataplexy, and coma. However, at another level, it must be
appreciated that the brain even manages to sustain itself through such states rather
consistently, so viability is being maintained at another level. With brain stability
assured, the secondary burden is to regulate its own states with the requisite subtlety.
The maintenance of proper operating points is termed homeostasis, but is perhaps
more appropriately referred to as homeodynamic equilibrium.
With its own housekeeping in order, the brain then also regulates the rest of bodily
functions. And the brain’s final concern is engagement with the outside world. As
observers of our own brain function, we tend to view this hierarchy the other way up, in
line with our personal objectives in life, but that does not match up with the brain’s own
priorities. Our engagement with the outside world is a mere perturbation on what the
brain is managing on an ongoing basis. For the brain, there is no respite from its core
duties of self-maintenance and self-regulation.
This hierarchy is dramatically confirmed for us in both evoked potential research and
current fMRI imaging. Evoked potentials are those features in the EEG that are explicitly
related to input or output functions being executed by the brain. These signals tend to
be either comparable to, or smaller than, the background EEG! And in fMRI we observe
that the difference in signal level between an activated and a passive background state
is typically on the order of half to one percent, and at most about 5%. This is just a
small increment on what the brain is doing from moment to moment in the absence of a
challenge.
In neuroscience research, the bias is toward investigation of the engaged brain. We’ve
been looking at evoked potentials seriously since the mid-sixties. Both PET and SPECT
imagery involve comparison of an active state with a passive baseline state, and now
fMRI research has trod the same path. All are looking at metabolic activity that is
associated with a particular function, which therefore must be distinguishable from
baseline.
In neurofeedback, by contrast, we are interacting with the non-engaged brain, or at least
as close to that as we can get. This is a preoccupation that is largely complementary to
conventional neuroscience. We are concerned with the brain that is ‘merely managing
itself,’ which just happens to be its primary responsibility. We are witness to the self-
regulating brain conducting its core activity. It should not surprise anyone to find that
this has been very fertile ground indeed.
The EEG is our window into brain function, and the EEG reflects neuronal activity at
cortex within the EEG spectral band from nominally 0.5 Hz to 30 or 40 Hz and beyond.
We, as the outside observer, get to see a spectrum that is packed densely with spindle-
burst activity that is sharply delimited in frequency, variable in frequency over time, and
highly dynamic in terms of amplitude. This is illustrated in Figure 1. For the brain that is
observing its own EEG, however, this is a new window into its own self-regulatory
activity. The brain has a very different experience with this real-time signal than the
outside observer. For the brain, it is a matter of recognition, by virtue of the correlations
that exist between the signal and its own internal activity.

Figure 1. EEG Spectrum, High-Frequency Resolution (0-17 Hz): EEG spindle-burst activity fills
frequency space in the range up to 20 Hz and beyond. Each spindle-burst is narrowly defined in
terms of instantaneous frequency, and the space between the spindles appears to be entirely
devoid of collective neuronal activity. The time course of spindle-burst amplitude exhibits high
dynamics.

The distinction perhaps becomes more apparent in Figure 2, which shows an epoch of
EEG spectral response over a small range of frequencies under three different
conditions. On the left, we see the spectrum with low frequency resolution and high
time resolution; on the right, we see
the same signal under conditions of high frequency resolution and low time resolution;
the middle shows an intermediate condition.
Figure 2. The same temporal window into EEG spectral activity is presented with three different
choices of tradeoff between frequency resolution and time resolution. All three representations
refer to the same information subject to different signal processing. Particular aspects of the
underlying ‘reality’ are revealed in each of the three screens, and the actual reality can be
intimated from a composite of the three. The brain would have no difficulty recognizing its
authorship of all three if presented with the information in real time.
The signal looks very different under the three conditions, and yet they all represent a
comparable ‘truth.’ One trace is not more ‘correct’ than the others. The underlying EEG
must have at least the degree of frequency specificity that is implied in the right panel,
and it must have at least the temporal dynamics that are displayed in the left panel. The
actual reality, irrespective of whether we are able to reveal it or illustrate it, must be a
combination of both.
Ironically, it is the Heisenberg Uncertainty Principle that prevents us from displaying the
signal in its full, glorious complexity. We cannot have both high frequency resolution and
high time resolution at the same time. Note, however, that the observing brain does not
have the same problem. The brain merely has to get enough information to ‘recognize’
its own agency, its controlling role, with respect to the signal. And it can do that
regardless of which of the above representations it is exposed to in real time.
There is yet another issue, however. The Fourier transform that is used to display these
data has fixed frequency bins, which imposes an apparent order on the EEG signal that
does not correspond to reality. The frequency spindles migrate in frequency, and they
experience discontinuities in phase. The Fourier transform is basically unsuitable for
dealing with highly dynamic signals. This calls for different analysis schemes, including
different transforms (Gabor or Hilbert), or wavelets, or other forms of time-frequency
analysis.
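The frequency-time tradeoff can be made concrete with a toy computation. In the sketch below (all values are illustrative; a naive DFT is used so the example stays self-contained), two tones 1 Hz apart are analyzed with windows of two different lengths. The 2-second window yields 0.5 Hz bins and cleanly separates the tones; the 0.5-second window yields 2 Hz bins, wider than the tone separation, so no analysis of that short epoch can tell the two tones apart:

```python
import cmath
import math

fs = 100.0                       # sampling rate in Hz (illustrative)
f1, f2 = 10.0, 11.0              # two tones 1 Hz apart

def two_tone(duration_s):
    """Sum of the two tones, sampled at fs for the given duration."""
    n_samples = int(duration_s * fs)
    return [math.sin(2 * math.pi * f1 * n / fs) + math.sin(2 * math.pi * f2 * n / fs)
            for n in range(n_samples)]

def dft_mag(x):
    """Naive DFT magnitude spectrum over bins 0 .. N/2; bin spacing is fs/N Hz."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)))
            for k in range(N // 2)]

long_mag = dft_mag(two_tone(2.0))    # 2 s window -> 0.5 Hz bins: tones resolved
short_mag = dft_mag(two_tone(0.5))   # 0.5 s window -> 2 Hz bins: tones merge
```

In the long window the tones land on bins 20 and 22 with a nearly empty bin between them; shortening the window by a factor of four coarsens the frequency grid by the same factor, and the two tones collapse into a single broad peak. This is the discrete form of the time-frequency uncertainty relation.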
In any event, irrespective of how the information is delivered, the brain is not confused
because the signal refers to its own lived experience. The brain then utilizes this
information to inform its own ongoing regulatory response. This process requires that
the brain assign meaning to the signal it is observing, which follows directly from the
correlations it is detecting with respect to its own internal states and state transitions.
The process here described is indistinguishable from what goes on in ordinary skill
learning, except for the novelty that in this case the relevant information is derived from
the EEG rather than from the brain acting upon the environment and observing the
response. As outsiders to this process, we now confront a similar challenge: to extract
meaning from the EEG signal that sheds light on the brain’s regulatory mechanisms.
We want to know what role the frequency-based ordering serves in the overall cortical
regulatory schema. This question has existed ever since Hans Berger first published on
the human EEG, calling it the Elektrenkephalogramm (Berger, 1929).
Mass Action in the Spatial and Timing Domains
For Hans Berger, the EEG manifested the collective activity of neuronal assemblies at a
time when such collective role was already being considered. As far back as 1906,
Camillo Golgi was persuaded that cortical neurons functioned collectively rather than
individually: “Far from being able to accept the idea of the individuality and
independence of each nerve element, I have never had reason, up to now, to give up
the concept that I have always stressed, that nerve cells, instead of working individually,
act together…. However opposed it may seem to the popular tendency to individualize
the elements, I cannot abandon the idea of a unitary action of the nervous system….”
And it was in 1906 that Sherrington published his book titled “The Integrative Action of
the Nervous System.” A new conception was starting to take hold.
The discovery of the EEG, then, gave substance to this conjecture, as the EEG clearly
reflected collective behavior, and such collectives illustrated the regulatory role of the
brain in organizing the neuronal assemblies into functional entities. The EEG continued
to be studied and characterized, and by 1974 more than 1000 papers had been
published on the alpha rhythm alone (Brown & Klug, 1974).
However, the EEG did not reveal its secrets easily. The complexity of the signal was
discouraging, as was the lack of testability. A more fruitful area of study was the related
question of evoked potentials, which were also a manifestation of neuronal group
behavior, but in this case, it was possible to identify functional relationships more
readily. For these purposes, the background EEG was seen as an irrelevance at best,
and a nuisance at worst. This was true until recently, when it was realized that these
phenomena are not entirely independent. The main body of neuroscience, however,
remained preoccupied with the study of the individual neuron as the presumed key to
the understanding of brain function. This was called the “single neuron doctrine,” and it
succeeded in bringing about collective behavior on the part of neuroscientists: many of
them chose to study the behavior of individual neurons!
Eric Kandel recalls in his memoir the instruction from his mentor Harry Grundfest during
his graduate career at Columbia University: “Study the brain one cell at a time.” It was of
course quite necessary and appropriate that this be done, but the secret to brain
function was not ultimately to be revealed there. Even worse, the rhythmic properties of
neuronal firing in groups are not necessarily apparent in the individual firing streams of
neurons. Rhythmicity arises out of correlations, and these may involve larger spatial
scales than those under inspection. In a given neuron, the rhythmicity to which it
contributes may account for only a small part of the variance in the firing stream. In
consequence, there was little cross-talk between those who studied neuronal firing
streams and those who were working at the level of the EEG or evoked potentials.
The study of what happens in the near neighborhood of neurons in cortex also got an
early start, despite severe challenges on the instrumentation front. It was recognized
early on that the most obvious anatomical feature of the interaction zone of cortex, the
gray matter, was its organization as a two-dimensional surface with fairly homogeneous
structural organization. The essential characteristics of the neuronal system were also
largely conserved over the course of mammalian evolutionary development. Cortex
merely got larger, ultimately necessitating the cortical convolutions to simultaneously
accommodate more cortical surface area as well as the associated long-distance axons,
the white matter. The relationship of gray matter surface area to white matter volume
followed a fixed scaling law over the entire mammalian class, from the smallest shrew to
the largest whale (Zhang & Sejnowski, 2000). The global optimization process at work
here was driven by geometrical constraints. With increased cortical volume requiring
longer axons, white matter volume increases more than the gray. The relationship is a
power-law with an exponent just larger than unity: 4/3.
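A scaling law of this kind is conventionally checked by fitting a straight line in log-log coordinates, where a power law y = c·x^a appears with slope a. The sketch below uses synthetic, purely illustrative numbers (not the Zhang & Sejnowski data) generated with an exact 4/3 exponent, and recovers that exponent by least squares:

```python
import math

# Hypothetical gray-matter surface areas, arbitrary units, spanning several
# orders of magnitude as across the mammalian class (illustrative values only)
gray = [10.0 ** e for e in range(1, 7)]

# White-matter volumes generated with an exact 4/3 power law (c = 0.02, arbitrary)
white = [0.02 * g ** (4.0 / 3.0) for g in gray]

# Least-squares fit of log(white) = slope * log(gray) + intercept
xs = [math.log(g) for g in gray]
ys = [math.log(w) for w in white]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
```

With real measurements the points scatter about the line, but across five orders of magnitude even modest scatter leaves the fitted exponent well constrained.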
By now we also know that considerable evolutionary development did in fact occur
within the gray matter over mammalian evolutionary history. It occurred primarily within
the glial system, albeit with obvious implications for the neuronal system as well. That
was missed until recently in the face of our preoccupation with the neuronal system.
With respect to the latter, we are right to assume that neuronal function has been
serving the same purposes, and subject to the same organizational principles,
throughout vertebrate development. It is in the glial system that we humans truly
differentiate from our primate relatives.
The Spatial Mapping of Patterns in Cortex
Neurons needed to be studied in their native habitat, namely in relationship to other
neurons in a functioning organism. Areal mappings of neuronal activity go back to the
1940’s, to John C. Lilly, who wired up a square array of 25 probes, each with its own
amplifier, to drive glow tubes that would reflect the collective activity of the array. By
1955, some 50 probes were sequentially sampled with a single amplifier and the results
mapped topographically onto a CRT. And by 1970 DeMott had demonstrated a 400-
channel topographic mapper with solid-state amplification (DeMott, 1970). It was
challenging work under non-ideal conditions, yielding only limited findings. But the
path forward was clear.
In 1975, Walter Freeman published his book titled “Mass Action in the Nervous
System” (Freeman, 1975). He considered the implications of collective neuronal
activity as not simply an extension of what was already known, but rather as a
revolutionary new departure. Quite generally, the signal being processed in cortex is
not the property of individual neurons, but rather is encoded in an ensemble of neurons.
Such ensembles are distributed spatially over the cortical surface layer. Their existence
is also transient, serving just the immediate need and then dissipating, only to be
replaced by a new pattern. One can think of this organization as cinematographic, in the
words of Oliver Sacks, one spatial representation successively replacing another. This
sequence of ‘frames’ may constitute a ‘movie’, with successive frames incorporating
incremental change, such as would be required to implement sequential activity. Or it
could also be the frames of a slide show, with successive frames representing different
processing tasks. Or it could be something involving both, like the scene change in a
movie.
This kind of spatial mapping requires an organizational schema to facilitate the
encoding of the signal and any processing of the signal during its brief life span. Every
neuron participating in the feature representation must retain its individuality as well as
express its group membership. The problem is solved if the criterion of commonality lies
in the domain of timing rather than in the spatial domain. A spatial criterion is
problematic as there are many activities going on simultaneously in any cortical region.
It is also apparent that much of brain function is time-critical. Any cortical activity
subserving time-critical processes must be highly ordered in the time domain. This
manifests in the pulse waveforms of evoked potentials, for example.
The Organization of Feature Binding
Animal studies in the late eighties and early nineties at the Max Planck Institute in
Frankfurt on visual processing led to a proposed solution to the problem of neuronal
assemblies distinguishing themselves from others, the problem of feature binding. It
was based on a simultaneity criterion, asserting that those neuronal events occurring
with a common timing signature are recognized as belonging to the relevant ensemble.
The theory that simultaneity of firing specifies membership in the relevant neuronal
assembly was called ‘Time Binding.’ The theory was the subject of controversy for
some time. In the face of such general skepticism Christoph von der Malsburg, a
participant in the ground-breaking research, said at the time: “We are in the middle of a
scientific revolution, the result of which will be establishment of [time] binding as a
fundamental aspect of the neural code.” The term scientific revolution is used sparingly
by scientists, and even more rarely by scientists about their own work. After all, referring
to one’s own discovery as a scientific revolution is the equivalent of Napoleon crowning
his own head. Most scientists are more humble than Napoleon. However, just as these
scientists had the sense of a radical new departure toward a new organizing principle,
so did everyone else. Eventually, a theory such as this enters one’s comfort zone, and a
while later one wonders just how things could have been otherwise.
The critical experiment at the Max Planck Institute involved a determination of the
correlation of firing events in cat visual cortex as an object was moved across the visual
field (Engel et al., 1991). When neurons in striate cortex were being “illuminated” by the
passing object, a correlation in firing events became apparent even across the
hemispheric fissure, a correlation that was not apparent otherwise. One other thing: the cat had been electrically stimulated in the mesencephalic reticular formation so that it would take an interest in that bland, featureless visual presentation. This tells us that the correlation
was not due to passive visual processing of the object in its narrow sense, but rather
was a feature of the visually attentive brain. The object in the scene had been stamped
with significance. Visual processing occurs at a nominal rate of 40 Hz, or a period of 25
msec, and within that window, elevated correlations could be detected down to the
millisecond level. Here was a basis for feature binding.
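The simultaneity criterion can be illustrated with a toy computation: two simulated spike trains locked to a shared 40 Hz event stream show a cross-correlation peak that a third, unlocked train lacks. All rates, jitter values, and the random seed below are illustrative, not drawn from the Engel et al. data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ms = 5000  # simulation length in one-millisecond bins

def spike_train(rate_hz, locked_times=None, jitter_ms=1.0):
    """Poisson background spikes, plus spikes phase-locked to shared events."""
    train = rng.random(n_ms) < rate_hz / 1000.0
    if locked_times is not None:
        jittered = locked_times + rng.normal(0.0, jitter_ms, locked_times.size)
        train[np.clip(jittered.astype(int), 0, n_ms - 1)] = True
    return train.astype(float)

# Shared 40 Hz "binding" rhythm: one event every 25 ms.
events = np.arange(0.0, n_ms, 25.0)

a = spike_train(5.0, events)   # two neurons locked to the same rhythm
b = spike_train(5.0, events)
c = spike_train(5.0)           # control neuron with no locking

def xcorr_peak(x, y, max_lag=10):
    """Largest coincidence count over lags of up to +/- max_lag milliseconds."""
    return max(
        np.dot(x[max(0, -l):n_ms - max(0, l)], y[max(0, l):n_ms - max(0, -l)])
        for l in range(-max_lag, max_lag + 1)
    )

print(xcorr_peak(a, b) > xcorr_peak(a, c))  # locked pair correlates more strongly
```

The locked pair shares a timing signature and therefore stands out against the background even at millisecond resolution, which is the essence of the ensemble-membership criterion.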
The finding was all the more remarkable when one considers that it is quite a task to
organize timing down to the millisecond level across the hemispheric fissure with a brain
architecture that is largely lateralized all the way down to the brainstem. The inter-
hemispheric connections that do exist in cortex inevitably involve transport delay in their
communications. How, then, is simultaneity in neuronal firing to be organized across the
hemispheric divide? Here we must fall back upon what we know about periodic
systems. They can entrain each other, and synchronous operation is achievable, even if
there are transport delays in the communication between them. Alternatively, bilateral
synchrony could be organized at the brainstem via the bilateral connections that exist
there. This might also explain the substantial inter-hemispheric coherence that exists
even in cases of agenesis of the corpus callosum.
What we have, then, is a basic frequency (40 Hz) at which visual information is
processed in packets, and this periodicity is tightly coordinated between the
hemispheres. Within this general periodicity, specific timing relationships also matter.
These are organized with respect to the common 40 Hz periodicity serving as a timing
reference.
The critical importance of timing relationships in the brain traces back to a fundamental
characteristic of the action potential mechanism. This is the observation that a single
excitatory input to a neuron is not capable of generating an action potential in the target
neuron. Instead, the target neuron must be ‘primed’ through the arrival of other
excitatory inputs within the relevant window of time. This time interval is given roughly
by the width of the excitatory post-synaptic potential (EPSP). In the simplest possible
realization, there must be a second EPSP within the window of opportunity, or about 10
msec, in order for the combined signal to cross the threshold for the generation of an
action potential.
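The threshold arithmetic just described can be reduced to a minimal numeric sketch. The decay constant, EPSP amplitude, and threshold below are illustrative values, chosen only so that one input is sub-threshold while two inputs within the roughly 10 msec window summate past threshold.

```python
import math

TAU_MS = 10.0        # EPSP decay time constant (illustrative)
EPSP_MV = 0.7        # peak depolarization from a single input (illustrative)
THRESHOLD_MV = 1.0   # firing threshold (illustrative)

def peak_depolarization(spike_times_ms):
    """Maximum summed EPSP amplitude; each input decays exponentially,
    so the peak always occurs at one of the arrival times."""
    return max(
        sum(EPSP_MV * math.exp(-(t - s) / TAU_MS)
            for s in spike_times_ms if s <= t)
        for t in spike_times_ms
    )

def fires(spike_times_ms):
    return peak_depolarization(spike_times_ms) >= THRESHOLD_MV

print(fires([0.0]))        # False: one input alone stays sub-threshold
print(fires([0.0, 5.0]))   # True: second EPSP arrives inside the ~10 ms window
print(fires([0.0, 40.0]))  # False: too late, the first EPSP has decayed away
```

In this toy form the neuron acts exactly as the coincidence detector described in the next paragraph: its output depends not on any single input but on the temporal clustering of inputs.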
A mathematician regarding such an arrangement would say that the critical function
performed by the neuron in this connection is that of coincidence detection. Since the
function of an action potential is its contribution to the generation of yet another action
potential downstream, we observe that the criticality of timing at the level of the
individual neuron makes brain timing a critical consideration for the quality of brain
function in general. It follows that the integrity of brain timing relationships must be
maintained globally within the nominal ten-millisecond benchmark or function degrades.
The critical findings by Wolf Singer and his research group at the Max Planck Institute
place the individual neuronal firing event into its larger context of ensemble collective
activity. Coincidence at the level of the neuron translates to simultaneity of firing at the
level of the ensemble, which is then observable as local synchrony at the level of the
EEG. We find, then, that the EEG at a given frequency provides us with exquisitely
detailed and specific information on the behavior of neuronal assemblies with respect to
timing and frequency.
By virtue of having regulatory function managed via neuronal assemblies, the brain has
minimized the risk of single-point failures. As John Eccles has pointed out, “the firing or non-firing of a single pyramidal cell cannot have any consequence for the brain.” The
ensemble basis of state regulation turns a hard failure at the single neuron level into a
soft failure at the ensemble level. The integrity of ensemble activity is maintained intact
despite numerous dropouts. One is rarely confronted with a complete abolition of
functional capacity. Rather, we are confronted with partial functional loss. The task of
neurofeedback, then, is to restore functional integrity to a mechanism that is still
functional at some level. The process always builds on what already works. We do not
expect the neurofeedback technique to enable function de novo.
The action potential mechanism can only have been a late feature of neuronal
evolution. At the outset, neurons must have developed to implement direct chemical
communication. Particularly if one views the neuronal/glial system jointly, direct
chemical communication remains the dominant feature of neuronal existence, the major
mechanism of information transport. And action potential formation is itself orchestrated
entirely by means of chemistry in the analog domain. In this regard, the neuron can be
understood as an analog-to-digital converter. In the analog domain we have the benefit
of continuous variation of the parameter so that regulation can be conducted with great
subtlety and precision. Digital signal transport is subject to graininess, and to limitations
on the rate at which information can be communicated. On the other hand, the process
is robust and relatively noise-immune, ideal for long-distance transport of information in
cortex and beyond. And it is capable of great precision in the timing domain.
The action potential mechanism initially served the purpose of rapid responding to
environmental demands by way of a motor response. This is perhaps best illustrated
with reference to the sea squirt, which possesses a modest nervous system while it
remains in a larval stage, seeking a place to settle on the ocean floor. Once it is sessile,
it has no further need for its nervous system, and proceeds to resorb it. Significantly, it
does not take advantage of its nervous system to retain broader sensory awareness, or
to luxuriate in the existence of life itself just because it can. The higher form of sentience
has lost its reason for being. If one wants to make the case that life is basically
organized to permit genes to replicate themselves, then the sea squirt presents a good
argument. Nature is a minimalist composer, despite appearances.
The neural system served to mediate between the sensory and the motor realms, where
timely responsiveness was of the essence. In the nematode C. elegans we already see
an elaborated neural system of some 302 neurons that function with an action potential
mechanism. Every neuron is differentiable from all others. And every such nematode
ends up with the same complement of 302 neurons, all similarly organized into
functional networks. These already display some of the features of more fully developed
nervous systems, showing signs of the emergence of hierarchical order, the hallmark of
complex biological networks.
The complete genomic prescription of the neuronal network that we have in the
nematode is out of the question for the human brain; indeed, our human genome contains scarcely more genes than that of C. elegans. We also get along with a smaller number
of neuronal types. Our genome manages with more general prescriptions, and it is able
to do so because of greater hierarchical organization.
The Human Visual System
Whereas the problem of limited digital signal bandwidth is not an issue with the
nematode, by the time we get to the human visual system we do confront that limitation,
and yet the brain seems to surmount the challenge skillfully. Here’s the problem: When
we gaze upon a 4K resolution large-screen TV displaying a high-resolution image, we
recognize instantly that this resolution exceeds that of ‘ordinary HD.’ And yet we
possess ‘only’ about 128M optical sensors in our retina (to the nearest power of two),
which are serviced by an optic nerve of only a million axons. The data rate that leaves
the retina is only about six megabits/sec, and by the time the signal reaches layer IV of
V1 (striate cortex), the data rate is down to 10,000/sec. This is clearly inadequate to
represent the high spatial frequencies present in a 4K image in real time. In the words of
Marcus Raichle, “These data leave the clear impression that visual cortex receives an
impoverished representation of the world” (Raichle, 2010).
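The figures quoted above can be sanity-checked with a few lines of arithmetic. The retinal and cortical numbers are taken as given in the text; the raw 4K video stream (3840 x 2160 pixels, 24-bit color, 30 frames per second) is assumed for comparison, and the layer-IV figure is treated as simple events per second.

```python
# Figures as quoted in the text, rounded.
photoreceptors = 128e6        # optical sensors in the retina
optic_nerve_axons = 1e6       # axons leaving the retina
retina_bitrate = 6e6          # bits/s leaving the retina
layer4_rate = 1e4             # events/s reaching layer IV of V1

# Raw 4K video stream assumed for comparison.
video_bitrate = 3840 * 2160 * 24 * 30   # bits/s

print(f"sensors per optic-nerve axon: {photoreceptors / optic_nerve_axons:.0f}")
print(f"raw 4K stream:                {video_bitrate / 1e9:.1f} Gbit/s")
print(f"retinal output vs 4K stream:  1 : {video_bitrate / retina_bitrate:,.0f}")
print(f"retina to layer IV:           {retina_bitrate / layer4_rate:.0f}-fold further drop")
```

Even on this crude accounting the retinal output is roughly a thousand times too sparse to carry the raw scene, and the signal thins by another two to three orders of magnitude before reaching striate cortex, which is the quantitative force behind Raichle's remark.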
We have tried to understand perception as an extrapolation of sensation, as a
sequential bottom-up process, but that project falters immediately as one confronts the
particulars. The paucity of the signal stream means that only a fraction of the cortical
neurons receive input on each exposure, leading to high variability over the receptor
field even with an invariant stimulus. And yet we perceive a stable image that
represents the stimulus with startling fidelity. Increasingly it has become evident that
visual processing is dominated by the brain’s internal representations, as informed by
sensory inputs. The evidence for this proposition is collectively compelling. As a matter
of fact, even William James was persuaded of this proposition just based on the
evidence available to him late in the nineteenth century! Striate cortex is the last stop in
the chain of events in visual signal processing at which the information is mapped
topographically. Beyond V1, tracing the circuitry reveals that visual information is
delivered to some 40 different loci in posterior cortex.
Even at the lateral geniculate, the thalamic way station for information from the retina to
visual cortex, a mere ten percent of synapses are recipients of visual information.
Already at this juncture, it appears that context dominates the processing that occurs. A
similar ratio applies at layer IV of V1, which drives one to the same conclusion.
Traditional experimental approaches to sensory processing have relied mainly on
evoked potential studies, rendered more discriminating lately via independent
component analysis. These studies share an implicit bias toward the sensory input-
dominated perspective. The complementary view is that of top-down influences, by
which are meant those that involve frontal-lobe directed functions such as selective
attention.
It is becoming apparent that the largest burden of sensory signal processing is borne by
intrinsic brain activity that is modulated via both bottom-up and top-down pathways. In
fact, this proposition is not new. The initial foray into a more realistic model of sensory
processing in the mammalian brain was presented by Walter Freeman in 1975 in the
book already referred to, Mass Action in the Nervous System. Freeman chose to study
the olfactory system of the rabbit as a paradigm for sensory processing in vertebrates,
given the primacy of chemoreception and hence the likelihood that this had precedence
in our early evolutionary development.
The Olfactory System
Odorant receptors are responsive to specific odorants, and one estimates some
100,000 receptors per odorant, some fraction of which will be excited on each nasal
inhale. The action potentials of the sensory receptors are projected to the olfactory bulb,
which then directly reflects the variability of the original signal. Nevertheless, the
emergence of a stable spatial pattern of excitation is observed over the olfactory bulb,
one that is specific to the odorant being detected. Significantly, every neuron in the bulb
participates in this pattern, irrespective of whether it had received an input pulse
relevant to the odorant.
The original signal representing the odorant had short life expectancy within the
processing sequence. It lost its identity at the bulb, where the sensory neuron first encounters the brain. But the brain had already made the signal its own. It had established
a pattern of firing that represented the odorant uniquely and also persisted over time.
This periodic repetition of the firing pattern takes place at a characteristic frequency, by
analogy to the refresh cycle of dynamic RAMs. This frequency falls into the gamma
range of nominally 40 Hz (but it can range widely). Significantly, this rhythmic pattern
had to have been self-organized by the bulbar neurons themselves. The pattern is
synchronous over the entire bulb, even as it varies over time or from one inhale to the
next. The amplitude distribution of the gamma-band signal over the bulbar surface is the
unique identifier of the odorant. It is also unique to the particular rabbit, having arisen
out of the rabbit’s life experience. It is also subject to change with subsequent life
experience, on top of an intrinsic variability.
With each inhale, the olfactory system undergoes a state transition from an initial input-
dominated mode to the brain-dominated pattern characteristic of the odorant. In
between inhales, the system prevails in a state of high variability. Cortex is informed of
both the stable, recursive pattern as well as input-dependent signals, but the system
response is limited to the recursive pattern. What the brain pays attention to with regard
to olfaction is exclusively the pattern it has itself created out of the sparse and highly
variable input stream. This turns out to be a general property of our primary sensory
systems.
With this background, we return to consideration of the visual system. According to
Fiser et al. (2004), who tracked the visual system response pattern of ferrets through
their entire course of development from eye opening to maturity, visual cortical neurons
fire with a large degree of variability, even with presentation of a stable image. However,
on the larger scale, a rather stable spatiotemporal pattern may be discerned. At all
ages, the observed correlations in neuronal firings were only slightly affected by visual
stimulation. Once again one has the impression that input signals serve to modulate
established patterns that integrate over the variations in the signal stream, thus
rendering a stable pattern that informs our experience of the visual field.
For our present purposes, the salient observation is that this recursive pattern occurs at
a nominally 40-Hz repetition rate. It is temporally coherent with the incoming signal
stream, and is co-located in the same cortical real estate. Significantly, visual
processing cannot be merely a matter of coming to terms with incoming information as it
happens. Because of signal transport and processing delays, relevant visual experience
may well be shaped after a critical event such as watching a fastball from the batter’s
box. The brain manages to give us the experience of living in real time despite such
processing delays, and batters do manage to hit fastballs. This means that the brain is
actually organizing a prediction model on the basis of the meager visual information
stream that is available to it. In order for us to live successfully in real time, the brain has
to anticipate the likely trajectory of events.
In sum, the incoming information shapes the visual experience that the brain organizes
cumulatively on the basis of the flow of input, combined with expectancy factors for the
likely scope of subsequent inputs, predictions for the probable trajectory of events in the
scene, and projections forward of our own motor responses. Visual processing can
therefore only be understood as a system response in which the brain itself is the
principal generator of the scene we get to observe, placed within the context of the
unfolding pageantry of our lives. Further, the arguments that have just been made also
suffice to make the case that our vaunted executive control system cannot be playing
much of a role in this process because of the delays involved and because of the
distributed nature of the processing involved. Top-down control is not an option except
at the margins. Visual processing must be actualized and governed by control schema
that are largely self-organized and largely buffered from explicit top-down control.
On the Sense of Hearing and on our Sense of Place
Given the centrality of our concern with frequency-based organization of brain function,
we cannot overlook the sense of hearing. This too has an early evolutionary origin, and
it is the sensory modality that gives us our most immediate awareness of the
environment. The time delay from brainstem to cortex is a mere millisecond or so. At
low audio frequencies, up to about 300 Hz, frequency is mapped as actual waveforms,
and at higher frequencies, frequency is mapped tonotopically in primary auditory cortex.
Our exquisite frequency discrimination bears testimony to the brain’s ability to organize
fine distinctions. This makes it easier to accept that the brain takes advantage of such a
capability in other ways, and that may even be directly relevant to neurofeedback.
The sense of hearing also exhibits the brain’s limiting performance in terms of temporal
discrimination. Determination of the direction of sound requires a comparison of relative
arrival time, or equivalently, relative phase of the auditory signal at the two ears. This
comparison is done in the digital domain in the midbrain, and the brain has been shown
capable of discriminating time differences of less than a millisecond—a time interval that
is smaller than the width of an action potential.
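The midbrain comparison described here is commonly modeled as a cross-correlation between the two ear signals. The sketch below, with an illustrative sample rate, tone frequency, and an assumed 300-microsecond delay, recovers a sub-millisecond interaural time difference from the lag of the correlation peak.

```python
import numpy as np

fs = 96_000                         # sample rate in Hz (illustrative)
t = np.arange(0, 0.05, 1 / fs)      # 50 ms of signal
tone = np.sin(2 * np.pi * 250 * t)  # low-frequency tone at 250 Hz

true_itd_s = 300e-6                 # 300 microseconds: well under 1 ms
shift = int(round(true_itd_s * fs))

left = tone
right = np.roll(tone, shift)        # the far ear receives the sound later

# Find the lag that best aligns the two ear signals.
lags = np.arange(-shift * 3, shift * 3 + 1)
corr = [np.dot(left, np.roll(right, -l)) for l in lags]
est_itd_s = lags[int(np.argmax(corr))] / fs

print(f"estimated ITD: {est_itd_s * 1e6:.0f} microseconds")
```

The correlation peak pins down the delay to within one sample here; the brain achieves comparable temporal precision with action potentials that are themselves about a millisecond wide, which is what makes the feat remarkable.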
Precise timing is also an issue in the maintenance of our sense of space. Take, for
example, the case of a runner on an oval track. A mental representation of where the
person is on the track is maintained by place cells in the hippocampus. A basic theta
rhythm sets the pace, and the place cells map the space in terms of the phase within
the theta periodicity at which they fire. It is as if the brain occupies two worlds, and is
equally at home in the frequency domain and in the time domain. Given the tight timing
constraints, and the large-scale organization involved here, it is perhaps no surprise that
the sense of where we are in space is the first to be lost in Alzheimer’s dementia. By the
same token, one can understand the visual disturbances and the loss of the sense of
smell after minor head injury in terms of disturbance of the frequency-based
organization of sensory perception, a mechanism that need not presuppose any
structural injury.
Organization of the Motor Response
Whereas the brain retains its greatest secret with respect to the visual system, namely how we actually come to ‘see’ the visual imagery that we do when our account extends only to the behavior of neural assemblies and their firing patterns, with the motor system we at least get to see the output. The organization of motor responses is a significant preoccupation of the brain. Charles
Sherrington once put it this way: “The motor act is the cradle of the mind.” Johann
Wolfgang von Goethe put it poetically: “In the beginning was the act.” (“Am Anfang war
die Tat.”) With respect to the allocation of cortical resources, output refers to motor
function almost exclusively. The entire frontal lobe can be viewed in terms of the
hierarchy of motor control. If one includes the somatosensory system, which is so
strongly interwoven with the somatomotor system, the dominance of motor control in
cortical real estate is impressive.
In fact, it is helpful in this context to recognize the primacy of the somatosensory system
among our primary sensory modalities. Sigmund Freud recognized that “The self is first
and foremost a body self.” And Oliver Sacks has shed light on the catastrophic
consequences of the loss of somatosensory awareness, which entails a substantial loss
of the sense of self. The somatosensory system is the only one that presents us with
such a hazard. This condition tends to afflict highly intellectual people preferentially, and
it can subside as readily as it arrived. A functional mechanism is therefore indicated.
This places it in the class of conditions that are expected to respond to a functionally
based intervention such as neurofeedback.
This part of the discussion is particularly relevant to neurofeedback, because that
process is best understood by analogy to the learning of a motor skill. Historically there
has been very little interest among psychologists in motor skill learning. In the
behaviorist era the attempt was made to explain motor skill learning in terms of
stimulus-response models, and that effort was not very productive. Similarly, operant
conditioning models have served as the principal explanatory model of neurofeedback,
and we will see that that is not entirely satisfactory either.
On the other hand, thinking of the problem in terms of a conventional control loop, one
in which comparison is made between the desired and the present state, and an error
correction scheme is mobilized, turns out to be similarly unavailing as a comprehensive
description. The execution of a golf swing is best accomplished without interference
from the executive control system. The playing of Rachmaninoff’s Second Piano
Concerto offers no opportunity for error correction on the relevant timescales. On the
other hand, this error-correction model does have its zone of applicability, and it will also
be relevant to our understanding of neurofeedback.
The two counter-examples offered illustrate the importance of skill learning. What is
being learned is a sequential process that involves mostly ‘local’ control, with little input
from the executive control networks. Pianists may still prefer to have notes in front of
them during a performance, but they serve a purpose of general cueing rather than
moment-to-moment instructions. In the above examples, one can readily talk in terms of
training to mastery, because the skill is exercised in relative isolation, i.e. without
environmental interference.
More generally, skill learning must involve acquisition of a response capability that
integrates incoming information with motor output. Life is more like tennis than golf.
Once again, however, the pace of life allows for little more than general oversight by
executive control functions. As in the case of sensory processing, the brain is called
upon to organize a system response that is minimally dependent on top-level steering.
The Small-World Model of the Cerebrum
The limited role of top-level executive control in motor activity does not imply the
absence of hierarchical control of movement. It’s just that the most relevant hierarchy
begins at the brainstem rather than in our pre-frontal cortex. Control is implemented
through a hierarchical network structure with ‘small-world’ character. That is to say,
there is sufficient global inter-connectivity to draw the whole network into intimate,
efficient communication. Once such interconnectivity exists, there is an ineluctable
tendency toward the emergence of hierarchy. This tendency is exploited wherever
possible. On the large scale, hierarchy emerges over the course of evolutionary
development and becomes obvious in the cerebral architecture. This gives brain
function the capacity for unitary operation. The gross hierarchy of control is nicely
illustrated in a study of structural connectivity in the macaque monkey by Modha and
Singh (2010). At the very highest levels of connectivity between hubs, we have top-
down control emanating from the brainstem. This is seen in Figure 3. Even at the next
lower level of connectivity, all of the control linkages are still top-down, and the
appearance is still very modular. This is shown in Figure 4. One has to drop to yet lower
levels of connectivity in order to see cortical-cortical and other linkages enter the picture
to facilitate global integration and feedback to the brainstem.
Figure 3. Linkages with the highest levels of connectivity between regions are illustrated here for
the brain of a macaque monkey. The brainstem is seen as the highest level of the regulatory
hierarchy. The next level of connectivity includes cortex, the diencephalon (thalamus and
hypothalamus), and the basal ganglia. Primary cortical linkages are to the temporal lobe, to
frontal cortex, to parietal cortex, to the cingulate gyrus, and to the insula.
However, small-world character of the networks also prevails within the brainstem itself,
within the thalamus, and within cortex itself. Within cortex, hierarchy emerges over the
course of development as neurons differentiate in terms of connectivity (Hagmann,
2008). The driver here is the principle of preferential attachment (“the rich get richer”),
which drives connectivity to a broad, scale-free distribution (Barabasi, 2002). The two
salient characteristics of small-world models are high local connectivity and efficient
long-distance communication. It is in cortex that the small-world model is taken to
extremes. The dendritic tree (of a thousand to ten thousand dendritic branches, and
between 5,000 and 60,000 synaptic inputs), together with large-scale axonal branchings,
assures high levels of local connectivity. At the same time, global connectivity is
maximized by having every pyramidal cell also participate in distal communication. The
result is that essentially every pyramidal cell is accessible to every other within three
synaptic linkages, a simply staggering level of large-scale interconnectivity. In fact, it
can be readily argued that this number represents a biological limit. It cannot be lower
than three. Large-scale connectivity has been taken to its limit in cortex! Since this
arrangement is energetically costly, local and global connectivity must have been key
drivers in evolutionary development.
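The “rich get richer” growth rule invoked here, Barabasi's preferential attachment, is easy to simulate. In the sketch below, each new node links to existing nodes with probability proportional to their current degree; the network size and the two links per new node are arbitrary choices for illustration.

```python
import random

random.seed(1)

def preferential_attachment(n_nodes, links_per_node=2):
    """Grow a network where new nodes attach preferentially to high-degree nodes."""
    degree = [1, 1]        # start from two connected nodes
    targets = [0, 1]       # each node appears in this list 'degree' times,
                           # so random.choice picks degree-proportionally
    for new in range(2, n_nodes):
        chosen = set()
        while len(chosen) < links_per_node:
            chosen.add(random.choice(targets))
        degree.append(0)
        for old in chosen:
            degree[old] += 1
            degree[new] += 1
            targets += [old, new]
    return degree

deg = preferential_attachment(2000)
print("max degree:   ", max(deg))              # a few heavily connected hubs
print("median degree:", sorted(deg)[1000])     # most nodes remain sparse
```

The broad, heavy-tailed degree distribution that emerges, a handful of hubs against a majority of sparsely connected nodes, is the statistical signature of the hierarchical, small-world organization described in this section.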
Figure 4. At the next lower level of connectivity illustrated here, all of the linkages shown still
represent top-down regulatory pathways traceable back to the brainstem. The connectivity tree
still appears very modular. Interestingly, the temporal lobe commands as much real estate on
this plot as the thalamus and the frontal lobe. One has to drop down to yet a lower level of
connectivity to bring inter-regional connections and linkages back to the brainstem into the
picture.
With this as a background, the question to be asked is whether the above hierarchy of
control manifests itself in motor function in particular. If so, then that may be considered
paradigmatic for brain function in general. One fruitful area of inquiry is into the
coordination of movements under conditions where the two hands are asked to manage
tasks that differ in level of difficulty, under time pressure. It is found that the brain
choreographs the two independent activities so that both objectives are attained at
about the same time (Kelso, 1982). This all occurs beneath the level of awareness, so it
is certainly not the outcome of any intention. This must be strictly a matter of the brain
optimizing its own performance. By arranging the trajectories to follow a common time
course to completion, the brain is limiting the degrees of freedom that it has to manage
independently. Significantly, this simplification falls into the domain of timing.
This is such a foundational concept that perhaps another illustration is in order. Children
often challenge each other to simultaneously pat themselves on the head with one hand
while making a rotating motion over the stomach with the other. This is difficult to do
simultaneously right out of the starting gate. When the task is learned, however, it will
be noted that most likely both motions are embedded in a common periodicity.
Specifically, an integral number of pats on the head will go with a single rotation of the
other hand. An over-arching order is imposed into which both activities can be enfolded
and jointly optimized. This degree of order emerges, however, out of a self-organizing
process, without any top-down guidance.
Another useful probe of the underpinnings of motor control is to challenge performance
near its limits. J.A. Scott Kelso famously performed a simple experiment that illuminated
yet another core concept. Here’s the challenge: Place both hands before you and
extend both index fingers upward, folding the other fingers. Then move the index fingers
toward and away from each other synchronously at a comfortable frequency. Imagine a
metronome synchronized to the frequency. Now imagine a malevolent agent gradually
increasing the frequency of the metronome while you try to keep up with the identical,
anti-symmetric movement. It will not be long before the fingers undergo a natural
transition to moving in parallel rather than retaining the mirror image pattern. The brain
will have migrated from one pattern to the other via a phase transition, which took it
from one ‘basin of attraction,’ its preferred operating space, to one that was easier to
implement and thus more suitable to the higher frequencies.
Just to add to the mystery of how the brain slips so comfortably into symmetric and anti-
symmetric movement, the very same results are obtained when this experiment is
performed with someone whose inter-hemispheric connections have been severed to
eliminate seizures. The basic principles that govern the self-organization of patterns of
brain function override even major hardware constraints. All of this transpires, of course,
beneath the level of voluntary control and even of awareness. One must conclude that
movement is organized according to basic patterns that arise out of the brain’s own
optimization schemes in the domain of timing and frequency.
Synergetics: The Scientific Principles Underlying Self-Organization of
Natural Systems
This brings us then, finally, to the core issue of the principles underlying self-
organization. For this discussion, we turn to a physicist, Hermann Haken, who has been
engaged on the topic since the 1960s, when he concerned himself with the properties
of the laser that had just been invented. The core principles, then, are already on view
in inanimate systems. In the laser, atoms in a particular excited state can be stimulated
to emit a photon, with the result that both the stimulating and emitted photons now
possess a common phase. By this process, a large number of photons can be brought
to a state of common phase and, in the case of this quantum-mechanical system, to a
common identity. One such photon is no longer distinguishable from another. They have
all effectively become enslaved, each to the others. So we have slavery, yet we have no
master. This theory, along with its elaboration into living self-organizing systems, is
called Synergetics (Haken & Stadler, 2000).
In the two-finger experiment just described, what one finger is doing is highly predictive
of what the other one is doing, even if one cannot see it. Phrased mathematically, the
phase relationship between the two fingers is very stable, undergoing only small
fluctuations, within the two comfort zones of the low frequency and the high. (This is
similar to what is observed in the laser. All participating photons have identical phase.)
This relatively stable measure can, therefore, be used to specify the degree of order in
the system, the degree of similarity among the elements of the system. As such, it is
termed an order parameter, which is simply a measure of the degree of prevailing order
in the system.
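This order parameter can be computed directly from recorded signals. The sketch below is a toy simulation, not data from the finger experiment: the 1.5 Hz tapping rate, the small random phase drift, and all other parameters are invented for illustration. It extracts the instantaneous relative phase of two oscillations via the Hilbert transform and summarizes its stability as a mean resultant length between 0 and 1.

```python
import numpy as np
from scipy.signal import hilbert

def phase_order(x, y):
    """Mean resultant length of the relative phase between two oscillatory
    signals: 1.0 means a perfectly stable phase relationship, 0 means none."""
    phi = np.angle(hilbert(x) * np.conj(hilbert(y)))  # instantaneous phase difference
    return np.abs(np.mean(np.exp(1j * phi)))

fs = 100.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)

# Two 'fingers' tapping at 1.5 Hz with a small random walk in relative phase
drift = np.cumsum(rng.normal(0.0, 0.002, t.size))
x = np.sin(2 * np.pi * 1.5 * t)
y = np.sin(2 * np.pi * 1.5 * t + 0.2 + drift)

order = phase_order(x, y)   # close to 1: the coordination is phase-locked
```

A partner oscillating at a slightly different frequency would instead produce a relative phase that rotates continuously, driving the order parameter toward zero.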
In the finger experiment, we are witness to the behavior of the system under the forcing
function of the metronome. The periodicity of the metronome is termed a control
parameter in the parlance of synergetics. The experiment allows us to say that the order
parameter is stable over broad ranges of the control parameter, with the exception of
the transition zone between them, the region of phase change. All of the stimulation
procedures discussed in this book are frequency based, and within the framework of
synergetics these frequencies would be regarded as control parameters with the
objective of enslaving the neural populations that are available for such recruitment.
The Brain as a Non-Equilibrium System in a State of Criticality
The behavioral invariance and stability demonstrated by the finger experiment stand in
stark contrast to the dynamics that are displayed in the real-time EEG, irrespective of
whether we sample the EEG at sensorimotor cortex or anywhere else. One observes a
densely packed array of brief spindle-burst activity that covers the entire spectral range
of the EEG, as shown in Figure 1. Persistence varies inversely with frequency, with
lower-frequency spindle-bursts lingering longer than higher-frequency ones. It is not
clear how behavioral stability emerges out of such apparent cacophony. Matters are
even worse than they appear.
There is yet one more key organizing principle to be discussed before we try to fit both
neurofeedback and stimulation-based methods into this framework. It is that the brain
operates far from equilibrium under all circumstances. There is, in fact, no such thing as
a resting state as far as the brain is concerned. The term is in common usage, to be
sure, but it refers to yet another highly active state: the state of mere non-engagement.
Even worse, the brain is driven to the very edge of microscopic instability. What is at
issue here is a bounded instability rather than a runaway condition such as a seizure.
This state is difficult to describe, but the general principles operative here are exhibited
in the sandpile (Bak, 1997). As one adds grains of sand to the top of the sandpile, the
entire conical surface will gradually approach what is called the angle of repose. Adding
sand beyond that point will trigger the formation of small avalanches that will restore the
surface to the quasi-stable angle of repose. If sand continues to be added, the sandpile
continues to ‘live’ at the edge of stability thus defined. A similar situation prevails for the
brain.
Cortex prevails in a state that is perpetually ready for macroscopic state change (Plenz
and Niebur, 2014). An analogy exists between this process and phase transitions in
inanimate systems. When such systems are poised at the threshold of a phase
transition, they are deemed to be in a critical state. In physical systems, such critical
states occupy a very small part of state space. By contrast, that is where the waking
brain lives perpetually, thus occupying a much larger state space. What is mere
happenstance in the case of the sandpile is under active management in cortex, so that
those phase transitions that do occur are ones occasioned by functional demands, of
either internal or external origin. In physical systems, the phase transition is often
between ordered and disordered states. In the brain, the phase transition is between
one state of local order and another.
The brain faces the complementary challenges of maintaining its own macroscopic
stability while also remaining poised for nearly instantaneous state change locally as
circumstances may demand. To solve this problem, the brain takes full advantage of the
entire frequency spectrum. It arranges for stability and continuity of state at low
frequency, and for agile responsiveness at intermediate frequencies. Transient cognitive
activity is managed at yet higher frequencies. In this manner, both stability and agility of
responsiveness can be accommodated within this frequency-based schema.
The state transitions that are managed by means of the intermediate frequency range
(below the gamma range of frequency) resemble phase transitions, as already
indicated. They are macroscopic shifts that rapidly encompass the entire neuronal pool
that is susceptible to such a shift. Large cortical regions suddenly shift from one pattern
of functioning to another very different pattern, and the transition zone is very brief, just
milliseconds. The stable period between transitions can be fairly brief as well, i.e.
fractions of a second. In the limit, brain function is organized in terms of a sequence of
four microstates that toggle between giving priority to different brain regions (Lehmann,
1987).
Once we have taken things apart like this, we also have to put them together again.
Every initiative by the brain involves all of the frequencies, each playing its assigned
role but ultimately being part of one orchestration. It is useful to think of this in terms of a
kind of nesting, in which the lower frequencies set the context for the higher ones. The
problem is that once one has that idea in one’s head, it is easy to think of the higher
frequency activity as being largely prescribed by the lower (i.e., that one is ‘locked’ to
the other), when in fact it is typically more a matter of shifting probabilities of
occurrence. It is perhaps more realistic to view the lower frequencies as context-setting,
as being permissive rather than prescriptive. It is the demands of life that are
prescriptive for the brain’s response, and it is the brain’s burden to be poised for
whatever response is called for. Under other conditions, the higher-frequency activity is
indeed phase-locked to the lower.
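The degree of such nesting can be quantified with standard phase-amplitude coupling measures. The following sketch applies a mean-vector-length estimate (in the spirit of the measure popularized by Canolty and colleagues) to a synthetic signal; the 6 Hz "theta" carrier, 40 Hz "gamma" burst, and all filter settings are invented for illustration. A gamma envelope that rides on the theta phase yields a high coupling value, while an unmodulated control does not.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def bandpass(x, lo, hi, fs):
    """2nd-order Butterworth band-pass, applied causally."""
    sos = butter(2, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfilt(sos, x)

def pac_mvl(x, fs, phase_band, amp_band):
    """Mean vector length: strength of locking between the amplitude
    in amp_band and the phase in phase_band (0 = no coupling)."""
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    return np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)

fs = 500.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(1)
theta = np.sin(2 * np.pi * 6 * t)

# Gamma whose amplitude is modulated by theta phase (nested), vs. constant gamma
nested = theta + 0.3 * (1 + 0.8 * theta) * np.sin(2 * np.pi * 40 * t)
flat = theta + 0.3 * np.sin(2 * np.pi * 40 * t)
noise = 0.05 * rng.standard_normal(t.size)

mvl_nested = pac_mvl(nested + noise, fs, (4, 8), (35, 45))
mvl_flat = pac_mvl(flat + noise, fs, (4, 8), (35, 45))
```

The contrast between the two values illustrates the distinction drawn above: coupling is a graded, measurable relationship, not an all-or-nothing lock.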
A Summary Perspective
All the above has laid a basis for the discussion of neurofeedback. The thesis has been
presented that the mass action of neural assemblies is subject to tight constraints in the
domain of timing and frequency. As such, they constitute critical failure modes for brain
regulation. As these are dynamically organized, they should be susceptible to
systematic remediation with reinforcement-based techniques, or simply through skill-
learning by way of self-observation. Most will be soft failures rather than catastrophic,
and as such, available for incremental, progressive improvement.
Neural networks are spatially organized into hierarchical configurations with overall
small-world character. At every level, from brainstem to cortex, the distributions are also
scale-free. All elements are poised for communication with all others, and as such they
abide in a state of perpetual mutual engagement.
The dynamics of group neuronal activity are also observed to be broadly distributed, i.e.
scale-free. They drive the brain toward a state of self-organized criticality, one that is
poised for rapid, macroscopic re-configuration. They are organized in terms of a broad
frequency hierarchy that directly parallels the spatial hierarchy, with low frequencies
more globally organized, and high frequencies more locally coordinated. The
combination renders the brain maximally adaptive to challenges, and among other
things, that opens the door for productive and efficient neurofeedback.
Neurofeedback
The above discussion sets the table for how we should regard the potential contribution
of neurofeedback to the enhancement of the brain’s functional competences, and to
their recovery from dysfunction. Our tool is frequency-based in that it depends
principally on the trainee’s response to information derived from a narrow portion of the
frequency spectrum. This allows one to target different parts of the EEG spectrum for
particular objectives, and it makes it appropriate to discuss the issues to a large extent
within the framework of the frequency spectrum.
The principal applications of neurofeedback to date have been to the matter of state
regulation. However, the issue has not always been framed in this way, and it has not
always even been apparent. In the beginning, there was Joe Kamiya’s alpha training,
and the driving objective here was its relationship to psychological states and the
opening it provided to enhanced states of awareness. Sterman’s sensorimotor rhythm
(SMR) training was aiming narrowly toward the control of motor seizures at the outset,
and Lubar’s elaboration of that method initially targeted the management of
hyperkinesis. There was very little engagement with issues of arousal regulation per se
in the early days, as arousal-based models were falling into disfavor.
What engaged attention in those early days were those functions in which alpha-band
activity and SMR-band activity were explicitly involved. Their implicit involvement in
matters of core state regulation remained in the background. It is difficult for the tutored
mind to appreciate just how rigidly these lines were being drawn at the time. An
anecdote might be helpful here. When we were first observing the beneficial effects of
SMR training on anxiety states in 1990, Barry Sterman was non-plussed. “But you
should do temperature training for that,” he declared. Autonomic regulation was a
matter for traditional biofeedback, not neurofeedback. Similarly, the suggestion that beta
training might be helpful for depression was categorically rejected.
And when the suggestion was made that SMR-beta training was very helpful for the
elimination of PMS, people in the field were simply apoplectic. That wasn’t even a
recognized disorder! Critics could hardly picture a better way to get neurofeedback
dismissed from learned discussion. And yet PMS was really the paradigm for disorders
of dysregulation, encompassing a whole host of symptoms with a broad range of
symptom presentations. They were all functional in character, and therefore susceptible
to a functional remedy.
The field has largely overcome its blinkered history. The main target of neurofeedback
is the quality of brain self-regulation in general rather than specific disorders or
dysfunctions. The neurofeedback challenge typically evokes a more general re-ordering
of network relations than we felt entitled to anticipate at the time. Typically, it is these
general effects, rather than the specific ones, that are of primary interest in clinical work.
The generality and universality of the impact of neurofeedback tended to be obscured
for two reasons. First, there was the obligation on the scientist to be as specific as
possible with respect to findings, and secondly there was the problem that core
regulatory function is not readily quantifiable.
Since neurofeedback has thus far been a clinically driven field, it has not been
emphasized sufficiently that it is also an excellent probe of brain function, and more
specifically a probe into its frequency-based organization. Investigating brain function
typically involves the comparison of a challenge state with a baseline state. This is the
case for evoked-potential work as well as for the new era of brain imaging (PET,
SPECT, and now fMRI). Alternatively, one evaluates performance-related issues, as in
tests of reaction time or of cognitive function.
If we regard neurofeedback in its role as a probe of brain function, the trainee’s brain is
effectively serving in the role of detector, and the salient observables are the
physiological shifts that can be noted in the trainee rather than the changes that may be
seen in the signal. Since the brain is observing a correlate of its own activity, it comes to
the task with an enormous advantage vis-a-vis a naive external observer—such as a
neuroscientist, for example. The brain is performing a recognition task rather than a
detection task. As soon as such recognition occurs, the brain is navigating on familiar
ground. And just as one can allow the brain to play the role of signal detection in the
research on brain functional organization, one can allow the brain to inform us as to how
it is best trained. Just as the French farmer uses a pig to find the truffles, we can let the
brain lead us to its own most productive training procedures. This is largely a matter of
skilled clinical observation.
Single-session effects are routinely seen in neurofeedback, and have in fact been
documented by now using measurements of evoked potentials (Hill, 2013), of the
contingent negative variation (CNV) (Magana et al., 2016), and of functional
connectivity as revealed in fMRI data (unpublished). The effects of single sessions on
the state of the system have been apparent since the early days of the field. Indeed
many published studies of biofeedback were based on results obtained after a mere
one to three sessions.
Once clinical objectives of neurofeedback became paramount, as occurred with SMR-
beta training, the emphasis shifted toward learned control of ‘the behavior,’ in the
lexicon of operant conditioning, and talk of quick effects was dismissed in a general
disparagement of ‘over-claiming.’ More particularly, the mere induction of state shifts
was not seen as germane to the real objectives of the training (whereas state shifts had
been of primary concern in the prior initiatives in Alpha- and Theta-band training). Once
the issue of induced state shifts was dismissed from the discussion, it became very
difficult to re-introduce it. In fact, state shifts can be achieved within a matter of minutes,
and such state shifts can be used to guide the training to its most propitious outcome.
Cumulatively, these observations also illuminate the larger issue of how state regulation
is organized.
Once the discussion is focused on the core issue of state regulation, one must have a
schema to organize the findings. When the brain is regarded in its role as a control
system, the first priority is to assure its own unconditional stability, as argued earlier. As
it happens, Sterman’s SMR training was directly relevant to that core objective in its
concern with seizures, but matters were not discussed in those terms at the time.
Seizures were seen narrowly in terms of a focus rather than broadly in terms of brain
stability.
The second tier of the regulatory hierarchy is the refined control of state with respect to
arousal, affect, autonomic function, and interoception, the self-monitoring of the state
of the body. The objective here is the maintenance of homeodynamic equilibrium.
Autonomic regulation subsumes the brain’s regulation of other bodily systems.
The tertiary objective in the hierarchy is regulating the brain’s engagement with the
outside world, the domain of executive function. Lubar’s SMR-beta training explicitly
trained attentional faculties while implicitly serving the purpose of arousal regulation
(and the training of vigilance). Independently of the hierarchy of regulation, one is also
concerned with containing behavioral disinhibition, as well as with rolling back learned
behaviors such as addictions, acquired fears and phobias, and other such specific
issues.
Our own work in this field began in 1985 with the procedure pioneered by Sterman and
first clinically deployed by Ayers. Sterman’s sleep research involving cats had firmly
established operant conditioning as the scientific model for EEG biofeedback. The cats
had learned to produce SMR spindle bursts in greater abundance with the help of
conditional food reward, and they benefited in terms of resistance to chemically induced
seizures in consequence. The very first experiment was both blinded and fully
controlled, by pure happenstance. And there was no placebo effect: they were cats,
after all. There was nothing to connect their experience of the training in a sleep study
with exposure to chemically induced seizures some months later in an entirely separate
experiment. It has even been said, tongue-in-cheek, that SMR-training for the
management of epilepsy came into the world by immaculate conception. For a review,
see Egner and Sterman (2006).
The same method used in cats was introduced to human subjects insofar as that was
possible. The problem was that the large SMR spindle bursts that became apparent in
cat sensorimotor cortex during resting-state conditions were not replicated in the human
waking EEG. In the human EEG, we were faced with a smooth, well-behaved,
continuous distribution of amplitudes over the low-beta frequency range. In line with the
operant conditioning model, Sterman chose to set the threshold relatively high and
continued to concern himself only with the threshold-crossing events. With such a high
threshold, the rewards were sparse, as mandated in an operant conditioning design.
Ayers was the first to apply Sterman’s method to a variety of clinical conditions.
Following Ayers, our first instrument design merely computerized Sterman’s
experimental design. The approach is illustrated in Figure 5. The training signal is
extracted from the raw signal with a filter of 3-Hz bandwidth. It is rectified and
smoothed, and then a threshold is applied. Threshold crossings are signaled with a
beep, and the beep is repeated (at half-second intervals) for as long as that status is
maintained.

Figure 5. Conventional frequency-band neurofeedback is illustrated here. The top trace is the
raw EEG signal in 0.5-30 Hz bandwidth. The narrow-band filtered signal (3-Hz bandwidth,
infinite impulse response filter, with 2nd-order roll-off) is shown in the second trace. The third
trace shows the rectified signal, which is then smoothed with a 0.5 Hz time constant to yield the
fourth trace. A threshold is applied to the fourth trace to govern the discrete rewards.
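In signal-processing terms, the pipeline of Figure 5 is a band-pass filter, a rectifier, a smoother, and a comparator. The following is a minimal reconstruction of that chain, not the original instrument's code; the sampling rate, center frequency, smoothing corner, and threshold value are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def smr_pipeline(eeg, fs=256.0, center=13.5, bw=3.0, smooth_hz=0.5, thresh=2.0):
    """Narrow-band amplitude training signal, after Figure 5.
    Returns the smoothed envelope and the boolean reward condition."""
    lo, hi = center - bw / 2.0, center + bw / 2.0
    sos = butter(2, [lo, hi], btype="bandpass", fs=fs, output="sos")
    band = sosfilt(sos, eeg)                   # trace 2: 3-Hz-wide filtered signal
    rect = np.abs(band)                        # trace 3: rectified
    alpha = 1.0 - np.exp(-2.0 * np.pi * smooth_hz / fs)
    env = np.empty_like(rect)                  # trace 4: first-order smoothing
    acc = 0.0
    for i, x in enumerate(rect):
        acc += alpha * (x - acc)
        env[i] = acc
    return env, env > thresh                   # threshold crossings gate the beeps
```

A beep generator would then fire when the reward condition becomes true and repeat at half-second intervals for as long as it holds.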
Through her clinical work, Ayers found it helpful to set the threshold lower so that
rewards would be much more plentiful, and we adopted that approach as well. With
rewards no longer rare, the moorings were being loosened on Sterman’s operant
conditioning paradigm. The training was no longer event-focused, but rather had
become state-focused. There would be runs of beeps followed by intervals of no beeps,
effectively bringing lower-frequency modulatory influences into the picture. The
thresholding strategy had been shifted for the tactical purpose of enhancing
engagement with the training task, but in fact the very nature of the process had been
changed in consequence. It was no longer the standard Skinnerian operant conditioning
procedure.
Our approach further differed from Sterman’s in that we opted to display the entire real-
time behavior of the SMR-band to the trainee, in the expectation that that would
promote the brain’s engagement with the process generally. This had become possible
by virtue of computerization of the procedure, which we had accomplished by 1987. The
brain derived far more information from the signal dynamics than we expected. This
turned out to be the main event, with the threshold crossings a mere grace note. This
consigned the operant conditioning aspect of the design, the threshold crossings, to an
even further diminished role in the entire procedure.
With this much more dynamic approach to feedback, we observed an explicit
dependence of arousal level on target frequency in the SMR-beta range. State
sensitivity could be discriminated at the 0.5 Hz level (the frequency resolution we had
available at the time). This seems quite surprising on its face, in that the EEG does not
differ much over a 0.5 Hz range. This is shown in Figure 6, where traces are given for
three frequencies that differ by 0.5 Hz. It is indeed difficult to discern a difference in the
time domain waveforms. Frequency domain data are also shown for the three bands,
and these reveal at least a discernible difference. Parenthetically, it seems possible that
the brain processes these signals as frequency-domain signatures rather than as time-
domain waveforms, by analogy to our sense of hearing.
Figure 6. Spectral response for narrowband filter: The EEG time course and corresponding
spectral distribution are shown for three band-pass filters processing the identical signal with
center frequencies that differ by 0.5 Hz around 11 Hz. The time waveforms are difficult to
distinguish; the spectrals reveal a more readily observable difference. The brain that is
observing its own activity under such circumstances may very well respond differently to each of
these signals.
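The observation behind Figure 6 can be checked numerically: the magnitude responses of two 3-Hz-wide filters whose center frequencies differ by 0.5 Hz overlap heavily, yet their passbands remain measurably distinct. A sketch follows; the 2nd-order Butterworth design and 256 Hz sampling rate are assumptions for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfreqz

def band_response(center, bw=3.0, fs=256.0):
    """Magnitude response of a 2nd-order Butterworth band-pass filter."""
    sos = butter(2, [center - bw / 2, center + bw / 2],
                 btype="bandpass", fs=fs, output="sos")
    w, h = sosfreqz(sos, worN=8192, fs=fs)
    return w, np.abs(h)

w, h_lo = band_response(10.5)
_, h_mid = band_response(11.0)

# Normalized overlap of neighboring passbands: high, but not total
overlap = np.dot(h_lo, h_mid) / (np.linalg.norm(h_lo) * np.linalg.norm(h_mid))
# The response peaks sit roughly 0.5 Hz apart
peak_sep = abs(w[np.argmax(h_mid)] - w[np.argmax(h_lo)])
```

The high but imperfect overlap mirrors the figure: the time waveforms are nearly indistinguishable, while the spectral signatures differ by a small, consistent amount.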
The above discovery was actually somewhat fortuitous, as is often the case at the
scientific frontier, and it was also long in coming. For years, we had been trying to
establish the existence of a systematic difference between SMR and beta1 training, and
were never able to do so. A Ph.D. dissertation was even devoted to the topic (Thorpe,
1997). Others were similarly occupied, even many years later, also without systematic
success (Egner and Gruzelier, 2004). Once the coupling of arousal level and reward
frequency was established firmly, however, there was no way back to standard bands. It
became obligatory to tailor the training to each person to optimize the training.
The favored frequency was called the ORF, for Optimum Reward Frequency, or
alternatively Optimum Response Frequency. The process involved moving
incrementally to that frequency at which the person felt maximally calm, alert, and
euthymic during the session. Symptom relief might also be experienced. It became a
matter of maximizing the positive attributes and minimizing the severity of clinical
complaints, and that process typically converged on a single frequency, the ORF. The
process is illustrated in terms of a behavior surface in hyperspace in Figure 7. The
favored frequency then also consistently yielded the most propitious outcome for the
training. The search for the ORF might take a number of sessions, but once it was
identified, it was observed to remain rather stable over the course of training. Once the
optimization strategy was adopted, it also became clear that the training of brain
instabilities such as seizures, migraines, panic, vertigo, asthma, and Bipolar Disorder
was exquisitely sensitive to the choice of target frequency. Brain instabilities served as
our canaries that drove the agenda for the further optimization of the training. They also
presented the strongest argument for the optimization procedure itself.
Figure 7. Refining the reward strategy by optimizing the reward frequency: The behavior surface
is shown for the frequency dependence of two behavioral features, one that is being promoted
and one that is being ameliorated. Both the maximization and the minimization criteria are met
at a single frequency, the Optimum Response Frequency. Training under these optimum
conditions in state space maximizes the likelihood of a favorable outcome of the training.
The rest of the field did not follow our lead with regard to the individualization of reward
frequencies, and this was for understandable reasons. Sterman had good physiological
grounds for training the SMR-band of 12-15 Hz, to do so on the sensorimotor strip, and
to adopt referential montage for the purpose. His work was essentially pinned to that
protocol for the rest of his career. Lubar’s burden was to persuade an intransigent
mainstream, and the best battering ram was to pursue a single claim with a single
protocol. It was not helpful at that juncture to have neurofeedback promoted as a
panacea for nearly every ailment in the mental health universe.
Finally, there was nearly universal conviction that in order to get research results
accepted, one had to be working with fixed protocols. The upshot was that by the time
Egner and Gruzelier published in 2004 on the relative roles of SMR and beta1 bands,
we had already been operating according to the ORF schema for some four years.
There was yet one more explanation for the discovery that led to our independent
journey into adaptive training: We had gone back to bipolar montage, which had been
universally employed in the early research of Sterman and Lubar. This likewise went
against the grain of prevailing trends elsewhere within the field. From the early nineties
on, there was a move to adopt QEEG-based targeting, and this came to play a primary
role with respect to the inhibit aspect of neurofeedback protocols. With the attractions of
QEEG-based training beckoning, there had been a corresponding shift toward the
adoption of referential placement for neurofeedback, in the spirit of the reigning
localization hypothesis of neuropsychology.
Initially, we responded to the appeal of this as well. In time, when it became of interest
to move off the sensorimotor strip, we did so with a bipolar montage in order to keep
one foot planted on familiar turf, and we observed that bipolar montage was giving us
stronger effects. The brain found the relationship between two sites to be more salient
than the activity at a single site. The greater level of discernment also made the training
more frequency-specific, which then led to the identification of the extraordinary
frequency-specificity of the response.
The concept of the ORF implied that there was an underlying frequency-based
organization of brain function that was not necessarily apparent in the EEG. The
implications of this are potentially huge, but we all understand that large claims demand
good evidence. The only evidence that could be brought to bear in support of this
concept was self-report by the trainee. Could such an edifice be constructed on the
basis of mere subjective evidence? Skepticism was rampant. On the other hand, the
clinical evidence was compelling, with brain instabilities in particular. Whereas a
migraine might be expunged at one frequency, a nearby frequency might well evoke a
migraine aura. The reproducibility of such phenomena turns anecdotal findings into
evidence, and ultimately into publishable data. It is absurd for purists to argue that once
an anecdote, always an anecdote. On the contrary, the astute observations of patterns
of consistency among disparate data are the very essence of good science.
By 2004 we had extended the range of target frequencies to cover the entire EEG band
out to 40 Hz, our software limit. A substantial bias to the lower frequencies asserted
itself, however, and the range was gradually extended all the way down to 0-3 Hz with
our 3-Hz signal bandwidth, so that the lowest target frequency was 1.5 Hz. The clinical
strategy was to start the optimization procedure at 12-15 Hz, our traditional comfort
zone, and to move up or down as necessary. The distribution we observed in target
frequencies by 2005 is shown in Figure 8. Our original default protocol of 15-18 Hz
training had sunk into relative insignificance. The distribution was essentially flat below
the SMR range, but in fact, the highest peak was at the lowest frequency. This becomes
apparent when one imagines the data plotted in terms of one-Hz wide bins. Each of the
bars in the graph represents three such bins, with the exception of the lowest, for which
we have only one bin. Clearly the lowest frequency was somehow favored.
Figure 8. The distribution of reward frequencies that was observed in 2005 at the EEG Institute,
just prior to entry into the infra-low frequency regime. The most common single target
frequency was actually the lowest, 1.5 Hz, which becomes apparent when one imagines this
figure expressed in terms of one-Hz bins (referring to the center frequency). This was the
impetus for pursuing the further reduction in target frequency.
By 2006, the adoption of new software permitted the extension of the range to 0.05 Hz.
This was the first venture into the training of the tonic slow cortical potential using a
frequency-based approach. The clinical strategy of starting the optimization procedure
at 12-15 Hz remained the same. The expectation was that the pile-up that had
previously occurred at 1.5 Hz would distribute itself over the new range. Much to our
surprise, the lowest frequency was once again favored. In fact, it was much more
strongly favored than before, with about half the clients preferring the lowest frequency.
This is shown in Figure 9.

Figure 9. The distribution of target frequencies (ORFs) is shown for the first six-month period
after the threshold to the infra-low frequencies was breached. The lowest accessible frequency
of 0.05 Hz was by far the most prevalent. This trend was further confirmed in 2007, indicating a
need for the exploration of yet lower frequencies.
The venture into the infra-low frequency region required an entirely new approach to
training. The frequency was simply too low to permit threshold-based amplitude training,
where the amplitude refers to the envelope of the spindle-burst activity. Instead, the
trainee simply watches the unfolding low-frequency signal, which reflects the ebb and
flow of (differential!) cortical activation in its subtle fluctuations. One way or another, the
brain appears to have no difficulty recognizing its connection to the displayed signal,
and it responds accordingly. In fact, it does so with as much rapidity as at the higher
frequencies. That would appear to violate expectations based on signal processing
theory. How can one explain quick responses to slowly-varying signals? More
specifically, how can such a response be so frequency-specific when an external
observer would have to observe a good part of an entire period to be sure what the
frequency is?
The answer is the obvious one: The brain cannot be keying on the basic low-frequency
signal, which indeed is much too slow for feedback. Instead, it must be attending to the
subtle fluctuations in that signal that relate to its own real-time operations. The basic
rhythm being extracted by the filter software provides the context for what is being
observed. It is not itself the observable.
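This can be made concrete with a toy calculation: even a filter confined to the infra-low range begins responding to an input change within a single sample, although its full excursion takes minutes. In the sketch below, the 0.01 Hz cutoff, sampling rate, and step input are all invented for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 10.0                        # samples per second
n = 6000                         # ten minutes of signal
t = np.arange(n) / fs
x = np.zeros(n)
x[n // 2:] = 1.0                 # a step change at t = 300 s

# 2nd-order low-pass at 0.01 Hz: a full period at the cutoff lasts 100 s
sos = butter(2, 0.01, btype="lowpass", fs=fs, output="sos")
y = sosfilt(sos, x)

before = y[n // 2 - 1]           # exactly zero: nothing has happened yet
one_second = y[n // 2 + 10]      # 1 s after the step: tiny, but already moving
five_minutes = y[-1]             # 300 s later: approaching the full excursion
```

The filtered output thus carries sample-by-sample information about recent input changes long before a full cycle has elapsed, which is consistent with the claim that the trainee's brain keys on the fluctuations rather than on the slow rhythm itself.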
As for the exquisite frequency sensitivity of this process, a mere change in perspective
is required. Since the brain is both the agent and the observer of the unfolding signal, its
sensitivity to the fluctuations is greater than that of an outside observer. The brain is not
doing frequency detection. It is experiencing a process that is intrinsically highly
frequency-specific. The signal processing software invites the brain into engagement
with this signal. But even a slow rhythm must respond at the speed of life, and it is the
resulting modulations that engage the attention of the brain. Thus, for both reasons the
trend toward lower frequencies did not lead to slower response in the training. On the
contrary, by and large trainees were more responsive, and that responsiveness became
apparent even earlier in the training than before. With more clinical experience, it was
found that two-thirds of all clients preferred the lowest frequency. We obviously needed
to move even lower to provide a wider range of options.
By 2008 new software allowed us to go down to 0.01 Hz in target frequency, with finer
resolution. In short order, the peak in the distribution of ORFs moved down from 0.05 to
0.01 Hz. Quickly it transpired that once again two-thirds of the population preferred the
lowest frequency. Since the software allowed it, the target range was therefore
extended to 1 mHz in late 2008. Once again, some two-thirds of clients preferred the
new lowest frequency. With the bulk of the population now training at low frequency, the
starting frequency was changed to 1.5 Hz. Yet the distribution of ORFs still covered the
entire EEG spectrum. A number of people could not train at the low frequencies at all.
The distribution is shown in Figure 10.
In 2010 the range was further extended to 0.1 mHz, and after just a few months of
clinical experience, the starting point of 0.1 mHz was adopted for everyone. With this
fuller range available, very few clients remained who failed to optimize within the ILF
range. It was apparent that clients had several preferred training frequencies over the
spectrum, but of these, the lowest was always the most effectual.
In 2015 a final step was taken to extend the range even further to 0.01 mHz, and finally,
the distribution of ORFs does broaden somewhat, as we had been expecting all along.
On the other hand, the lowest frequency remains the dominant frequency in the
distribution. With each downward step in range, the peak observed previously would be
obliterated as trainees migrated toward the new low frequency. Those whose training
optimized at ORFs other than the lowest were not affected.

Figure 10. The distribution of target frequencies is shown for a four-month period in 2008 when
the software limit had been extended to 0.01 Hz, along with the distribution obtained in the
subsequent two-month period in which the software limit had been extended to 0.001 Hz, or one
milli-Hertz. The distribution altered shape, and the pattern of preference for the lowest available
frequency was sustained.
It appears that there are two client populations. There are those whose state of
dysregulation is dominated by brain instabilities, and there are those who primarily need
calming of their agitated states. The former require very specific target frequencies, and
the latter appear to gravitate toward the lowest target frequency that the software
allows.
The Frequency Rules
Over the entire trajectory of development of this protocol, the consistent observation
was <