• About
    Research items: 102
    Reads: 10,694
    Citations: 963
    Research Experience
    Dec 2017
    Research Associate
    Brandenburg University of Technology Cottbus - Senftenberg · Institute of Electronics and Information Technology
    Cottbus, Germany
    Oct 2017 - Nov 2017
    Visiting Fellow
    Fachhochschule Ansbach · Department of Biomedical Engineering
    Germany
    Sep 2017 - Sep 2017
    Visiting Fellow
    Basque Center for Applied Mathematics · Department of Applied Mathematics
    Spain
    Education
    Dec 1995 - Jul 2000
    Universität Potsdam
    Physics, Cognitive Neuroscience, Linguistics
    Sep 1987 - Feb 1995
    Universität Hamburg
    Physics and Philosophy
    Followers (440)
    H Chris Ransford
    Florian Hantschel
    Marc Codron
    Elio Conte
    Stefan Müller
    He Wang
    Matthias Wolff
    Andrea Calilhanna
    Yidong Liao
    Angus McCoss
    Following (768)
    H Chris Ransford
    Karol Zyczkowski
    Lola L. Cuddy
    Si Wu
    K. Y. Michael Wong
    Marisa Hoeschele
    Vadim Kostrykin
    David Huron
    Andrea Schiavio
    Florian Hantschel
    Awards & Achievements (2)
    Mar 2010
    DFG Heisenberg-Fellowship
    Scholarship
    Feb 2009
    Lotze Prize
    Award
    Current research
    Projects (7)
    Project
    We are using Schrödinger wave functions and gauge theory as a mathematical model of music cognition. We already have a nice model of (static) tonal attraction (beim Graben & Blutner 2016), explaining the data from Krumhansl & Kessler (1982). Future research will include dynamic attraction and the Schönberg/Mazzola theory of modulation.
    Project
    This is a PhD project. The main goal is to perform classification of single-channel EEG signals recorded under general anesthesia. We aim to classify pre-/post-incision states, as well as distinguish patients who received propofol from patients who received desflurane as anesthetic agent. The features used in the classification process are extracted using power spectral analysis and recurrence symbolic analysis.
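As an illustration of the first feature family named above, here is a minimal sketch of band-limited spectral power extraction, assuming a synthetic alpha-dominated test signal and a plain periodogram rather than the project's actual pipeline:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Power of `signal` in the band [f_lo, f_hi] Hz via a periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].sum()

# Synthetic "EEG": a 10 Hz (alpha) oscillation plus weak noise.
fs = 250.0
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(len(t))

alpha = band_power(x, fs, 8, 12)    # dominated by the 10 Hz component
delta = band_power(x, fs, 0.5, 4)   # noise floor only
print(alpha > delta)
```

Such per-band powers would then be stacked into a feature vector per recording and handed to a classifier.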
    Research
    Research Items
    Computation is classically studied in terms of automata, formal languages and algorithms; yet, the relation between neural dynamics and symbolic representations and operations is still unclear in traditional eliminative connectionism. Therefore, we suggest a unique perspective on this central issue, which we refer to as transparent connectionism, by proposing accounts of how symbolic computation can be implemented in neural substrates. In this study we first introduce a new model of dynamics on a symbolic space, the versatile shift, showing that it supports the real-time simulation of a range of automata. We then show that the Gödelization of versatile shifts defines nonlinear dynamical automata, dynamical systems evolving on a vectorial space. Finally, we present a mapping between nonlinear dynamical automata and recurrent artificial neural networks. The mapping defines an architecture characterized by its granular modularity, where data, symbolic operations and their control are not only distinguishable in activation space, but also spatially localizable in the network itself, while maintaining a distributed encoding of symbolic representations. The resulting networks simulate automata in real time and are programmed directly, in the absence of network training. To discuss the unique characteristics of the architecture and their consequences, we present two examples: i) the design of a Central Pattern Generator from a finite-state locomotive controller, and ii) the creation of a network simulating a system of interactive automata that supports the parsing of garden-path sentences as investigated in psycholinguistic experiments.
    We present a biophysical approach for the coupling of neural network activity, as resulting from proper dipole currents of cortical pyramidal neurons, to the electric field in extracellular fluid. Starting from a reduced three-compartment model of a single pyramidal neuron, we derive an observation model for dendritic dipole currents in extracellular space and thereby for the dendritic field potential that contributes to the local field potential of a neural population. This work aligns with and substantiates the widespread dipole assumption that is motivated by the "open-field" configuration of the dendritic field potential around cortical pyramidal cells. Our reduced three-compartment scheme allows us to derive networks of leaky integrate-and-fire models, which facilitates comparison with existing neural network and observation models. In particular, by means of numerical simulations we compare our approach with an ad hoc model by Mazzoni et al. [Mazzoni, A., S. Panzeri, N. K. Logothetis, and N. Brunel (2008). Encoding of naturalistic stimuli by local field potential spectra in networks of excitatory and inhibitory neurons. PLoS Computational Biology 4 (12), e1000239], and conclude that our biophysically motivated approach yields substantial improvement.
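The leaky integrate-and-fire networks mentioned above can be illustrated by a single-neuron sketch; the parameter values (time constant, thresholds, input drive) are illustrative assumptions, not those derived from the paper's three-compartment model:

```python
# Minimal leaky integrate-and-fire neuron, forward-Euler scheme.
def simulate_lif(I, n_steps=2000, dt=0.1, tau=10.0,
                 v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0):
    """Return the membrane trace (mV) and spike times (ms) for a constant drive I."""
    v, trace, spikes = v_rest, [], []
    for step in range(n_steps):                # n_steps * dt ms of simulation
        v += dt / tau * (-(v - v_rest) + I)    # leaky integration toward v_rest + I
        if v >= v_thresh:                      # threshold crossing -> spike + reset
            spikes.append(step * dt)
            v = v_reset
        trace.append(v)
    return trace, spikes

trace, spikes = simulate_lif(I=20.0)   # suprathreshold drive: regular spiking
print(len(spikes) > 0)
```

With I = 20 mV the asymptotic potential (v_rest + I = -45 mV) lies above threshold, so the neuron fires periodically; subthreshold drive would leave `spikes` empty.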
    The present work proposes a solution to a pertinent problem for recurrence plots and recurrence quantification analysis of time series: the optimal selection of distance thresholds for estimating the recurrence structure of complex dynamic systems. We approximate this recurrence structure through Markov chains obtained from recurrence grammars. The goodness of fit is assessed with a utility function derived from a stochastic Markov transition matrix. It assumes a local maximum for that distance threshold which reflects the optimal estimate of the system's recurrence structure. We validate our approach by means of the nonlinear Lorenz system and its linearized stochastic surrogates. The final application to the segmentation of neurophysiological time series obtained from anesthetized animals illustrates the method and reveals novel dynamic features of the underlying system. We propose the number of optimal recurrence domains as a statistic for classifying an animal's state of consciousness.
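The threshold-selection idea can be caricatured in a few lines. The utility below (variance of the column-wise recurrence rates) is an illustrative stand-in for the paper's Markov-chain utility, and the sine signal replaces the Lorenz system:

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence plot of a scalar series at distance threshold eps."""
    d = np.abs(x[:, None] - x[None, :])   # pairwise distances
    return (d < eps).astype(float)

def utility(R):
    # Stand-in utility: structured plots have heterogeneous column rates;
    # near-empty and near-full plots both score close to zero.
    return np.var(R.mean(axis=0))

t = np.linspace(0, 8 * np.pi, 400)
x = np.sin(t)
thresholds = np.linspace(0.05, 2.0, 40)
scores = [utility(recurrence_matrix(x, eps)) for eps in thresholds]
best = thresholds[int(np.argmax(scores))]
print(0.05 <= best <= 2.0)
```

The scan over `thresholds` peaks at an intermediate eps, mimicking how the paper's utility singles out the threshold that best reflects the recurrence structure.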
    Quantum cognition emerged as an important discipline of mathematical psychology during the last two decades. Using abstract analogies between mental phenomena and the formal framework of physical quantum theory, quantum cognition demonstrated its ability to resolve several puzzles from cognitive psychology raised in the bounded rationality research program. Until now, quantum cognition essentially exploited ideas from projective (Hilbert space) geometry, such as quantum probability or quantum similarity. However, many powerful tools provided by physical quantum theory, such as symmetry groups and their representation theories, particularly local gauge symmetries and their connection to physical forces, have not been utilized in the field of quantum cognition research so far. This is somewhat astonishing as "mental forces" or "mental energies" appear as important metaphorical concepts in cognitive science. One particular example is music psychology, where static or dynamic tonal attraction phenomena are computationally often modeled in terms of metaphorical forces. Inspired by seminal work by Guerino Mazzola on the connection between musical forces and symmetries in tonal music, our study aims at elucidating and reconciling musical forces in the framework of quantum cognition. Based on the fundamental transposition symmetry of tonal music, as reflected by the circle of fifths, we develop different wave function descriptions over this underlying tonal space, governed by a Schrödinger equation containing different interaction energies. We present quantum models for static and dynamic tonal attraction and compare them with traditional computational models in musicology. Our approach turns out to be more parsimonious and of greater explanatory power than symbolic models of music perception.
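The wave-function description over tonal space can be caricatured numerically: a wave packet over the twelve tones of the circle of fifths, with |psi|^2 read as a tonal-attraction profile. The Gaussian weighting of momentum modes below is an assumed toy choice, not the interaction model fitted in the study:

```python
import numpy as np

angles = 2 * np.pi * np.arange(12) / 12      # 12 tones on the circle of fifths
modes = np.arange(-5, 6)                     # momentum (Fourier) modes on the circle
weights = np.exp(-modes ** 2 / 4.0)          # assumed Gaussian mode weights

# Superpose the plane-wave eigenfunctions e^{i n theta} with these weights.
psi = (weights[:, None] * np.exp(1j * modes[:, None] * angles[None, :])).sum(axis=0)
profile = np.abs(psi) ** 2                   # |psi|^2 as the attraction profile
profile /= profile.sum()

# All modes add constructively at angle 0, so attraction peaks at the tonic.
print(int(np.argmax(profile)))   # prints 0
```

A dynamic-attraction variant would evolve the mode weights under a Schrödinger equation instead of fixing them.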
    Recurrence structures in univariate time series are challenging to detect. We propose a combination of symbolic and recurrence analysis in order to identify recurrence domains in the signal. This method allows us to obtain a symbolic representation of the data. Recurrence analysis produces valid results for multidimensional data; however, in the case of univariate time series one should first perform phase space reconstruction. In this chapter, we propose a new method of phase space reconstruction based on the signal’s time-frequency representation and compare it to the delay embedding method. We argue that the proposed method outperforms the delay embedding reconstruction in the case of oscillatory signals. We also propose to use recurrence complexity as a quantitative feature of a signal. We evaluate our method on synthetic data and show its application to experimental EEG signals.
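The time-frequency reconstruction can be sketched as follows: embed each time point by its instantaneous spectral power (columns of a short-time Fourier transform) and build the recurrence plot in that feature space. Window, hop and threshold choices here are illustrative assumptions:

```python
import numpy as np

def stft_power(x, win=64, hop=16):
    """Short-time power spectra: one feature vector per analysis frame."""
    frames = [x[i:i + win] * np.hanning(win)
              for i in range(0, len(x) - win, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)) ** 2   # shape (n_frames, n_freqs)

fs = 128.0
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 5 * t)                 # oscillatory test signal

P = stft_power(x)                             # time-frequency embedding
d = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)  # pairwise distances
R = d < 0.1 * d.max()                         # recurrence plot in feature space
print(R.shape[0] == R.shape[1])
```

Symbolisation of `R` into recurrence domains, and a complexity measure on the result, would follow as in the chapter.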
    See "readme.txt" for instructions. Example files are "lorenzepl.m" for the RSA of the Lorenz attractor and "ensemblelorenzepl.m" for an ensemble of Lorenz time series.
    Metastable attractors and heteroclinic orbits are present in the dynamics of various complex systems. Although their occurrence is well-known, their identification and modeling is a challenging task. The present work reviews briefly the literature and proposes a novel combination of their identification in experimental data and their modeling by dynamical systems. This combination applies recurrence structure analysis permitting the derivation of an optimal symbolic representation of metastable states and their dynamical transitions. To derive heteroclinic sequences of metastable attractors in various experimental conditions, the work introduces a Hausdorff clustering algorithm for symbolic dynamics. The application to brain signals (event-related potentials) utilizing neural field models illustrates the methodology.
    Project - Extraction of multivariate components in brain signals obtained during general anesthesia
    Update
    Project goal
    This is a PhD project. The main goal is to perform classification of single-channel EEG signals recorded under general anesthesia. We aim to classify pre-/post-incision states, as well as distinguish patients who received propofol from patients who received desflurane as anesthetic agent. The features used in the classification process are extracted using power spectral analysis and recurrence symbolic analysis.
    Background and motivation
    t.b.a.
    Answer
    Thank you very much, Mauricio! Well appreciated.
    Best,
    Peter
    How well does a given pitch fit into a tonal scale or key, being either a major or minor key? This question addresses the well-known phenomenon of tonal attraction in music psychology. Metaphorically, tonal attraction is often described in terms of attracting and repelling forces that are exerted upon a probe tone of a scale. In modern physics, forces are related to gauge fields expressing fundamental symmetries of a theory. In this study we address the intriguing relationship between musical symmetries and gauge forces in the framework of quantum cognition.
    Hans Primas laid the groundwork for contextual emergence and also had a long-standing interest in issues of stochasticity and determinism and their consequences. In this contribution we describe contextual emergence and then turn to the question of whether determinism and stochasticity could be regarded as contextually emergent properties. In a first step we demonstrate that the conventional notion of determinism is not fully contained in the fundamental description of dynamical systems but requires some contextual stability condition for the emergence of unique trajectories. Second, we discuss theoretical dilution techniques of deterministic systems for the contextual emergence of stochastic descriptions. Finally, the emergence of deterministic "mean field" descriptions from stochastic Markov processes will illustrate another contextual aspect of the nature of determinism. We discuss our results on contextual determinism and stochasticity in the framework of relative onticity and indicate its potential relevance for the determinism versus free will controversy.
    We consider several puzzles of bounded rationality. These include the Allais and Ellsberg paradoxes, the disjunction effect, and related puzzles. We argue that the present account of quantum cognition—taking quantum probabilities rather than classical probabilities—can give a more systematic description of these puzzles than the alternate treatments in the traditional frameworks of bounded rationality. Unfortunately, the quantum probabilistic treatment does not always provide a deeper understanding and a true explanation of these puzzles. One reason is that quantum approaches introduce additional parameters which can possibly be fitted to empirical data but which do not necessarily explain them. Hence, the phenomenological research has to be augmented by responding to deeper foundational issues. In this article, we make the general distinction between foundational and phenomenological research programs, explaining the foundational issue of quantum cognition from the perspective of operational realism. This framework is motivated by assuming partial Boolean algebras (describing particular perspectives). They are combined into a uniform system (i.e. an orthomodular lattice) via a mechanism preventing the simultaneous realization of perspectives. Gleason’s theorem then automatically leads to a distinction between probabilities that are defined by pure states and probabilities arising from the statistical mixture of pure states. This formal distinction relates to the conceptual distinction between risk and ignorance. Another outcome identifies quantum aspects in dynamic macro-systems using the framework of symbolic dynamics. Finally, we discuss several ideas that are useful for justifying complementarity in cognitive systems.
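The quantum-probabilistic treatment of the disjunction effect can be sketched numerically: the law of total probability acquires an interference term when the two conditions (e.g. the known outcomes of a gamble) are represented by amplitudes. The amplitudes, conditional rates and phase below are illustrative assumptions:

```python
import math

amp_win = math.sqrt(0.5)                    # amplitude for "win known"
amp_lose = math.sqrt(0.5)                   # amplitude for "loss known"
p_play_if_win, p_play_if_lose = 0.69, 0.59  # assumed conditional choice rates
phase = math.pi                             # assumed relative phase (destructive)

# Classical law of total probability: a weighted average of the conditionals.
classical = abs(amp_win) ** 2 * p_play_if_win + abs(amp_lose) ** 2 * p_play_if_lose

# Quantum version: an interference term modulated by the relative phase.
interference = (2 * abs(amp_win) * abs(amp_lose)
                * math.sqrt(p_play_if_win * p_play_if_lose) * math.cos(phase))
quantum = classical + interference

# Destructive interference pushes the total below BOTH conditionals,
# which no classical mixture can do (the disjunction effect).
print(quantum < min(p_play_if_win, p_play_if_lose))
```

The phase is exactly the kind of extra parameter the abstract cautions about: it can fit the effect without by itself explaining it.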
    Contextual emergence is an important methodological closed loop approach in the sciences. It involves the introduction of weak contextual topologies and their implementation as stability conditions. Guided by these principles, I discuss three applications of contextual emergence in the neurosciences. First, I show that relevant neural observables may belong to different classes of function algebras and that the transition from continuous functions to physiologically important step functions involves a transition from a strong norm topology to a weak contextual topology. Second, starting from the paradigmatic case of contextual emergence of molecular structure according to the Born-Oppenheimer approximation, I argue that the description of the neurophysiological function of ion channels requires contextual emergence over two distinct steps from a quantum mechanical treatment of a molecule as a many particle system over a stochastic treatment of molecular configurations to a coarse-grained description of functional states. Finally, I discuss contextual emergence of macroscopic descriptions of large-scale neural networks and of slow behavior in neurophysiological time series in close analogy to the treatment of thermodynamics.
    In his study "What are the best hierarchical organizations for the success of a common endeavour" [Mathematical Anthropology and Cultural Theory 9, 1 (2016)] Gil uses coupled oscillator networks as a model for socially interacting individuals. In this commentary, I argue that such networks could also be related to lattice gauge theory. The most stable lattice gauge society would be an open society whose gauge invariance indicates the existence of universal values, such as human rights.
    Recurrence structures in univariate time series are challenging to detect. We propose a combination of symbolic and recurrence analysis in order to identify recurrence domains in the signal. This method allows us to obtain a symbolic representation of the data. Recurrence analysis produces valid results for multidimensional data; however, in the case of univariate time series one should first perform phase space reconstruction. In this paper, we propose a new method of phase space reconstruction based on the signal's time-frequency representation and compare it to the delay embedding method. We argue that the proposed method outperforms delay embedding reconstruction in the case of oscillatory signals. We also propose to use recurrence complexity as a quantitative feature of a signal. We evaluate our method on synthetic data and show its application to experimental EEG signals.
    The scientific description of any system depends on the target properties of that description. A detailed, fine-grained account of all individual constituents of a system differs from that of properties at larger scales of granularity, up to the system as a whole. All these level-specific descriptions can be compatible or incompatible with one another. This contribution addresses a particular pair of descriptions of complex dynamical systems: their Liouville dynamics, treating each constituent separately in a conventional state space, and their information dynamics, based on partitions of that state space. The relation between them can be formulated as a commutation relation, in which the commutator quantifies the degree of their incompatibility.
    For decades, research in neuroscience has supported the hypothesis that brain dynamics exhibits recurrent metastable states connected by transients, which together encode fundamental neural information processing. To understand the system's dynamics it is important to detect such recurrence domains, but it is challenging to extract them from experimental neuroscience datasets due to the large trial-to-trial variability. The proposed methodology extracts recurrent metastable states in univariate time series by transforming datasets into their time-frequency representations and computing recurrence plots based on instantaneous spectral power values in various frequency bands. Additionally, a new statistical inference analysis compares different trial recurrence plots with corresponding surrogates to obtain statistically significant recurrent structures. This combination of methods is validated by applying it to two artificial datasets. In a final study of visually-evoked Local Field Potentials in partially anesthetized ferrets, the methodology is able to reveal recurrence structures of neural responses with trial-to-trial variability. Focusing on different frequency bands, the δ-band activity is much less recurrent than α-band activity. Moreover, α-activity is susceptible to pre-stimuli, while δ-activity is much less sensitive to pre-stimuli. This difference in recurrence structures in different frequency bands indicates diverse underlying information processing steps in the brain.
    We improve the results by Siegelmann & Sontag (1995) by providing a novel and parsimonious constructive mapping between Turing Machines and Recurrent Artificial Neural Networks, based on recent developments of Nonlinear Dynamical Automata. The architecture of the resulting R-ANNs is simple and elegant, stemming from its transparent relation with the underlying NDAs. These characteristics yield promise for developments in machine learning methods and symbolic computation with continuous time dynamical systems. A framework is provided to directly program the R-ANNs from Turing Machine descriptions, in absence of network training. At the same time, the network can potentially be trained to perform algorithmic tasks, with exciting possibilities in the integration of approaches akin to Google DeepMind's Neural Turing Machines.
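The Gödel-encoding step underlying Nonlinear Dynamical Automata can be sketched as follows: the two half-tapes of a Turing machine are read as fractional expansions in base |alphabet|, mapping the whole tape to a point in the unit square. The two-symbol alphabet and example half-tapes are hypothetical:

```python
def godelize(symbols, alphabet):
    """Map a one-sided symbol sequence to a number in [0, 1)."""
    base = len(alphabet)
    index = {s: i for i, s in enumerate(alphabet)}
    return sum(index[s] * base ** -(k + 1) for k, s in enumerate(symbols))

alphabet = ["0", "1"]
left_tape = ["1", "0", "1"]     # tape left of the head, read outward (reversed)
right_tape = ["0", "1", "1"]    # head symbol and everything to its right

point = (godelize(left_tape, alphabet), godelize(right_tape, alphabet))
print(point)   # → (0.625, 0.375): one point encodes the whole tape
```

A Turing step (read, write, move) then becomes a piecewise affine-linear map on such points, which is what the R-ANN construction implements.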
    Conventional neural field models describe some experimental data well, such as local field potentials or electroencephalographic data. This work reviews recent extensions of neural field models and describes the activation and attenuation of spectral power in certain frequency bands, subject to the statistical properties of an external input and to the properties of synaptic receptor efficacy, as well as heteroclinic transitions between metastable states as a model for event-related potentials.
    The concept of complementarity in combination with a non-Boolean calculus of propositions refers to a pivotal feature of quantum systems which has long been regarded as a key to their distinction from classical systems. But a non-Boolean logic of complementary features may also apply to classical systems, if their states and observables are defined by partitions of a classical state space. If these partitions do not satisfy certain stability criteria, complementary observables and non-Boolean propositional lattices may be the consequence. This is especially the case for non-generating partitions of nonlinear dynamical systems. We show how this can be understood in more detail and indicate some challenging consequences for systems outside quantum physics, including mental processes.
    Nowadays, surgical operations are impossible to imagine without general anaesthesia, which involves loss of consciousness, immobility, amnesia and analgesia. Understanding the mechanisms underlying each of these effects guarantees well-controlled medical treatment. Our work focuses on the analgesia effect of general anaesthesia, more specifically on patients' reactions to nociceptive stimuli. The study was conducted on a dataset consisting of 230 EEG signals: pre- and post-incisional recordings for 115 patients who received desflurane or propofol [1]. The initial analysis was performed by power spectral analysis, a widespread approach in signal processing. The power spectral information was described by fitting the background activity and measuring the power contained in the delta and alpha bands relative to the power of the background activity. The fact that the power spectrum of background activity decays with increasing frequency is well known and thoroughly studied [2]. Here, the traditional 1/f^a behaviour of the decay was replaced by a Lorentzian model to describe the power spectrum of background activity. Due to the observed non-stationary nature of EEG signals, spectral analysis alone does not suffice to reveal significant changes between the two states. A further improvement was achieved by augmenting the spectra with time information. To obtain time-frequency representations of the signals, conventional spectrograms were used as well as a spectrogram reassignment technique [3]. The latter improves the readability of a spectrogram by reassigning the energy it contains to more precise positions. Subsequently, the obtained spectrograms were used in recurrence analysis [4] and its quantification by a complexity measure. Recurrence analysis describes and visualises the dynamics of a system and uncovers structural patterns contained in the data. The structure of each recurrence plot is characterised by the Lempel–Ziv complexity measure [5], which shows a difference between the pre- and post-incision states.
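The Lempel–Ziv quantification in the last step can be sketched as LZ76 phrase counting on a binary string; the example sequences below are assumptions for illustration, not recurrence plots from the study:

```python
def lempel_ziv_complexity(s):
    """Number of distinct phrases in an LZ76-style left-to-right parsing of s."""
    phrases, i = set(), 0
    while i < len(s):
        j = i + 1
        # Extend the current phrase until it has not been seen before.
        while s[i:j] in phrases and j <= len(s):
            j += 1
        phrases.add(s[i:j])
        i = j
    return len(phrases)

regular = "01" * 16                # highly regular -> few distinct phrases
irregular = "0110100110010110" * 2 # Thue-Morse-like -> more distinct phrases
print(lempel_ziv_complexity(regular) < lempel_ziv_complexity(irregular))
```

Applied to a recurrence plot, the binary matrix is first serialised (e.g. row by row) and the phrase count compared between the pre- and post-incision conditions.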
    Transients, metastable states (MS) and their temporal recurrences encode fundamental information of biophysiological system dynamics. To understand the system’s temporal dynamics, it is important to detect such events. In neural experimental data, it is challenging to extract these features due to the large trial-to-trial variability. A proposed detection methodology extracts recurrent MS in time series (TS). It comprises a time-frequency embedding of TS and a novel statistical inference analysis for recurrence plots (RP). To this end, we propose to transform TS into their time-frequency representations and compute RPs based on the instantaneous spectral power values in various frequency bands. Additionally, we introduce a new statistical test that compares trial RPs with corresponding surrogate RPs and obtains statistically significant information within RPs. The combination of methods is validated by applying it to two artificial datasets commonly used in low-level brain dynamics modelling. In a final study of visually evoked Local Field Potentials, the methodology is able to reveal recurrence structures of neural responses in recordings with trial-to-trial variability. Focusing on different frequency bands, the delta-band activity is much less recurrent than alpha-band activity in partially anesthetized ferrets. Moreover, alpha-activity is susceptible to pre-stimuli, while delta-activity is much less sensitive to pre-stimuli.
    We present numerical simulations of metastable states in heterogeneous neural fields that are connected along heteroclinic orbits. Such trajectories are possible representations of transient neural activity as observed, for example, in the electroencephalogram. Based on previous theoretical findings on learning algorithms for neural fields, we directly construct synaptic weight kernels from Lotka-Volterra neural population dynamics without supervised training approaches. We deliver a MATLAB neural field toolbox validated by two examples of one- and two-dimensional neural fields. We demonstrate trial-to-trial variability and distributed representations in our simulations which might therefore be regarded as a proof-of-concept for more advanced neural field models of metastable dynamics in neurophysiological data.
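The Lotka–Volterra construction of metastable sequences can be sketched in a few lines. The asymmetric competition matrix below is a standard winnerless-competition choice assumed for illustration, not a kernel learned from data as in the paper:

```python
import numpy as np

# Asymmetric competition: each population weakly inhibits one neighbour and
# strongly inhibits the other, producing a heteroclinic (cyclic) sequence.
rho = np.array([[1.0, 0.5, 2.0],
                [2.0, 1.0, 0.5],
                [0.5, 2.0, 1.0]])
x = np.array([0.6, 0.2, 0.1])
dt = 0.01
winners = []
for step in range(60000):
    x = x + dt * x * (1.0 - rho @ x)   # generalized Lotka-Volterra, Euler step
    x = np.clip(x, 1e-9, None)         # noise floor keeps the cycle recurring
    winners.append(int(np.argmax(x)))

# Each population transiently dominates in turn: a sequence of metastable states.
print(sorted(set(winners)))   # → [0, 1, 2]
```

In the neural field setting, each "population" labels a metastable activity pattern, and the dominance sequence plays the role of the heteroclinic orbit between them.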
    Quasistationarity is ubiquitous in complex dynamical systems. In brain dynamics there is ample evidence that event-related potentials reflect such quasistationary states. In order to detect them from time series, several segmentation techniques have been proposed. In this study we elaborate a recent approach for detecting quasistationary states as recurrence domains by means of recurrence analysis and subsequent symbolisation methods. As a result, recurrence domains are obtained as partition cells that can be further aligned and unified for different realisations. We address two pertinent problems of contemporary recurrence analysis and present possible solutions for them.
    Biophysical modeling of brain activity has a long and illustrious history (Ermentrout, 1998; Deco et al., 2008; Coombes, 2010) and has recently profited from technological advances that furnish neuroimaging data at an unprecedented spatiotemporal resolution (Guillory and Bujarski, 2014; Sporns, 2014). Neuronal modeling is a very active area of research, with applications ranging from the characterization of neurobiological and cognitive processes (Jirsa, 2004a,b; Bojak and Liley, 2005; Phillips and Robinson, 2009; Rolls and Treves, 2011) to constructing artificial brains in silico and building brain-machine interfaces and neuroprosthetic devices (e.g., Einevoll et al., 2013; Whalen et al., 2013). Biophysical modeling has always benefited from interdisciplinary interactions between different and seemingly distant fields, ranging from mathematics and engineering to linguistics and psychology. This Research Topic aims to foster such interactions by collecting papers that contribute to a deeper understanding of neural activity as measured by fMRI or electrophysiology. In general, mean field models of neural activity can be divided into two classes: neural mass and neural field models. The main difference between these classes is that field models prescribe how a quantity characterizing neural activity (such as average depolarization of a neural population) evolves over both space and time, as opposed to mass models, which characterize activity over time only, by assuming that all neurons in a population are located at (approximately) the same point. This Research Topic focusses on both classes of models and considers several aspects and their relative merits: models that span from synapses to the whole brain; comparisons of their predictions with EEG and MEG spectra of spontaneous brain activity, evoked responses, and seizures; and the fitting of data to infer brain states and map physiological parameters.
    The present work aims to reproduce certain changes observed experimentally in the EEG power spectrum over the frontal head region during general anesthesia induced by propofol. These observations include increased delta (0-4 Hz) and alpha (8-12 Hz) activities [1]. We extend a previous cortical model [2] and study a neuronal population model of a single thalamo-cortical module consisting of three different populations of neurons, namely cortical excitatory neurons, thalamocortical relay neurons and inhibitory thalamic reticular neurons (Fig. 1). Each module obeys a neural mass model. The cortical inhibitory population is neglected in our model to reveal the effect of propofol action in the thalamus. Our model describes well the characteristic spectral changes observed experimentally within the delta- and alpha-frequency bands in frontal and occipital electrodes with increasing concentration of propofol. This shows that neglecting inhibitory action in the cortex but considering thalamic GABAergic action suffices to reproduce the data. From a modeling point of view, our reduced mathematical model is low dimensional and remains analytically treatable while still being adequate to reproduce observed changes in EEG rhythms. Moreover, it is shown that the propofol concentration acts as a control parameter of the system and that propofol-induced changes in the stationary states of the model system lead to changes in the corresponding nonlinear gain function that result in EEG power modulation: increases of power over the frontal region can be caused by an increase in the gain function of the thalamocortical network. The results suggest that intra-thalamic inhibition from reticular neurons to relay cells plays an important role in the generation of the characteristic EEG patterns seen during general anesthesia.
    References
    1. Cimenser A, et al: Tracking brain states under general anesthesia by using global coherence analysis. PNAS 2011, 108:8832-8837.
    2. Hutt A: The anaesthetic propofol shifts the frequency of maximum spectral power in EEG during general anaesthesia: analytical insights from a linear model. Front. Comput. Neurosci 2013, 7:2.
    Please look at http://www.springer.com/mathematics/analysis/book/978-3-642-54592-4
    We present a microscopic approach for the coupling of cortical activity, as resulting from proper dipole currents of pyramidal neurons, to the electromagnetic field in extracellular fluid in the presence of diffusion and Ohmic conduction. Starting from a full-fledged three-compartment model of a single pyramidal neuron, including shunting and dendritic propagation, we derive an observation model for dendritic dipole currents in extracellular space and thereby for the dendritic field potential that contributes to the local field potential of a neural population. Under reasonable simplifications, we then derive a leaky integrate-and-fire model for the dynamics of a neural network, which facilitates comparison with existing neural network and observation models. In particular, we compare our results with a related model by means of numerical simulations. In a continuum limit, neural activity becomes represented by a neural field equation, while an observation model for electric field potentials is obtained from the interaction of cortical dipole currents with charge density in non-resistive extracellular space as described by the Nernst-Planck equation. Our work consistently satisfies the widespread dipole assumption discussed in the neuroscientific literature.
    Turing machines and Gödel numbers are important pillars of the theory of computation. Thus, any computational architecture needs to show how it could relate to Turing machines and how stable implementations of Turing computation are possible. In this chapter, we implement universal Turing computation in a neural field environment. To this end, we employ the canonical symbologram representation of a Turing machine obtained from a Gödel encoding of its symbolic repertoire and generalized shifts. The resulting nonlinear dynamical automaton (NDA) is a piecewise affine-linear map acting on the unit square that is partitioned into rectangular domains. Instead of looking at point dynamics in phase space, we then consider functional dynamics of probability distribution functions (p.d.f.s) over phase space. This is generally described by a Frobenius-Perron integral transformation that can be regarded as a neural field equation over the unit square as feature space of a Dynamic Field Theory (DFT). Solving the Frobenius-Perron equation yields that uniform p.d.f.s with rectangular support are mapped onto uniform p.d.f.s with rectangular support, again. We call the resulting representation dynamic field automaton.
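    As a toy illustration of a piecewise affine-linear map on the unit square (not the Gödel encoding of an actual Turing machine, whose branches depend on the machine's symbolic repertoire), the following sketch iterates a two-domain map, here the baker's map, and reads off the induced symbolic dynamics from the rectangular partition:

```python
# Hypothetical two-domain NDA-style map: the unit square is split at
# x = 0.5 into two rectangular cells; each cell carries its own
# affine-linear branch (expansion in x, contraction in y), as in a
# generalized shift. This is the baker's map, used only for illustration.
def nda_step(state):
    x, y = state
    if x < 0.5:                              # cell 0
        return (2.0 * x, 0.5 * y)
    else:                                    # cell 1
        return (2.0 * x - 1.0, 0.5 * y + 0.5)

# Iterating a microstate yields a trajectory; the sequence of visited
# cells (0 if x < 0.5 else 1) is the associated symbolic dynamics.
def symbolic_orbit(state, n):
    symbols = []
    for _ in range(n):
        symbols.append(0 if state[0] < 0.5 else 1)
        state = nda_step(state)
    return symbols, state
```

The point dynamics visits the cells of the partition, while the symbol sequence is the macroscopic, Turing-machine-like description the abstract refers to.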
    The tools of dynamical systems theory are having an increasing impact on our understanding of patterns of neural activity. In this tutorial chapter we describe how to build tractable tissue-level models that maintain a strong link with biophysical reality. These models typically take the form of nonlinear integro-differential equations. Their non-local nature has led to the development of a set of analytical and numerical tools for the study of spatiotemporal patterns, based around natural extensions of those used for local differential equation models. We present an overview of these techniques, covering Turing instability analysis, amplitude equations, and travelling waves. Finally we address inverse problems for neural fields to train synaptic weight kernels from prescribed field dynamics.
    Background - We present analytical and numerical studies on the linear stability of spatially non-constant stationary states in heterogeneous neural fields for specific synaptic interaction kernels. Methods - The work shows the linear stability analysis of stationary states and the implementation of a nonlinear heteroclinic orbit. Results - We find that the stationary state obeys the Hammerstein equation and that the neural field dynamics may obey a saddle-node bifurcation. Moreover, our work takes up this finding and shows how to construct heteroclinic orbits built on a sequence of saddle nodes on multiple hierarchical levels on the basis of a Lotka-Volterra population dynamics. Conclusions - The work represents the basis for future implementation of meta-stable attractor dynamics observed experimentally in neural population activity, such as Local Field Potentials and EEG.
    By means of an intriguing physical example, magnetic surface swimmers, which can be described in terms of Dennett's intentional stance, I reconstruct a hierarchy of necessary and sufficient conditions for the applicability of the intentional strategy. It turns out that the different levels of the intentional hierarchy are contextually emergent from their respective subjacent levels by imposing stability constraints upon them. At the lowest level of the hierarchy, phenomenal physical laws emerge for the coarse-grained description of open, nonlinear, and dissipative non-equilibrium systems in critical states. One level higher, dynamic patterns, such as, for example, magnetic surface swimmers, are contextually emergent as they are invariant under certain symmetry operations. Again one level up, these patterns behave apparently rationally by selecting optimal pathways for the dissipation of energy that is delivered by external gradients. This is in accordance with the restated Second Law of thermodynamics as a stability criterion. At the highest level, true believers are intentional systems that are stable under exchanging their observation conditions.
    In their target article, Wang & Busemeyer (2013) [A quantum question order model supported by empirical tests of an a priori and precise prediction. Topics in Cognitive Science] discuss question order effects in terms of incompatible projectors on a Hilbert space. In a similar vein, Blutner recently presented an orthoalgebraic query language essentially relying on dynamic update semantics. Here, I shall comment on some interesting analogies between the different variants of dynamic semantics and generalized quantum theory to illustrate other kinds of order effects in human cognition, such as belief revision, the resolution of anaphors, and default reasoning, which result from the crucial non-commutativity of mental operations upon the belief state of a cognitive agent.
    Answer
    Supervenience and a particular form of emergence, called "contextual emergence", are actually compatible with each other. You may consult the Scholarpedia entry:
    We propose a way in which Pothos and Busemeyer could strengthen their position. Taking a dynamic stance, we consider cognitive tests as functions that transfer a given input state into the state after testing. Under very general conditions, it can be shown that testable properties in cognition form an orthomodular lattice. Gleason’s theorem then yields the conceptual necessity of quantum probabilities (QP).
    Answer
    Hi Yanxia,
    there are also several ERP (event-related brain potential) studies on metaphor comprehension, e.g.:
    Arzouan, Y.; Goldstein, A. & Faust, M. Brainwaves are stethoscopes: ERP correlates of novel metaphor comprehension. Brain Research, 2007, 1160, 69-81
    Coulson, S. & Petten, C. V. A special role for the right hemisphere in metaphor comprehension? ERP evidence from hemifield presentation. Brain Research, 2007, 1146, 128-145
    Kazmerski, V. A.; Blasko, D. G. & Dessalegn, B. G. ERP and behavioral evidence of individual differences in metaphor comprehension. Memory and Cognition, 2003, 31, 673-689
    Lai, V. T.; Curran, T. & Menn, L. Comprehending conventional and novel metaphors: An ERP study. Brain Research, 2009, 1284, 145-155
    Pynte, J.; Besson, M.; Robichon, F.-H. & Poli, J. The time-course of metaphor comprehension: An event-related potential study. Brain and Language, 1996, 55, 293-316
    I hope that helps you.
    Best,
    Peter
    We propose an algorithm for the detection of recurrence domains of complex dynamical systems from time series. Our approach exploits the characteristic checkerboard texture of recurrence domains exhibited in recurrence plots (RP). In phase space, RPs yield intersecting balls around sampling points that could be merged into cells of a phase space partition. We construct this partition by a rewriting grammar applied to the symbolic dynamics of time indices. A maximum entropy principle defines the optimal size of intersecting balls. The final application to high-dimensional brain signals yields an optimal symbolic recurrence plot revealing functional components of the signal.
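    A minimal sketch of the algorithm's starting point, the recurrence plot itself, computed for a short invented scalar series (the grammar-based merging of intersecting balls into partition cells and the maximum-entropy choice of the ball size are not reproduced here):

```python
# Recurrence matrix of a scalar time series: R[i][j] = 1 if the states
# at sample times i and j lie within eps of each other (1-D phase space
# for simplicity; the method applies to embedded multivariate signals).
def recurrence_matrix(x, eps):
    return [[1 if abs(xi - xj) <= eps else 0 for xj in x] for xi in x]

# Two interleaved recurrence domains produce the characteristic
# checkerboard texture mentioned above.
rp = recurrence_matrix([0.0, 1.0, 0.05, 1.02], eps=0.1)
```

For this toy series, samples 0 and 2 recur near 0.0 while samples 1 and 3 recur near 1.0, so the matrix alternates in a checkerboard pattern.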
    Answer
    Dear Samit,
    certainly, Fibonacci numbers cannot explain everything in nature (btw "42" is not a Fibonacci number...). However, they play an important role in nature. One possible reason for that is that the golden ratio is known as the "most irrational number". This has important consequences e.g. for the stability of dynamical systems as reflected by the KAM theorem which states that the movement of a classical three-body system is most robust against perturbations if its frequency ratio is the golden ratio.
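    The "most irrational" property can be made concrete with a short sketch: the golden ratio's continued fraction is [1; 1, 1, 1, ...], so its convergents are ratios of consecutive Fibonacci numbers, the slowest-converging rational approximations among all irrationals:

```python
# Successive convergents of the continued fraction [1; 1, 1, 1, ...]
# are ratios of consecutive Fibonacci numbers: 1/1, 2/1, 3/2, 5/3, ...
def fibonacci_convergents(n):
    a, b = 1, 1
    out = []
    for _ in range(n):
        out.append(b / a)
        a, b = b, a + b
    return out

phi = (1 + 5 ** 0.5) / 2          # the golden ratio, ~1.6180339887
ratios = fibonacci_convergents(10)  # slowly approaches phi
```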
    Answer
    This is a very important question! One significant contribution was made by Siegelmann and Sontag in a series of papers on "super-Turing computability". They showed that a Turing machine can be realized by a recurrent neural network with rational-valued synaptic weights. Allowing for real-valued weights then yields super-Turing computational power. You may also consult my paper "Implementing Turing Machines by Dynamic Field Architectures".
    Best,
    Peter
    Quantum entanglement relies on the fact that pure quantum states are dispersive and often inseparable. Since pure classical states are dispersion-free they are always separable and cannot be entangled. However, entanglement is possible for epistemic, dispersive classical states. We show how such epistemic entanglement arises for epistemic states of classical dynamical systems based on phase space partitions that are not generating. We compute epistemically entangled states for two coupled harmonic oscillators.
    We present a biophysical approach for the coupling of neural network activity as resulting from proper dipole currents of cortical pyramidal neurons to the electric field in extracellular fluid. Starting from a reduced three-compartment model of a single pyramidal neuron, we derive an observation model for dendritic dipole currents in extracellular space and thereby for the dendritic field potential that contributes to the local field potential of a neural population. This work aligns with and satisfies the widespread dipole assumption that is motivated by the "open-field" configuration of the dendritic field potential around cortical pyramidal cells. Our reduced three-compartment scheme allows us to derive networks of leaky integrate-and-fire models, which facilitates comparison with existing neural network and observation models. In particular, by means of numerical simulations we compare our approach with an ad hoc model by Mazzoni et al. [Mazzoni, A., S. Panzeri, N. K. Logothetis, and N. Brunel (2008). Encoding of naturalistic stimuli by local field potential spectra in networks of excitatory and inhibitory neurons. PLoS Computational Biology 4 (12), e1000239], and conclude that our biophysically motivated approach yields substantial improvement.
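    For readers unfamiliar with the target model class, here is a minimal leaky integrate-and-fire unit; all parameters are illustrative round numbers, not the values derived from the three-compartment reduction:

```python
# Minimal leaky integrate-and-fire neuron (Euler integration).
# Membrane potential leaks toward v_rest, integrates the input current,
# and emits a spike (with reset) when it crosses v_thresh.
def lif_spikes(input_current, dt=0.1, tau=10.0,
               v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0):
    v = v_rest
    spikes = []
    for t, i_ext in enumerate(input_current):
        v += dt * (-(v - v_rest) + i_ext) / tau   # leaky integration
        if v >= v_thresh:                         # threshold crossing
            spikes.append(t)
            v = v_reset
    return spikes
```

With a subthreshold drive the potential settles below threshold and no spikes occur; a stronger drive produces repetitive firing.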
    Answer
    Hi George,
    yes, of course! In semantics, linear algebra is used for latent semantic analysis (LSA). For syntax, and to account for distributivity, vector symbolic architectures (VSA) have been developed. We taught a class on these fields at the last ESSLLI summer school:
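    As a toy sketch of the LSA idea (the count matrix and dimensions are invented for illustration): a term-document matrix is factored by SVD, truncated to k latent dimensions, and document similarity is then measured in the latent space:

```python
import numpy as np

# Toy term-document count matrix: rows are documents, columns are terms.
# The two document pairs use disjoint vocabularies.
docs_terms = np.array([
    [3, 1, 0, 0],
    [1, 3, 0, 0],
    [0, 0, 2, 1],
    [0, 0, 1, 2],
], dtype=float)

# Truncated SVD: keep the k strongest latent dimensions.
U, s, Vt = np.linalg.svd(docs_terms, full_matrices=False)
k = 2
doc_vectors = U[:, :k] * s[:k]   # documents in latent semantic space

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```

Documents sharing vocabulary end up nearly parallel in the latent space, while documents with disjoint vocabularies are nearly orthogonal.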
    I hope this helps you.
    Peter
    Answer
    I could recommend Stabler's minimalist grammars: http://www.springerlink.com/content/833l733482rt1527/
    We reformulate minimalist grammars as partial functions on term algebras for strings and trees. Using filler/role bindings and tensor product representations, we construct homomorphisms for these data structures into geometric vector spaces. We prove that the structure-building functions as well as simple processors for minimalist languages can be realized by piecewise linear operators in representation space. We also propose harmony, i.e. the distance of an intermediate processing step from the final well-formed state in representation space, as a measure of processing complexity. Finally, we illustrate our findings by means of two particular arithmetic and fractal representations.
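    A minimal sketch of filler/role binding via tensor products, with invented two- and three-dimensional filler and role vectors; with orthonormal roles, a filler is recovered exactly by contracting the superposed structure with its role vector:

```python
import numpy as np

# Illustrative filler and role vectors (not the paper's encoding).
fillers = {"the": np.array([1.0, 0.0]), "cat": np.array([0.0, 1.0])}
roles = {"r0": np.array([1.0, 0.0, 0.0]), "r1": np.array([0.0, 1.0, 0.0])}

# A structure is the superposition (sum) of filler (x) role bindings,
# each binding realized as an outer (tensor) product.
structure = (np.outer(fillers["the"], roles["r0"])
             + np.outer(fillers["cat"], roles["r1"]))

def unbind(structure, role):
    # Contracting with a role vector recovers its filler exactly,
    # because the role vectors are orthonormal.
    return structure @ role
```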
    We present a phenomenological modeling account of event-related brain potentials (ERP) in syntactic language processing. For a paradigmatic ERP experiment on the processing of phrase structure violations in German [Hahne and Friederici (1999). Electrophysiological evidence for two steps in syntactic analysis: Early automatic and late controlled processes. Journal of Cognitive Neuroscience, 11(2):194 – 205] we derive a context-free grammar representation of the stimulus material. We describe the phrase structure violation by a prediction error in an interactive left-corner parser that is mapped onto a dynamic field by means of dynamic cognitive modeling (DCM). Our model phenomenologically replicates the experimentally observed P600 ERP component elicited by the violation.
    Cognitive computation, such as language processing, is conventionally regarded as Turing computation, and Turing machines can be uniquely implemented as nonlinear dynamical systems using generalized shifts and subsequent Gödel encoding of the symbolic repertoire. The resulting nonlinear dynamical automata (NDA) are piecewise affine-linear maps acting on the unit square that is partitioned into rectangular domains. Iterating a single point, i.e. a microstate, by the dynamics yields a trajectory of, in principle, infinitely many points scattered through phase space. Therefore, the NDA's microstate dynamics does not necessarily terminate, in contrast to its counterpart, the symbolic dynamics obtained from the rectangular partition. In order to regain the proper symbolic interpretation, one has to prepare ensembles of randomly distributed microstates with rectangular supports. Only the resulting macrostate evolution then corresponds to the original Turing machine computation. However, the introduction of random initial conditions into a deterministic dynamics is not really satisfactory. As a possible solution for this problem we suggest a change of perspective. Instead of looking at point dynamics in phase space, we consider functional dynamics of probability distribution functions (p.d.f.s) over phase space. This is generally described by a Frobenius-Perron integral transformation that can be regarded as a neural field equation over the unit square as feature space of a dynamic field theory (DFT). Solving the Frobenius-Perron equation yields that uniform p.d.f.s with rectangular support are mapped onto uniform p.d.f.s with rectangular support, again. Thus, the symbolically meaningful NDA macrostate dynamics becomes represented by iterated function dynamics in DFT; hence we call the resulting representation dynamic field automata.
    In this article we give a short introduction to the online method of event-related (brain) potentials (ERPs) and their importance for our understanding of language structure and grammar. This methodology places high demands on the (technical) requirements for laboratory equipment as well as on the skills of the investigator. However, the high costs are balanced by the advantages of this experimental method. By using ERPs, it becomes possible to monitor the electrophysiological brain activity associated with speech processing in real time (millisecond by millisecond) and to draw conclusions about human language processing and the human parser. First, we present briefly how this method works and how ERPs can be classified (Sections 1 and 2). In the following, we show that the ERP method can be used to study the processing of, e.g., semantic, pragmatic and syntactic information (Section 3). Crucial for our discussion will be the interpretation of the so-called ERP components and their connection and importance for psycholinguistics and theoretical linguistics. In our presentation, we emphasize that the electrophysiological brain activity in relation to specific (e.g. linguistic) stimuli can be used to identify distinct processes, which give a deeper insight into the different processing steps of language. At the end of this article (Section 4), we present some results from ERP studies of German negative-polar elements. Additionally, we highlight the advantages and benefits of an alternative method to analyze ERP data compared to the more ‘classical’ average technique.
    Computational neurolinguistics integrates methods from computational (psycho-)linguistics and computational neuroscience in order to model neural correlates of linguistic behavior. We illustrate these techniques using an example of the language processing of German negative polarity items (NPI) in the event-related brain potential (ERP) paradigm. To that aim, we first describe the syntactic and semantic licensing conditions of NPIs by means of slightly modified minimalist grammars. In a second step we use dynamic cognitive modeling (DCM) to map the state descriptions of a minimalist parser onto activation patterns of a neural network. Thirdly, the network’s synaptic weights are trained with the correct parse of NPI constructions. Using these weights we calculate neural harmony measures for correct and for ungrammatical NPI constructions. In a final step we correlate the harmonies of the dynamical model with experimentally obtained ERP amplitudes by means of a simple statistical model.
    Entanglement is a well-known and central concept in quantum theory, where it expresses a fundamental nonlocality (holism) of ontic quantum states, regarded as independent of epistemic means of gathering knowledge about them. An alternative, epistemic kind of entanglement is proposed for epistemic states (distributions) of dynamical systems represented in classical phase spaces. We conjecture that epistemic entanglement is to be expected if the states are based on improper phase space partitions. The construction of proper partitions crucially depends on the system dynamics. Although improper partitions have a number of undesirable consequences for the characterization of dynamical systems, they offer the potential to understand some interesting features such as incompatible descriptions, which are typical for complex systems. Epistemic entanglement due to improper partitions may give rise to epistemic classical states analogous to quantum superposition states. In mental systems, interesting candidates for such states have been coined acategorial states, and among their key features are temporally nonlocal correlations. These correlations can be related to the situation of epistemic entanglement.
    We discuss a specific way in which the notion of complementarity can be based on the dynamics of the system considered. This approach rests on an epistemic representation of system states, reflecting our knowledge about a system in terms of coarse grainings (partitions) of its phase space. Within such an epistemic quantization of classical systems, compatible, comparable, commensurable, and complementary descriptions can be precisely characterized and distinguished from each other. Some tentative examples are indicated that, we suppose, would have been of interest to Pauli.
    Inverse problems in computational neuroscience comprise the determination of synaptic weight matrices or kernels for neural networks or neural fields respectively. Here, we reduce multi-dimensional inverse problems to inverse problems in lower dimensions which can be solved in an easier way or even explicitly through kernel construction. In particular, we discuss a range of embedding techniques and analyze their properties. We study the Amari equation as a particular example of a neural field theory. We obtain a solution of the full 2D or 3D problem by embedding 0D or 1D kernels into the domain of the Amari equation using a suitable path parametrization and basis transformations. Pulses are interconnected at branching points via path gluing. As instructive examples we construct logical gates, such as the persistent XOR and binary addition in neural fields. In addition, we compare results of inversion by dimensional reduction with a recently proposed global inversion scheme for neural fields based on Tikhonov-Hebbian learning. The results show that stable construction of complex distributed processes is possible via neural field dynamics. This is an important first step to study the properties of such constructions and to analyze natural or artificial realizations of neural field architectures.
    Syntactic theory provides a rich array of representational assumptions about linguistic knowledge and processes. Such detailed and independently motivated constraints on grammatical knowledge ought to play a role in sentence comprehension. However, most grammar-based explanations of processing difficulty in the literature have attempted to use grammatical representations and processes per se to explain processing difficulty. They did not take into account that the description of higher cognition in mind and brain encompasses two levels: on the one hand, at the macrolevel, symbolic computation is performed, and on the other hand, at the microlevel, computation is achieved through processes within a dynamical system. One critical question is therefore how linguistic theory and dynamical systems can be unified to provide an explanation for processing effects. Here, we present such a unification for a particular account of syntactic theory: namely a parser for Stabler's Minimalist Grammars, in the framework of Smolensky's Integrated Connectionist/Symbolic architectures. In simulations we demonstrate that the connectionist minimalist parser produces predictions which mirror global empirical findings from psycholinguistic research.
    Inverse problems for dynamical system models of cognitive processes comprise the determination of synaptic weight matrices or kernel functions for neural networks or neural/dynamic field models, respectively. We introduce dynamic cognitive modeling as a three tier top-down approach where cognitive processes are first described as algorithms that operate on complex symbolic data structures. Second, symbolic expressions and operations are represented by states and transformations in abstract vector spaces. Third, prescribed trajectories through representation space are implemented in neurodynamical systems. We discuss the Amari equation for a neural/dynamic field theory as a special case and show that the kernel construction problem is particularly ill-posed. We suggest a Tikhonov-Hebbian learning method as regularization technique and demonstrate its validity and robustness for basic examples of cognitive computations.
    More than thirty years ago, Amari and colleagues proposed a statistical framework for identifying structurally stable macrostates of neural networks from observations of their microstates. We compare their stochastic stability criterion with a deterministic stability criterion based on the ergodic theory of dynamical systems, recently proposed for the scheme of contextual emergence and applied to particular inter-level relations in neuroscience. Stochastic and deterministic stability criteria for macrostates rely on macro-level contexts, which make them sensitive to differences between different macro-levels.
    We study inverse problems in neural field theory, i.e. the construction of synaptic weight kernels yielding a prescribed neural field dynamics. We address the issues of existence, uniqueness and stability of solutions to the inverse problem for the Amari neural field equation as a special case, and prove that these problems are generally ill-posed. In order to construct solutions to the inverse problem, we first recast the Amari equation into a linear perceptron equation in an infinite-dimensional Banach or Hilbert space. In a second step, we construct sets of bi-orthogonal function systems allowing the approximation of synaptic weight kernels by a generalized Hebbian learning rule. Numerically, this construction is implemented by the Moore-Penrose pseudo-inverse method. We demonstrate the instability of these solutions and use the Tikhonov regularization method for stabilization and to prevent numerical overfitting. We illustrate the stable construction of kernels by means of three instructive examples.
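    A finite-dimensional analogue of this construction can be sketched as follows (dimensions and data are invented for illustration): the kernel becomes a weight matrix W solving W X ≈ Y for pattern pairs, obtained either by the Moore-Penrose pseudo-inverse or by a Tikhonov-regularized generalized Hebbian rule:

```python
import numpy as np

# Synthetic pattern pairs: columns of X are presynaptic patterns,
# Y the prescribed responses generated by a hidden weight matrix.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 20))
W_true = rng.standard_normal((3, 5))
Y = W_true @ X

# Unregularized solution via the Moore-Penrose pseudo-inverse.
W_pinv = Y @ np.linalg.pinv(X)

def tikhonov_solve(X, Y, alpha):
    # W = Y X^T (X X^T + alpha I)^(-1): regularized Hebbian-style rule;
    # alpha > 0 stabilizes the inversion when X X^T is ill-conditioned.
    n = X.shape[0]
    return Y @ X.T @ np.linalg.inv(X @ X.T + alpha * np.eye(n))

W_reg = tikhonov_solve(X, Y, alpha=1e-6)
```

In this well-conditioned toy case both solutions recover the hidden weights; the point of the regularizer only shows when the pattern matrix is nearly rank-deficient, which is the generic (ill-posed) situation in the field setting.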
    The first goal of this work is to study the solvability of the neural field equation (known as the ‘Amari equation’), which is an integro-differential equation in m + 1 dimensions. In particular, we show the existence of global solutions for smooth activation functions f with values in [0, 1] and L1 kernels w via the Banach fixpoint theorem. We note that this setting is much more general than in most related studies, e.g. Ermentrout and McLeod (Proceedings of the Royal Society of Edinburgh 1993; 123A:461–478). For a Heaviside-type activation function f, we show that the approach above fails. However, with slightly more regularity on the kernel function w (we use Hölder continuity with respect to the argument x) we can employ compactness arguments, integral equation techniques and the results for smooth nonlinearity functions to obtain a global existence result in a weaker space. Finally, general estimates on the speed and durability of waves are derived. We show that compactly supported waves with directed kernels (i.e. w(x, y) ⩽ 0 for x ⩽ y) decay exponentially after a finite time and that the field has a well-defined finite speed. Copyright © 2009 John Wiley & Sons, Ltd.
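    For orientation, a forward-Euler discretization of the Amari equation du/dt = -u + ∫ w(x - y) f(u(y, t)) dy with a smooth activation into [0, 1], on a periodic 1-D domain; the kernel shape and all parameters below are illustrative choices, not taken from the analysis:

```python
import numpy as np

# Periodic 1-D domain and a lateral-inhibition ("Mexican hat") kernel.
n, L = 128, 10.0
x = np.linspace(0.0, L, n, endpoint=False)
dx = L / n
d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, L - d)                      # ring distance
w = 2.0 * np.exp(-d ** 2) - np.exp(-d ** 2 / 4.0)

def f(u):
    # smooth sigmoidal activation with values in (0, 1)
    return 1.0 / (1.0 + np.exp(-10.0 * (u - 0.3)))

u = np.exp(-(x - L / 2) ** 2)                 # localized initial bump
dt = 0.01
for _ in range(500):
    # du/dt = -u + integral of w(x - y) f(u(y)); dx * w @ f(u) is the
    # quadrature approximation of the convolution integral.
    u = u + dt * (-u + dx * (w @ f(u)))
```

Since f is bounded by 1 and the kernel is integrable, the discretized field stays bounded, mirroring the global existence result for smooth activations.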
    The human electroencephalogram (EEG) is globally characterized by a 1/f power spectrum superimposed with certain peaks, whereby the “alpha peak” in a frequency range of 8–14 Hz is the most prominent one for relaxed states of wakefulness. We present simulations of a minimal dynamical network model of leaky integrator neurons attached to the nodes of an evolving directed and weighted random graph (an Erdős–Rényi graph). We derive a model of the dendritic field potential (DFP) for the neurons leading to a simulated EEG that describes the global activity of the network. Depending on the network size, we find an oscillatory transition of the simulated EEG when the network reaches a critical connectivity. This transition, indicated by a suitably defined order parameter, is reflected by a sudden change of the network's topology when super-cycles are formed from merging isolated loops. After the oscillatory transition, the power spectra of simulated EEG time series exhibit a 1/f continuum superimposed with certain peaks.
    After reviewing several physiological findings on oscillations in the electroencephalogram (EEG) and their possible explanations by dynamical modeling, we present neural networks consisting of leaky integrator units as a universal paradigm for neural and cognitive modeling. In contrast to standard recurrent neural networks, leaky integrator units are described by ordinary differential equations living in continuous time. We present an algorithm to train the temporal behavior of leaky integrator networks by generalized back-propagation and discuss their physiological relevance. Eventually, we show how leaky integrator units can be used to build oscillators that may serve as models of brain oscillations and cognitive processes.
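    A minimal sketch of such an oscillator: two leaky integrator units in an excitatory-inhibitory loop, integrated by forward Euler. The weights and time constant are invented for illustration; for suitable parameters a loop of this kind produces relaxation oscillations:

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def simulate(steps, dt=0.01, tau=1.0):
    # u_e: excitatory unit, u_i: inhibitory unit (membrane-like states).
    u_e, u_i = 0.1, 0.0
    trace = []
    for _ in range(steps):
        # tau du/dt = -u + weighted sigmoidal inputs + constant drive
        du_e = (-u_e + 10.0 * sigmoid(u_e) - 10.0 * sigmoid(u_i) + 1.0) / tau
        du_i = (-u_i + 10.0 * sigmoid(u_e) - 2.0) / tau
        u_e, u_i = u_e + dt * du_e, u_i + dt * du_i
        trace.append(u_e)
    return trace
```

The excitatory unit drives itself and the inhibitory unit; the delayed inhibition then suppresses the excitation, which is the basic mechanism behind alpha-like network rhythms discussed above.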
    This chapter presents an introductory course on the biophysics of neurons, comprising a discussion of ion channels, active and passive membranes, action potentials and postsynaptic potentials. It reviews several conductance-based and reduced neuron models, neural networks and neural field theories. Finally, the basic principles of the neuroelectrodynamics of mass potentials, i.e. dendritic fields, local field potentials, and the electroencephalogram, are elucidated and their putative functional role as a mean field is discussed.
    Event-related brain potentials (ERP) are important neural correlates of cognitive processes. In the domain of language processing, the N400 and P600 reflect lexical-semantic integration and syntactic processing problems, respectively. We suggest an interpretation of these markers in terms of dynamical system theory and present two nonlinear dynamical models for syntactic computations where different processing strategies correspond to functionally different regions in the system's phase space.
    We construct a mapping from complex recursive linguistic data structures to spherical wave functions using Smolensky's filler/role bindings and tensor product representations. Syntactic language processing is then described by the transient evolution of these spherical patterns whose amplitudes are governed by nonlinear order parameter equations. Implications of the model in terms of brain wave dynamics are indicated.
    Nonlinear dynamical automata (NDAs) are implementations of Turing machines by nonlinear dynamical systems. In order to use them as parsers, the whole string to be processed has to be encoded in the initial conditions of the dynamics. This is, however, rather unnatural for modeling human language processing. I shall outline an extension of NDAs that is able to cope with that problem. The idea is to encode only a "working memory" by a set of initial conditions in the system's phase space, while incoming new material then acts like "quantum operators" upon the phase space, thus mapping a set of initial conditions onto another set. Because strings can be concatenated non-commutatively, they form the word semigroup, whose algebraic properties must be preserved by this mapping. This leads to an algebraic representation theory of the word semigroup by quantum operators acting upon the phase space of the NDA.
    We present the symbolic resonance analysis (SRA) as a viable method for addressing the problem of enhancing a weakly dominant mode in a mixture of impulse responses obtained from a nonlinear dynamical system. We demonstrate this using results from a numerical simulation with Duffing oscillators in different domains of their parameter space, and by analyzing event-related brain potentials (ERPs) from a language processing experiment in German as a representative application. In this paradigm, the averaged ERPs exhibit an N400 followed by a sentence final negativity. Contemporary sentence processing models predict a late positivity (P600) as well. We show that the SRA is able to unveil the P600 evoked by the critical stimuli as a weakly dominant mode from the covering sentence final negativity.
    The emergence of mental states from neural states by partitioning the neural phase space is analyzed in terms of symbolic dynamics. Well-defined mental states provide contexts inducing a criterion of structural stability for the neurodynamics that can be implemented by particular partitions. This leads to distinguished subshifts of finite type that are either cyclic or irreducible. Cyclic shifts correspond to asymptotically stable fixed points or limit tori whereas irreducible shifts are obtained from generating partitions of mixing hyperbolic systems. These stability criteria are applied to the discussion of neural correlates of consciousness, to the definition of macroscopic neural states, and to aspects of the symbol grounding problem. In particular, it is shown that compatible mental descriptions, topologically equivalent to the neurodynamical description, emerge if the partition of the neural phase space is generating. If this is not the case, mental descriptions are incompatible or complementary. Consequences of this result for an integration or unification of cognitive science or psychology, respectively, will be indicated.
    We outline a computational model of syntactic language processing based on Smolensky's Fock space representations of symbolic expressions using spherical wave functions. Symbolic computation, regarded as nonlinear operators acting upon these waves, provides a discrete sequence of training patterns that could be used to solve the inverse problem of neural field theories in order to determine the synaptic connectivity/weight kernels. The solutions of a neural field equation should then provide a model of event-related brain potentials that are elicited by syntactic processing problems.
    In a post hoc analysis, we investigate differences in event-related potentials of two studies (Drenhaus et al., 2004; Drenhaus et al., to appear; Saddy et al., 2004a; Saddy et al., 2004b) by using the symbolic resonance analysis (beim Graben & Kurths, 2003). The studies under discussion examined the failure to license a negative polarity item (NPI) in German: Saddy et al. (2004a) reported an N400 component when the NPI was not accurately licensed by negation; Drenhaus et al. (2004, to appear) additionally considered the influence of the constituency of the licensor in NPI constructions. A biphasic N400-P600 response was found for the two induced violations (the lack of a licensor and the inaccessibility of negation in a relative clause). The symbolic resonance analysis (SRA) revealed an effect in the P600 time window for the data in Saddy et al. that was not found using the averaging technique. The SRA of the ERPs in Drenhaus et al. showed that the P600 components are distinguishable in amplitude and latency: the P600 was smaller and earlier in the condition where the licensor is inaccessible, compared to the condition without negation in the string. Our findings suggest that the failure to license NPIs is not exclusively related to semantic integration costs (N400); the elicited P600 components reflect differences in syntactic processing. Our results confirm and replicate the effects of the traditional voltage average analysis and show that the SRA is a useful tool for revealing and pulling apart ERP differences that are not evident using the traditional voltage average analysis.
    The concept of complementarity, originally defined for non-commuting observables of quantum systems with states of non-vanishing dispersion, is extended to classical dynamical systems with a partitioned phase space. Interpreting partitions in terms of ensembles of epistemic states (symbols) with corresponding classical observables, it is shown that such observables are complementary to each other with respect to particular partitions unless those partitions are generating. This explains why symbolic descriptions based on an ad hoc partition of an underlying phase space description should generally be expected to be incompatible. Related approaches with different background and different objectives are discussed.
    In 1972, Ernst Ulrich and Christine von Weizsäcker introduced the concept of pragmatic information with three desiderata: (i) Pragmatic information should assess the impact of a message upon its receiver; (ii) Pragmatic information should vanish in the limits of complete (non-interpretable) "novelty" and complete "confirmation"; (iii) Pragmatic information should exhibit non-classical properties since novelty and confirmation behave similarly to Fourier pairs of complementary operators in quantum mechanics. It will be shown how these three desiderata can be naturally fulfilled within the framework of Gärdenfors' dynamic semantics of Bayesian belief models. (i) The meaning of a message is its impact upon the epistemic states of a cognitive agent. A pragmatic information measure can then be quantified by the average information gain for the transition from a prior to a posterior state. (ii) Total novelty can be represented by the identical proposition, total confirmation by the logical consequence of propositions. In both cases, pragmatic information vanishes. (iii) For operators that are neither idempotent nor commuting, novelty and confirmation relative to a message sequence can be defined within Gärdenfors' theory of belief revisions. The proposed approach is consistent with measures of relevance derived from statistical decision theory and it contains Bar-Hillel's and Carnap's theory of semantic information as a special case.
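Point (i) above, quantifying pragmatic information as the average information gain from a prior to a posterior belief state, can be sketched with Bayesian conditioning and the Kullback-Leibler divergence. The sketch below is my own illustration of the desiderata, not the paper's formalism; the world names and the uniform prior are assumptions:

```python
import math

# Illustrative sketch: a belief state is a probability distribution over
# possible worlds, a message is a proposition (set of worlds), and the
# pragmatic information of the message is the information gain (KL
# divergence) from prior to posterior under Bayesian belief revision.

def condition(prior, E):
    """Bayesian revision: restrict the prior to proposition E and renormalize."""
    mass = sum(prior[w] for w in E)
    return {w: (prior[w] / mass if w in E else 0.0) for w in prior}

def info_gain(post, prior):
    """KL divergence D(post || prior) in bits."""
    return sum(p * math.log2(p / prior[w]) for w, p in post.items() if p > 0)

prior = {"w1": 0.25, "w2": 0.25, "w3": 0.25, "w4": 0.25}  # assumed uniform prior

# Desideratum (ii): the identical proposition (nothing is excluded, so
# nothing is learned) yields zero pragmatic information...
assert info_gain(condition(prior, set(prior)), prior) == 0.0
# ...while a genuinely informative message yields a positive gain.
assert info_gain(condition(prior, {"w1", "w2"}), prior) > 0.0
```

Conditioning the uniform four-world prior on a two-world proposition gives exactly one bit of gain, matching the intuition that halving the possibility space is maximally informative for this prior.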
    The present issue of Mind and Matter on the concept of "pragmatic information" has originated from a fruitful collaboration with Peter beim Graben, whose active involvement as a co-editor was decisive for its production and is greatly appreciated. The following extended editorial introduces the topic within a broader background. In particular, the concept of pragmatic information will be related to the study of complex systems and to concepts of complexity that are not in detail addressed in the individual contributions to the issue. Finally, possible connections to an epistemically understood distinction of mental and material domains of discourse will be indicated.
    We apply the recently developed symbolic resonance analysis to electroencephalographic measurements of event-related brain potentials (ERPs) in a language processing experiment by using a three-symbol static encoding with varying thresholds for analyzing the ERP epochs, followed by a spin-flip transformation as a nonlinear filter. We compute an estimator of the signal-to-noise ratio (SNR) for the symbolic dynamics measuring the coherence of threshold-crossing events. Hence, we utilize the inherent noise of the EEG for sweeping the underlying ERP components beyond the encoding thresholds. Plotting the SNR computed within the time window of a particular ERP component (the N400) against the encoding thresholds, we find different resonance curves for the experimental conditions. The maximal differences of the SNR lead to the estimation of optimal encoding thresholds. We show that topographic brain maps of the optimal threshold voltages and of their associated coherence differences are able to dissociate the underlying physiological processes, while corresponding maps gained from the customary voltage averaging technique are unable to do so.
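The pipeline described here, three-symbol threshold encoding, a spin-flip filter, and a coherence-based SNR swept over thresholds, can be sketched in a few lines. This is my reading of the abstract with an assumed SNR proxy (squared trial-mean symbol, averaged over the time window); the actual SRA estimator in beim Graben's work differs in detail:

```python
import numpy as np

# Illustrative sketch of the symbolic encoding steps (assumed SNR proxy,
# not the published estimator): each ERP epoch is mapped to three symbols
# by an encoding threshold theta, a "spin-flip" exchanges the extreme
# symbols, and coherence of threshold crossings is measured across trials.

def encode(epoch, theta):
    """Three-symbol static encoding: +1 above +theta, -1 below -theta, 0 between."""
    return np.where(epoch > theta, 1, np.where(epoch < -theta, -1, 0))

def spin_flip(symbols):
    """Nonlinear filter: exchange the two extreme symbols (-1 <-> +1)."""
    return -symbols

def coherence_snr(epochs, theta):
    """Assumed SNR proxy: squared mean symbol over trials, averaged over time.

    Coherent threshold crossings across trials survive the trial average;
    incoherent (noise-driven) crossings cancel out.
    """
    s = np.array([encode(e, theta) for e in epochs])  # shape: trials x time
    return float((s.mean(axis=0) ** 2).mean())
```

Sweeping `theta` and plotting `coherence_snr` per condition reproduces the idea of a resonance curve: too low a threshold encodes mostly noise, too high a threshold encodes nothing, and an intermediate threshold maximizes the coherence difference between conditions.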
    Previous ERP studies have found an N400-P600 pattern in sentences in which the number of arguments does not match the number of arguments that the verb can take. In the present study, we elaborate on this question by investigating whether the case of the mismatching object argument in German (accusative/direct object versus dative/indirect object) affects processing differently. In general, both types of mismatches elicited a biphasic N400-P600 response in the ERP. However, traditional voltage average analysis was unable to reveal differences between the two mismatching conditions, that is, between a mismatching accusative versus dative. Therefore, we employed a recently developed method of ERP data analysis, the symbolic resonance analysis (SRA), in which EEG epochs are symbolically encoded into sequences of three symbols depending on a given parameter, the encoding threshold. We found a larger proportion of threshold crossing events with negative polarity in the N400 time window for a mismatching dative argument compared to a mismatching accusative argument. By contrast, the proportion of threshold crossing events with positive polarity was smaller for dative in the P600 time window. We argue that this difference is due to the phenomenon of "free dative" in German. This result also shows that the SRA provides a useful tool for revealing ERP differences that cannot be discovered using the traditional voltage average analysis.
    In most experiments using event-related brain potentials (ERPs), there is a straightforward way to define, on theoretical grounds, which of the conditions tested is the experimental condition and which is the control condition. If, however, theoretical assumptions do not give sufficient and unambiguous information to decide this question, then the interpretation of an ERP effect becomes difficult, especially if one takes into account that certain effects can appear as either a positivity or a negativity, both on the basis of the morphology of the pattern and with respect to peak latency (consider, for example, the N400 and P345). Exemplified with an ERP experiment on language processing, we present such a critical case and offer a possible solution on the basis of nonlinear data analysis. We show that a generalized polarity histogram, the word statistics of symbolic dynamics, is in principle able to distinguish negative-going ERP components from positive ones when an appropriate encoding strategy, the half-wave encoding, is employed. We propose statistical criteria that allow ERP components to be determined on purely methodological grounds.
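The two ingredients named here, a polarity-preserving symbolic encoding and word statistics over the resulting symbol sequences, can be sketched as follows. This is an illustrative variant under my own assumptions (a three-symbol sign encoding standing in for the half-wave encoding, and raw word frequencies as the "generalized polarity histogram"), not the paper's exact procedure:

```python
# Illustrative sketch: encode a waveform into polarity symbols and count
# symbol-word frequencies. The symbol names ('p', 'n', 'z') and the
# sign-based encoding are my assumptions standing in for the paper's
# half-wave encoding.

def half_wave_encode(x):
    """Map samples to symbols: 'p' for positive samples, 'n' for negative, 'z' at zero."""
    return ["p" if v > 0 else "n" if v < 0 else "z" for v in x]

def word_statistics(symbols, n=2):
    """Relative frequencies of symbol words of length n (a polarity histogram)."""
    words = ["".join(symbols[i:i + n]) for i in range(len(symbols) - n + 1)]
    total = len(words)
    return {w: words.count(w) / total for w in set(words)}
```

Comparing such word statistics between two conditions then indicates whether an effect is carried by an excess of negative-polarity or positive-polarity words, which is the decision the abstract says voltage averaging alone cannot make.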
    Complexity within the language system arises from two a priori distinct sources: the computational complexity inherent in the grammar of the language itself, or "formal linguistic complexity", and the procedural complexity resulting from marshalling processing resources in order to produce or interpret utterances that correspond to the grammar. Whether or not these two aspects of language can be distinguished is a long-debated issue. In this short paper we will outline how the use of symbolic encoding techniques may reveal both markers of procedural processing and markers of formal linguistic content.