Max Planck Institute for Biological Cybernetics
Recent publications
Our choices are typically accompanied by a feeling of confidence—an internal estimate that they are correct. Correctness, however, depends on our goals. For example, exploration-exploitation problems entail a tension between short- and long-term goals: finding out about the value of one option could mean forgoing another option that is apparently more rewarding. Here, we hypothesised that after making an exploratory choice that involves sacrificing an immediate gain, subjects would be confident that they chose the better option for long-term reward, but not confident that they chose the better option for immediate reward. We asked 250 subjects across two experiments to perform a varying-horizon two-armed bandit task and to rate their confidence that their choice would lead to more immediate, or more total, reward. Confirming previous studies, we found a significant increase in exploration with increasing trial horizon but, contrary to our predictions, no difference between confidence in immediate and total reward. This dissociation between choice behaviour and confidence judgements is further evidence for a separation in the mechanisms underlying the two.
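The horizon effect described above has a simple normative core: an exploratory pull is an investment whose informational payoff can be exploited on every remaining trial. The sketch below is a minimal two-armed example with hypothetical parameters (not the study's task or model): it computes the expected total-reward advantage of exploring an uncertain arm over exploiting a known one, which grows with horizon.

```python
import math

def norm_pdf(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def norm_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def explore_minus_exploit(h, m_known=0.6, mu0=0.5, sigma0=0.3):
    """Expected total-reward advantage of one exploratory pull.

    Exploit: pull the known arm (mean m_known) on all h trials.
    Explore: pull the uncertain arm once (its value v ~ N(mu0, sigma0)
    is then revealed), then pick the better arm for the h-1 remaining trials.
    """
    alpha = (m_known - mu0) / sigma0
    # E[max(v, m_known)] for v ~ N(mu0, sigma0)
    e_max = (m_known * norm_cdf(alpha)
             + mu0 * (1 - norm_cdf(alpha))
             + sigma0 * norm_pdf(alpha))
    return (mu0 - m_known) + (h - 1) * (e_max - m_known)

for h in (1, 5, 10):
    print(h, round(explore_minus_exploit(h), 3))
```

With these parameters, exploring loses reward at horizon 1 but wins for longer horizons, mirroring the observed increase in exploration with trial horizon.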
Surprise responses signal both high-level cognitive alerts that information is missing and increasingly specific back-propagating error signals that allow updates in processing nodes. Studying surprise is hence central for cognitive neuroscience to understand internal world representations and learning. Yet only a few prior studies have used naturalistic stimuli targeting our high-level understanding of the world. Here, we use magic tricks in an fMRI experiment to investigate neural responses to violations of core assumptions that humans hold about the world. We showed participants naturalistic videos of three types of magic tricks, involving objects appearing, changing color, or disappearing, along with control videos without any violation of expectation. Importantly, the same videos were presented with and without prior knowledge of the tricks’ explanation. Results revealed generic responses in frontal and parietal areas, together with responses specific to each of the three trick types in posterior sensory areas. A subset of these regions, the midline areas of the default mode network (DMN), showed surprise activity that depended on prior knowledge. Equally, sensory regions showed sensitivity to prior knowledge, reflected in differing decoding accuracies. These results suggest a hierarchy of surprise signals: generic processing of violations of expectation in frontal and parietal areas, with concurrent surprise signals in sensory regions that are specific to the processed features.
The draining-vein bias of T2*-weighted sequences, like gradient-echo echo-planar imaging (GRE-EPI), can limit the spatial specificity of functional MRI (fMRI). The underlying extravascular signal changes increase with field strength (B0) and with the perpendicularity of draining veins to the main axis of B0, and are therefore particularly problematic at ultra-high field (UHF). In contrast, simulations have shown that T2-weighted sequences are less affected by the draining-vein bias, depending on the amount of rephasing of extravascular signal. As large pial veins on the cortical surface follow the cortical folding tightly, their orientation can be approximated by the orientation of the cortex to B0. In our work, we compare the influence of the cortical orientation to B0 on the resting-state fMRI signal of three sequences to understand their macrovascular contribution. While 2D GRE-EPI and 3D GRE-EPI (both T2*-weighted) showed a high dependence on the cortical orientation to B0, especially at the cortical surface, this was not the case for 3D balanced steady-state free precession (bSSFP) (T2/T1-weighted). For bSSFP, only a slight increase of orientation dependence was observed in the depths closest to white matter (WM). While orientation dependence decreased with increasing distance from the veins for both EPI sequences, no change in orientation dependence was observed for bSSFP. This indicates a low macrovascular contribution to the bSSFP signal, making it a promising sequence for layer fMRI at UHF.
Nuclear magnetism underpins areas such as medicine through magnetic resonance imaging (MRI). Hyperpolarization of nuclei enhances the quantity and quality of information that can be obtained from these techniques by increasing their signal-to-noise ratios by orders of magnitude. However, some of these hyperpolarization techniques rely on the use of low to ultralow magnetic fields (ULF) (nT–mT). The broadband character and ultrasensitive field sensitivity of superconducting quantum interference devices (SQUIDs) allow for probing nuclear magnetism at these fields, where other magnetometers, such as Faraday coils and fluxgates, do not. To this end, we designed a reactor to hyperpolarize [1-¹³C]pyruvate with the technique SABRE in SHield Enables Alignment Transfer to Heteronuclei (SABRE-SHEATH). Hyperpolarized pyruvate has been shown to be very powerful for the diagnosis of tumours with MRI, as its metabolism is associated with various pathologies. We characterized the field sensitivity of our setup by simulating the filled reactor in relation to its placement in our ultralow-noise ULF MRI setup. Using the simulations, we determined that our hyperpolarization setup results in a ¹³C polarization of 0.4%, a signal enhancement of ~100,000,000 over the predicted thermal-equilibrium signal at Earth's field (~50 µT). This corresponds to a ¹³C signal of 6.20 ± 0.34 pT, which, with our ultralow-noise setup, opens the possibility of directly observing the hyperpolarization and the subsequent spin-lattice relaxation without perturbing the system.
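The quoted enhancement factor can be sanity-checked with the standard spin-1/2 thermal polarization formula. The sketch below assumes room temperature (298 K, a value not stated in the abstract) and the high-temperature limit:

```python
import math

# Physical constants
HBAR = 1.054571817e-34   # J*s
K_B = 1.380649e-23       # J/K
GAMMA_13C = 6.728284e7   # rad/(s*T), 13C gyromagnetic ratio

def thermal_polarization(b_field, temperature):
    """Thermal polarization of a spin-1/2 nucleus, P = tanh(hbar*gamma*B / (2*kB*T))."""
    return math.tanh(HBAR * GAMMA_13C * b_field / (2 * K_B * temperature))

p_thermal = thermal_polarization(50e-6, 298.0)   # Earth-field, room temperature
p_hyper = 0.004                                  # 0.4 % from the abstract
enhancement = p_hyper / p_thermal
print(f"thermal polarization: {p_thermal:.2e}")
print(f"enhancement: {enhancement:.1e}")
```

This yields a thermal polarization of about 4×10⁻¹¹, so a hyperpolarized level of 0.4% corresponds to an enhancement on the order of 10⁸, consistent with the quoted ~100,000,000.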
Purpose To develop and validate a novel analytical approach simplifying T1, T2, proton density (PD), and off-resonance quantifications from phase-cycled balanced steady-state free precession (bSSFP) data, and to introduce a method to correct aliasing effects in undersampled bSSFP profiles. Theory and Methods Off-resonant-encoded analytical parameter quantification using complex linearized equations (ORACLE) provides analytical solutions for bSSFP profiles, which instantaneously quantify T1, T2, PD, and off-resonance. An aliasing-correction formalism was derived to allow undersampling of bSSFP profiles. ORACLE was used to quantify T1, T2, PD, T1/T2, and off-resonance based on fully sampled bSSFP profiles from numerical simulations and 3T MRI experiments in a phantom and in 10 healthy subjects' brains. Obtained values were compared with reference scans from the same scan session. The aliasing correction was validated on subsampled bSSFP profiles in numerical simulations and human brains. Results ORACLE quantifications agreed well with input values from simulations and phantom reference values (R² = 0.99). In human brains, T1 and T2 quantifications compared with reference methods showed coefficients of variation below 2.9% and 3.9%, biases of 182 and 16.6 ms, and mean white-matter values of 642 and 51 ms using ORACLE. The off-resonance quantification differed by less than 3 Hz between both methods. PD maps had comparable histograms, and the T1/T2 maps effectively identified cerebrospinal fluid. The aliasing correction removed aliasing-related quantification errors in undersampled bSSFP profiles, significantly reducing scan time. Conclusion ORACLE enables simplified and rapid quantification of T1, T2, PD, and off-resonance from phase-cycled bSSFP profiles, reducing acquisition time and eliminating coregistration issues between biomarker maps.
Deciphering the connectome, the ensemble of synaptic connections that underlie brain function, is a central goal of neuroscience research. Here we report the in vivo mapping of connections between presynaptic and postsynaptic partners in zebrafish, by adapting the trans-Tango genetic approach that was first developed for anterograde transsynaptic tracing in Drosophila. Neural connections were visualized between synaptic partners in larval retina, brain and spinal cord and followed over development. The specificity of labeling was corroborated by functional experiments in which optogenetic activation of presynaptic spinal cord interneurons elicited responses in known motor neuronal postsynaptic targets, as measured by trans-Tango-dependent expression of a genetically encoded calcium indicator or by electrophysiology. Transsynaptic signaling through trans-Tango reveals synaptic connections in the zebrafish nervous system, providing a valuable in vivo tool to monitor and interrogate neural circuits over time.
Study Objectives Melanopsin-expressing retinal ganglion cells, which provide light information to time sleep and entrain circadian clocks, also influence perceived brightness, raising the possibility that psychophysical paradigms could be used to explore the origins and implications of variability in melanopic sensitivity. We aimed to develop accessible psychophysical tests of melanopic vision and relate outcomes to a pupillometric measure of melanopsin function (the post-illumination pupil response; PIPR) and to prior light exposure. Methods Individually calibrated pairs of isoluminant stimuli differing in melanopic radiance, generated by a four-primary source, were presented sequentially with superimposed random colour offsets in a two-alternative forced-choice brightness-preference paradigm to 41 naïve adult participants, for whom personal light-exposure data for the prior 7 days were available and for whom PIPR measures were defined by comparing maintained pupil constriction for luminance-matched ‘red’ vs ‘blue’ pulses. Results Across participants we observed the expected tendency to report positive-melanopsin-contrast stimuli as ‘brighter’ (one-tailed t-test, p<0.001), but with substantial inter-individual variability in both sensitivity (melanopsin contrast at criterion preference p=0.75) and amplitude (preference at maximum melanopic contrast). There was little correlation between these psychophysical outcomes and PIPR magnitude, or between either psychophysical or PIPR measures and light-history metrics (pairwise Pearson correlation coefficients between −0.5 and 0.5). Random-forest machine learning failed to satisfactorily predict outcomes for either psychophysical or PIPR measures based upon these inputs. Conclusions Our findings reveal that estimates of melanopic function provided by perceptual and pupillometric paradigms can be largely independent of one another and of recent history of light exposure.
Background Light exposure regulates the human circadian system and more widely affects health, well-being, and performance. With the rise in field studies of light exposure’s effects, the amount of data collected through wearable loggers and dosimeters has also grown. These data are more complex than stationary laboratory measurements. Determining sample sizes in field studies is challenging, as the literature shows a wide range of sample sizes (between 2 and 1,887 in a recent review of the field, and approaching 10⁵ participants in the first studies using large-scale ‘biobank’ databases). Current decisions on sample size for light-exposure data collection lack a specific basis rooted in power analysis. There is therefore a need for clear guidance on selecting sample sizes. Methods Here, we introduce a novel procedure based on hierarchical bootstrapping for calculating statistical power and required sample size for wearable light and optical radiation logging data and derived summary metrics, taking the hierarchical data structure (mixed-effects model) into account through stepwise resampling. Alongside this method, we publish a dataset that serves as one possible basis for these calculations: one week of continuous data in winter and summer, respectively, for 13 early-day shift-work participants (collected in Dortmund, Germany; lat. 51.514° N, lon. 7.468° E). Results Applying our method to this dataset for twelve different summary metrics (luminous exposure; geometric mean and standard deviation; timing/time above/below threshold; mean/midpoint of darkest/brightest hours; intradaily variability), with a target comparison across winter and summer, reveals required sample sizes ranging from as few as 3 to more than 50. About half of the metrics, namely those that focus on the bright time of day, showed sufficient power even with the smallest sample.
In contrast, metrics centered around the dark time of the day and daily patterns required larger sample sizes: mean timing of light below mel EDI of 10 lux (5), intradaily variability (17), mean of darkest 5 hours (24), and mean timing of light above mel EDI of 250 lux (45). The geometric standard deviation and the midpoint of the darkest 5 hours lacked sufficient power within the tested sample sizes. Conclusions Our novel method provides an effective technique for estimating sample size in light-exposure studies. It is specific to the light-exposure or dosimetry metric used and to the effect size inherent in the light-exposure data underlying the bootstrap. Notably, the method goes beyond typical implementations of bootstrapping to appropriately address the hierarchical structure of the data. It can be applied to other datasets, enabling comparisons across scenarios beyond seasonal differences and activity patterns. With an ever-growing pool of data from the emerging literature, the utility of this method will increase, providing a solid statistical basis for the selection of sample sizes.
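The stepwise resampling idea can be illustrated in a few lines. The sketch below is not the authors' implementation: the data layout, metric, and detection criterion (a percentile bootstrap CI excluding zero) are illustrative assumptions. The key point is that resampling follows the hierarchy — participants are drawn first, then days within each drawn participant.

```python
import random
import statistics

def boot_ci(values, n_boot=500, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean of `values`."""
    means = sorted(
        statistics.mean(random.choices(values, k=len(values)))
        for _ in range(n_boot)
    )
    return means[int(alpha / 2 * n_boot)], means[int((1 - alpha / 2) * n_boot) - 1]

def hierarchical_power(data, n_participants, n_sim=100):
    """Fraction of simulated studies whose 95% CI for the winter-minus-summer
    difference in a summary metric excludes zero.

    `data` maps participant id -> {"winter": [...], "summer": [...]}, each a
    list of daily metric values. Resampling respects the hierarchy:
    participants first, then days within each sampled participant.
    """
    ids = list(data)
    detected = 0
    for _ in range(n_sim):
        diffs = []
        for pid in random.choices(ids, k=n_participants):      # level 1: participants
            w, s = data[pid]["winter"], data[pid]["summer"]
            diffs.append(
                statistics.mean(random.choices(w, k=len(w)))   # level 2: days
                - statistics.mean(random.choices(s, k=len(s)))
            )
        lo, hi = boot_ci(diffs)
        detected += lo > 0 or hi < 0
    return detected / n_sim
```

Power for a candidate sample size is then the fraction of simulated studies in which the seasonal difference is detected; sweeping `n_participants` yields the required sample size for a target power (e.g., 0.8).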
Under natural conditions, animals repeatedly encounter the same visual scenes, objects, or patterns. These repetitions constitute statistical regularities, which the brain captures in an internal model through learning. A signature of such learning in primate visual areas V1 and V4 is the gradual strengthening of gamma synchronization. We used a V1–V4 Dynamic Causal Model (DCM) to explain visually induced responses in early and late epochs of a sequence of several hundred grating presentations. The DCM reproduced the empirical increase in local and inter-areal gamma synchronization, revealing specific intrinsic connectivity effects that could explain the phenomenon. In a sensitivity analysis, the isolated modulation of several connection strengths induced increased gamma. Comparison of alternative models showed that the empirical gamma increases are better explained by (1) repetition effects in both V1 and V4 intrinsic connectivity (alone or together with extrinsic connectivity) than in extrinsic connectivity alone, and (2) repetition effects on V1 and V4 population input gain rather than output gain. The best input-gain model included effects in V1 granular and superficial excitatory populations and in V4 granular and deep excitatory populations. Our findings are consistent with gamma reflecting bottom-up signal precision, which increases with repetition and, therefore, with predictability and learning. Highlights: We model learning effects in macaque visual cortex using Dynamic Causal Modeling. Microcircuit-level changes explain the repetition-induced gamma increases. The best models include changes (1) within V1 and V4 and (2) in neuronal input gain. Gamma may reflect bottom-up signal precision.
The signal amplification by reversible exchange process (SABRE) enhances NMR signals by unlocking hidden polarization in parahydrogen through interactions with to-be-hyperpolarized substrate molecules when both are transiently bound to an Ir-based organometallic catalyst. Recent efforts focus on optimizing polarization transfer from parahydrogen-derived hydride ligands to the substrate in SABRE. However, this requires quantitative information on ligand exchange rates, which common NMR techniques struggle to provide. Here, we introduce an experimental spin order transfer sequence, with readout occurring at ¹⁵N nuclei directly interacting with the catalyst. Enhanced ¹⁵N NMR signals overcome sensitivity challenges, encoding substrate dissociation rates. This methodology enables robust data fitting to ligand exchange models, yielding substrate dissociation rate constants with higher precision than classical 1D and 2D ¹H NMR approaches. This refinement improves the accuracy of key activation enthalpy ΔH‡ and entropy ΔS‡ estimates. Furthermore, the higher chemical shift dispersion provided by enhanced ¹⁵N NMR reveals the kinetics of substrate dissociation for acetonitrile and metronidazole, previously inaccessible via ¹H NMR due to small chemical shift differences between free and Ir-bound substrates. The presented approach can be successfully applied not only to isotopically enriched substrates but also to compounds with natural abundance of the to-be-hyperpolarized heteronuclei.
Memory deficits are a hallmark of many neurological and psychiatric conditions. The Rey–Osterrieth complex figure (ROCF) is the state-of-the-art assessment tool used by neuropsychologists across the globe to assess the degree of non-verbal visual memory deterioration. To obtain a score, a trained clinician inspects a patient’s ROCF drawing and quantifies deviations from the original figure. This manual procedure is time-consuming, and scores vary depending on the clinician’s experience, motivation, and tiredness. Here, we leverage novel deep learning architectures to automate the rating of memory deficits. For this, we collected more than 20k hand-drawn ROCF drawings from patients with various neurological and psychiatric disorders as well as healthy participants. Unbiased ground-truth ROCF scores were obtained from crowdsourced human intelligence. This dataset was used to train and evaluate a multihead convolutional neural network. The model is highly unbiased: it yielded predictions very close to the ground truth, with errors distributed symmetrically around zero. The neural network outperforms both online raters and clinicians. The scoring system can reliably identify and accurately score individual figure elements in previously unseen ROCF drawings, which facilitates the explainability of the AI scoring system. To ensure generalizability and clinical utility, the model performance was successfully replicated in a large independent prospective validation study that was pre-registered prior to data collection. Our AI-powered scoring system provides healthcare institutions worldwide with a digital tool to assess performance in the ROCF test from hand-drawn images objectively, reliably, and time-efficiently.
Light profoundly impacts many aspects of human physiology and behaviour, including the synchronization of the circadian clock, the production of melatonin, and cognition. These effects of light, termed the non-visual effects of light, have been primarily investigated in laboratory settings, where light intensity, spectrum, and timing can be carefully controlled to draw associations with physiological outcomes of interest. Recently, the increasing availability of wearable light loggers has opened the possibility of studying personal light exposure in free-living conditions, where people engage in activities of daily living, yielding findings that associate aspects of light exposure with health outcomes and support the importance of adequate light exposure at appropriate times for human health. However, comprehensive protocols capturing the environmental factors (e.g., geographical location, season, climate, photoperiod) and individual factors (e.g., culture, personal habits, behaviour, commute type, profession) contributing to the measured light exposure are currently lacking. Here, we present a protocol that combines smartphone-based experience sampling (implementing Karolinska Sleepiness Scale (KSS) ratings) with high-quality light-exposure data collection at three body sites (spectacle-mounted near the corneal plane between the two eyes, a neck-worn pendant/badge, and a wrist-worn watch-like device) to capture daily factors related to individuals’ light exposure. We will implement the protocol in an international multi-centre study to investigate the environmental and socio-cultural factors influencing light-exposure patterns in Germany, Ghana, the Netherlands, Spain, Sweden, and Turkey (minimum n = 15 and target n = 30 per site; minimum n = 90 and target n = 180 across all sites). With the resulting dataset, lifestyle and context-specific factors that contribute to healthy light exposure will be identified.
This information is essential in designing effective public health interventions.
Identifying goal-relevant features in novel environments is a central challenge for efficient behaviour. We asked whether humans address this challenge by relying on prior knowledge about common properties of reward-predicting features. One such property is the rate of change of features, given that behaviourally relevant processes tend to change on a slower timescale than noise. Hence, we asked whether humans are biased to learn more when task-relevant features are slow rather than fast. To test this idea, 295 human participants were asked to learn the rewards of two-dimensional bandits when either a slowly or a quickly changing feature of the bandit predicted reward. Across two experiments and one preregistered replication, participants accrued more reward when a bandit’s relevant feature changed slowly and its irrelevant feature quickly, as compared to the opposite. We did not find a difference between conditions in the ability to generalise to unseen feature values. Testing how feature speed could affect learning with a set of four function-approximation Kalman filter models revealed that participants had a higher learning rate for the slow feature and adjusted their learning to both the relevance and the speed of feature changes. The larger the improvement in participants’ performance for slow compared with fast bandits, the more strongly they adjusted their learning rates. These results provide evidence that human reinforcement learning favours slower features, suggesting a bias in how humans approach reward learning.
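The Kalman-filter framing makes "learning rate" concrete: the asymptotic Kalman gain is the learning rate, and it is set by how quickly the tracked reward weight drifts (process noise) relative to observation noise. The scalar sketch below uses hypothetical noise values and is not one of the study's four models; it shows how the steady-state gain depends on drift speed, the quantity participants apparently adjusted for.

```python
def steady_state_gain(q, r, n_iter=1000):
    """Iterate the scalar Kalman variance update until the gain converges.

    q: process-noise variance (how fast the true weight drifts)
    r: observation-noise variance
    """
    p = 1.0  # prior variance (initial uncertainty)
    k = 0.0
    for _ in range(n_iter):
        p_pred = p + q              # predict: uncertainty grows with drift
        k = p_pred / (p_pred + r)   # update: Kalman gain = learning rate
        p = (1 - k) * p_pred        # posterior variance
    return k

slow_gain = steady_state_gain(q=0.01, r=1.0)   # slowly drifting feature
fast_gain = steady_state_gain(q=0.5, r=1.0)    # quickly drifting feature
print(slow_gain, fast_gain)
```

A normative learner thus settles on a larger gain for the faster-changing feature; against this baseline, the participants' higher learning rate for the slow feature reflects a prior favouring slowness rather than an optimal speed adjustment.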
We introduce an open-source Python package for the analysis of large-scale electrophysiological data, named SyNCoPy (Systems Neuroscience Computing in Python). The package includes signal-processing analyses across the time (e.g., time-lock analysis), frequency (e.g., power spectrum), and connectivity (e.g., coherence) domains. It enables user-friendly data analysis on both laptops and high-performance computing systems. SyNCoPy is designed around trial-parallel workflows (parallel processing of trials), making it an ideal tool for large-scale analysis of electrophysiological data. This trial-parallel design, combined with innovative out-of-core computation techniques, allows the software to handle datasets that do not fit in memory. It also provides seamless interoperability with other standard software packages through a range of file-format importers and exporters and open file formats. The naming of the user functions closely follows the well-established FieldTrip framework, an open-source MATLAB toolbox for advanced analysis of electrophysiological data.
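The trial-parallel pattern is easy to sketch with the standard library alone. The code below is a generic illustration of the concept, not SyNCoPy's actual API: each trial is an independent unit of work, so a per-trial analysis (here a naive single-bin DFT power estimate) can be mapped over trials concurrently and the results concatenated.

```python
from concurrent.futures import ThreadPoolExecutor
import cmath
import math

def power_at(trial, freq_bin):
    """Naive DFT power of a single trial at one frequency bin."""
    n = len(trial)
    coef = sum(x * cmath.exp(-2j * math.pi * freq_bin * t / n)
               for t, x in enumerate(trial))
    return abs(coef) ** 2 / n

def trial_parallel_power(trials, freq_bin, max_workers=4):
    """Map a per-trial analysis over all trials concurrently.

    Trials are statistically independent units, so they can be processed
    in parallel (threads here; separate processes or HPC jobs in general)
    and the per-trial results simply concatenated afterwards.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(lambda tr: power_at(tr, freq_bin), trials))
```

Because nothing is shared between trials, the same mapping scales from a laptop thread pool to a compute cluster, which is the essence of the trial-parallel workflows described above.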
Cortical neurons are versatile and efficient coding units that develop strong preferences for specific stimulus characteristics. The sharpness of tuning and coding efficiency is hypothesized to be controlled by delicately balanced excitation and inhibition. These observations suggest a need for detailed co-tuning of excitatory and inhibitory populations. Theoretical studies have demonstrated that a combination of plasticity rules can lead to the emergence of excitation/inhibition (E/I) co-tuning in neurons driven by independent, low-noise signals. However, cortical signals are typically noisy and originate from highly recurrent networks, generating correlations in the inputs. This raises questions about the ability of plasticity mechanisms to self-organize co-tuned connectivity in neurons receiving noisy, correlated inputs. Here, we study the emergence of input selectivity and weight co-tuning in a neuron receiving input from a recurrent network via plastic feedforward connections. We demonstrate that while strong noise levels destroy the emergence of co-tuning in the readout neuron, introducing specific structures in the non-plastic pre-synaptic connectivity can re-establish it by generating a favourable correlation structure in the population activity. We further show that structured recurrent connectivity can impact the statistics in fully plastic recurrent networks, driving the formation of co-tuning in neurons that do not receive direct input from other areas. Our findings indicate that the network dynamics created by simple, biologically plausible structural connectivity patterns can enhance the ability of synaptic plasticity to learn input-output relationships in higher brain areas.
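The interplay between presynaptic correlation structure and feedforward plasticity can be seen in a minimal model. The sketch below does not use the plasticity rules of the study; it uses Oja's rule as a simple stand-in to show that a plastic readout's weights are steered toward correlated presynaptic channels, i.e., that the correlation structure of the population activity determines what the readout becomes selective to.

```python
import random

def make_input(shared_std=1.0, noise_std=0.3, indep_std=0.5):
    """Three presynaptic channels: the first two share a common signal
    (a correlated pair), the third is independent noise."""
    s = random.gauss(0.0, shared_std)
    return [s + random.gauss(0.0, noise_std),
            s + random.gauss(0.0, noise_std),
            random.gauss(0.0, indep_std)]

def oja_readout(n_steps=5000, lr=0.01):
    """Train the feedforward weights of a linear readout with Oja's rule.

    The weights converge toward the leading eigenvector of the input
    covariance, so they concentrate on the correlated channel pair.
    """
    w = [random.gauss(0.0, 0.1) for _ in range(3)]
    for _ in range(n_steps):
        x = make_input()
        y = sum(wi * xi for wi, xi in zip(w, x))             # readout activity
        w = [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]  # Oja update
    return w
```

After training, the weights onto the correlated pair dominate while the weight onto the independent channel stays near zero, illustrating how engineered presynaptic correlations can make (or, if noise dominates, break) the selectivity that plasticity learns.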
Technological advances in fMRI including ultra-high magnetic fields (≥ 7 T) and acquisition methods that increase spatial specificity have paved the way for studies of the human cortex at the scale of layers and columns. This mesoscopic scale promises an improved mechanistic understanding of human cortical function so far only accessible to invasive animal neurophysiology. In recent years, an increasing number of studies have applied such methods to better understand the cortical function in perception and cognition. This future perspective article asks whether closed-loop fMRI studies could equally benefit from these methods to achieve layer and columnar specificity. We outline potential applications and discuss the conceptual and concrete challenges, including data acquisition and volitional control of mesoscopic brain activity. We anticipate an important role of fMRI with mesoscopic resolution for closed-loop fMRI and neurofeedback, yielding new insights into brain function and potentially clinical applications. This article is part of the theme issue ‘Neurofeedback: new territories and neurocognitive mechanisms of endogenous neuromodulation’.
The thalamus has a key role in mediating cortical-subcortical interactions but is often neglected in neuroimaging studies, which mostly focus on changes in cortical structure and activity. One of the main reasons for the thalamus being overlooked is that the delineation of individual thalamic nuclei via neuroimaging remains controversial. Indeed, neuroimaging atlases vary substantially regarding which thalamic nuclei are included and how their delineations were established. Here, we review current and emerging methods for thalamic nuclei segmentation in neuroimaging data and consider the limitations of existing techniques in terms of their research and clinical applicability. We address these challenges by proposing a roadmap to improve thalamic nuclei segmentation in human neuroimaging and, in turn, harmonize research approaches and advance clinical applications. We believe that a collective effort is required to achieve this. We hope that this will ultimately lead to the thalamic nuclei being regarded as key brain regions in their own right and not (as often currently assumed) as simply a gateway between cortical and subcortical regions.
Generalization, defined as applying limited experiences to novel situations, represents a cornerstone of human intelligence. Our review traces the evolution and continuity of psychological theories of generalization, from its origins in concept learning (categorizing stimuli) and function learning (learning continuous input-output relationships) to domains such as reinforcement learning and latent structure learning. Historically, there have been fierce debates between approaches based on rule-based mechanisms, which rely on explicit hypotheses about environmental structure, and approaches based on similarity-based mechanisms, which leverage comparisons to prior instances. Each approach has unique advantages: Rules support rapid knowledge transfer, while similarity is computationally simple and flexible. Today, these debates have culminated in the development of hybrid models grounded in Bayesian principles, effectively marrying the precision of rules with the flexibility of similarity. The ongoing success of hybrid models not only bridges past dichotomies but also underscores the importance of integrating both rules and similarity for a comprehensive understanding of human generalization.
143 members
Kuno Kirschfeld
  • Max Planck Institut für Biologische Kybernetik
Vahid S. Bokharaie
  • Division of Neurophysiology of Cognitive Processes
Wolfgang Grodd
  • Department of High-Field Magnetic Resonance
Gabriele Lohmann
  • Department of High-Field Magnetic Resonance
Aenne A. Brielmann
  • Computational Neuroscience
Information
Address
Tübingen, Germany