The Human Brain Project – Chances and Challenges for Cognitive Systems
Benjamin Weyers*, Christian Nowke*, Claudia Hänel*,
Daniel Zielasko*, Bernd Hentschel*, Torsten Kuhlen*
*Virtual Reality Group, RWTH Aachen, Seffenter Weg 23, 52074 Aachen
Germany (Tel: +49-241-80 24920; e-mail: {weyers, nowke, haenel, zielasko, hentschel, kuhlen}@vr.rwth-aachen.de).
Abstract: The Human Brain Project (HBP) is one of the largest scientific initiatives worldwide dedicated to research on the human brain. Over 80 research groups from a broad variety of scientific areas, such as neuroscience, simulation science, high performance computing, robotics, and visualization, work together in this European research initiative. This paper identifies chances and challenges for cognitive systems engineering resulting from the HBP research activities. Besides the main goal of the HBP, gathering deeper insights into the structure and function of the human brain, cognitive systems research can directly benefit from the creation of cognitive architectures, the simulation of neural networks, and the application of these in the context of (neuro-)robotics. Nevertheless, challenges arise regarding the utilization and transformation of these research results for cognitive systems, which are discussed in this paper. Visualization techniques that help to understand and gain insights into complex data are one necessary class of tools to cope with these challenges. Therefore, this paper presents a set of visualization techniques developed by the Virtual Reality Group at RWTH Aachen University.
1. INTRODUCTION
The main goal of the Human Brain Project (HBP) is “to build
a completely new ICT [(Information and Communication
Technology)] infrastructure for neuroscience, and for brain-
related research in medicine and computing, catalyzing a
global effort to understand the human brain and its diseases
and ultimately to emulate its computational capabilities” (The
HBP, 2013, p. 3). To reach this ambitious goal, around 80 European research groups contribute to the project, working together in various subprojects. They face the challenges of gathering, handling, and making available vast amounts of data, and of applying analysis tools to them. Furthermore, researchers are challenged by the generation and simulation of brain models, and finally by bringing results to a variety of applications in medicine and brain research. The highly
interdisciplinary mixture of researchers from neuroscience,
computer science, physics, medicine, and philosophy offers a
great chance to merge expert knowledge of different
disciplines for brain research.
In the context of these goals and efforts in the HBP, various chances can be identified regarding cognitive systems research. The list below gives a brief overview of possible chances. It should be noted that this work does not intend to identify possible gaps in the research field of cognitive systems and, therefore, will not clarify how such gaps could be filled by means of results emerging from the HBP.
Understanding the human brain: One central
objective of the HBP is to significantly deepen the
understanding of the human brain. Cognitive
systems research can directly benefit from insights
into the structure and function of the human brain by
applying these to extend and refine concepts used
for the creation and the implementation of intelligent
system components and cognitive controlling
mechanisms.
Brain simulation: A variety of simulation approaches and algorithms exists for simulating neural structures and functions, e.g., implemented in the NEST (Diesmann et al., 2001) or Neuron (Markram, 2006) codes. The main objective of these approaches is to simulate the neural structure and function of small regions of an animal or human brain to gain deeper insights into its emergent behavior (a minimal simulation sketch is given after this list). Another application of these simulations can be found in cognitive system implementations, as discussed in the next item.
Application to technical systems: In an individual subproject of the HBP, it is planned to use simulated neural networks as an "intelligent" component of robotic systems by sending sensory stimuli to the network simulation and redirecting its output to the actuators of the robot. This research has a high potential to generate results that can directly be applied to cognitive systems in general.
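To make the brain-simulation chance more concrete, the following minimal sketch shows how a small spiking network can be set up and queried through the Python interface of the NEST simulator (Diesmann et al., 2001). Model and function names follow the PyNEST 2.x API; the exact call signatures differ between NEST versions, and the network itself is purely illustrative.

```python
# Minimal PyNEST sketch: simulate a small spiking network and read out spike data.
# Assumes the NEST 2.x Python API; model names and signatures vary across versions.
import nest

nest.ResetKernel()

# 100 leaky integrate-and-fire neurons driven by a Poisson "stimulus".
neurons = nest.Create("iaf_psc_alpha", 100)
stimulus = nest.Create("poisson_generator", params={"rate": 8000.0})
recorder = nest.Create("spike_detector")

nest.Connect(stimulus, neurons, syn_spec={"weight": 10.0})
nest.Connect(neurons, recorder)

nest.Simulate(1000.0)  # simulate 1000 ms of biological time

# The result is spike data: which neuron fired at which point in time.
events = nest.GetStatus(recorder, "events")[0]
print(len(events["times"]), "spikes recorded")
```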
To make these findings accessible to cognitive systems engineering, the results coming from the HBP have to be transformed and integrated into existing concepts, tools, and implementations used in this discipline. Only by coping with this major challenge can research results from the HBP be successfully used in cognitive systems engineering and leverage the high potential of brain research in this context. This work discusses these issues in more detail and tries to identify requirements and concepts offering solutions to this integration problem. In this context, visualization techniques are one of several promising tool solutions supporting this transformation and integration task. Scientific visualization approaches can reduce the complexity of understanding models and data, thereby making the results accessible to cognitive systems engineers.
This work is structured as follows. Section 2 discusses the introduced chances in more detail, focusing on further related work and taking a closer look at current research activities in the HBP. Section 3 focuses on visualization techniques and tools developed in the Virtual Reality Group at RWTH Aachen University, especially for the scientific visualization of neural experimental and simulation data. These tools are, on the one hand, an integral part of the HBP and, on the other hand, used for working with the above-mentioned data. Furthermore, these tools address the transformation and integration challenge for cognitive systems research discussed above. Section 4 summarizes the discussed aspects and derives indications for research on cognitive systems.
2. CHANCES AND CHALLENGES
The main objective of the HBP is to create a platform containing data sources and tools supporting the research process of understanding the human brain. Here, the structure and function of the human brain are two major foci of ongoing investigation. Both aspects are essential for human cognition, and possible outcomes are of interest for various related research fields such as medicine, computer science, and systems engineering. Understanding human cognition on a functional level could make it extractable for system engineering purposes and could thus directly benefit cognitive systems research.
The HBP is generally structured into 13 sub-projects (SPs), as listed in Table 1. SPs 1-3, 5, and 8 mainly concentrate on data generation and organization; SP 4 addresses the mathematical and theoretical foundations of brain research; SPs 6 and 7 focus on brain simulation and high performance computing; SP 9 develops a neuromorphic computing platform, which focuses on rebuilding neural structures in silicon-based hardware chips. SP 10 integrates research results from other SPs into a neurorobotics simulation platform, which relates this SP very closely to cognitive systems engineering. Finally, SP 11 discusses applications and SP 12 concentrates on ethics and society in the context of the HBP. SP 13 handles the overall management processes of the project.
Cognitive systems research can mainly benefit from the data acquisition and data organization sub-projects as well as from the sub-projects concentrating on brain simulation and neurorobotics. The latter sub-projects combine results from cognitive architectures, models of the human brain, and brain simulation, as cognitive systems generally do.
SPs 1 and 2 mainly concentrate on data generation and integration, resulting from experiments involving the human and mouse brain. Results from these experiments are incorporated into models of the brain describing its structure and function. A model-based abstraction of these findings is assembled in the context of the work done in SP 3, which creates cognitive architectures. This offers great chances for cognitive systems research regarding the development of new system concepts and intelligent controlling implementations.
Nevertheless, various types of data are generated and have to be integrated and interpreted to derive the targeted abstraction and research results. First of all, structural data in the form of brain atlases are produced. For instance, the JuBrain Atlas (The JUBrain Project, 2014) contains cytoarchitectonic probabilistic maps that were gained by analyzing histological sections of post-mortem brains (Zilles et al., 2002; Amunts et al., 2007). BigBrain (Amunts et al., 2013) is a recently developed tool for accessing anatomical data at a resolution of 20 micrometers, which allows for new analyses of the structural and spatial organization of the brain. Besides these atlases, brains are acquired post-mortem to gain high-resolution images with information on nerve fiber tracts. Polarized Light Imaging (PLI) is an imaging technique that enables the identification of the structure and direction of nerve fibers in brain slices (Axer et al., 2011a, b). Finally, data is derived from recording local field potentials (LFPs) in the visual cortex of monkeys while they perform visual tasks (Ito et al., 2013).
As mentioned above, this experimental and data-driven view on the HBP brings up the challenge of making these results accessible to cognitive systems research. On the one hand, visualization techniques can be used, as further discussed in Section 3 below. On the other hand, research on neuroscientific simulation comes into play. SP 6, entitled "Brain Simulation", concentrates on research and modeling of the human brain's structure and function using mathematical and algorithmic simulation models. There are various approaches to model and simulate neural structures and their functionality. Neuron (Markram, 2006) is a simulator that concentrates on a small set of neurons in more detail (on a chemical level), while the NEST simulator (Diesmann et al., 2001) focuses on very large networks by simplifying the description of single neurons. NEST concentrates on the dynamics, size, and structure of neural systems. The result of a NEST simulation is spike data describing the firing of single neurons at specific points in time. The main challenge for cognitive systems research here is to integrate these approaches into cognitive systems.
Tab. 1: Sub-projects of the Human Brain Project
SP1 Strategic Mouse Brain Data
SP2 Strategic Human Brain Data
SP3 Cognitive Architectures
SP4 Mathematical and Theoretical Foundations of Brain
Research
SP5 Neuroinformatics
SP6 Brain Simulation
SP7 High Performance Computing
SP8 Medical Informatics
SP9 Neuromorphic Computing
SP10 Neurorobotics
SP11 Applications
SP12 Ethics and Society
SP13 Management
Fig. 1: Left: A researcher is inspecting a simulation run in a CAVE. While interacting with pie menus the user can control the
visualization. Right: The user is studying the dynamics of information exchange between brain regions.
Nevertheless, SP 10 develops approaches for using these simulation techniques in the context of technical systems.
Thus, SP 10 is one example of a direct consumer of results from the HBP's research efforts. The main goal of SP 10 is to develop a simulation of robots in a virtual environment that is based on simplified brain models resulting from HBP research. Besides the function and structure of human brains, cognitive architectures, as modeled and developed in SP 3, entitled "Cognitive Architectures", also play a central role. Thus, the combination of simulated neurorobots, simplified brain models, and cognitive architectures can be of great benefit for cognitive systems research. Nevertheless, the main challenge in this context is the transfer of these results into cognitive systems research by abstracting neuroscientific research results.
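To make the intended coupling between a robot and a network simulation more tangible, the following sketch outlines such a closed sensorimotor loop on a purely conceptual level. The robot and network objects and their methods are hypothetical placeholders introduced for illustration; they do not correspond to calls of the HBP neurorobotics platform.

```python
# Illustrative closed loop between a (simulated) robot and a spiking-network
# simulation, in the spirit of SP 10. The robot/network interfaces are hypothetical
# placeholders, not HBP platform calls.
def encode_as_rates(image):
    # Toy encoding: mean brightness of the left/right image half -> two input rates.
    half = len(image[0]) // 2
    left = sum(sum(row[:half]) for row in image)
    right = sum(sum(row[half:]) for row in image)
    return [left, right]

def decode_rates(output_rates):
    # Toy decoding: two output populations drive the two wheel speeds directly.
    return output_rates[0], output_rates[1]

def control_step(robot, network, dt_ms=20.0):
    image = robot.read_camera()                      # 1. sense
    network.set_input_rates(encode_as_rates(image))
    network.simulate(dt_ms)                          # 2. advance the neural simulation
    left, right = decode_rates(network.output_rates())
    robot.set_wheel_speeds(left, right)              # 3. act
```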
As one key technology for enabling cognitive systems researchers to integrate knowledge from experimental brain research, models, and techniques, the following section focuses on visualization concepts and tools developed by the Virtual Reality Group at RWTH Aachen University. These concepts and tools are developed to support the neuroscientific work undertaken in the HBP as well as to enable scientists to gain insights into the complex models and data produced.
3. VISUALIZATION AND INTERACTIVE
SUPERCOMPUTING
The Virtual Reality Group at RWTH Aachen University is part of the HBP and cooperates closely with Forschungszentrum Jülich on the topics of scientific visualization and interactive supercomputing. The central goal of our research in the context of the HBP is to develop highly scalable scientific visualizations, as demonstrated in the VisNEST prototype (Nowke et al., 2013), described in more detail in Section 3.1, in volume rendering of degenerative brain data (Hänel et al., 2014), in sophisticated graph visualization applications using edge bundling, and in multi-view and multi-device scenarios offering scientific provenance tracking for neuroscientists. Volume rendering refers to a class of visualization algorithms that mainly address the visualization of 3D volumetric data generated by, e.g., MRI or CT imaging techniques. Section 3.2 discusses a specific use case for volume rendering, which shows the relevance of these techniques in the context of brain research.
Graph visualization methods are a means to generate visual representations of relational data, e.g., the connectivity of brain areas. In most cases, graphs are visualized as node-link diagrams, where nodes are visually represented as dots and links as lines connecting these dots. In brain research, spatial structures are of interest, which makes 3D graph visualization relevant in many cases. Section 3.4 points out this aspect in more detail. Finally, scientific provenance tracking aims at making data generation and processing persistent and thereby reproducible.
We are targeting highly immersive Virtual Reality and high-resolution display settings as possible application and production environments. As mentioned above, the visualization as well as the interpretation of 3D spatial data is of central interest in brain research. Immersive Virtual Reality techniques offer 3D stereoscopic rendering capabilities as well as a variety of interaction methods and tools to interact with these visualizations. This offers a different quality of scientific work with data than is possible with standard workstation equipment. Section 3.3 introduces a tool for the visualization of high-resolution data. For this use case, high-resolution visualization infrastructure is of main interest.
Our technologies are planned to be part of a concept for interactive supercomputing, an essential part of the HBP. Interactive supercomputing addresses the problem of data transfer costs and the steering of simulations on high performance computing infrastructure. The overall goal is to move visualization to the data generation process during simulation runtime and thereby enable scientists to steer these simulations, which is impossible with current system architectures.
Fig. 2: Left: The semi-transparent anatomy is combined in one volume rendering with degenerative data (red to yellow) and the
premotor cortex (blue). The anatomical section additionally supports the spatial orientation. Right: A view-dependent
cutout is created around the premotor cortex to allow for a detailed examination of surrounding structures.
Finally, all tools and visualization concepts developed in the Virtual Reality Group directly contribute to the previously discussed research topics and will likely be a key enabler for neuroscientific research. The following subsections give a deeper insight into the current developments in the group and show their contributions to the research efforts in the HBP.
3.1 VisNEST
The simulation of large spiking neural network models generates an unprecedented amount of data that has to be analyzed to understand the dynamics and properties of these networks. While most researchers visualize their data in a non-interactive fashion, sophisticated visualization systems tailored to quickly verify or reject a specific hypothesis on the simulation data can greatly reduce the time a researcher spends studying the simulated neural network. In addition, data exploration often involves constructing a mental model of the simulation to identify features of interest. To this end, we developed VisNEST, a visualization tool specifically built to visualize simulation results from a model of the visual cortex of a macaque monkey. One driving challenge behind this work is the integration of macroscopic data at the level of brain regions with microscopic simulation results such as the spiking behavior of a single neuron. VisNEST offers four distinct views to inspect this simulation data. Each view highlights certain aspects of the simulation.
The first view is designed to give neuroscientists a first impression of the entire simulation run. To do so, we render brain regions of the visual cortex and map their respective neural activity by color coding each region (cf. Fig. 1, left). We define neural activity as the mean firing rate of all neurons inside a region. Interaction with the system is primarily mapped to pie menus, besides some control elements like the time slider, which allows for browsing through simulation time. This approach allows for scaling the visualizer from desktop workstations to CAVE-like environments, since interaction with the system stays consistent.
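As an illustration of this first view, the following sketch computes the mean firing rate per brain region from raw spike data and maps it to a color. The data layout (a list of (neuron id, spike time) pairs and a neuron-to-region assignment) is assumed for illustration and is not the actual VisNEST data format.

```python
# Sketch: per-region mean firing rate from spike data, mapped to a color ramp.
# The input format (spike list, neuron-to-region map) is assumed for illustration.
from collections import defaultdict

def region_activity(spikes, neuron_region, neurons_per_region, t_start, t_end):
    """spikes: iterable of (neuron_id, spike_time_ms) pairs."""
    counts = defaultdict(int)
    for neuron_id, t in spikes:
        if t_start <= t < t_end:
            counts[neuron_region[neuron_id]] += 1
    duration_s = (t_end - t_start) / 1000.0
    return {region: counts[region] / (n * duration_s)   # spikes per second per neuron
            for region, n in neurons_per_region.items()}

def rate_to_color(rate, max_rate):
    """Map a firing rate to a simple blue-to-red color ramp (RGB in [0, 1])."""
    x = min(rate / max_rate, 1.0) if max_rate > 0 else 0.0
    return (x, 0.0, 1.0 - x)
```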
A CAVE is a computer-generated immersive virtual environment in which a user can interact with her natural senses. Immersion is achieved through stereopsis: separate images are rendered from the perspective of each eye, and a filter process ensures that each eye perceives only its corresponding image, thus creating a sensation of depth. One important aspect offered by CAVEs is the possibility to use natural interaction, hence lowering the mental load of the user and permitting her to concentrate more on the brain simulation data set in the context of VisNEST. In a CAVE, a user can point at a brain region of interest using a tracked input device. The system then automatically displays the corresponding activity as a function plot over time, shows the raster plot of firing neurons, and depicts the individual activity of populations. Each brain region is composed of populations, each of which defines the set of neurons that constitutes it.
The second view depicts the hierarchy used for the simulation of brain regions and shows the connectivity within populations. In this view, researchers can see how activity cascades and spreads along the simulated brain regions. The third view provides an inspection of the connectivity of brain regions. To this end, a fixed graph node layout is used in which connections are depicted by arrows. The thickness of an arrow conveys the connection strength between regions. The last view offers a visualization design to depict the dynamics of neural activity projected onto brain regions (cf. Fig. 1, right). This approach can help to obtain an impression of how information is exchanged between regions. Conceptually, this view is similar to the previous one, but it encodes the dynamics of information exchange in the arrow thickness over time.
The visualization application is being developed in close collaboration with domain scientists. This approach requires constant re-evaluation of the current system to assure its usability and its integration into the current workflow of neuroscientists. In the near future, we will face the challenge of directly linking the visualizer to the above-discussed neural simulator NEST in order to steer the simulation and directly assess the impact on the network. In addition, we will need to store a variety of data modalities, be it raw spike data, derived statistical quantities like information exchange, or geometry data. A unifying middleware for storing and accessing this data while considering latency constraints is required and is an aspect of future work.
3.2 Volume rendering of combined data sets
The JuBrain Atlas (The JUBrain Project, 2014) contains probabilistic maps of cytoarchitectonic regions that can be registered onto other acquired brains. Thus, they are also used for analyzing the progression of a neurodegenerative disease called corticobasal syndrome. Based on this mapping, it can be statistically analyzed to what extent a certain brain area is affected. However, this data is commonly visualized by rendering it as 2D sections for gaining deeper insights into it. This reduction of three-dimensional data to two dimensions affects the spatial perception of the extent and location of the degeneration.
Therefore, we implemented two approaches for a 3D visualization of anatomy, degeneration data, and brain regions to overcome various problems arising from the classic 2D visualization. On the one hand, the degeneration does not happen only on the surface of the brain, so a surface visualization is not sufficient. On the other hand, the surface contours are necessary for a better spatial orientation of the user who interprets the data.
The first visualization design combines a common 2D anatomical section with the volume-rendered data sets (see Fig. 2, left). The anatomy is integrated as a semi-transparent volume and the degeneration data are opaque. The opacity of the brain areas can be adjusted to give an individually blended view onto degeneration that may be located inside the visualized brain. This view can be controlled by pie menus, as discussed above.
In comparison to the first design, the second one allows for a more detailed inspection. Here, one or multiple brain areas are determined as the volume of interest (VOI), which is supposed to be visible from all points of view. Therefore, a conical cutout is created around the VOI that always stays aligned to the viewer (see Fig. 2, right). In this design, the anatomy is opaque and thus is the part to be clipped. On the cutting surface, anatomical data is shown to support the estimation of spatial orientation and depth. The user can influence the rendering of the degeneration data in the cutout. Enlarging or reducing the cutout, which is shaped like the VOI, allows the user to set the desired amount of context information.
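The view-dependent cutout can be thought of as a clipping test per volume sample: a sample of the opaque anatomy is discarded when it lies inside a cone that opens from the viewer towards the VOI. The following sketch illustrates such a test; the cone parameterization and the per-sample evaluation are simplified assumptions, not the actual implementation.

```python
# Sketch of the view-aligned conical cutout: clip an anatomy sample if it lies
# inside the cone spanned from the eye towards the VOI center. Simplified.
import numpy as np

def inside_cutout_cone(sample, eye, voi_center, half_angle_deg):
    axis = voi_center - eye
    axis = axis / np.linalg.norm(axis)
    to_sample = sample - eye
    dist = np.linalg.norm(to_sample)
    if dist == 0.0:
        return True
    cos_angle = np.dot(to_sample / dist, axis)
    return cos_angle >= np.cos(np.radians(half_angle_deg))

# During ray casting, anatomy samples for which inside_cutout_cone(...) is True
# would be skipped, so the VOI behind them stays visible from any viewpoint.
```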
The application combining these two visualization designs is available for desktop and immersive setups. As shown by Laha et al. (2012), depth perception and estimation in volume data are improved in immersive setups, which in our case facilitates the differentiation of the combined data sets.
3.3 PLI visualization
Polarized light imaging is a recently developed technique for the acquisition of fiber tracts at a very high resolution (Axer et al., 2011a, b). Nerve fibers are surrounded by a myelin sheath that exhibits uniaxial birefringence.
Fig. 4: Node-link diagram of an almost fully connected, bidirectional graph originating from a simulation of point neurons. The image depicts 32 nodes, each of which represents a brain region. The edges represent the regions' interconnectivity; their weights are not considered here. Left: original graph shown with 27 color-coded clusters consisting of similar edges; black edges are unclustered, i.e., not similar to any other. Right: the same graph after edge bundling; the edges are directed from blue to red.
When thin brain slices are illuminated with polarized light, this property evokes an optical anisotropy that corresponds to the spatial orientation of the fibers. PLI images have a resolution of down to 2 micrometers and thus allow for a data analysis at nearly the level of single nerve fibers, as these have a diameter of about 0.1-22 micrometers. With techniques like diffusion tensor imaging, only much larger fiber tracts can be approximated. The high resolution of PLI data facilitates gaining new knowledge about the spatial distribution of fibers.
However, an interactive visualization of data with this resolution is challenging. In addition to the 3D reconstructed PLI image stack, an anatomical data set at a lower resolution is available, leading to an overall data size of about 1 terabyte or more. Therefore, advanced memory and visualization techniques need to be applied, as, for example, introduced by Fogal et al. (2013) and Hadwiger et al. (2013). Here, the volume is divided into bricks, and due to a virtual memory hierarchy only the visible parts of the volume, at multiple resolutions, are loaded onto the GPU, which can then display them at a high frame rate. If the user zooms out or rotates the volume and not all needed data blocks are already loaded onto the GPU, a request is sent to the data storage. This limits the input and output data stream to a minimum, which is known to be a bottleneck of such implementations.
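The brick-based virtual-memory scheme described above can be summarized as a cache of volume bricks on the GPU: visible bricks are looked up, misses are queued as requests to the data storage, and least recently used bricks are evicted. The following sketch illustrates that bookkeeping; brick identifiers and the request mechanism are illustrative assumptions, not the interface of the cited systems.

```python
# Sketch of an LRU brick cache for out-of-core volume rendering.
# Brick IDs and the request queue are illustrative, not the cited systems' API.
from collections import OrderedDict

class BrickCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.resident = OrderedDict()   # brick_id -> GPU data handle
        self.requests = []              # cache misses to be fetched asynchronously

    def lookup(self, brick_id):
        if brick_id in self.resident:
            self.resident.move_to_end(brick_id)   # mark as recently used
            return self.resident[brick_id]
        self.requests.append(brick_id)            # fetch later; render a coarser level now
        return None

    def insert(self, brick_id, gpu_handle):
        if len(self.resident) >= self.capacity:
            self.resident.popitem(last=False)     # evict least recently used brick
        self.resident[brick_id] = gpu_handle
```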
For PLI data, we want to further improve these approaches with special consideration of displaying the data in a virtual environment. For a stronger feeling of immersion in such an environment, head tracking in combination with a user-centered projection is used. This leads to a permanent update of the view position and therefore to missing data blocks on the GPU. It needs to be explored how this affects the frame rate and the fluency of interaction. In addition, switching between different levels of detail may also lead to so-called cybersickness.
A first visualization is shown in Fig. 3. Here, mipmaps of one brain section are pre-calculated to allow for an interactive visualization. The fiber direction is color-coded in the HSV color space, while the orientation of the color space can be adjusted to highlight fibers.
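The HSV color coding of fiber directions can be sketched as follows: the in-plane direction angle is mapped to hue and the inclination to value. The concrete mapping used in the prototype is not specified here; the sketch below is one plausible variant under that assumption.

```python
# Sketch: map a fiber orientation (in-plane direction, inclination) to an RGB color
# via HSV. The prototype's concrete mapping may differ; this is one plausible variant.
import colorsys

def fiber_color(direction_deg, inclination_deg):
    hue = (direction_deg % 180.0) / 180.0        # in-plane direction -> hue
    value = 1.0 - abs(inclination_deg) / 90.0    # steep (out-of-plane) fibers appear darker
    return colorsys.hsv_to_rgb(hue, 1.0, value)  # (r, g, b) in [0, 1]
```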
3.4 Interactive 3D graph visualization
Graphs have turned out to be an important data structure in computer-supported brain research, as can also be seen in the VisNEST application described above. Networks of neurons can be identified on the microscopic scale, and the linking of whole brain regions on the macroscopic scale. Furthermore, graphs are an essential tool for visual analytics, for example in correlation analysis.
We are concentrating on the visualization of graphs in node-link representation because of their intuitiveness, and we address the challenge of reducing visual clutter. While 2D graph visualization is very common and represents a broad field of past and current research (von Landesberger et al., 2011), 3D graph visualization is not covered as widely. Still, adding the third dimension to graph visualization approaches is especially interesting for spatial data like the interaction of cortical regions in a human or animal brain. Additionally, the exploration and analysis of non-spatial graphs can benefit from 3D layout and visualization, in particular in an immersive environment (Ware et al., 2008). Especially the latter allows working interactively on larger graphs while keeping the same precision, respectively error rate, of interaction in typical tasks, such as following paths in a given graph (see Fig. 4).
As mentioned above, drawing larger graphs as node-link diagrams quickly leads to heavy visual clutter caused by edge crossings. This problem is particularly relevant for dense graphs as formed by neurons. Edge bundling is one technique tackling this problem by partially drawing nearby edges as a single one. To decide which edges are bundled and which are not, various metrics are applied that differ in their results, for instance in the ratio of information loss to clutter reduction (cf. Luo et al., 2012; Holten and van Wijk, 2009). One of our goals is to extend this technique to 3D and, in addition, to let the user influence the edge layout process as well as interactively change the underlying metrics.
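Force-directed edge bundling, as introduced by Holten and van Wijk (2009), subdivides each edge into control points and iteratively pulls control points of compatible edges towards each other. The 3D sketch below shows the core update step in a strongly simplified form (no compatibility metric, fixed step sizes); it is an illustration, not the algorithm we deploy.

```python
# Strongly simplified 3D edge-bundling step: control points of (here: all) edges
# attract each other, and a spring force keeps each edge close to its own polyline.
# No compatibility metric is used; parameters are illustrative.
import numpy as np

def bundle_step(edges, k_spring=0.1, k_attract=0.02):
    """edges: list of (n_points, 3) arrays of control points; endpoints stay fixed."""
    new_edges = []
    for e_idx, pts in enumerate(edges):
        updated = pts.copy()
        for i in range(1, len(pts) - 1):
            spring = k_spring * (pts[i - 1] + pts[i + 1] - 2.0 * pts[i])
            attract = np.zeros(3)
            for o_idx, other in enumerate(edges):
                if o_idx != e_idx and len(other) == len(pts):
                    attract += other[i] - pts[i]
            updated[i] = pts[i] + spring + k_attract * attract / max(len(edges) - 1, 1)
        new_edges.append(updated)
    return new_edges
```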
A second method used to reduce the visual clutter of node-link diagrams is rearranging vertices to reduce edge crossings. Many existing approaches use a system of spring forces for this purpose (Fruchterman and Reingold, 1991; Walshaw, 2003). Nevertheless, spatial data visualized as a node-link diagram is excluded from these approaches, because the vertex positions are inherent in the data and must not be changed: the moment vertex positions are changed, the visualization no longer represents the data correctly. Similar to the edge bundling approach, we want to involve the user in this process by allowing her to interactively manipulate the resulting graph layout by means of a continuous force equilibrium. Thus, this algorithm differs from the classic algorithms, which use a cooling factor for stabilization and termination. In this approach, it is much more challenging to stabilize the continuous computation with respect to overreacting forces. Nevertheless, for interactive graph visualization, this approach shows better runtime behavior because a user-initiated change does not cause a full re-computation of the graph layout.
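The continuous force equilibrium differs from classic force-directed layout mainly in that forces are integrated every frame without a cooling schedule, so damping has to keep the system from oscillating. A minimal per-frame update along these lines could look as follows; the 3D positions, simple repulsion and spring attraction, and all parameters are illustrative assumptions rather than our actual implementation.

```python
# Minimal continuous force-directed update: pairwise repulsion, spring attraction
# along edges, and velocity damping instead of a cooling schedule.
import numpy as np

def layout_step(pos, vel, edges, dt=0.016, k_rep=0.5, k_spring=0.1, damping=0.85):
    """pos, vel: (n, 3) arrays; edges: list of (i, j) index pairs."""
    n = len(pos)
    force = np.zeros_like(pos)
    for i in range(n):                      # pairwise repulsion
        diff = pos[i] - pos
        dist2 = np.sum(diff * diff, axis=1) + 1e-6
        force[i] += np.sum(k_rep * diff / dist2[:, None], axis=0)
    for i, j in edges:                      # spring attraction along edges
        d = pos[j] - pos[i]
        force[i] += k_spring * d
        force[j] -= k_spring * d
    vel = damping * (vel + dt * force)      # damping keeps the equilibrium stable
    return pos + dt * vel, vel
```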
We want to investigate clustering (Balzer and Deussen, 2007), interactive graph operations, and session management as further topics in interactive 3D graph visualization. Clustering reduces clutter, while the other approaches focus on supporting the analyst in understanding and exploring graph structures. Thus, methods of visual analytics have to be applied to the underlying data to gain further insights, which requires operations on the graph data and in turn can change its representation.
4. DISCUSSION AND FUTURE WORK
In conclusion, the work presented above shows various aspects of possible outcomes and chances for cognitive systems research as well as potential ways to gather relevant results from the HBP, e.g., by using visualization tools and techniques. Especially the work on neural simulators makes neural models accessible for cognitive systems research. Furthermore, their application in neurorobotics and the creation of cognitive architectures are candidates for directly offering results to cognitive systems research, especially due to the close relation of neurorobotics research to cognitive systems engineering.
Future work should consider the question of how to integrate results from the HBP into cognitive systems research and how cognitive systems can directly benefit from these results. Therefore, in a first step, existing research gaps in cognitive systems research should be identified. Afterwards, concepts and results from the HBP should be examined w.r.t. their use in cognitive systems, implemented, and finally transferred to relevant system architectures.
ACKNOWLEDGMENTS
The research leading to these results has received funding
from the European Union Seventh Framework Programme
(FP7/2007-2013) under grant agreement n° 604102 (HBP).
REFERENCES
Amunts, K., Lepage, C., Borgeat, L., Mohlberg, H.,
Dickscheid, T., Rousseau, M.-É., Bludau, S., Bazin, P.-
L., Lewis, L.B., Oros-Peusquens, A.-M., Shah, N.J.,
Lippert, T., Zilles, K., and Evans, A.C. (2013). BigBrain:
An Ultrahigh-Resolution 3D Human Brain Model.
Science, 340(6139), 1472–1475.
Amunts, K., Schleicher, A., and Zilles, K. (2007).
Cytoarchitecture of the cerebral cortex - more than localization. NeuroImage, 37(4), 1061–1065.
Axer, M., Amunts, K., Gräßel, D., Palm, C., Dammers, J.,
and Axer, H., (2011a). A novel approach to the human
connectome: ultra-high resolution mapping of fiber tracts
in the brain. NeuroImage, 54(2), 1091–1101.
Axer, M., Grässel, D., Kleiner, M., Dammers, J., Dickscheid,
T., Reckfort, J., Hütz, T., Eiben, B., Pietrzyk, U., Zilles,
K., and Amunts K. (2011b). High-resolution fiber tract
reconstruction in the human brain by means of three-
dimensional polarized light imaging (3d-pli). Frontiers
in Neuroinformatics, 5(34).
Balzer, M. and Deussen, O. (2007). Level-of-detail
visualization of clustered graph layouts. In: Proc. of the
6th International Asia-Pacific Symposium on
Visualization, 133–140. IEEE.
Diesmann, M. and Gewaltig, M.-O. (2001). NEST: An
environment for neural systems simulations. Forschung
und wissenschaftliches Rechnen, Beiträge zum Heinz-Billing-Preis, 58, 43–70.
Fogal, T., Schiewe, A., and Krüger, J. (2013). An Analysis of
Scalable GPU-Based Ray-Guided Volume Rendering.
In: Proc. of IEEE Symposium on Large Data Analysis
and Visualization, 43–51. IEEE.
Fruchterman, T.M.J, Reingold, E.M. (1991). Graph drawing
by force directed placement. Software: Practice and
Experience, 21(11), 1129–1164.
Hadwiger, M., Beyer, J., Jeong, W.-K., and Pfister, H.
(2013). Interactive Volume Exploration of Petascale
Microscopy Data Streams using a Visualization-Driven
Virtual Memory Approach. IEEE Transactions on
Visualization and Computer Graphics, 18(12), 2285–2294.
Hänel, C., Pieperhoff, P., Hentschel, B., Amunts, K., and
Kuhlen, T. (2014). Interactive 3D Visualization of
Structural Changes in the Brain of a Person with
Corticobasal Syndrome. Frontiers in Neuroinformatics,
in press.
Holten, D. and van Wijk, J.J. (2009). Force-Directed Edge
Bundling for Graph Visualization. Computer Graphics
Forum, 28(3), 983–990.
The Human Brain Project (2013). https://www.humanbrainproject.eu/documents/10180/17646/Vision+Document/8bb75845-8b1d-41e0-bcb9-d4de69eb6603, last accessed 11-22-2013.
Ito, J., Maldonado, P., and Grün, S. (2013). Cross-frequency
interaction of the eye-movement related LFP signals in
V1 of freely viewing monkeys. Frontiers in Systems
Neuroscience, 7(1), 1–11.
The JUBrain Project (2014). https://www.jubrain.fz-juelich.de, last accessed 02-18-2014.
Laha, B., Sensharma, K., Schiffbauer, J., and Bowman, D.
(2012). Effects of Immersion on Visual Analysis of
Volume Data. IEEE Transactions on Visualization and
Computer Graphics, 18(4), 597–606.
von Landesberger, T., Kuijper, A., Schreck, T., Kohlhammer,
J., van Wijk, J.J., Fekete, J.-D., and Fellner, D.W.
(2011). Visual Analysis of Large Graphs: State-of-the-
Art and Future Research Challenges. Computer Graphics
Forum, 30(6), 1719–1749.
Luo, S.-J., Liu, C.-L., Chen, B.-Y., and Ma, K.-L. (2012).
Ambiguity-Free Edge-Bundling for Interactive Graph
Visualization. IEEE Transactions on Visualization and Computer Graphics, 18(5), 810–821.
Markram, H. (2006). The blue brain project. Nature Reviews
Neuroscience, 7(2), 153–160.
Nowke, C., Schmidt, M., van Albada, S.J., Eppler, J.M.,
Bakker, R., Diesmann, M., Hentschel, B., and Kuhlen, T.
(2013). VisNEST - Interactive Analysis of Neural
Activity Data. In: Proc. of IEEE Symposium on
Biological Data Visualization, 65–72. IEEE.
Walshaw, C. (2003). A Multilevel Algorithm for Force-
Directed Graph-Drawing. In: Graph Drawing, 171–182.
Springer, Berlin.
Ware, C. and Mitchell, P. (2008). Visualizing Graphs in
Three Dimensions. ACM Transactions on Applied
Perception, 5(1), 1–15.
Zilles, K., Schleicher, A., Palomero-Gallagher, N., and
Amunts, K. (2002). Quantitative analysis of cyto- and receptor architecture of the human brain. Brain Mapping: The Methods, 2, 573–602.