
Luigi Iapichino
Dr., Group Lead at Leibniz Supercomputing Centre
User Enablement and Applications
About
61 Publications
8,776 Reads
927 Citations
Introduction
Leader of the LRZ Quantum Computing Team
Education
April 2002 - August 2005
September 1995 - April 2001
Publications (61)
Adaptation to climate change requires robust climate projections, yet the uncertainty in these projections performed by ensembles of Earth system models (ESMs) remains large. This is mainly due to uncertainties in the representation of subgrid-scale processes such as turbulence or convection that are partly alleviated at higher resolution. New deve...
Most heritable diseases are polygenic. To comprehend the underlying genetic architecture, it is crucial to discover the clinically relevant epistatic interactions (EIs) between genomic single nucleotide polymorphisms (SNPs) (1–3). Existing statistical computational methods for EI detection are mostly limited to pairs of SNPs due to the combinatoria...
Most heritable diseases are polygenic. To comprehend the underlying genetic architecture, it is crucial to discover the clinically relevant epistatic interactions (EIs) between genomic single nucleotide polymorphisms (SNPs). Existing statistical computational methods for EI detection are mostly limited to pairs of SNPs due to the combinatorial expl...
We present our experience with the modernization of the GR-MHD code BHAC, aimed at improving its novel hybrid (MPI+OpenMP) parallelization scheme. In doing so, we showcase the use of performance profiling tools usable on x86 (Intel-based) architectures. Our performance characterization and threading analysis provided guidance in improving the concu...
Tensor network methods are incredibly effective for simulating quantum circuits. This is due to their ability to efficiently represent and manipulate the wave-functions of large interacting quantum systems. We describe the challenges faced when scaling tensor network simulation approaches to Exascale compute platforms and introduce QuantEx, a frame...
The simulation of quantum circuits using the tensor network method is very computationally demanding and requires significant High Performance Computing (HPC) resources to find an efficient contraction order and to perform the contraction of the large tensor networks. In addition, the researchers want a workflow that is easy to customize, reproduce...
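As an illustration of the tensor-network method described above (a minimal sketch, not the actual QuantEx workflow), a quantum circuit can be simulated by representing each gate as a tensor and contracting the network in a chosen order; here with NumPy, building a Bell state from a Hadamard and a CNOT:

```python
import numpy as np

# Gate tensors: Hadamard (rank 2) and CNOT (rank 4, indices: out_c, out_t, in_c, in_t).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.eye(4).reshape(2, 2, 2, 2).copy()
CNOT[1, :, 1, :] = np.array([[0, 1], [1, 0]])  # flip target when control is 1

# Initial product state |00> as a rank-2 tensor over (qubit 0, qubit 1).
psi = np.zeros((2, 2))
psi[0, 0] = 1.0

# Contract the network. For two gates the order is trivial; for large circuits,
# finding an efficient contraction order is the computationally hard step the
# abstract refers to.
psi = np.einsum('ab,bc->ac', H, psi)       # H on qubit 0
psi = np.einsum('abcd,cd->ab', CNOT, psi)  # CNOT on qubits (0, 1)

print(psi.reshape(-1))  # Bell state: [1/sqrt(2), 0, 0, 1/sqrt(2)]
```

Each `einsum` call is a pairwise tensor contraction; production tensor-network simulators differ mainly in how they pick the order of such contractions and distribute them over HPC resources.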
Understanding the physics of turbulence is crucial for many applications, including weather, industry and astrophysics. In the interstellar medium, supersonic turbulence plays a crucial role in controlling the gas density and velocity structure, and ultimately the birth of stars. Here we present a simulation of interstellar turbulence...
We describe a novel, scalable approach for scientific visualization in HPC environments, based on the ray tracing engine Intel® OSPRay associated with VisIt. Part of the software stack of the Leibniz Supercomputing Centre, this method has been applied to the visualization of the largest simulations of interstellar turbulence ever performed, produce...
Understanding the physics of turbulence is crucial for many applications, including weather, industry, and astrophysics. In the interstellar medium (ISM), supersonic turbulence plays a crucial role in controlling the gas density and velocity structure, and ultimately the birth of stars. Here we present a simulation of interstellar turbulence with a...
We describe a novel, scalable approach for scientific visualization in HPC environments, based on the ray tracing engine Intel OSPRay associated with VisIt. Part of the software stack of the Leibniz Supercomputing Centre, this method has been applied to the visualization of the largest simulations of interstellar turbulence ever performed, produced...
The complexity of modern and upcoming computing architectures poses severe challenges for code developers and application specialists, and forces them to expose the highest possible degree of parallelism, in order to make the best use of the available hardware. The Intel® Xeon Phi™ of second generation (code-named Knights Landing, henceforth KNL) i...
The complexity of modern and upcoming computing architectures poses severe challenges for code developers and application specialists, and forces them to expose the highest possible degree of parallelism, in order to make the best use of the available hardware. The Intel® Xeon Phi™ of second generation (code-named Knights Landing, he...
As modern scientific simulations grow ever more in size and complexity, even their analysis and post-processing becomes increasingly demanding, calling for the use of HPC resources and methods. yt is a parallel, open-source post-processing Python package for numerical simulations in astrophysics, made popular by its cross-format compatibility, its...
Modern computing architectures allow for unprecedented levels of parallelization, bringing a much-needed speedup to key scientific applications, such as ever-improving numerical simulations and their increasingly demanding post-processing. We report on optimization techniques used on popular codes for computational astrophysics (FLASH and EC...
We present recent developments in the parallelization scheme of ECHO-3DHPC, an efficient astrophysical code used in the modelling of relativistic plasmas. With the help of the Intel Software Development Tools, such as the Fortran compiler with Profile-Guided Optimization (PGO), the Intel MPI Library, VTune Amplifier and Inspector, we have investigated the perfo...
Galaxy clusters are known to be the reservoirs of Cosmic Rays (CRs), mostly inferred from theoretical calculations or the detection of CR-derived observables. Though CR electrons have been detected through radio emission, CR protons and their derived gamma rays remain undetected. CR acceleration in clusters is mostly attributed to its dynamical act...
Galaxy clusters are known to be reservoirs of Cosmic Rays (CRs), as inferred from theoretical calculations or detection of CR-derived observables. CR acceleration in clusters is mostly attributed to the dynamical activity that produces shocks. Shocks in clusters emerge out of merger or accretion, but which one is more effective in producing CRs? at...
The outskirts of galaxy clusters are characterised by the interplay of gas accretion and dynamical evolution involving turbulence, shocks, magnetic fields and diffuse radio emission. The density and velocity structure of the gas in the outskirts provide an effective pressure support and affect all processes listed above. Therefore it is important t...
We describe a strategy for code modernisation of Gadget, a widely used community code for computational astrophysics. The focus of this work is on node-level performance optimisation, targeting current multi/many-core Intel architectures. We identify and isolate a sample code kernel, which is representative of a typical Smoothed Particle Hydrodynam...
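The Smoothed Particle Hydrodynamics kernel mentioned above is, at its core, a kernel-weighted sum over neighbouring particles. As a purely illustrative sketch (not Gadget's actual kernel), a density estimate written in the vectorized style that node-level optimisation targets, with a naive all-pairs neighbour search in NumPy:

```python
import numpy as np

def w_cubic_spline(q):
    """Standard cubic-spline SPH kernel, q = r/h, with 3D normalisation 8/pi."""
    sigma = 8.0 / np.pi
    w = np.where(q < 0.5,
                 1.0 - 6.0 * q**2 + 6.0 * q**3,
                 2.0 * (1.0 - q)**3)
    return sigma * np.where(q < 1.0, w, 0.0)

def sph_density(pos, mass, h):
    """Kernel-weighted density for every particle: rho_i = sum_j m_j W(|r_i - r_j|, h).

    O(N^2) all-pairs distances, vectorized over particle pairs; production codes
    like Gadget use tree-based neighbour finding instead.
    """
    r = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)  # shape (N, N)
    w = w_cubic_spline(r / h) / h**3
    return w @ mass

rng = np.random.default_rng(0)
pos = rng.random((100, 3))                       # 100 particles in a unit box
rho = sph_density(pos, np.full(100, 1.0 / 100), h=0.2)
print(rho.shape)  # (100,)
```

The broadcasting in `sph_density` trades memory for a branch-free, vectorizable inner loop, which is the kind of transformation node-level optimisation for multi/many-core architectures aims at.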
In spring 2015, the Leibniz Supercomputing Centre (Leibniz-Rechenzentrum, LRZ) installed its new Peta-Scale System SuperMUC Phase2. Selected users were invited to a 28-day extreme scale-out block operation during which they were allowed to use the full system for their applications. The following projects participated in the extreme scale-out w...
Understanding turbulence is critical for a wide range of terrestrial and astrophysical applications. Here we present first results of the world's highest-resolution simulation of turbulence ever done. The current simulation has a grid resolution of 10048^3 points and was performed on 65536 compute cores on SuperMUC at the Leibniz Supercomputing Cen...
Recorded from the live stream; second file on the page, approximately from 3:25:30 to 3:53:00.
Galaxy clusters are unique laboratories to investigate turbulent fluid motions and large scale magnetic fields. Synchrotron radio halos at the center of merging galaxy clusters provide the most spectacular and direct evidence of the presence of relativistic particles and magnetic fields associated with the intracluster medium. The study of polarize...
We compare two different codes for simulations of cosmological structure formation to investigate the sensitivity of hydrodynamical instabilities to numerics, in particular, the hydro solver and the application of adaptive mesh refinement (AMR). As a simple test problem, we consider an initially spherical gas cloud in a wind, which is an idealize...
The injection of turbulence in the circum-galactic medium at redshift z = 2 is investigated using the mesh-based hydrodynamic code Enzo and a subgrid-scale (SGS) model for unresolved turbulence. Radiative cooling and heating by a uniform Ultraviolet (UV) background are included in our runs and compared with the effect of turbulence modelling. Mecha...
The Square Kilometre Array (SKA) is the most ambitious radio telescope ever planned. With a collecting area of about a square kilometre, the SKA will be far superior in sensitivity and observing speed to all current radio facilities. The scientific capability promised by the SKA and its technological challenges provide an ideal base for interdiscip...
FEARLESS (Fluid mEchanics with Adaptively Refined Large Eddy SimulationS) is a numerical scheme for modelling subgrid-scale turbulence in cosmological adaptive mesh refinement simulations. In this contribution, the main features of this tool will be outlined. We discuss the application of this method to cosmological simulations of the large-scale s...
Merger shocks induce turbulence in the intracluster medium (ICM), and, under some circumstances, accelerate electrons to relativistic velocities to form so-called radio relics. Relics are mostly found at the periphery of galaxy clusters and appear to have magnetic fields at the μG level. Here we investigate the possible origins of these magnetic f...
The injection and evolution of turbulence in the intergalactic medium is studied by means of mesh-based hydrodynamical simulations, including a subgrid scale (SGS) model for small-scale unresolved turbulence. The simulations show that the production of turbulence has a different redshift dependence in the intracluster medium (ICM) and the warm-hot...
Massive structures like clusters of galaxies, embedded in cosmic filaments, release enormous amounts of energy through their interactions. These events are associated with the production of Mpc-scale shocks and the injection of a considerable amount of turbulence, affecting the non-thermal energy budget of the ICM. In order to study this thoroughly, we perform...
Halo mergers and shock waves play a crucial role in the process of hierarchical clustering. Hydrodynamical simulations are the principal investigation tool in this field for theoreticians, and predict that a by-product of cluster formation and virialisation is the injection of turbulence in the cosmic flow. Here I will summarise results from a seri...
The injection and evolution of turbulent motions in the cosmological large scale structure are studied by means of mesh-based hydrodynamical simulations, including a subgrid scale model for small-scale unresolved turbulence. We find that the production of turbulence in the ICM is closely correlated with merger events occurring in the cluster enviro...
FEARLESS (Fluid mEchanics with Adaptively Refined Large Eddy SimulationS) is a new numerical scheme arising from the combined use of subgrid scale (SGS) model for turbulence at the unresolved length scales and adaptive mesh refinement (AMR) for resolving the large scales. This tool is especially suitable for the study of turbulent flows in strongly...
It is widely accepted that the onset of the explosive carbon burning in the core of a CO WD triggers the ignition of a SN Ia. The features of the ignition are among the few free parameters of the SN Ia explosion theory. We explore the role for the ignition process of two different issues: firstly, the ignition is studied in WD models coming from di...
We performed a set of cosmological simulations of major mergers in galaxy clusters, in order to study the evolution of merger shocks and the subsequent injection of turbulence in the post-shock region and in the intra-cluster medium (ICM). The computations have been performed with the grid-based, adaptive mesh refinement (AMR) hydrodynamical code E...
The effective modeling of the stirring and development of turbulent flows in grid-based hydrodynamical simulations is computationally challenging. Here we present two possible ways to tackle the problem: first, we consider the use of the adaptive mesh refinement (AMR), applying novel refinement criteria which are optimized to follow the evolution o...
We present a numerical scheme for modeling unresolved turbulence in cosmological adaptive mesh refinement codes. As a first application, we study the evolution of turbulence in the intracluster medium (ICM) and in the core of a galaxy cluster. Simulations with and without subgrid scale (SGS) model are compared in detail. Since the flow in the ICM i...
FEARLESS (Fluid mEchanics with Adaptively Refined Large Eddy SimulationS) is a novel numerical approach for hydrodynamical simulations of turbulent flows, which combines the use of the adaptive mesh refinement (AMR) with a subgrid scale (SGS) model for the unresolved scales. We report some results of our first research phase, aimed at the test of n...
The problem of the resolution of turbulent flows in adaptive mesh refinement (AMR) simulations is investigated by means of three-dimensional (3D) hydrodynamical simulations in an idealized setup, representing a moving subcluster during a merger event. AMR simulations performed with the usual refinement criteria based on local gradients of selected...
The development of turbulent gas flows in the intra-cluster medium and in the core of a galaxy cluster is studied by means of adaptive mesh refinement (AMR) cosmological simulations. A series of six runs was performed, employing identical simulation parameters but different criteria for triggering the mesh refinement. In particular, two different A...
The onset of the thermonuclear runaway in a Chandrasekhar-mass white dwarf, leading to the explosion as a type Ia supernova, is studied with hydrodynamical simulations. We investigate the evolution of temperature fluctuations ("bubbles") in the WD's convective core by means of 2D numerical simulations. We show how the occurrence of the thermonucl...
We present three-dimensional numerical simulations of supersonic isotropic turbulence in a periodic box subject to stochastic forcing. The finite-volume code Enzo from UCSD is utilised with the piecewise parabolic method (Colella and Woodward, 1984) to solve the compressible Euler equations. To begin with, a static grid of 768^3 cells is used in c...
In the framework of the Chandrasekhar-mass deflagration model for Type Ia supernovae (SNe Ia), a persisting free parameter is the initial morphology of the flame front, which is linked to the ignition process in the progenitor white dwarf. Previous analytical models indicate that the thermal runaway is driven by temperature perturbations ("bubbles...
A detailed knowledge of the ignition phase in Type Ia Supernova explosions is required in order to infer clues about the initial flame position in simulations. A parameter study of buoyant, reactive bubbles, generated during the convection phase in the progenitor, is in preparation. Here we present the first results.
It is now considered plausible that the Ne-O layers of evolved massive stars (M>=10 - 12 Msun) could be the main site for the synthesis of the p nuclei. Nevertheless, there are problems connected with underproductions of p isotopes like 92,94Mo and 96,98Ru. These problems might be cured by a correction of some uncertain key reaction rates strictly...
A remaining free parameter in the framework of the Chandrasekhar-mass deflagration models for Type Ia supernova explosions is the initial position of the burning front, which is linked to the ignition process in the progenitor star. Previous analytical models indicate that the thermonuclear "runaway" is driven by temperature fluctuations ("bubbles"),...