A. Cristiano I. Malossi
IBM · IBM Research - Zurich

Ph.D.

About

11 Publications · 3,473 Reads
169 Citations (all since 2017, across 9 research items)
[Chart: citations per year, 2017–2023]
Introduction
Cristiano Malossi is Manager of the AI Automation group at the IBM Research laboratory in Zurich. The group focuses on creating solutions for scalable AI model development and deployment on cloud and high-performance on-premises systems. Cristiano's main research interests include: AI, AI Automation, Deep Learning & Machine Learning, High Performance Computing, Transprecision & Energy-Aware Computing, Numerical Analysis, Computational Fluid Dynamics, Aircraft Design, Cardiovascular Simulations, and C...
Additional affiliations
December 2019 - present
IBM
Position
  • Manager
November 2015 - November 2019
IBM
Position
  • Research Staff Member
July 2013 - October 2015
IBM
Position
  • Postdoctoral Researcher
Education
January 2009 - September 2012
École Polytechnique Fédérale de Lausanne
Field of study
  • Applied Mathematics
October 2004 - October 2007
Politecnico di Milano
Field of study
  • Aeronautical Engineering
September 2001 - October 2004
Politecnico di Milano
Field of study
  • Aerospace Engineering

Publications

Publications (11)
Conference Paper
Full-text available
Mantle convection is the fundamental physical process within Earth's interior responsible for the thermal and geological evolution of the planet, including plate tectonics. The mantle is modeled as a viscous, incompressible, non-Newtonian fluid. The wide range of spatial scales, extreme variability and anisotropy in material properties, and severel...
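For reference, the kind of model sketched above can be written compactly; the following LaTeX is an illustrative variable-viscosity Stokes system with temperature transport, not necessarily the exact formulation used in the paper:

    % Incompressible, non-Newtonian Stokes flow with temperature-dependent
    % buoyancy (illustrative form only)
    -\nabla\cdot\!\left[\eta(T,\dot\varepsilon)\left(\nabla u + \nabla u^{\mathsf{T}}\right)\right] + \nabla p = \rho(T)\,g,
    \qquad
    \nabla\cdot u = 0,
    \qquad
    \frac{\partial T}{\partial t} + u\cdot\nabla T = \kappa\,\Delta T.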
Conference Paper
Full-text available
The end of Dennard scaling (i.e., the ability to shrink the feature size of integrated circuits while maintaining a constant power density) has now placed energy as a primary design principle on par with performance, all the way from the hardware to the application software. Along this line, optimizing the performance-energy balance of the "dw...
Article
Full-text available
Power awareness is fast becoming immensely important in computing, ranging from the traditional high-performance computing applications to the new generation of data centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas for the wi...
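A minimal sketch of one classic way to combine low- and high-precision arithmetic, namely mixed-precision iterative refinement in NumPy (an illustrative assumption, not necessarily the scheme used in the article): do the expensive solve in low precision and recover accuracy with cheap high-precision residual corrections.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned test matrix
    b = rng.standard_normal(n)

    # Expensive solve in low precision (float32 stands in for the cheap unit)
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)

    # Cheap refinement: residuals in float64, corrections in float32
    for _ in range(5):
        r = b - A @ x
        d = np.linalg.solve(A32, r.astype(np.float32))
        x += d.astype(np.float64)

    print("final residual norm:", np.linalg.norm(b - A @ x))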
Article
Full-text available
The blood flow in arterial trees in the cardiovascular system can be simulated with the help of different models, depending on the outputs of interest and the desired degree of accuracy. In particular, one-dimensional fluid-structure interaction models for arteries are very effective in reproducing physiological pressure wave propagation and in pro...
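For context, a widely used one-dimensional area–flow-rate formulation of arterial fluid–structure interaction reads as follows (an illustrative form; the paper's model and notation may differ):

    % A(x,t): cross-sectional area, Q(x,t): volumetric flow rate, P: pressure
    \frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} = 0,
    \qquad
    \frac{\partial Q}{\partial t}
      + \frac{\partial}{\partial x}\!\left(\alpha\,\frac{Q^{2}}{A}\right)
      + \frac{A}{\rho}\,\frac{\partial P}{\partial x} = -K_{R}\,\frac{Q}{A},
    \qquad
    P = P_{\mathrm{ext}} + \beta\!\left(\sqrt{A} - \sqrt{A_{0}}\right).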
Article
Full-text available
Arterial tree hemodynamics can be simulated by means of several models at different levels of complexity, depending on the outputs of interest and the desired degree of accuracy. In this work, several numerical comparisons of geometrical multiscale models are presented with the aim of evaluating the benefits of such complex dimensionally-heterogeneo...
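At the interfaces between dimensionally-heterogeneous models, a common coupling choice (stated here as an assumption, not necessarily the one used in the paper) enforces continuity of flow rate and of mean total pressure:

    Q_{\mathrm{3D}}(t) = Q_{\mathrm{1D}}(t),
    \qquad
    \overline{P}_{\mathrm{3D}}(t) + \frac{\rho}{2}\,\overline{|u|^{2}}_{\mathrm{3D}}(t)
      = P_{\mathrm{1D}}(t) + \frac{\rho}{2}\left(\frac{Q_{\mathrm{1D}}(t)}{A_{\mathrm{1D}}(t)}\right)^{\!2}.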
Preprint
Full-text available
Aging civil infrastructures are closely monitored by engineers for damage and critical defects. As the manual inspection of such large structures is costly and time-consuming, we are working towards fully automating the visual inspections to support the prioritization of maintenance activities. To that end we combine recent advances in drone techno...
Preprint
Full-text available
Artificial Intelligence (AI) development is inherently iterative and experimental. Over the course of normal development, especially with the advent of automated AI, hundreds or thousands of experiments are generated and are often lost or never examined again. There is a lost opportunity to document these experiments and learn from them at scale, b...
Article
Full-text available
In the deep-learning community, new algorithms are published at an incredible pace. Solving an image classification problem for new datasets therefore becomes a challenging task, as it requires re-evaluating published algorithms and their different configurations in order to find a close-to-optimal classifier. To facilitate this process, before bi...
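A minimal sketch of the evaluation loop such a system automates, assuming classical scikit-learn models to keep the example small and runnable (the actual work targets deep networks): score a pool of candidate classifiers on the new dataset and keep the best.

    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_digits(return_X_y=True)         # stand-in for the new dataset
    candidates = {
        "logreg": LogisticRegression(max_iter=2000),
        "random_forest": RandomForestClassifier(n_estimators=100),
        "knn": KNeighborsClassifier(),
    }
    # Re-evaluate every candidate configuration on the new data
    scores = {name: cross_val_score(clf, X, y, cv=3).mean()
              for name, clf in candidates.items()}
    print(scores, "-> best:", max(scores, key=scores.get))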
Preprint
Full-text available
In this work, we leverage ensemble learning as a tool for the creation of faster, smaller, and more accurate deep learning models. We demonstrate that we can jointly optimize for accuracy, inference time, and the number of parameters by combining DNN classifiers. To achieve this, we combine multiple ensemble strategies: bagging, boosting, and an or...
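A minimal sketch of the joint objective, using classical scikit-learn models so the example stays small and runnable (the weights in the scalarized score are invented; the paper combines DNN classifiers and additional ensemble strategies):

    import time
    from sklearn.datasets import load_digits
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_digits(return_X_y=True)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    results = []
    for n in (1, 5, 25):                        # candidate ensemble sizes
        clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=n,
                                random_state=0).fit(Xtr, ytr)
        t0 = time.perf_counter()
        acc = clf.score(Xte, yte)
        latency = time.perf_counter() - t0      # proxy for inference time
        size = sum(e.tree_.node_count for e in clf.estimators_)  # size proxy
        results.append((acc - 0.1 * latency - 1e-5 * size, n, acc))
    print(max(results))                         # best (score, n_estimators, accuracy)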
Preprint
Full-text available
This paper reduces the cost of DNN training by decreasing the amount of data movement across heterogeneous architectures composed of several GPUs and multicore CPU devices. In particular, it proposes an algorithm to dynamically adapt the data representation format of network weights during training. This algorithm drives a compression proc...
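A minimal sketch of the underlying idea, assuming a toy NumPy setup (this is not the paper's algorithm, which adapts the format dynamically during training): keep a high-precision master copy of the weights, but move only a compressed low-precision copy between devices.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((512, 32)).astype(np.float32)
    true_w = rng.standard_normal(32).astype(np.float32)
    y = X @ true_w

    w_master = np.zeros(32, dtype=np.float32)   # high-precision master weights
    bytes_moved = 0
    for step in range(200):
        w_wire = w_master.astype(np.float16)    # compressed copy "sent" out
        bytes_moved += w_wire.nbytes            # data movement actually paid
        grad = 2 * X.T @ (X @ w_wire.astype(np.float32) - y) / len(X)
        w_master -= 0.01 * grad                 # update stays in high precision
    print("loss:", float(np.mean((X @ w_master - y) ** 2)),
          "| MB moved:", bytes_moved / 1e6)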
Preprint
Deep neural networks achieve outstanding results in challenging image classification tasks. However, the design of network topologies is a complex task and the research community makes a constant effort in discovering top-accuracy topologies, either manually or employing expensive architecture searches. In this work, we propose a unique narrow-spac...
Article
Full-text available
Image classification datasets are often imbalanced, a characteristic that negatively affects the accuracy of deep-learning classifiers. In this work we propose balancing GANs (BAGANs) as an augmentation tool to restore balance in imbalanced datasets. This is challenging because the few minority-class images may not be enough to train a GAN. We overcom...
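As a back-of-the-envelope illustration of the rebalancing target (the GAN itself is omitted; class names and counts below are invented for the example), the generator must contribute enough synthetic images per minority class to match the majority class:

    from collections import Counter

    # Hypothetical class counts for an imbalanced dataset
    counts = Counter({"majority": 5000, "minority_a": 400, "minority_b": 75})

    # Number of synthetic images to generate per class to restore balance
    target = max(counts.values())
    to_generate = {cls: target - n for cls, n in counts.items()}
    print(to_generate)  # {'majority': 0, 'minority_a': 4600, 'minority_b': 4925}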
Article
Full-text available
Computer modeling can provide quantitative insight into cardiac fluid dynamics phenomena that are not evident from standard imaging tools. We propose a new approach to modeling left ventricle fluid dynamics based on an image-driven model-based description of ventricular motion. In this approach, the end-diastolic geometry and time-dependent deforma...
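A compact way to express the image-driven setting, assuming an ALE-type formulation in which the wall velocity w is prescribed from the imaged ventricular motion (the paper's exact formulation may differ):

    % Incompressible Navier-Stokes on the moving ventricle domain \Omega(t)
    \rho\left(\frac{\partial u}{\partial t} + \left((u - w)\cdot\nabla\right)u\right)
      - \mu\,\Delta u + \nabla p = 0,
    \qquad
    \nabla\cdot u = 0 \quad \text{in } \Omega(t),
    \qquad
    u = w \quad \text{on } \Gamma_{\mathrm{wall}}(t).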
Article
Full-text available
The FP7-funded Exa2Green project is paving the road to exascale computing by improving energy efficiency in high-performance computing (HPC). As the project slowly approaches its end, the team can show quite remarkable results.

Projects

Projects (4)
Project
OPRECOMP is a 4-year research project funded under the EU Framework Horizon 2020 Research and Innovation Programme, Future and Emerging Technologies (FET) Proactive Initiative: emerging themes and communities. OPRECOMP aims to build an innovative, reliable foundation for computing based on transprecision analytics.

Guaranteed numerical precision of each elementary step in a complex computation has been the mainstay of traditional computing systems for many years. This era, fueled by Moore's law and the constant exponential improvement in computing efficiency, is at its twilight: from tiny nodes of the Internet-of-Things to large HPC computing centers, sub-picojoule-per-operation energy efficiency is essential for practical realisations. To overcome the "power wall", a shift from traditional computing paradigms is now mandatory.

OPRECOMP aims at demolishing the ultra-conservative "precise" computing abstraction and replacing it with a more flexible and efficient one, namely transprecision computing. The project will investigate the theoretical and practical understanding of the energy-efficiency boost obtainable when accuracy requirements on data being processed, stored, and communicated can be lifted for intermediate calculations. While approximate-computing approaches have been used before, OPRECOMP will, for the first time, develop a complete framework for transprecision computing, covering devices, circuits, software tools, and algorithms, along with the mathematical theory and physical foundations of these ideas. The framework will not only provide error bounds with respect to full-precision results, but will also enable major energy-efficiency improvements even when there is no freedom to relax end-to-end application quality-of-results.

The mission of OPRECOMP is to demonstrate, using physical demonstrators, that this idea holds in a huge range of application scenarios in the domains of IoT, Big Data analytics, deep learning, and HPC simulations: from the sub-milliwatt to the megawatt range, spanning nine orders of magnitude. In view of industrial exploitation, the project will prove the quality and reliability of the approach and demonstrate that transprecision computing is the way to think about future systems.
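A minimal sketch of the question transprecision computing asks, as a toy NumPy experiment (an illustration only, not an OPRECOMP tool): how much end-to-end accuracy is lost when data is held in a narrower format?

    import numpy as np

    # Toy transprecision probe: compare a dot product computed from float64
    # data against the same computation with inputs (and result) stored in
    # float16. A real transprecision system would choose the format per
    # operation to meet an end-to-end error bound.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(10_000)

    exact = float(np.dot(x, x))       # float64 reference
    x16 = x.astype(np.float16)        # narrow-format copy of the data
    low = float(np.dot(x16, x16))     # reduced-precision result
    print("relative error at float16:", abs(low - exact) / exact)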
Archived project
In the Exa2Green project, an interdisciplinary research team of HPC experts, computer scientists, mathematicians, physicists and engineers takes up the challenge to develop a radically new energy-aware computing paradigm and programming methodology for exascale computing. As a proof of concept, the online coupled model system COSMO-ART, based on the operational weather forecast model of the COSMO Consortium (www.cosmo-model.org), is being modified to incorporate energy-aware numerics. COSMO-ART was developed at KIT and allows the treatment of primary and secondary aerosols and their impact on radiation and clouds.