This study presents a deep learning method to model and predict the subsequent period of turbulent flow velocity. The data used in this study are extracted from a turbulent flow generated in the laboratory at Taylor microscale Reynolds numbers in the range 90 < Rλ < 110. The flow was seeded with tracer particles, and its turbulent intensity was created and controlled by eight impellers placed in a turbulence facility. The flow deformation was produced by two circular flat plates moving toward each other in the center of the tank. The Lagrangian particle-tracking method was applied to measure the flow features, and the data were processed to extract the flow properties. Since the dataset is sequential, it is used to train long short-term memory (LSTM) and gated recurrent unit (GRU) models. The DEEP-DAM module of the parallel computing system at the Jülich Supercomputing Centre was used to accelerate the training. The predicted output is assessed and validated against the remaining experimental data for the subsequent period. The results show accurate predictions that could be extended to more extensive data documentation and used to assist in similar applications. The mean absolute error and R2 score range from 0.001–0.002 and 0.9839–0.9873, respectively, for both models with two distinct training data ratios. Using GPUs increases the LSTM training speed considerably compared to runs without GPUs.
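A minimal sketch of how such a sequential velocity record might be windowed and fed to an LSTM (or, by swapping one layer, a GRU) for next-step prediction; the file name, window length, layer sizes, and training settings are illustrative assumptions, not the configuration used in the chapter.

```python
# Minimal sketch: train an LSTM on windows of velocity samples to predict
# the next value. All settings below are illustrative assumptions.
import numpy as np
from tensorflow import keras

def make_windows(series, window=32):
    """Split a 1-D velocity series into (input window, next value) pairs."""
    x = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return x[..., None], y  # add the feature axis expected by the recurrent layer

velocity = np.loadtxt("velocity_series.txt")  # hypothetical file of tracked velocities
x_train, y_train = make_windows(velocity)

model = keras.Sequential([
    keras.layers.LSTM(64, input_shape=x_train.shape[1:]),  # use keras.layers.GRU for the GRU variant
    keras.layers.Dense(1),  # velocity predicted at the next time step
])
model.compile(optimizer="adam", loss="mae")
model.fit(x_train, y_train, epochs=20, batch_size=128, validation_split=0.2)
```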
Chapter “Machine-Learning-Based Control of Perturbed and Heated Channel Flows” was previously published non-open access. It has now been changed to open access under a CC BY 4.0 license and the copyright holder updated to ‘The Author(s)’.
Deep Learning models have proven necessary in dealing with the challenges posed by the continuous growth of data volume acquired from satellites and the increasing complexity of new Remote Sensing applications. To obtain the best performance from such models, it is necessary to fine-tune their hyperparameters. Since the models might have massive numbers of parameters that need to be tuned, this process requires substantial computational resources. In this work, a method to accelerate hyperparameter optimization on a High-Performance Computing system is proposed. The data batch size is increased during the training, leading to a more efficient execution on Graphics Processing Units. The experimental results confirm that this method reduces the runtime of the hyperparameter optimization step by a factor of 3 while achieving the same validation accuracy as a standard training procedure with a fixed batch size.
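A minimal sketch of the core idea of increasing the batch size in stages during training rather than keeping it fixed; the toy data, model, and stage schedule are assumptions and do not reproduce the chapter's hyperparameter optimization setup.

```python
# Minimal sketch: the same model is trained in stages, each stage using a
# larger batch size so that later epochs run with more GPU-efficient batches.
import numpy as np
from tensorflow import keras

# Stand-in image patches and labels; in practice these would be RS data.
x = np.random.rand(4096, 32, 32, 3).astype("float32")
y = np.random.randint(0, 10, size=4096)

model = keras.Sequential([
    keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 3)),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Each stage continues training with an increased batch size.
for batch_size, epochs in [(64, 3), (256, 3), (1024, 3)]:
    model.fit(x, y, batch_size=batch_size, epochs=epochs)
```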
Many simulation workflows require the data for the simulation to be prepared manually. This is time-consuming and leads to a massive bottleneck when a large number of numerical simulations is requested. This bottleneck can be overcome by an automated data processing pipeline. Such a novel pipeline is developed for a medical use case from rhinology, where computed tomography recordings are used as input and flow simulation data define the results. Convolutional neural networks are applied to segment the upper airways and to detect and prepare the in- and outflow regions for accurate boundary condition prescription in the simulation. The automated process is tested on three cases which have not been used to train the networks. The accuracy of the pipeline is evaluated by comparing the network-generated output surfaces to those obtained from a semi-automated procedure performed by a medical professional. Except for minor deviations at interfaces between ethmoidal sinuses, the network-generated surface is sufficiently accurate. To further analyze the accuracy of the automated pipeline, flow simulations are conducted with a thermal lattice-Boltzmann method for both the network-generated and the semi-automatically generated geometries on a high-performance computing system. The comparison of the results of the respiratory flow simulations yields averaged errors of less than 1% for the pressure loss between the in- and outlets and for the outlet temperature. Thus, the pipeline is shown to work accurately, and the geometrical deviations at the ethmoidal sinuses prove to be negligible.
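A minimal sketch of how the segmentation step could look: a previously trained CNN is applied to the normalized CT volume and thresholded into a binary airway mask. The model file, preprocessing, and threshold are assumptions, not the pipeline's actual implementation.

```python
# Minimal sketch: apply a trained segmentation CNN to a CT volume and
# threshold the predicted probabilities into a binary airway mask.
import numpy as np
from tensorflow import keras

# Hypothetical trained segmentation network (file name and format assumed).
model = keras.models.load_model("airway_segmentation.h5")

def segment_volume(ct_volume, threshold=0.5):
    """ct_volume: (slices, height, width) array of CT intensities."""
    # Normalize to [0, 1] and add the channel axis expected by the CNN.
    span = ct_volume.max() - ct_volume.min()
    x = (ct_volume - ct_volume.min()) / (span + 1e-8)
    probs = model.predict(x[..., None], batch_size=8)[..., 0]
    return probs > threshold  # boolean airway mask per voxel
```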
A reinforcement learning algorithm is coupled to a thermal lattice-Boltzmann method to control the flow through a two-dimensional heated channel narrowed by a bump. The algorithm is allowed to change the disturbance factor of the bump and receives feedback in terms of the pressure loss and temperature increase between the inflow and outflow regions of the channel. It is trained to modify the bump such that both fluid mechanical properties are rated equally important. After a modification, a new simulation is initialized using the modified geometry and the flow field computed in the previous run. The thermal lattice-Boltzmann method is validated for a fully developed isothermal channel flow. After 265 simulations, the trained algorithm predicts an averaged disturbance factor that deviates by less than 1% from the reference solution obtained from 3,400 numerical simulations using a parameter sweep over the disturbance factor. The error is reduced to less than 0.1% after 1,450 simulations. A comparison of the temperature, pressure, and streamwise velocity distributions of the reference solution with the solution after 1,450 simulations, along the line of the maximum velocity component in the streamwise direction, shows only negligible differences. The presented approach is hence a valid means of avoiding expensive parameter space explorations and promises to be effective in supporting shape optimizations for more complex configurations, e.g., in finding optimal nasal cavity shapes.
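One way to encode the equal weighting of the two fluid mechanical properties is a combined reward term. The sketch below is an assumption about the form of such a reward, with reference values used only for normalization; it does not reproduce the chapter's exact reward definition.

```python
# Minimal sketch of a reward that rates pressure loss and temperature
# increase as equally important (illustrative form, not the chapter's code).
def reward(pressure_loss, temperature_increase,
           ref_pressure_loss, ref_temperature_increase):
    # Both quantities are normalized by reference values and weighted 0.5/0.5,
    # so neither dominates the learning signal; lower values give higher reward.
    return -0.5 * (pressure_loss / ref_pressure_loss
                   + temperature_increase / ref_temperature_increase)
```

The agent would receive this reward after each restarted simulation and adjust the disturbance factor of the bump accordingly.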
BigEarthNet is one of the standard large remote sensing datasets. It has been shown previously that neural networks are effective tools for classifying the image patches in this dataset. However, finding the optimal network hyperparameters and architecture to accurately classify the image patches in BigEarthNet remains a challenge. Searching for more accurate models manually is extremely time-consuming and labour-intensive. Hence, a systematic approach is advisable. One possibility is automated evolutionary Neural Architecture Search (NAS). With this NAS, the manual choice of many commonly used network hyperparameters, such as the loss function, is eliminated and a more accurate network is determined.
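A minimal sketch of the evolutionary idea behind such a NAS: candidate hyperparameter sets are sampled, scored, and mutated over generations. The search space, population size, and the fitness stub (which in reality would train and validate a network on BigEarthNet patches) are illustrative assumptions.

```python
# Minimal sketch of an evolutionary search over network hyperparameters.
import random

SEARCH_SPACE = {
    "layers": [2, 3, 4, 5],
    "width": [64, 128, 256],
    "loss": ["cross_entropy", "focal"],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def random_candidate():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def mutate(candidate):
    child = dict(candidate)
    key = random.choice(list(SEARCH_SPACE))
    child[key] = random.choice(SEARCH_SPACE[key])
    return child

def fitness(candidate):
    # Stand-in for training the candidate network and returning its
    # validation accuracy on BigEarthNet patches.
    return random.random()

population = [random_candidate() for _ in range(8)]
for generation in range(5):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:4]                                   # keep the fittest candidates
    population = parents + [mutate(random.choice(parents)) for _ in range(4)]
best = max(population, key=fitness)
```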
Using computationally efficient techniques for transforming the massive amount of Remote Sensing (RS) data into scientific understanding is critical for Earth science. The utilization of efficient techniques through innovative computing systems in RS applications has become more widespread in recent years. The continuously increasing use of Deep Learning (DL) as a specific type of Machine Learning (ML) for data-intensive problems (i.e., 'big data') requires powerful computing resources with equally increasing performance. This paper reviews recent advances in High-Performance Computing (HPC), Cloud Computing (CC), and Quantum Computing (QC) applied to RS problems. It thus represents a snapshot of the state of the art in ML in the context of the most recent developments in those computing areas, including our lessons learned over the last years. Our paper also includes recent challenges and experiences gained by using Europe's fastest supercomputer for hyper-spectral and multi-spectral image analysis with state-of-the-art data analysis tools. It offers a thoughtful perspective on the potential and emerging challenges of applying innovative computing paradigms to RS problems.
A wide variety of Remote Sensing (RS) missions are continuously acquiring large volumes of data every day. The availability of large datasets has propelled Deep Learning (DL) methods also in the RS domain. Convolutional Neural Networks (CNNs) have become the state of the art for image classification; however, the training process is time-consuming. In this work we exploit the Layer-wise Adaptive Moments optimizer for Batch training (LAMB) to use large batch sizes when training on High-Performance Computing (HPC) systems. With the use of LAMB combined with learning rate scheduling and warm-up strategies, the experimental results on RS data classification demonstrate that a ResNet50 can be trained faster with batch sizes up to 32K.
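A minimal sketch of combining LAMB with a learning-rate warm-up; it assumes the third-party `torch_optimizer` package for the LAMB implementation and uses random stand-in data, so it illustrates the mechanics rather than the chapter's actual large-batch setup.

```python
# Minimal sketch: LAMB optimizer with a linear learning-rate warm-up.
import torch
import torch_optimizer  # assumed third-party package providing a LAMB implementation
from torchvision.models import resnet50

device = "cuda" if torch.cuda.is_available() else "cpu"
model = resnet50(num_classes=19).to(device)        # e.g. a BigEarthNet-style label set
optimizer = torch_optimizer.Lamb(model.parameters(), lr=1e-2)

warmup_steps = 100
def warmup(step):
    # Linear warm-up to the base learning rate; real runs would add a decay phase.
    return min(1.0, (step + 1) / warmup_steps)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, warmup)
loss_fn = torch.nn.CrossEntropyLoss()

for step in range(200):
    # Stand-in batch; on an HPC system the effective batch size would be far
    # larger and distributed over many GPUs.
    images = torch.randn(64, 3, 120, 120, device=device)
    labels = torch.randint(0, 19, (64,), device=device)
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    scheduler.step()
```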
Recent developments in Quantum Computing (QC) have paved the way for an enhancement of computing capabilities. Quantum Machine Learning (QML) aims at developing Machine Learning (ML) models specifically designed for quantum computers. The availability of the first quantum processors enabled further research, in particular the exploration of possible practical applications of QML algorithms. In this work, quantum formulations of the Support Vector Machine (SVM) are presented. Then, their implementation using existing quantum technologies is discussed, and Remote Sensing (RS) image classification is considered for evaluation.
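A minimal sketch of the kernel-based view underlying such quantum SVM formulations: a classifier consumes a precomputed kernel matrix, which in the quantum case would be estimated from state fidelities on a quantum device or simulator. Here a classical RBF kernel and random stand-in features are assumptions so the snippet runs without quantum hardware or SDKs.

```python
# Minimal sketch of an SVM with a precomputed kernel matrix; the kernel
# would be supplied by a quantum kernel estimator in the QML setting.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

x_train = np.random.rand(40, 4)            # stand-in for RS image features
y_train = np.random.randint(0, 2, 40)
x_test = np.random.rand(10, 4)

def kernel_matrix(a, b):
    # In the quantum formulation this would be replaced by pairwise state
    # fidelities evaluated on a quantum processor or simulator.
    return rbf_kernel(a, b)

clf = SVC(kernel="precomputed")
clf.fit(kernel_matrix(x_train, x_train), y_train)
predictions = clf.predict(kernel_matrix(x_test, x_train))
```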
We observe a continuously increasing use of Deep Learning (DL) as a specific type of Machine Learning (ML) for data-intensive problems (i.e., 'big data') that requires powerful computing resources with equally increasing performance. Consequently, innovative heterogeneous High-Performance Computing (HPC) systems based on multi-core CPUs and many-core GPUs require an architectural design that addresses the requirements of end-user communities taking advantage of ML and DL. Still, the workloads of end-user communities in the simulation sciences (e.g., using numerical methods based on known physical laws) need to be equally supported by those architectures. This paper offers insights into the Modular Supercomputer Architecture (MSA) developed in the Dynamic Exascale Entry Platform (DEEP) series of projects to address the requirements of both the simulation sciences and data-intensive sciences such as High-Performance Data Analytics (HPDA). It shares insights into implementing the MSA at the Jülich Supercomputing Centre (JSC), which hosts Europe's No. 1 supercomputer, the Jülich Wizard for European Leadership Science (JUWELS). We augment the technical findings with experience and lessons learned from two application community case studies (i.e., remote sensing and health sciences) using the MSA with JUWELS and the DEEP systems in practice. Thus, the paper provides details on the specific MSA design elements that enable significant performance improvements of ML and DL algorithms. While this paper focuses on MSA-based HPC systems and application experience, we do not lose sight of advances in Cloud Computing (CC) and Quantum Computing (QC) relevant for ML and DL.
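As one illustration of how DL workloads map onto the GPU modules of such systems, the sketch below shows a generic data-parallel training loop with `torch.distributed`; the launcher (e.g. `srun` or `torchrun`), environment variables, and toy model are assumptions, not the setup used in the case studies.

```python
# Minimal sketch of data-parallel training, one process per GPU, as it
# could be launched on the GPU module of a modular HPC system.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel

dist.init_process_group(backend="nccl")             # one process per GPU
local_rank = int(os.environ.get("LOCAL_RANK", 0))
torch.cuda.set_device(local_rank)

model = torch.nn.Linear(1024, 10).cuda()
model = DistributedDataParallel(model, device_ids=[local_rank])
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for step in range(100):
    x = torch.randn(256, 1024, device="cuda")       # stand-in local batch
    y = torch.randint(0, 10, (256,), device="cuda")
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()                                  # gradients averaged across GPUs
    optimizer.step()
```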