Konrad Meier’s research while affiliated with University of Freiburg and other places


Publications (15)


Overview of the ROCED modular design. The ROCED Core contains the Broker, which decides when and on which sites new virtual machines are booted. The Requirement Adapters report on the utilization and resource requirements of the attached batch systems. The Site Adapter manages the lifetime of virtual machines on a cloud site, and the Integration Adapters ensure that newly booted machines are integrated into the batch system.
From [5]
Implementation of ROCED with Slurm on the Tier-3 cluster of the WLCG used by ATLAS researchers in Freiburg
Total score as a function of the core multiplicity for the HEP-SPEC06 (top) and KV (bottom) benchmarks for the ATLAS Tier-3 bare metal (blue open circles), the NEMO VMs (red full circles) and the NEMO bare metal (black open squares). The data points represent the average values of the benchmarks for each core multiplicity, and the vertical bars show the associated standard deviations
Utilization of the shared HPC system by booted virtual machines. Up to 9000 virtual cores were in use at peak times. The fluctuations in the utilization reflect the patterns of job submission by the CMS users at the physics institute in Karlsruhe. The number of draining slots shows the number of job slots still processing jobs while the rest of the node's slots are already empty
Estimated usage of the NEMO cluster from September 2016 to September 2018. The orange bars indicate the usage by jobs running directly in the hosts' operating system, while the blue bars are jobs running in virtual machines. The decrease in VRE jobs is partially explained by an increasing number of bare-metal jobs being submitted
Dynamic Virtualized Deployment of Particle Physics Environments on a High Performance Computing Cluster
  • Article
  • Publisher preview available

May 2019 · 59 Reads · 2 Citations

Computing and Software for Big Science

Felix Bührer · Frank Fischer · Georg Fleig · [...] · Bernd Wiebelt

A setup for dynamically providing resources of an external, non-dedicated cluster to researchers of the ATLAS and CMS experiments in the WLCG environment is described as it has been realized at the NEMO High Performance Computing cluster at the University of Freiburg. Techniques to provide the full WLCG software environment in a virtual machine image are described. The interplay between the schedulers for NEMO and for the external clusters is coordinated through the ROCED service. A cloud computing infrastructure is deployed at NEMO to orchestrate the simultaneous usage by bare-metal and virtualized jobs. Through this setup, resources are provided to users in a transparent, automated, and on-demand way. The performance of the virtualized environment has been evaluated for particle physics applications.
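
The modular structure of ROCED sketched in the figure caption above (Broker, Requirement, Site, and Integration Adapters) can be illustrated with a short Python sketch. All class and method names below are hypothetical placeholders, not the actual ROCED API:

    # Illustrative sketch of the ROCED broker/adapter interplay described
    # above. Class and method names are hypothetical placeholders and do
    # not reflect the actual ROCED code base.

    class RequirementAdapter:
        """Reports the resource demand of the attached batch system."""
        def needed_machines(self) -> int:
            raise NotImplementedError  # e.g. derived from pending batch jobs

    class SiteAdapter:
        """Manages the lifetime of virtual machines on one cloud site."""
        def running_machines(self) -> int:
            raise NotImplementedError
        def boot_machines(self, count: int) -> None:
            raise NotImplementedError
        def shutdown_machines(self, count: int) -> None:
            raise NotImplementedError

    class IntegrationAdapter:
        """Integrates newly booted machines into the batch system."""
        def integrate_new_machines(self) -> None:
            raise NotImplementedError

    def broker_cycle(req: RequirementAdapter, site: SiteAdapter,
                     integration: IntegrationAdapter) -> None:
        """One decision cycle of the Broker in the ROCED Core: compare the
        reported demand with the running machines and scale accordingly."""
        demand = req.needed_machines()
        running = site.running_machines()
        if demand > running:
            site.boot_machines(demand - running)
        elif demand < running:
            site.shutdown_machines(running - demand)
        integration.integrate_new_machines()

In the actual service, several site adapters can be attached at once, and the Broker decides on which site new machines are booted.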


Dynamic Virtualized Deployment of Particle Physics Environments on a High Performance Computing Cluster

December 2018 · 8 Reads

The NEMO High Performance Computing Cluster at the University of Freiburg has been made available to researchers of the ATLAS and CMS experiments. Users access the cluster from external machines connected to the Worldwide LHC Computing Grid (WLCG). This paper describes how the full software environment of the WLCG is provided in a virtual machine image. The interplay between the schedulers for NEMO and for the external clusters is coordinated through the ROCED service. A cloud computing infrastructure is deployed at NEMO to orchestrate the simultaneous usage by bare-metal and virtualized jobs. Through this setup, resources are provided to users in a transparent, automated, and on-demand way. The performance of the virtualized environment has been evaluated for particle physics applications.


Virtualization of the ATLAS software environment on a shared HPC system

September 2018 · 25 Reads

Journal of Physics: Conference Series

The shared HPC cluster NEMO at the University of Freiburg has been made available to local ATLAS users through the provisioning of virtual machines incorporating the ATLAS software environment, analogously to a WLCG center. This concept allows both data analysis and production to run on the HPC host system, which is connected to the existing Tier-2/Tier-3 infrastructure. The schedulers of the two clusters were integrated in a dynamic, on-demand way. An automatically generated, fully functional virtual machine image provides access to the local user environment. The performance in the virtualized environment is evaluated for typical high-energy physics applications.
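
As a hedged sketch of how the batch-system side of such an on-demand integration can determine its resource demand, one could count pending Slurm jobs (the Freiburg Tier-3 setup uses Slurm, as noted in the ROCED figure above; the partition name and slots-per-VM value below are assumptions, not the paper's configuration):

    # Hedged sketch: derive VM demand from pending Slurm jobs. The squeue
    # options are standard Slurm; the partition name and slots-per-VM
    # value are assumptions for illustration only.
    import subprocess

    def pending_jobs(partition: str = "atlas") -> int:
        """Count jobs waiting in the given Slurm partition."""
        out = subprocess.run(
            ["squeue", "--noheader", "--states=PENDING",
             "--partition", partition, "--format=%i"],
            capture_output=True, text=True, check=True,
        ).stdout
        return len(out.splitlines())

    def machines_needed(slots_per_vm: int = 4, partition: str = "atlas") -> int:
        """Translate pending jobs into a number of VMs to request (round up)."""
        return -(-pending_jobs(partition) // slots_per_vm)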


Figure 1: Cluster structure. Figure 2: Workflow for starting VMs on the cluster.
Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system

October 2016 · 102 Reads · 11 Citations

Journal of Physics: Conference Series

Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and the SuperKEKB accelerators, respectively. These detectors share the requirement that enormous amounts of measurement data must be processed and analyzed, and that a comparable number of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves to be challenging for HEP groups due to their specialized software setup, which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and the data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, in this hybrid setup no static partitioning of the cluster into a physical and a virtualized segment is required. As a unique feature, the placement of the virtual machines on the cluster nodes is scheduled by Moab, and the job lifetime is coupled to the lifetime of the virtual machine. This allows for a seamless integration with the jobs sent by other user groups and honors the fair-share policies of the cluster. The developed thin integration layer between OpenStack and Moab can be adapted to other batch servers and virtualization systems, making the concept applicable for other cluster operators as well. This contribution reports on the concept and implementation of an OpenStack-virtualized cluster used for HEP workflows. While the full cluster will be installed in spring 2016, a test-bed setup with 800 cores has been used to study the overall system performance, and dedicated HEP jobs were run in a virtualized environment over many weeks. Furthermore, the dynamic integration of the virtualized worker nodes, depending on the workload at the institute's computing system, is described.
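
The coupling of job lifetime and VM lifetime described here can be pictured as a batch job whose payload boots an OpenStack instance and blocks until that instance disappears. The following is a minimal sketch using the openstacksdk Python client; the cloud, image, and flavor names are placeholders, and the paper's actual thin integration layer is a custom component not shown here:

    # Hedged sketch of coupling a batch job's lifetime to a VM, as described
    # above: the job payload boots an OpenStack instance and blocks until
    # the VM terminates, so scheduler accounting covers the whole VM
    # lifetime. Cloud, image, and flavor names are placeholders.
    import time
    import openstack

    def run_vm_as_job() -> None:
        conn = openstack.connect(cloud="nemo")  # credentials from clouds.yaml
        server = conn.compute.create_server(
            name="hep-worker",
            image_id=conn.compute.find_image("hep-vre-image").id,
            flavor_id=conn.compute.find_flavor("hpc.20core").id,
        )
        server = conn.compute.wait_for_server(server)  # wait until ACTIVE

        # Block while the VM exists; once it is deleted (e.g. after
        # draining), the batch job ends and the slot is freed again.
        try:
            while conn.compute.find_server(server.id) is not None:
                time.sleep(60)
        finally:
            conn.compute.delete_server(server.id, ignore_missing=True)

    if __name__ == "__main__":
        run_vm_as_job()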


Emulation-as-a-Service – The Past in the Cloud

July 2014 · 112 Reads · 7 Citations

Until now, emulation of legacy architectures has mostly been seen as a tool for hobbyists and as technical nostalgia. However, in a world in which research and development produce almost entirely digital artifacts, new and efficient concepts for preservation and re-use are required. Furthermore, a significant amount of today's cultural work is purely digital. Hence, emulation technology appeals to a wider, non-technical user group, since many of our digital objects cannot be re-used properly without a suitable runtime environment. This article presents a scalable and cost-effective cloud-based Emulation-as-a-Service (EaaS) architecture, enabling a wide range of non-technical users to access emulation technology in order to re-enact their digital belongings. Together with a distributed storage and data management model, we present an implementation from the domain of digital art to demonstrate the practicability of the proposed EaaS architecture.
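
The access model can be pictured as a thin web service in front of emulator sessions. The sketch below is purely hypothetical: the URL, endpoint, and field names are invented placeholders, not the actual EaaS interface:

    # Purely hypothetical sketch of a client re-enacting a digital object
    # through an EaaS-style web service; URL, endpoint, and field names
    # are invented placeholders and do not reflect the actual EaaS API.
    import requests

    EAAS_URL = "https://eaas.example.org/api"  # placeholder service URL

    def start_session(environment_id: str, object_url: str) -> str:
        """Request an emulated environment with the digital object attached;
        returns a remote-display handle for the user's browser."""
        resp = requests.post(f"{EAAS_URL}/sessions", json={
            "environment": environment_id,  # archived OS/emulator setup
            "object": object_url,           # digital artifact to re-enact
        }, timeout=30)
        resp.raise_for_status()
        return resp.json()["viewerUrl"]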







Citations (7)


... Furthermore, nearly all scientific disciplines rely on very high computing power for scientific discoveries. Consider, for example, the particle physics experiments at the Large Hadron Collider, which require a great amount of computing power for simulation, data processing and analysis [5]. Similarly, scientists need HPC to accelerate genome sequencing by two orders of magnitude in order to crack cancer diseases [3]. ...

Reference:

Accurate Component-level Energy Modelling of Parallel Applications on Modern Heterogeneous Hybrid Computing Platforms using System-level Measurements
Dynamic Virtualized Deployment of Particle Physics Environments on a High Performance Computing Cluster

Computing and Software for Big Science

... Instant clones use a copy-on-write [23] feature to drastically reduce the provisioning time compared to full clones [24], which have to boot up a new VM from scratch. To the best of our knowledge, there are very few existing works that support dynamic VM provisioning [20], [21], [25]; however, they do not utilize techniques such as instant clones to reduce provisioning overheads in a virtualized HPC environment. ...

Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system

Journal of Physics: Conference Series

... But these states do not contain information on how these changes were achieved. This information can be added manually by labeling intermediate states and adding notes describing the actions that were executed during a session [9]. ...

Emulation-as-a-Service – The Past in the Cloud
  • Citing Conference Paper
  • July 2014

... Mobile devices are used to monitor people's context information and record their preferences and behaviors, and hence hold the users' private information. Most of the protocols and algorithms for SAN require users to share personal information such as physical location [114], preferences, and social relations. Therefore, the privacy issue for mobile users in SAN becomes crucial. ...

Reclaiming Location Privacy in Mobile Telephony Networks—Effects and Consequences for Providers and Subscribers
  • Citing Article
  • June 2013

IEEE Systems Journal

... Klaus et al. (2011) have characterized security, safety and privacy in mobile telephony networks: by abusing the location details and mobility patterns of mobile users, an individual's privacy is threatened. ...

Location Privacy in Mobile Telephony Networks -- Conflict of Interest between Safety, Security and Privacy
  • Citing Article
  • October 2011

... Special hardware and software are needed for the practical implementation of the testbed. A detailed description of the hardware components used and of the software implementation is given in earlier work [45]. 1) Mobile Network: The mobile network component currently consists of two base transceiver stations (BTS), which are controlled by the base station controller. The BTS provides the air interface and communicates with the MS. ...

Testbed for Mobile Telephony Networks
  • Citing Conference Paper
  • September 2011

... The significance of privacy preservation for the User Entity (UE) in WMN attracts scholars working towards a solution to this problem [8]. The strategies of anonymous authentication and privacy models are categorized into three major classes: IMSI (International Mobile Subscriber Identity) encryption, utilizing dynamic identities, and pseudonym-based security. ...

Assessing Location Privacy in Mobile Communication Networks

Lecture Notes in Computer Science