Recent publications
In this chapter, we provide an overview of benchmark standardization efforts in the area of computer systems benchmarking. We focus on SPEC and TPC, the two most prominent benchmark standardization bodies in the area of computer systems and information technology.
In this chapter, we briefly review the basics of probability and statistics, while establishing the statistical notation needed for understanding some of the chapters in the book. The chapter is not intended as a comprehensive introduction to probability and statistics, but rather as a quick refresher assuming that the reader is already familiar with the basic concepts.
This chapter looks at the different measurement strategies and techniques that can be used in practice to derive the values of common metrics, including event-driven measurement, tracing, sampling, and indirect measurement. While most of the presented techniques target performance metrics, some of them can also be applied to other types of metrics. The chapter is wrapped up with an overview of commercial and open-source monitoring tools for performance profiling and call path tracing.
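As a rough illustration of the difference between event-driven and sampling-based measurement discussed in this chapter, the following Python sketch counts every call to an instrumented function while a background thread periodically samples the running stacks; it is a simplified toy, not one of the profiling tools surveyed in the chapter.

```python
# Toy contrast: event-driven measurement (count every call) vs. sampling
# (periodic stack snapshots). Illustrative only; function names are made up.
import sys, time, threading, collections

call_counts = collections.Counter()
samples = collections.Counter()

def counted(fn):                      # event-driven: instrument each call
    def wrapper(*args, **kwargs):
        call_counts[fn.__name__] += 1
        return fn(*args, **kwargs)
    return wrapper

def sampler(interval=0.01, duration=0.5):   # sampling: periodic snapshots
    end = time.time() + duration
    while time.time() < end:
        for frame in sys._current_frames().values():
            samples[frame.f_code.co_name] += 1
        time.sleep(interval)

@counted
def busy_work(n=50_000):
    return sum(i * i for i in range(n))

t = threading.Thread(target=sampler)
t.start()
while t.is_alive():
    busy_work()
t.join()
print("event counts:", dict(call_counts))
print("sample hits :", dict(samples))
```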
This chapter introduces the foundations of experimental design. Starting with the case of one factor, the “analysis of variance” (ANOVA) technique from statistics is introduced, followed by the method of contrasts for comparing subsets of alternatives. The analysis of variance technique is then generalized to two factors that can be varied independently, and after that, it is generalized to m factors. Following this, the Plackett-Burman fractional factorial design is introduced and compared with the full-factorial analysis of variance technique. Finally, a case study showing how experimental design can be applied in practice is presented.
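As a small illustration of the one-factor case discussed in this chapter, the following Python sketch runs a one-way ANOVA on made-up response-time samples for three hypothetical configurations; it assumes SciPy is available and is not code from the chapter's case study.

```python
# Minimal sketch of a one-factor ANOVA for comparing alternatives.
# The measurements below are made-up response times (ms) for three
# hypothetical system configurations.
from scipy import stats

config_a = [12.1, 11.8, 12.4, 12.0, 11.9]
config_b = [13.0, 12.7, 13.2, 12.9, 13.1]
config_c = [12.2, 12.5, 12.1, 12.3, 12.4]

# Null hypothesis: all configurations have the same mean response time.
f_stat, p_value = stats.f_oneway(config_a, config_b, config_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the factor (configuration) explains a
# significant portion of the observed variation.
```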
This chapter provides an overview of well-established microservice benchmark applications used in research. Further, it describes TeaStore as a selected example in more detail. Based on this description, we also showcase two case studies using TeaStore as an example of the usage of a microservice benchmark in practice.
In this chapter, we start by looking at some basic quantitative relationships, which can be used to evaluate a system’s performance based on measured or known data, a process known as operational analysis. In the second part, we provide a brief introduction to the basic notation and principles of queueing theory. While queueing theory has been applied successfully to different domains, for example, to model manufacturing lines or call center operation, in this chapter, we focus on using queueing theory for performance evaluation of computer systems. The chapter is wrapped up with a case study, showing in a step-by-step fashion how to build a queueing model of a distributed software system and use it to predict the performance of the system for different workload and configuration scenarios.
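The following Python sketch illustrates the flavor of the operational laws and a simple M/M/1 response-time estimate referred to above; the arrival rate and service demand are illustrative values, not figures from the chapter's case study.

```python
# Minimal sketch of basic operational laws and an M/M/1 response-time
# estimate; the arrival rate and service demand are illustrative values.
arrival_rate = 40.0      # requests per second (lambda)
service_demand = 0.02    # seconds of service per request (S)

# Utilization law: U = X * S (in steady state, throughput X equals the arrival rate)
utilization = arrival_rate * service_demand

# M/M/1 response time: R = S / (1 - U), valid only while U < 1
response_time = service_demand / (1.0 - utilization)

# Little's law: N = X * R gives the average number of requests in the system
avg_in_system = arrival_rate * response_time

print(f"U = {utilization:.2f}, R = {response_time * 1000:.1f} ms, N = {avg_in_system:.2f}")
```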
This chapter provides a definition of the term “benchmark” followed by definitions of the major system quality attributes that are typically the subject of benchmarking. After that, a classification of the different types of benchmarks is provided, followed by an overview of strategies for performance benchmarking. Next, the quality criteria for good benchmarks are discussed in detail. Reproducibility has gained increasing attention over the past decade as a basis for realizing the principles of findability, accessibility, interoperability, and reusability (FAIR) of research data. The chapter includes a definition of reproducibility as well as differentiation from related terms such as repeatability and replicability. It further elaborates on approaches for enhancing reproducibility, including how benchmarks help to improve the reproducibility of research results. Finally, the chapter is wrapped up by a discussion of application scenarios for benchmarks.
In this chapter, we consider workloads in the context of workload generation. The chapter starts with a classification of the different workload facets and artifacts. We introduce the distinction between executable and non-executable parts of a workload, as well as the distinction between natural and artificial workloads. The executable parts are then discussed in detail, including natural benchmarks, application workloads, and synthetic workloads. Next, the non-executable parts are discussed, distinguishing between workload traces and workload descriptions. In the rest of the chapter, we introduce the different types of workload descriptions that can be used for batch workloads and transactional workloads as well as for open and closed workloads. The challenges of generating steady-state workloads and workloads with varying arrival rates are discussed. Finally, the chapter concludes with a brief introduction of system-metric-based workload descriptions.
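As a small illustration of open workload generation with varying arrival rates, the following Python sketch draws exponentially distributed interarrival times (Poisson arrivals); the rate profile is made up and the sketch is not taken from the chapter.

```python
# Minimal sketch: arrival timestamps for an open workload with a
# time-varying arrival rate; the rate profile is illustrative.
import random

def generate_arrivals(rate_per_s, duration_s, seed=42):
    """Poisson arrivals: exponentially distributed interarrival times."""
    random.seed(seed)
    t, arrivals = 0.0, []
    while t < duration_s:
        t += random.expovariate(rate_per_s)
        arrivals.append(t)
    return arrivals

# Example: a steady-state phase followed by a higher-load phase.
steady = generate_arrivals(10, 60)                      # 10 req/s for 60 s
peak = [60 + t for t in generate_arrivals(25, 60)]      # 25 req/s for 60 s
print(len(steady), len(peak))
```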
This chapter starts by defining the basic concepts: metric, measure, and measurement. It then introduces the different scales of measurement, allowing one to classify the types of values assigned by measures. Next, definitions of the most common performance metrics are presented. The chapter continues with a detailed discussion of the quality attributes of good metrics. Finally, the different types of averages are introduced, while showing how they can be used to define composite metrics and aggregate results from multiple benchmarks.
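The following Python sketch illustrates the three classical averages and a geometric-mean aggregate over normalized benchmark scores; the speedup values are made up, and the snippet is only an illustration of the averages discussed in the chapter.

```python
# Minimal sketch of the three common averages and a geometric-mean
# aggregate over normalized benchmark scores (values are made up).
import statistics

speedups = [1.2, 0.9, 2.5, 1.1]   # per-benchmark ratios vs. a baseline

arith = statistics.mean(speedups)
geo = statistics.geometric_mean(speedups)   # Python >= 3.8
harm = statistics.harmonic_mean(speedups)

print(f"arithmetic={arith:.3f}, geometric={geo:.3f}, harmonic={harm:.3f}")
# The geometric mean is the usual choice for aggregating normalized
# ratios, since the aggregate does not depend on the chosen reference.
```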
This chapter starts by introducing the fundamentals of power and efficiency measurements and then describes a power and performance benchmark methodology developed by the SPECpower committee for commodity servers. It is designed to characterize and rate the energy efficiency of a system under test for multiple load levels, showcasing load level differences in system behavior regarding energy efficiency. The methodology was first implemented in the SPECpower_ssj2008 benchmark and later extended with more workloads, metrics, and other application areas for the SPEC Server Efficiency Rating Tool (SERT) suite. The SERT suite was developed to fill the need for a rating tool that can be utilized by government agencies in their regulatory programs, for example, by the U.S. Environmental Protection Agency (EPA) for use in its ENERGY STAR program for servers.
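As a rough, simplified illustration of rating energy efficiency across load levels, the following Python sketch computes operations per watt for made-up throughput and power readings; it is not the official SPECpower_ssj2008 or SERT metric computation.

```python
# Simplified illustration of per-load-level efficiency; throughput and
# power numbers are made up, and the aggregation is a simplification,
# not the official benchmark metric.
load_levels = ["100%", "80%", "60%", "40%", "20%", "active idle"]
throughput = [305000, 248000, 187000, 126000, 64000, 0]   # ops/s
power_w = [260.0, 225.0, 190.0, 160.0, 135.0, 105.0]      # average watts

for level, ops, watts in zip(load_levels, throughput, power_w):
    print(f"{level:>11}: {ops / watts:8.1f} ops/W")

# A single aggregate in the spirit of "overall ops per watt":
overall = sum(throughput) / sum(power_w)
print(f"overall: {overall:.1f} ops/W across all measured load levels")
```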
This chapter introduces statistical approaches for quantifying the variability and precision of measurements. The chapter starts by introducing the most common indices of dispersion for quantifying the variability, followed by defining basic concepts such as accuracy, precision, and resolution of measurements, as well as the distinction between systematic and random measurement errors. A model of random errors is introduced and used to derive confidence intervals for estimating the mean of a measured quantity of interest based on a sample of measurements. Finally, statistical tests for comparing alternatives based on measurements are introduced. The cases of paired and unpaired observations are covered separately.
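The following Python sketch illustrates a t-based confidence interval for the mean and a paired t-test of the kind discussed in this chapter; the samples are made-up measurements and SciPy is assumed to be available.

```python
# Minimal sketch: a 95% confidence interval for the mean and a paired
# t-test comparing two alternatives; the samples are made-up values.
import statistics
from scipy import stats

sample = [10.2, 9.8, 10.5, 10.1, 9.9, 10.3, 10.0, 10.4]
n = len(sample)
mean = statistics.mean(sample)
sem = statistics.stdev(sample) / n**0.5

# t-based confidence interval for the mean (n - 1 degrees of freedom)
low, high = stats.t.interval(0.95, n - 1, loc=mean, scale=sem)
print(f"mean = {mean:.2f}, 95% CI = [{low:.2f}, {high:.2f}]")

# Paired observations: the same test inputs measured on systems A and B
a = [10.2, 9.8, 10.5, 10.1, 9.9]
b = [10.6, 10.1, 10.9, 10.4, 10.2]
t_stat, p_value = stats.ttest_rel(a, b)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```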
This study reports on the mechanical properties and fractographic characteristics of Inconel 718 built by laser powder bed fusion (PBF-LB/M) and subjected to heat treatments and hot isostatic pressing (HIP). Tensile specimens are treated by stress relief (SR), stress relief followed by double aging (SR + DA), and hot isostatic pressing. Mechanical testing evaluates the ultimate tensile strength (UTS) and the fatigue behavior, examining differences in maximum load, elongation, and the different fatigue regimes. SR alone leads to a diminished UTS, while the combination of SR + DA yields a UTS of Rm = 1277 MPa. Regarding fatigue, the HIP condition shows a very balanced material structure with improved performance in the high cycle and very high cycle regimes, as the texture is homogenized during the treatment. Metallographic analysis quantifies the material changes, with density and hardness improved by the heat treatments. Furthermore, fractographic analysis reveals the differences in fracture behavior arising from the microstructural changes; crack initiation points, crack propagation, and forced fracture regions are categorized by scanning electron microscopy.
Myeloproliferative neoplasms (MPN) are associated with a variety of symptoms that severely impact patients’ quality of life and ability to perform daily activities. Recent studies showed differences in the perception of physician- versus patient-reported symptom burden. However, studies directly comparing patient- and physician-reported ratings are lacking. Here, a retrospective analysis of the symptom burden of 3979 MPN patients from the Bioregistry of the German Study Group for MPN was conducted to intra-individually compare physician and patient reports collected at the same time. Cohen’s kappa was calculated to assess the degree of agreement between patient and physician reports. Factors influencing baseline symptom severity were identified using linear regression, and adjusted Cox models were calculated to investigate the effect of symptom burden on survival. MPN patients had a high symptom burden, which neither decreased over time nor upon cytoreductive therapy. All symptoms were more frequently reported by patients than by physicians. Agreement remained low and only slightly improved when considering a higher threshold for patient symptom severity. Patients with severe symptom burden had inferior survival compared to patients with less severe symptoms. Physician assessment of symptom burden in MPN is therefore insufficient, and patient-reported outcome tools need to be implemented into clinical routine.
Refractive correction techniques, such as Lenticule Extraction and LASIK, are pivotal in corneal surgery. Precise morphological characterization is essential for identifying post-operative complications, but it can be compromised by image noise and low contrast. This study introduces an automated image processing algorithm that integrates non-local denoising, the Sobel gradient method, and Bayesian optimization to accurately delineate lenticule volumes and flap surfaces. Validated on 60 ex vivo porcine eyes treated with the SCHWIND ATOS femtosecond laser, the algorithm demonstrated high accuracy compared to the manual gold standard while effectively reducing variability.
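As a rough illustration of two of the named building blocks (non-local means denoising and Sobel gradients), the following Python sketch applies them to a synthetic image using scikit-image; it is not the authors' pipeline and omits the Bayesian optimization step.

```python
# Sketch of non-local means denoising followed by Sobel gradients on a
# synthetic image; parameters are illustrative, not the published ones.
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma
from skimage.filters import sobel

rng = np.random.default_rng(0)
image = rng.normal(0.5, 0.1, size=(128, 128)).astype(np.float32)

sigma = float(np.mean(estimate_sigma(image)))
denoised = denoise_nl_means(image, h=1.15 * sigma, fast_mode=True,
                            patch_size=5, patch_distance=6)
edges = sobel(denoised)   # gradient magnitude highlights interfaces
print(edges.shape, float(edges.max()))
```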
Since the founding of the People’s Republic in 1949, China’s standardization system has undergone continuous change. However, with the opening up of the economy and the growing importance of international trade, a more comprehensive modernization of the Chinese standardization system became inevitable. After initial attempts to renew the standardization system showed little success, China has been pursuing a more holistic approach since 2015 as part of its reform plans for the standardization system, an approach that meets the needs and expectations of a leading economic power in a global market. Compared to other leading standardization powers such as Europe or the USA, China is considered a latecomer to international standardization. However, this has offered the advantage that China can learn from the experience of other nations and build a standardization system that incorporates the best aspects of other systems while simultaneously meeting China’s claim to economic leadership.
Purpose
To determine the size of the effective optical and treatment zones after lenticule extraction procedures.
Design
Retrospective case series.
Methods
A fully automated method was developed to determine the boundaries of the optical and treatment zones of a lenticule of tissue extracted from a cornea. The boundaries of the corrected area are derived from the differences between postoperative and preoperative maps of several corneal metrics by determining the smallest cross-over point along each semi-meridian (a simplified sketch of this cross-over idea follows the abstract).
Results
The method was applied to a pilot cohort of 84 eyes; the clinical data show average diameters of 6.56 ± 0.46 mm [range 5.25 to 7.51 mm] for tangential anterior curvature and 8.49 ± 0.56 mm [range 6.58 to 9.52 mm] for corneal thickness.
Conclusions
The method provides a reliable and objective way to determine the size of a lenticule of tissue extracted from a cornea, and it can be applied to any topography- or tomography-derived metric.
Translational relevance
To determine the size of the effective optical and treatment zones after laser vision correction, a fully automated method was developed. The method is simple to implement and can be used to determine the actual size of a corneal correction and to help titrate the planned size. The advantages of this algorithm over existing methods are its objectivity; automation; speed; resolution, accuracy, and precision; the free-form boundary determination; and finally its support in determining centration and circularity.
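The following Python sketch illustrates the cross-over idea mentioned in the Methods: for a single semi-meridian, it finds the smallest radius at which a post-minus-pre difference profile crosses zero; the profile values are synthetic and the sketch is not the published implementation.

```python
# Sketch of the cross-over idea for one semi-meridian: find the smallest
# radius where the post-minus-pre difference profile crosses zero.
# The difference profile below is synthetic, for illustration only.
import numpy as np

radius_mm = np.linspace(0.0, 5.0, 101)        # distance from center
diff_profile = 0.8 - 0.25 * radius_mm**2      # synthetic post - pre values

sign_change = np.where(np.diff(np.sign(diff_profile)) != 0)[0]
if sign_change.size:
    i = sign_change[0]
    # Linear interpolation between the two samples bracketing the crossing.
    r0, r1 = radius_mm[i], radius_mm[i + 1]
    d0, d1 = diff_profile[i], diff_profile[i + 1]
    crossover_radius = r0 - d0 * (r1 - r0) / (d1 - d0)
    print(f"cross-over at {crossover_radius:.2f} mm along this semi-meridian")
```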
Flash-annealing (FA) of metallic glasses (MGs) allows one to modulate their disordered structure. Here, we have flash-annealed a CuZr-based MG below the glass transition temperature for different numbers of cycles and generated MGs with various heterogeneous structures. We quantified the glassy structure via the relaxation enthalpy, ΔrelH, which did not change significantly for MGs flash-annealed for a low number of cycles, while their hardness decreased monotonically. However, when more than ten FA cycles were applied, ΔrelH perceivably decreased, while the corresponding hardness increased. High-energy X-ray diffraction analysis revealed that the medium-range ordering of the corresponding structure initially rose and then decreased with an increasing number of FA cycles. This structural change is accompanied first by a hardness decrease and then by an increase. Molecular dynamics simulations showed that, throughout the shift from low to high cycle numbers, the structure changed from being non-uniform to more uniform. Through a combination of experiments and simulations, we have shown the non-monotonic relationship between the structural heterogeneity of MGs and cyclic treatments, contributing to a better understanding of the relationship between structural control techniques, microstructure, and properties.
Design of experiments (DOE) is an established method to allocate resources for efficient parameter space exploration. Model-based active learning (AL) data sampling strategies have shown potential for further optimization. This paper introduces a workflow for conducting DOE comparative studies using automated machine learning. Based on a practical definition of model complexity in the context of machine learning, the interplay of systematic data generation and model performance is examined considering various sources of uncertainty, including uncertainties caused by stochastic sampling strategies, imprecise data, suboptimal modeling, and model evaluation. Results obtained from electrical circuit models with varying complexity show that not all AL sampling strategies outperform conventional DOE strategies, depending on the available data volume, the complexity of the dataset, and data uncertainties. Trade-offs in resource allocation strategies, in particular between identical replication of data points for statistical noise reduction and broad sampling for maximum parameter space exploration, and their impact on subsequent machine learning analysis are systematically investigated. The results indicate that replication-oriented strategies should not be dismissed but may prove advantageous in cases with non-negligible noise and intermediate resource availability. The provided workflow can be used to simulate practical experimental conditions for DOE testing and selection.
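As a toy illustration of the kind of comparison described above, the following Python sketch contrasts a conventional, evenly spaced design with a simple uncertainty-based active-learning loop on a made-up RC low-pass filter model; the model, settings, and scikit-learn-based learner are assumptions for illustration, not the paper's workflow.

```python
# Toy comparison: evenly spaced DOE vs. uncertainty-based active learning
# on a synthetic, noisy RC low-pass filter model. Illustrative only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)

def circuit_gain(freq_log10):
    """Synthetic measurement: |H| of an RC low-pass filter plus noise."""
    f = 10.0 ** freq_log10
    fc = 1.0e3                                  # assumed cutoff frequency (Hz)
    gain = 1.0 / np.sqrt(1.0 + (f / fc) ** 2)
    return gain + rng.normal(0.0, 0.01, size=np.shape(f))

candidates = np.linspace(0.0, 6.0, 200).reshape(-1, 1)   # log10(frequency)

# Conventional DOE: evenly spaced design points.
doe_x = np.linspace(0.0, 6.0, 12).reshape(-1, 1)
doe_y = circuit_gain(doe_x.ravel())

# Active learning: start small, then query where the model is least certain.
al_x = np.linspace(0.0, 6.0, 4).reshape(-1, 1)
al_y = circuit_gain(al_x.ravel())
for _ in range(8):
    gp = GaussianProcessRegressor(normalize_y=True).fit(al_x, al_y)
    _, std = gp.predict(candidates, return_std=True)
    next_x = candidates[np.argmax(std)].reshape(1, -1)
    al_x = np.vstack([al_x, next_x])
    al_y = np.append(al_y, circuit_gain(next_x.ravel()))

print(f"DOE points: {len(doe_x)}, AL points: {len(al_x)}")
```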
Organizations regularly face serious challenges due to pandemics, recessions, and financial crises. One reason some organizations cope better than others may be that their management control systems (MCSs) more effectively foster organizational resilience. Despite considerable literature on MCSs and organizational resilience, there is a lack of research on the impact of an MCS’s use on organizational resilience. This study examines and bridges the literatures on MCSs and organizational resilience to illuminate how organizations can better cope with adversity. To identify potential relationships between management controls, MCSs, and organizational resilience, we systematically review the literature and perform a content analysis. We examine the relationships between organizational resilience measures, capabilities, and management controls. We propose the use of resilience-oriented management controls and discuss whether organizations can increase their resilience by building resilience-oriented MCSs. Based on Simons’s levers of control framework and Duchek’s capability-based conceptualization of organizational resilience, we develop a conceptual organizational resilience/MCS framework. Our study reveals relationships and gaps between the literatures on MCSs and organizational resilience and proposes avenues for future research. Our findings suggest that resilience-oriented MCSs are beneficial to organizational resilience.