
Welf Löwe
Prof. Dr.
Linnaeus University | lnu · Software Technology Labs (STL)
About
190 Publications · 46,228 Reads
1,798 Citations
Introduction
Additional affiliations
April 1996 - March 2002
Publications (190)
On-the-fly Garbage Collectors (GCs) are the state-of-the-art concurrent GC algorithms today. Everything is done concurrently, but phases are separated by blocking handshakes. Hence, progress relies on the scheduler to let application threads (mutators) run into GC checkpoints to reply to the handshakes. For a non-blocking GC, these blocking handsha...
This paper discusses algorithms for scheduling task-graphs G=(V,E,τ) to LogP-machines. These algorithms depend on the granularity of G, i.e., on the ratio of computation τ(v) and communication times in the LogP-cost model, and on the structure of G. We define a class of coarse-grained task-graphs that can be scheduled with a performance guarantee o...
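As a rough illustration of the granularity notion in the entry above, one can compare each task's computation time tau(v) against the LogP cost of a single message; the Python sketch below takes L + 2o as that cost, which is an illustrative reading and not the paper's exact definition.

    # Hypothetical sketch: call a task graph "coarse-grained" when every task's
    # computation time tau(v) dominates the LogP cost of one message, here taken
    # as L + 2o (latency plus send and receive overhead). Illustrative only.
    def is_coarse_grained(tau, L, o):
        return all(t >= L + 2 * o for t in tau.values())

    tau = {"v1": 40, "v2": 35, "v3": 50}      # computation times per task
    print(is_coarse_grained(tau, L=10, o=5))  # True: every tau(v) >= 20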
Many software engineering applications require points-to analysis. These client applications range from optimizing compilers to integrated program development environments (IDEs) and from testing environments to reverse-engineering tools. Moreover, software engineering applications used in an edit-compile cycle need points-to analysis to be fast an...
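For orientation, a textbook inclusion-based (Andersen-style) points-to analysis can be sketched as a fixed-point iteration over subset constraints; this generic Python sketch is not the analysis proposed in the entry above.

    # Generic Andersen-style points-to sketch (flow-insensitive, inclusion-based).
    # Statements: ("alloc", p, o) means p may point to object o;
    #             ("copy",  p, q) means everything q points to, p may point to.
    def points_to(statements):
        pts = {}                                  # variable -> set of objects
        changed = True
        while changed:                            # iterate to a fixed point
            changed = False
            for kind, dst, src in statements:
                before = len(pts.setdefault(dst, set()))
                if kind == "alloc":
                    pts[dst].add(src)
                elif kind == "copy":
                    pts[dst] |= pts.get(src, set())
                changed |= len(pts[dst]) != before
        return pts

    stmts = [("alloc", "p", "o1"), ("copy", "q", "p"), ("alloc", "q", "o2")]
    print(points_to(stmts))   # p -> {o1}, q -> {o1, o2}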
We describe the principles of a novel framework for performance-aware composition of sequential and explicitly parallel software components with implementation variants. Automatic composition results in a table-driven implementation that, for each parallel call of a performance-aware component, looks up the expected best implementation variant, pro...
Detection of design pattern occurrences is part of several solutions to software engineering problems, and high accuracy of detection is important to help solve the actual problems. The improvement in accuracy of design pattern occurrence detection requires some way of evaluating various approaches. Currently, there are several different methods us...
Metrics As Scores can be thought of as an interactive, multiple analysis of variance ("ANOVA"). An ANOVA might be used to estimate the goodness-of-fit of a statistical model. Beyond ANOVA, which is used to analyze the differences among hypothesized group means for a single quantity (feature), Metrics As Scores seeks to answer the question of whethe...
Software quality models aggregate metrics to indicate quality. Most metrics reflect counts derived from events or attributes that cannot directly be associated with quality. Worse, what constitutes a desirable value for a metric may vary across contexts. We demonstrate an approach to transforming arbitrary metrics into absolute quality scores by le...
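One minimal way to picture such a transformation, assuming the score is derived from the metric's empirical distribution in a reference corpus (the paper's actual construction may differ), is to map a raw value to the fraction of reference values it outperforms.

    from bisect import bisect_right

    # Minimal sketch: map a raw metric value to a [0, 1] score via the empirical
    # distribution of that metric in a reference corpus. The corpus values and
    # the "lower is better" direction are illustrative assumptions.
    def score(value, reference_values, lower_is_better=True):
        ref = sorted(reference_values)
        frac_at_or_below = bisect_right(ref, value) / len(ref)   # empirical CDF
        return 1.0 - frac_at_or_below if lower_is_better else frac_at_or_below

    complexity_corpus = [3, 5, 7, 8, 12, 15, 20, 40]   # hypothetical metric values
    print(score(6, complexity_corpus))   # 0.75: better than 75% of the corpus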
The exhaust gas regeneration (EGR) system, also called the exhaust gas re-circulation system, in the engine of a construction machine (CM) often gets clogged due to the various ways in which the machines are driven. Currently, no model exists that can predict clogging for maintenance planning. Hence, clogging is only recognized when it has occurred, and often ca...
Anti-patterns are harmful phenomena repeatedly occurring, e.g., in software development projects. Though widely recognized and well-known, their descriptions are traditionally not fit for automated detection. The detection is usually performed by manual audits, or on business process models. Both options are time-, effort- and expertise-heavy, pron...
Static program analysis is in general more precise if it is sensitive to execution contexts (execution paths). But then it is also more expensive in terms of memory consumption. For languages with conditions and iterations, the number of contexts grows exponentially with the program size. This problem is not just a theoretical issue. Several papers...
Solutions to multi-objective optimization problems can generally not be compared or ordered, due to the lack of orderability of the single objectives. Furthermore, decision-makers are often made to believe that scaled objectives can be compared. This is a fallacy, as the space of solutions is in practice inhomogeneous without linear trade-offs. We...
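The non-orderability referred to above is commonly formalized via Pareto dominance: one solution dominates another only if it is at least as good in every objective and strictly better in at least one. A small generic check (illustrative, not the paper's method):

    # Pareto dominance for minimization problems: a dominates b iff a is no worse
    # in every objective and strictly better in at least one. Illustrative only.
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(solutions):
        return [s for s in solutions
                if not any(dominates(other, s) for other in solutions if other != s)]

    solutions = [(1.0, 9.0), (2.0, 3.0), (4.0, 4.0), (3.0, 2.0)]
    print(pareto_front(solutions))   # [(1.0, 9.0), (2.0, 3.0), (3.0, 2.0)]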
With the advancing digitisation of society and industry we observe a progressing blending of computational, physical, and social processes. The trustworthiness and sustainability of these systems will be vital for our society. However, engineering modern computing systems is complex as they have to: i) operate in uncertain and continuously changing...
A quality model is a conceptual decomposition of an abstract notion of quality into relevant, possibly conflicting characteristics and further into measurable metrics. For quality assessment and decision making, metrics values are aggregated to characteristics and ultimately to quality scores. Aggregation has often been problematic as quality model...
Movements of a person can be recorded with a mobile camera and visualized as sequences of stick figures for assessments in health and elderly care, physio-therapy, and sports. However, since the visualizations flicker due to noisy input data, the visualizations themselves and even whole assessment applications are not trusted in general. The presen...
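To make the flicker problem concrete, the simplest temporal smoothing of joint positions is an exponential moving average per coordinate; this generic sketch is not the stabilization approach presented in the entry above.

    # Generic illustration: exponential moving average over per-frame joint
    # positions to damp flicker from noisy input. Not the paper's method.
    def smooth(frames, alpha=0.3):
        """frames: list of dicts joint -> (x, y, z); smaller alpha = smoother."""
        smoothed, state = [], {}
        for frame in frames:
            for joint, pos in frame.items():
                prev = state.get(joint, pos)
                state[joint] = tuple(alpha * p + (1 - alpha) * q
                                     for p, q in zip(pos, prev))
            smoothed.append(dict(state))
        return smoothed

    frames = [{"knee": (0.0, 1.0, 0.0)}, {"knee": (0.2, 1.1, 0.0)}]
    print(smooth(frames)[1]["knee"])   # damped step towards the new measurement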
Regression uses supervised machine learning to find a model that combines several independent variables to predict a dependent variable based on ground truth (labeled) data, i.e., tuples of independent and dependent variables (labels). Similarly, aggregation also combines several independent variables to a dependent variable. The dependent variable...
It is a well-known practice in software engineering to aggregate software metrics to assess software artifacts for various purposes, such as their maintainability or their proneness to contain bugs. For different purposes, different metrics might be relevant. However, weighting these software metrics according to their contribution to the respectiv...
Background: Nowadays, self-reported assessments (SA) and accelerometer-based assessments (AC) are commonly used methods to measure daily life physical activity (PA) in older adults. SA is simple, cost-effective, and can be used in large epidemiological studies, but its reliability and validity have been questioned. Accelerometer measurement has prov...
This book constitutes the refereed proceedings of the 15th International Conference on Software Architecture, ECSA 2021, held in Sweden in September 2021. Due to the COVID-19 pandemic, the conference was held virtually. In the Research Track, 11 full papers, presented together with 5 short papers, were carefully reviewed and selected from 58 submissions....
Background: Mobility and balance are essential for older adults' well-being, independence, and ability to stay physically active. Early identification of functional impairment may enable early risk-of-fall assessments and preventive measures. There is a need to find new solutions to assess functional ability in easy, efficient, and accurat...
Source code is changed for a reason, e.g., to adapt, correct, or perfect it. This reason can provide valuable insight into the development process but is rarely explicitly documented when the change is committed to a source code repository. Automatic commit classification uses features extracted from commits to estimate this reason. We introduce sour...
The challenge of having to deal with dependent variables in classification and regression using techniques based on Bayes' theorem is often avoided by assuming a strong independence between them, hence such techniques are said to be naive. While analytical solutions supporting classification on arbitrary amounts of discrete and continuous random va...
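The "naive" assumption mentioned above is that the class-conditional joint density factorizes into per-feature densities. A minimal Gaussian naive Bayes classifier makes this explicit; it is a textbook sketch, not the paper's treatment of dependent variables.

    import math

    # Textbook Gaussian naive Bayes: the "naive" step is multiplying per-feature
    # likelihoods, i.e., assuming features are independent given the class.
    def fit(X, y):
        model = {}
        for c in set(y):
            rows = [x for x, label in zip(X, y) if label == c]
            means = [sum(col) / len(col) for col in zip(*rows)]
            variances = [sum((v - m) ** 2 for v in col) / len(col) + 1e-9
                         for col, m in zip(zip(*rows), means)]
            model[c] = (len(rows) / len(y), means, variances)
        return model

    def predict(model, x):
        def log_posterior(c):
            prior, means, variances = model[c]
            return math.log(prior) + sum(
                -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
                for xi, m, v in zip(x, means, variances))
        return max(model, key=log_posterior)

    X, y = [[1.0, 2.0], [1.2, 1.8], [4.0, 5.0], [4.2, 5.1]], ["a", "a", "b", "b"]
    print(predict(fit(X, y), [1.1, 2.1]))   # "a"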
Sound assessment and ranking of alternatives are fundamental to effective decision making. Creating an overall ranking is not trivial if there are multiple criteria, and none of the alternatives is the best according to all criteria. To address this challenge, we propose an approach that aggregates criteria scores based on their joint (probability)...
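A minimal way to picture aggregation by joint probability, assuming scores are aggregated as the empirical probability that a reference alternative is outperformed in all criteria at once (the paper's construction may differ):

    # Sketch: aggregate an alternative's criteria scores as the empirical joint
    # probability that a reference alternative is at most as good in *all*
    # criteria. "Higher is better" and the sample are illustrative assumptions.
    def joint_score(candidate, reference_sample):
        dominated = sum(all(r <= c for r, c in zip(ref, candidate))
                        for ref in reference_sample)
        return dominated / len(reference_sample)

    reference_sample = [(0.2, 0.4), (0.5, 0.5), (0.9, 0.1), (0.6, 0.7)]
    print(joint_score((0.7, 0.6), reference_sample))   # 0.5

In this sketch, an alternative only scores highly when it is jointly good in all criteria, unlike a weighted sum, where strength in one criterion can mask weakness in another.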
Commit classification, the automatic classification of the purpose of changes to software, can support the understanding and quality improvement of software and its development process. We introduce code density of a commit, a measure of the net size of a commit, as a novel feature and study how well it is suited to determine the purpose of a chang...
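To give a concrete feel for a density-style feature, the hypothetical sketch below divides net changed lines (here crudely approximated by discarding blank and comment-only lines) by gross changed lines; the paper's exact definition of net size may differ.

    # Hypothetical sketch of a commit's "code density": net changed lines (here
    # approximated by ignoring blank and comment-only lines) over gross changed
    # lines. The paper's precise definition of net size may differ.
    def code_density(changed_lines):
        def is_net(line):
            stripped = line.strip()
            return bool(stripped) and not stripped.startswith(("//", "/*", "*"))
        gross = len(changed_lines)
        net = sum(is_net(line) for line in changed_lines)
        return net / gross if gross else 0.0

    changed = ["int x = f(y);", "", "// explain the call", "return x;"]
    print(code_density(changed))   # 0.5: 2 net lines out of 4 changed lines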
Quality assessment of human movements has many applications in diagnosis and therapy of musculoskeletal insufficiencies and high-performance sports. We suggest five purely data-driven assessment methods for human movements using inexpensive 3D sensor technology. We evaluate their accuracy by comparing them against a val...
Please follow the DOI to download the Artifact. The artifact belongs to the paper "Quality Models Inside Out: Interactive Visualization of Software Metrics by Means of Joint Probabilities"
The productivity of a (team of) developer(s) can be expressed as a ratio between effort and delivered functionality. Several different estimation models have been proposed. These are based on statistical analysis of real development projects; their accuracy depends on the number and the precision of data points. We propose a data-driven method to a...
Multi-dimensional goals can be formalized in so-called quality models. Often, each dimension is assessed with a set of metrics that are not comparable; they come with different units, scale types, and distributions of values. Aggregating the metrics to a single quality score in an ad-hoc manner cannot be expected to provide a reliable basis for dec...
Selecting the optimum component implementation variant is sometimes difficult since it depends on the component’s usage context at runtime, e.g., on the concurrency level of the application using the component, call sequences to the component, actual parameters, the hardware available, etc. A conservative selection of implementation variants leads t...
The ubiquity of sensor, computing, communication, and storage technologies provides us with access to previously unknown amounts of data - Big Data. Big Data has revolutionized research communities and their scientific methodologies. It has, for instance, innovated the approaches to knowledge and theory building, validation, and exploitation taken...
Nowadays, many companies are running digitalization initiatives or are planning to do so. There exist various models to evaluate the digitalization potential of a company and to define the maturity level of a company in exploiting digitalization technologies summarized under buzzwords such as Big Data, Artificial Intelligence (AI), Deep Learning, a...
In an increasingly networked world, the availability of high-quality translations is critical for success, especially in the context of international competition. International companies need to provide well-translated, high-quality technical documentation not only to be successful in the market but also to meet legal regulations. We seek to evalua...
Service functionality can be provided by more than one service provider. In order to choose the service with the highest benefit, a selection based on experiences previously measured by other consumers is beneficial. In this paper, we present the results of our evaluation of two machine learning approaches in combination with several learning strat...
Taking simple decisions comes naturally and requires no additional considerations, but when there are multiple alternatives and criteria to be considered, a decision-making technique is required. The most studied and developed technique is the Analytic Hierarchy Process (AHP). We focus on the practical implementation of AHP and study the set o...
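For orientation, the computational core of AHP derives criterion weights from a reciprocal pairwise comparison matrix, typically as its principal eigenvector; the sketch below approximates it by power iteration and shows generic AHP, not the specific implementation choices studied in the entry above.

    # Generic AHP core: priority weights as the principal eigenvector of a
    # reciprocal pairwise comparison matrix, approximated by power iteration.
    def ahp_weights(A, iterations=100):
        n = len(A)
        w = [1.0 / n] * n
        for _ in range(iterations):
            w = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
            total = sum(w)
            w = [x / total for x in w]
        return w

    # "Criterion 1 is 3x as important as criterion 2 and 5x as important as 3", etc.
    A = [[1,   3,   5],
         [1/3, 1,   2],
         [1/5, 1/2, 1]]
    print([round(x, 3) for x in ahp_weights(A)])   # roughly [0.648, 0.230, 0.122]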
Compaction of memory in long running systems has always been important. The latency of compaction increases in today’s systems with high memory demands and large heaps. To deal with this problem, we present a lock-free protocol allowing for copying concurrent with the application running, which reduces the latencies of compaction radically. It prov...
Designers of context-sensitive program analyses need to take special care of the memory consumption of the analysis results. In general, they need to sacrifice accuracy to cope with restricted memory resources. We introduce χ-terms as a general data structure to capture and manipulate context-sensitive analysis results. A χ-term is a compact repres...
In software engineering, testing is one of the cornerstones of quality assurance. The idea of software testing can be applied to information quality as well. Technical documentation has a set of intended uses that correspond to use cases in a software system. Documentation is, in many cases, presented via software systems, e.g., web servers and br...
Today’s popular languages have a large number of different language constructs and standardized library interfaces. The number is further increasing with every new language standard. Most published analyses therefore focus on a subset of such languages or define a language with a few essential constructs of interest. More recently, program-analysis c...
Sometimes components are conservatively implemented as thread-safe, while during the actual execution they are only accessed from one thread. In these scenarios, overly conservative assumptions lead to suboptimal performance. The contribution of this paper is a component architecture that combines the benefits of different synchronization mechanism...
Classification is a constitutive part in many different fields of Computer Science. There exist several approaches that capture and manipulate classification information in order to construct a specific classification model. These approaches are often tightly coupled to certain learning strategies, special data structures for capturing the models,...
Indirect metrics in quality models define weighted integrations of direct metrics to provide higher-level quality indicators. This paper presents a case study that investigates to what degree quality models depend on statistical assumptions about the distribution of direct metrics values when these are integrated and aggregated. We vary the normali...
Fine-tuning which data structure implementation to use for a given problem is sometimes tedious work since the optimum solution depends on the context, i.e., on the operation sequences, actual parameters as well as on the hardware available at run time. Sometimes a data structure with higher asymptotic time complexity performs better in certain con...
Points-to analysis is a static program analysis that extracts reference information from programs, e.g., possible targets of a call and possible objects referenced by a field. Previous works evaluating different approaches to context-sensitive Points-to analyses use coarse-grained precision metrics focusing on references between source code entitie...
Parallelization and other optimizations often depend on static dependence analysis. This approach requires methods to be independent regardless of the input data, which is not always the case. Our contribution is a dynamic analysis "guessing" if methods are pure, i.e., if they do not change state. The analysis is piggybacking on a garbage collecto...
For large software projects, system designers have to adhere to a significant number of functional and non-functional requirements, which makes software development a complex engineering task. If these requirements change during the development process, complexity even increases. In this paper, we suggest recommendation systems based on context-awa...
Most scalable approaches to inter-procedural dataflow analysis do not take into account the order in which fields are accessed, and methods are executed, at run-time. That is, they have no inter-procedural flow-sensitivity. In this chapter we present an approach to dataflow analysis named Simulated Execution. It is flow-sensitive in the sense that...
The context-aware composition approach (CAC) has shown to improve the performance of object-oriented applications on modern multi-core hardware by selecting between different (sequential and parallel) component variants in different (call and hardware) contexts. However, introducing CAC in legacy applications can be time-consuming and requires quit...
Information quality assessment of technical documentation is an integral part of quality management of products and services. Technical documentation is usually assessed using questionnaires, checklists, and reviews. This is cumbersome, costly and prone to errors. Acknowledging the fact that only people can assess certain quality aspects, we sugges...
We describe an approach to parallelize sequential object-oriented general purpose programs automatically adapting well-known analysis and transformation techniques and combined with context-aware composition. First experiments demonstrate the potential speed-up. This approach allows sequential object-oriented programs to benefit from modern hardwar...
Today there exist many programming models and platforms for implementing real-time stream processing systems. A decision in favor of the wrong technology might lead to increased development time and costs. It is, therefore, necessary to decide which alternatives further efforts should concentrate on and which may be forgotten. Such decisions cannot...
In a service-oriented setting, where services are composed to provide end user functionality, it is a challenge to find the service components with best-fit functionality and quality. A decision based on information mainly provided by service providers is inadequate as it cannot be trusted in general. In this paper, we discuss service compositions...
This paper presents a generalized theory for capturing and manipulating classification information. We define decision algebra which models decision-based classifiers as higher order decision functions abstracting from implementations using decision trees (or similar), decision rules, and decision tables. As a proof of the decision algebra concept...
Many dynamic analysis tools capture the occurrences of events at runtime. The longer programs are being monitored, the more accurate the data they provide to the user. At the same time, the runtime overhead must be kept as low as possible, because it decreases the user's productivity. Runtime performance overhead occurs due to identifying events, and storing t...
Context-Aware Composition allows to automatically select optimal variants of algorithms, data-structures, and schedules at runtime using generalized dynamic Dispatch Tables. These tables grow exponentially with the number of significant context attributes. To make Context-Aware Composition scale, we suggest four alternative implementations to Dispa...
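A dispatch table of the kind referred to above can be pictured as a mapping from discretized context attributes to the variant expected to perform best, with a default fallback; the toy sketch below is illustrative and not one of the four implementations compared in the paper.

    # Toy dispatch table for context-aware composition: the measured-best variant
    # is looked up per (discretized) context. Variant names, context attributes,
    # and thresholds are illustrative assumptions.
    def sort_sequential(data):
        return sorted(data)

    def sort_parallel(data):
        return sorted(data)               # stand-in for a parallel variant

    DISPATCH = {
        ("small", "low_concurrency"):  sort_sequential,
        ("large", "low_concurrency"):  sort_parallel,
        ("large", "high_concurrency"): sort_sequential,   # avoid oversubscription
    }

    def context_aware_sort(data, active_threads):
        size = "large" if len(data) > 10_000 else "small"
        load = "high_concurrency" if active_threads > 4 else "low_concurrency"
        variant = DISPATCH.get((size, load), sort_sequential)
        return variant(data)

    print(context_aware_sort([3, 1, 2], active_threads=1))   # [1, 2, 3]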
In this paper, we present feedback-driven points-to analysis where any classical points-to analysis has its points-to results at certain program points guarded by a-priori upper bounds. Such upper bounds can come from other points-to analyses -- this is of interest when different approaches are not strictly ordered in terms of accuracy -- and from...
Static program analysis supporting software development is often part of edit-compile-cycles, and precise program analysis is time consuming. Points-to analysis is a data-flow-based static program analysis used to find object references in programs. Its applications include test case generation, compiler optimizations and program understanding, and...
Technical documentation has moved from printed booklets to electronic versions that need to be updated continuously to match product development and user demands. There is an imminent need to ensure the quality of technical documentation, i.e., information that follows a product. In order to ensure the quality of technical documentation, it is impo...
Information quality assessment of technical documentation is nowadays an integral part of quality management of products and services. Documentation is usually assessed using questionnaires, checklists, and reviews, which is consequently cumbersome, costly, and error-prone work. Acknowledging the fact that only humans can assess certain quality aspec...
Points-to analysis for large object-oriented systems is currently too imprecise or too slow. This prevents or hampers many useful client analyses, refactorings, or optimizations. In this paper, we present an SSA-based approach to points-to analysis that simulates the actual execution of a program. It is precise since it is both locally and global...
When a new system, such as a knowledge management system or a content management system, is put into production, both the software and hardware are systematically and thoroughly tested, while the main purpose of the system, the information, often lacks systemic testing. In this paper we study how to extend testing approaches from software and h...