Jan Jürjens’s research while affiliated with RWTH Aachen University and other places


Publications (235)


Too Many Issues: Automatically Prioritizing Analyzer Findings by Tracing Security Importance
  • Article

June 2025 · 4 Reads · ACM Transactions on Software Engineering and Methodology

[...] · Jan Jürjens

Code-based analyzers often find too many potentially security-related issues to address them all. Therefore, issues likely to lead to vulnerabilities should be fixed first. Such prioritization requires project-specific knowledge, such as quality requirements, security-related decisions, and design, which is not accessible to code analyzers. We present TraceSEC, an automated technique for prioritizing issues according to their security-related importance to the project. Its core concept is to incorporate available design artifacts and trace links between them, thus considering the project context that the code lacks. We reduce the problem of issue prioritization to a maximum flow problem and quantify the importance of each issue by the flow from user-defined quality aspects to the issue, i.e., quantifying its impact on project-specific security preferences. Our evaluation shows that TraceSEC effectively provides automated prioritization and can be tailored to project-specific quality goals. Its prioritization correlates more strongly with manual expert prioritization than SonarQube rule severities, which are commonly used in practice. In particular, TraceSEC shows higher agreement in identifying high-priority issues. TraceSEC scales reasonably well for codebases of up to 4 million lines of code, and the initial setup overhead is likely to be recouped after the first automated prioritization.
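The max-flow reduction described in the abstract can be sketched in a few lines. This is a hypothetical illustration, not TraceSEC itself: the graph, node names, and capacities are invented. A quality aspect connects to design artifacts via weighted edges, trace links connect artifacts to analyzer issues, and each issue is scored by the maximum flow it receives from the quality aspect.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow; cap is a nested dict cap[u][v] -> capacity."""
    residual = {u: dict(vs) for u, vs in cap.items()}
    for u in list(residual):
        for v in list(residual[u]):
            residual.setdefault(v, {}).setdefault(u, 0)  # add reverse edges
    residual.setdefault(s, {})
    residual.setdefault(t, {})
    flow = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:  # BFS for an augmenting path
            u = queue.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        path, v = [], t
        while parent[v] is not None:  # reconstruct the augmenting path
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# Toy trace graph (invented): quality aspect -> design artifacts -> issues.
trace_graph = {
    "confidentiality": {"AuthService": 3, "CryptoModule": 2},
    "AuthService": {"issue:hardcoded-password": 3, "issue:verbose-logging": 1},
    "CryptoModule": {"issue:weak-cipher": 2},
}
issues = ["issue:hardcoded-password", "issue:weak-cipher", "issue:verbose-logging"]
scores = {i: max_flow(trace_graph, "confidentiality", i) for i in issues}
ranking = sorted(issues, key=scores.get, reverse=True)
# ranking: hardcoded-password (3) > weak-cipher (2) > verbose-logging (1)
```

The key property is that an issue reachable from a quality aspect through high-capacity trace links accumulates more flow, and thus more priority, than one linked only weakly.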



Correction: MBFair: a model-based verification methodology for detecting violations of individual fairness
  • Article
  • Full-text available

January 2025 · 11 Reads · Software and Systems Modeling



Figures: Experiment inputs and steps to compare template systems · Research focus within the requirement quality theory [53] · Quality model for template systems (characteristics in darker color are the focus of this work; dashed arrows imply lateral influences) · Experiment set-up for template quality comparison [18] · Attribution of rules and metrics to quality characteristics for template systems [18] · +32 more

Benchmarking requirement template systems: comparing appropriateness, usability, and expressiveness

August 2024 · 79 Reads · Requirements Engineering

Various semi-formal syntax templates for natural language requirements help reduce ambiguity while preserving human readability. Existing studies of their effectiveness focus on individual notations only and do not allow quality benefits to be investigated systematically. We strive for a comparative benchmark and evaluation of template systems to assist practitioners in selecting appropriate ones and to enable researchers to work on targeted improvements and domain-specific adaptations. We conduct comparative experiments with five popular template systems: EARS, Adv-EARS, Boilerplates, MASTeR, and SPIDER. First, we compare a control group of free-text requirements and treatment groups of their variants following the different templates. Second, we compare MASTeR and EARS in user experiments for reading and writing. Third, we analyse all five meta-models’ formality and ontological expressiveness based on the Bunge-Wand-Weber reference ontology. The comparison of the requirement phrasings across seven relevant quality characteristics and a dataset of 1764 requirements indicates that, except for SPIDER, all template systems have positive effects on all characteristics. In a user experiment with 43 participants, mostly students, we learned that templates require substantial prior training and that profound domain knowledge and experience are necessary to understand and write requirements in general. The evaluation of the template systems’ meta-models suggests different levels of formality, modularity, and expressiveness. MASTeR and Boilerplates provide large numbers of variants for expressing requirements and achieve the best results with respect to completeness. Templates can generally improve various quality factors compared to free text. Although MASTeR leads the field, there is no conclusive favourite, as most effect sizes are relatively similar.
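To make the idea of a syntax template concrete, the EARS event-driven pattern has the form "When &lt;trigger&gt;, the &lt;system name&gt; shall &lt;system response&gt;." A toy conformance check (my own illustration, not tooling from the paper) can flag requirements that do not follow it:

```python
import re

# Simplified pattern for the EARS event-driven template:
#   "When <trigger>, the <system name> shall <system response>."
EVENT_DRIVEN = re.compile(r"^When .+, the .+ shall .+\.$")

requirements = [
    "When the door opens, the controller shall switch on the cabin light.",
    "The light somehow turns on if needed.",
]
conforms = [bool(EVENT_DRIVEN.match(r)) for r in requirements]
# conforms == [True, False]
```

Real template systems define several such patterns (ubiquitous, state-driven, event-driven, etc.); the benchmark in this paper compares how well whole systems of patterns improve requirement quality.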



Figures: Excerpt from the class and the state machine diagrams of the bank’s software · The semi-automated process of the MBFair methodology · The guard syntax supported by our initializer · The architecture of our prototypical implementations · The initializations generated for the data attributes in our example model
MBFair: a model-based verification methodology for detecting violations of individual fairness

June 2024 · 13 Reads · Software and Systems Modeling

Decision-making systems are prone to discrimination against individuals with regard to protected characteristics such as gender and ethnicity. Detecting and explaining the discriminatory behavior of implemented software is difficult. To avoid the possibility of discrimination from the onset of software development, we propose a model-based methodology called MBFair that allows UML-based software designs to be verified with regard to individual fairness. The verification in MBFair is performed by generating temporal logic clauses, whose verification results enable reporting on the individual fairness of the targeted software. We study the applicability of MBFair using three case studies in real-world settings: a bank services system, a delivery system, and a loan system. We empirically evaluate the necessity of MBFair in a user study and compare it against a baseline scenario in which no modeling and tool support is offered. Our empirical evaluation indicates that analyzing the UML models manually produces unreliable results, with analysts overlooking true-positive discrimination in 46% of cases. We conclude that analysts require support for fairness-related analysis, such as our MBFair methodology.
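The individual-fairness property being verified can be illustrated with a small sketch. This is invented decision logic and a naive dynamic check, not MBFair's static temporal-logic verification: individual fairness requires that two inputs identical except for a protected attribute receive the same decision, so flipping the protected attribute and comparing outcomes exposes a violation.

```python
def loan_decision(applicant):
    # hypothetical scoring rule with a deliberate fairness violation:
    # the protected attribute 'gender' influences the score
    score = applicant["income"] / 1000 + applicant["credit_years"]
    if applicant["gender"] == "female":
        score -= 2  # discriminatory adjustment
    return score >= 10

def violates_individual_fairness(decide, applicant, protected, other_value):
    """Return True if flipping the protected attribute changes the decision."""
    twin = dict(applicant, **{protected: other_value})
    return decide(applicant) != decide(twin)

applicant = {"income": 9000, "credit_years": 2, "gender": "female"}
unfair = violates_individual_fairness(loan_decision, applicant, "gender", "male")
# unfair == True: the same applicant would be approved if labeled "male"
```

MBFair's contribution is to detect such violations at design time from UML models rather than by probing the implemented system, which the user study suggests analysts cannot reliably do by hand.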





Citations (60)


... Listing 2 shows examples of such annotations in lines 7 and 13. Security annotations can be used to either clearly define a context in which a method or class should be used, e.g., a security level (Peldszus et al. 2024), or to provide additional functionalities to methods or classes. Like APIs, they are clearly visible within the source code and can be used for tracing security features to locate their implementation. ...

Reference:

A taxonomy of functional security features and how they can be located
UMLsecRT: Reactive Security Monitoring of Java Applications With Round-Trip Engineering
  • Citing Article
  • January 2023

IEEE Transactions on Software Engineering

... An Illustrative Example. We focus on functional requirements that express necessary conditions on the input-output relation of the LC [18] and support requirements that are expressed using structured natural language (SNL) templates [26][27][28][29]. Templates make it easy to extract the precondition and postcondition. ...

A Comparative Evaluation of Requirement Template Systems
  • Citing Conference Paper
  • September 2023

... This includes refining term embeddings with medical definitions, enriching the CAML model with Wikipedia data for rare diseases [12], and utilizing medical ontologies and ICD descriptions to improve predictions [13]. Our earlier work KG-MultiResCNN has outperformed baselines by effectively leveraging structured external knowledge to capture complex medical relationships [14]. ...

Knowledge guided multi-filter residual convolutional neural network for ICD coding from clinical text

Neural Computing and Applications

... However, these mainly focus on incentives to mining in blockchain networks. Jürjens et al. [60] examine the impact of token design on incentives within ecosystems based on decentralized ledger technology. They present two use cases: supply chain management and the personal data market. ...

Tokenomics: Decentralized Incentivization in the Context of Data Spaces

... The second aspect entails selecting an appropriate model that aligns with the assumed conditions and applying it to real-world cases. Lohr et al. [84] and Janin et al. [85] delve into two-party exchange protocols. Analyzing deviations from the Nash equilibrium provides insights into protocol designs. ...

Formalizing Cost Fairness for Two-Party Exchange Protocols using Game Theory and Applications to Blockchain
  • Citing Conference Paper
  • March 2022

... As cryptographic features such as encryption and hashing mostly use APIs, their usages are easy to locate in principle. However, as also observed in existing works (Tuma et al. 2022), in some cases APIs use the same method call for the realization of different security features. For instance, Bouncy Castle provides engine classes that realize different ciphers based on the method init() with a mode parameter for switching between encryption and decryption. ...

Checking security compliance between models and code

Software and Systems Modeling

... Some other works [57,137,178,343,371,530,557,652,673] have used generative or latent-space based models to generate counterfactual images belonging to a different target class. They achieved this by minimally perturbing the causal features of the original image, and some also used text associated with the original image [651]. ...

COIN: Counterfactual Image Generation for Visual Question Answering Interpretation

... Natural Language Processing for Requirements Engineering (NLP4RE) is a field that employs techniques from NLP to address challenges faced in the RE domain. Applications of NLP4RE include terminology extraction [28], requirements similarity and retrieval [1], user story analysis [35], and legal requirements analysis [46]. ...

Abbreviation-Expansion Pair Detection for Glossary Term Extraction
  • Citing Chapter
  • January 2022

Lecture Notes in Computer Science

... Extracting, reusing, and analyzing the semantics hidden in ECSS standards is a core component in the digitalization process to ensure the availability of a common language and a shared understanding of the processes, activities, and work products [51]. Moreover, enabling semantic interoperability among standards will ease the traceability between standards and projects [52]. ...

Requirements document relations: A reuse perspective on traceability through standards

Software and Systems Modeling