Complex Systems Informatics and Modeling Quarterly

Online ISSN: 2255-9922
Information and Communication Technology (ICT) plays an important role in enterprises, public authorities, inter-organizational networks, company clusters, and other kinds of distributed organizations. Business, value creation, and supporting activities in organizations depend on complex, distributed software and service systems operating in dynamic and often changing environments. In order to produce and supply products and services efficiently, organizations must be able to effectively exchange information between internal stakeholder groups and with external collaborators and customers. A high level of agility and interoperability is required to meet changing requirements from market or technical environments. The demand for interoperability exists at the technological, business process, and knowledge levels. Methods, theories, and tools that ease change and adaptation of business processes, organizations, and their supporting IT systems are needed. The content of this issue presents novel research results in business information systems. Most of the articles in the issue originate from the 15th International Conference on Perspectives in Business Informatics Research (BIR 2016) in Prague (Czech Republic) and the 9th International Workshop on Information Logistics and Applications of Semantic Technologies (ILOG), which was co-located with BIR 2016. The five accepted articles address different technical and methodological aspects of business information systems and their various application fields.
Business Informatics is the scientific discipline targeting information processes and related phenomena in their socio-economic business context, including companies, organisations, administrations, and society in general. As a field of study, it endeavours to take a systematic and analytic approach, adopting a multi-disciplinary orientation that draws theories and practices from the fields of management science, organisational science, computer science, systems engineering, information systems, information management, social science, economics, and information science. The objective of this thematic issue is to bring attention to current research on Business Informatics, as presented at the 19th IEEE Conference on Business Informatics (CBI 2017), July 24–27, 2017, in Thessaloniki, Greece. The conference created a productive forum for researchers and practitioners from the fields that contribute to the construction, use, and maintenance of information systems and the organisational context in which they are embedded. This issue of CSIMQ comprises extended versions of four CBI papers and one external article.
Globalization, abundant networking possibilities, and advances in artificial intelligence are creating a new business environment in which cooperation and competition require high agility from enterprises and the ability to intelligently acquire, analyze, and apply their internal and external data. In this regard, several information system concepts must be reconsidered to seize the various opportunities offered by the new environment. The fourteenth issue of the journal “Complex Systems Informatics and Modeling Quarterly” (CSIMQ) offers a selection of papers addressing research perspectives that concern digital trade infrastructures, business process agility, and the quality of business intelligence tools. The initial versions of these papers were presented and discussed at the 16th International Conference on Perspectives in Business Informatics Research and its associated events on August 28–30, 2017 in Copenhagen, Denmark.
The 16th issue of CSIMQ presents four articles that cover a wide range of research topics. The topics of this issue start with psychological aspects of sustainable behavior change within organizations when new technologies are introduced into a company. The range of topics ends with a discussion of specific algorithms that allow entity clustering for Big Data analysis. The goal of these algorithms is the identification of differing representations of references that refer to the same real-world object. This entity resolution is also called deduplication. Additionally, an approach for modeling enterprise architecture visualizations is discussed. It is used to specify and develop an architecture cockpit for a company from the financial sector. Within the range of topics is also a paper about the concept of shared spaces as a basis for building business process support systems. In the paper, a generic model is suggested that supports the comparison, analysis, and design of business process support systems.
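The deduplication task mentioned above can be illustrated with a minimal sketch. The record strings, the token-overlap similarity, and the 0.5 threshold below are illustrative assumptions, not the specific algorithms discussed in the article:

```python
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two textual references."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def deduplicate(records, threshold=0.5):
    """Cluster references that likely denote the same real-world object,
    using a simple union-find over pairwise similarities."""
    parent = list(range(len(records)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i, j in combinations(range(len(records)), 2):
        if jaccard(records[i], records[j]) >= threshold:
            parent[find(i)] = find(j)  # merge the two clusters

    clusters = {}
    for i in range(len(records)):
        clusters.setdefault(find(i), []).append(records[i])
    return list(clusters.values())

# Hypothetical references: the first two denote the same company.
refs = ["ACME Corp Berlin", "ACME Corporation Berlin", "Globex Ltd London"]
print(deduplicate(refs))
```

Real entity-resolution pipelines replace the all-pairs comparison with blocking and more robust similarity measures, since pairwise comparison scales quadratically with the number of records.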
This thematic issue of the Complex Systems Informatics and Modeling Quarterly journal is dedicated to using a socio-technical perspective in the Information Systems (IS) field. It contains a selection of extended papers presented at STPIS'18 – 4th International Workshop on Socio-Technical Perspective in IS Development held on June 12, 2018 in Tallinn, Estonia. The articles presented in this thematic issue contain at least 30% new material compared to the initial papers. After the extension, all articles went through two rounds of reviews to ensure the quality of the papers published in this issue. STPIS papers cover both theoretical and practical aspects of using a socio-technical perspective in IS, which is reflected in the current issue that contains both theoretically and practically oriented papers.
The concept of organizational learning has received increasing attention and recognition in recent years as a critical enabler of organizational adaptation, survival, and growth during uncertain times. Our study applies a socio-technical lens to shed light on the organizational learning processes taking place in 40 UK businesses of various sizes and kinds during the critical, volatile, and unprecedented period of February–May 2021. The study identifies learning antecedents and the key organizational contexts enabling and/or impeding learning processes and follow-up evolution within the studied companies. Our research confirms that in an uncertain environment, companies need to develop and apply ad-hoc learning and quick adaptation practices, which are critical for survival and growth, rather than standard management practices. The findings suggest, however, that even if employees have the capability, not all are able to capture and transform intelligence into learning and apply it at a strategic level, purposefully reconfiguring future operational capabilities to respond to environmental changes, as they are not empowered and supported by organizational management.
The COVID-19 pandemic and the measures to contain it pushed many universities to switch to online learning in the spring of 2020. The changes took place very quickly, and it became clear that the long-term consequences of such a transformation are uncertain and require more detailed study. This research attempts to analyze the impact of online learning on study success. It makes use of triangulation with quantitative and qualitative methods. Quantitatively, it presents a path diagram with various factors that have an impact on study success at a German university, based on a quantitative online survey with 1,529 participants. Qualitatively, 49 interviews were analyzed in order to identify reasons for the risk of failing to achieve study success. The relevance of technology becomes evident in the quantitative analysis, as it manifests itself in almost all categories that affect study success. Moreover, a new influencing factor appeared, “adaptation to digital teaching”, which interviewees often considered important.
The objective of this thematic issue was twofold. The first objective was to present selected research results of the 17th International Conference on Perspectives in Business Informatics Research (BIR 2018), held in Stockholm, Sweden, on September 24–26, 2018. The conference created a productive forum for researchers and practitioners on a specific topic – Business Resilience – with the intention of exploring organizational and information system resilience in congruence. Top papers were selected by the Program Chairs Prof. Janis Grabis and Prof. Jelena Zdravkovic to submit extended versions for possible publication in this thematic issue. The second objective was to consider external candidate submissions in order to bring attention to recent research in the Business Informatics discipline and thus make this issue even more topical. Five articles were selected that report on research in ecosystem architecture management, strategic coopetition, deriving enterprise architecture models from data, hierarchic viewpoints in control compliance assessments, and operationalizing enterprise IT architecture.
This issue of the journal on Complex Systems Informatics and Modeling Quarterly contains publications that present extended papers from the workshops of the 18th International Conference on Perspectives in Business Informatics Research (BIR 2019), organised in Katowice, Poland, 23-25 September 2019. The theme of the conference was Responsibilities of Digitalisation – Responsible Designing and Shaping the Future of Technology for Digital Preservation, Global Data Storage and Cost-Effective Management. The BIR 2019 workshops captured important and novel topics on security analytics, managed complexity, and information logistics and digital transformation. In this issue we also include one paper from the main BIR conference.
This thematic issue of the Complex Systems Informatics and Modeling Quarterly (CSIMQ) journal is dedicated to fostering a socio-technical perspective in the Information Systems (IS) field. This perspective has developed considerably over the years, and the socio-technical agenda continues to be relevant. The issue contains a selection of extended papers presented at STPIS’19 – 5th International Workshop on Socio-Technical Perspective in IS Development held on June 10, 2019 in Stockholm, Sweden in conjunction with the 27th European Conference on Information Systems (ECIS). The workshop featured presentations of 13 papers and 3 posters and was attended by more than 30 participants from 8 different countries (Sweden, Norway, Finland, UK, Germany, Italy, Switzerland, and the US). Four of the presented papers were extended for publication in this special issue. The articles presented in this thematic issue contain at least 30% new material compared to the papers initially submitted for the workshop. The last included article was submitted as a regular contribution to the CSIMQ journal and was selected to be included into this thematic issue because of its close connection to the topic of socio-technical systems.
Complex systems and their analysis, construction, management or application are the motivation of all articles in this issue of CSIMQ. Different perspectives exist on what actually causes “complexity” in systems. In systems theory, a widely spread view is that complex systems have many components with emergent behavior, i.e. the large number and the dynamics of components are decisive. In business informatics, the complexity of information systems is attributed to their socio-technical nature, which acknowledges the interaction between the human actors and the information technology in an enterprise. Understanding the context of complex systems or their components is supported by modeling and is a key aspect of preparing organizational solutions. Models do not remove the complexity of the real world but help to understand it and to design and develop solutions. All articles in this issue are in some respect concerned with models or modeling. The articles also reflect recent trends in industry and society, such as digital transformation and applications of artificial intelligence, and show that these trends will not necessarily reduce complexity in systems but rather require the combination of proven approaches, such as modeling, and new methods for managing this complexity.
Complex systems consist of multiple interacting parts; some of them (or even all of them) may also be systems. While performing their tasks, these parts operate with multiple data and information flows. Data are gathered, created, transferred, and analyzed. Information based on the analyzed data is assessed and taken into account during decision making. The different types of data and the large number of data flows can be considered one of the sources of system complexity. Thus, information management, including data control, is an important aspect of complex systems development and management. According to ISO/IEC/IEEE 15288:2015, “the purpose of the Information Management Process is to generate, obtain, confirm, transform, retain, retrieve, disseminate and dispose of information, to designated stakeholders…”. Information management strategies consider the scope of information, constraints, security controls, and the information life cycle. This means that information management activities should be implemented starting from the level of primitive data gathering and ending with enterprise-level decision making. The articles, which have been recommended by reviewers for this issue of CSIMQ, present contributions in different aspects of information management in complex systems, namely, the implementation of harmful-environment monitoring and data transmission by Internet-of-Things (IoT) systems, the analysis of technological and organizational means for mitigating issues related to information security and users’ privacy that can lead to changes in the corresponding systems’ processes, organization, and infrastructure, as well as the assessment of potential benefits that a controlled (i.e., based on up-to-date information) change process can bring to an enterprise.
Business Information Systems research deals with the conceptualization, development, introduction, maintenance and utilization of systems for computer-assisted information processing within companies and enterprise-wide networks. This CSIMQ issue contains five articles that deal with various information system (IS) issues, including IS security, product-related ISs, blockchain technology and sensor technology. The call for this issue was particularly targeted towards extended articles of two workshops that were held in conjunction with the 13th IFIP WG 8.1 working conference on the Practice of Enterprise Modelling. The first is the BES 2020 workshop (i.e., 1st Workshop on Blockchain and Enterprise Systems), which has the ambition to change the way one thinks, designs and implements ISs. The second one is ManComp 2020 (i.e., 5th Workshop on Managed Complexity), which is focused on approaches and methods for managing the complexity of ISs and IS interaction within ecosystems.
The 27th issue of CSIMQ presents four articles that cover a wide range of research topics. The articles are extended versions of previous publications. The topics of this issue start with a discussion of sound workflow specifications, for which control-flow-based methods are suggested. The range of topics ends with a discussion of literature results on the relations between Big Data algorithms and microservices. Additionally, an approach for modeling the prediction of bankruptcy for Latvian companies is discussed. Within the range of topics is also an article about identity management in organizations, which discusses a case study and the Fractal Enterprise Model.
The articles, which have been recommended by reviewers for this issue of CSIMQ, present contributions on the application of advanced means for the evaluation, monitoring, and modification of complex systems, as well as of mobile applications with similar content. The focus of the presented articles is on adapting these means to new contexts and challenges.
The objective of this thematic issue was to show the diversity of research in the field of business informatics, both from the perspective of application areas and from the research methodologies applied. Application areas visible in this issue are product development in manufacturing industries, online learning in universities, innovation activities in networks of museums, and curriculum engineering in educational organizations. Research methods include various quantitative and qualitative approaches combined with prototyping and the design science paradigm.
This thematic issue introduces two structured literature review articles as well as a couple of empirical ones. The authors of the literature reviews move into the broad field of assuring the quality of IT artifacts, focusing on different dimensions of the software engineering process. With the ever-increasing scale of computerization in more and more areas of life, insufficient emphasis on quality is associated not only with significant bug-fixing costs but also with considerable risks arising from the possibility of exploiting the vulnerabilities of the target product. In extreme cases, poor quality can lead to loss of health and life. Not surprisingly, academics and practitioners alike have been looking at this challenge for many years, and from numerous perspectives. The quality of IT artefacts also depends on the education of professionals and a good understanding of application domains. The empirical papers concern educational issues regarding Enterprise Architecture and deepen our understanding of decentralized autonomous systems.
Overview of the Smart Production Lab [23] and simplified "mobile phone"
Enterprise Systems and EA repository used in the demonstration
Extension of the meta-model of the Activity element in QualiWare's EA repository. Image manipulated due to confidentiality.
Demonstration of AMA4EA at the Industry 4.0 Laboratory at Aalborg University
Model comparison between execution of the process with and without abstraction activity
The transformation towards the Industry 4.0 paradigm requires companies to manage large amounts of data. This poses serious challenges with regard to handling data effectively and extracting value from it. The state-of-the-art research on Enterprise Architecture (EA) provides limited knowledge on addressing this challenge. In this article, the Automated Modeling with Abstraction for Enterprise Architecture (AMA4EA) method is proposed and demonstrated. AMA4EA introduces an abstraction hierarchy that helps companies automatically abstract data from enterprise systems to concepts and then automatically create an EA model. AMA4EA was demonstrated at an Industry 4.0 laboratory. The demonstration showed that AMA4EA could abstract detailed data from the Enterprise Resource Planning (ERP) system and the Manufacturing Execution System (MES) into a business process model that provided a useful and simplified visualization of production process data. The model communicated the detailed business data to stakeholders in an easily understandable way. AMA4EA is an innovative and novel method that contributes new knowledge to EA research. The demonstration provides sufficient evidence that AMA4EA is useful and applicable in the Industry 4.0 environment.
The wave of the fourth industrial revolution (Industry 4.0) is bringing a new vision of the manufacturing industry. In manufacturing, one of the buzzwords of the moment is "Smart production". Smart production involves manufacturing equipment with many sensors that can generate and transmit large amounts of data. However, these data and the information from manufacturing operations are often not shared in the organization; therefore, the organization does not use them to learn and improve its operations. To address this problem, the authors implemented in an Industry 4.0 laboratory an instance of an emerging technical standard specific to the manufacturing industry. Global manufacturing experts consider the Reference Architecture Model Industry 4.0 (RAMI4.0) one of the cornerstones for the implementation of Industry 4.0. The instantiation contributed to organizational learning in the laboratory by collecting and sharing up-to-date information concerning manufacturing equipment. This paper discusses and generalizes the experience and outlines future research directions. Full text accessible at
Geographical spread
Minimum requirements
Readiness assessment results
Scores for each dimension
Industry 4.0 is considered to be the fourth industrial revolution and involves virtual and physical systems that are interconnected and collaborate in an autonomous way. Industry 4.0 is a relatively new concept within computer science and raises interest in how to make use of the technologies included in the concept and profit from them. This article investigates Industry 4.0 in the context of SMEs: the opportunities and challenges that Industry 4.0 poses for SMEs, as well as the readiness of SMEs for Industry 4.0, are considered. The data collection and analysis methods were a literature review combined with grounded theory. As a result, the main challenges proved to be of an organizational nature: SMEs need help with company-specific strategies for implementing Industry 4.0, and SMEs need skilled employees. The opportunities are the flexibility and openness to innovation pertinent to SMEs; cloud computing; and public investments into technology and the adoption of Industry 4.0 by companies. The readiness of SMEs for Industry 4.0 is still somewhat low – they are still learners.
Enterprise architecture (EA), although matured through more than 30 years of ongoing research, gains importance with the increasing dependency of business on IT and the growing complexity of IT systems. The integrated management of a company's goals, structures, and processes with respect to business and IT elements, as well as the representation of impacts triggered by planned changes, is taught in different ways at many universities all over the world. There are several techniques, methods, tools, and approaches to transfer knowledge from educators to students, giving them the qualification to support their future employers in handling the EA challenges modern companies are facing. This work gives a detailed comparative analysis of more than twenty international educational offerings regarding Enterprise Architecture Management, carves out their commonalities, and derives two prototypical courses as best practices, combining the strongest matches for Business Informatics and Computer Science studies alike.
Security has become an important aspect of information systems engineering. A mainstream method for information system security is Role-based Access Control (RBAC), which restricts system access to authorised users. While the benefits of RBAC are widely acknowledged, the implementation and administration of RBAC policies remain a human-intensive activity, typically postponed until the implementation and maintenance phases of system development. This deferred security engineering approach makes it difficult for security requirements to be accurately captured and for the system’s implementation to be kept aligned with these requirements as the system evolves. In this paper we propose a model-driven approach to manage SQL database access under the RBAC paradigm. The starting point of the approach is an RBAC model captured in SecureUML. This model is automatically translated into Oracle Database views and instead-of triggers code, which implements the security constraints. The approach has been fully instrumented as a prototype and its effectiveness has been validated by means of a case study.
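The idea enforced by the generated views and instead-of triggers can be sketched in a few lines. The roles, permissions, and table below are hypothetical, and the article's actual artifact is generated Oracle SQL, not Python:

```python
# Minimal sketch of role-based access control over table rows,
# mimicking what database-level security views and triggers enforce.
# Role names, permissions, and the SALARIES table are illustrative only.

ROLE_PERMISSIONS = {
    "clerk":   {"read"},
    "manager": {"read", "update"},
}

SALARIES = [
    {"id": 1, "name": "Alice", "salary": 5000},
    {"id": 2, "name": "Bob",   "salary": 4200},
]

def check(role: str, action: str) -> bool:
    """Return True if the role is granted the requested action."""
    return action in ROLE_PERMISSIONS.get(role, set())

def read_rows(role):
    """Analogue of a security view: expose rows only to authorised roles."""
    if not check(role, "read"):
        raise PermissionError(f"role {role!r} may not read")
    return list(SALARIES)

def update_salary(role, emp_id, new_salary):
    """Analogue of an instead-of trigger: intercept writes, enforce RBAC."""
    if not check(role, "update"):
        raise PermissionError(f"role {role!r} may not update")
    for row in SALARIES:
        if row["id"] == emp_id:
            row["salary"] = new_salary
```

In the article's approach, equivalent checks live inside the database itself, so they cannot be bypassed by applications connecting to it directly.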
Policy types involved for accessing protected health information 
Policy relationships on the basis of a terminology service 
System architecture of the Policy Management System 
This article discusses potential clashes between different types of security policies that regulate resource access requests on clinical patient data in hospitals by employees. Attribute-based Access Control (ABAC) is proposed as a proper means for such regulation. A proper representation of ABAC policies must include a handling of policy attributes among different policy types. In this article, we propose a semantic policy model with predefined policy conflict categories. A conformance verification function detects erroneous, clashing or mutually susceptible rules early during the policy planning phase. The model and conflicts are used in a conceptual application environment and evaluated in a technical experiment during an interoperability test event.
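As a rough illustration of the conflict categories mentioned above, a pair of rules can be flagged as clashing when they match the same request attributes but prescribe opposite effects. The attribute names and rules below are invented for illustration and are not taken from the proposed semantic policy model:

```python
# Sketch of detecting clashing ABAC rules: two rules conflict when
# their attribute conditions coincide but their effects contradict.

RULES = [
    {"id": "R1", "role": "nurse",  "resource": "patient-record", "effect": "permit"},
    {"id": "R2", "role": "nurse",  "resource": "patient-record", "effect": "deny"},
    {"id": "R3", "role": "doctor", "resource": "patient-record", "effect": "permit"},
]

def find_conflicts(rules):
    """Return pairs of rule ids that target the same (role, resource)
    combination while prescribing opposite effects."""
    conflicts = []
    for i, a in enumerate(rules):
        for b in rules[i + 1:]:
            same_target = (a["role"] == b["role"]
                           and a["resource"] == b["resource"])
            if same_target and a["effect"] != b["effect"]:
                conflicts.append((a["id"], b["id"]))
    return conflicts

print(find_conflicts(RULES))  # → [('R1', 'R2')]
```

A full conformance verification function would also cover partially overlapping attribute conditions and the other conflict categories, not just exact target matches.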
McCarthy developed a framework for modeling the economic rationale of different business transactions along the enterprise value chain, described in his seminal article “The REA Accounting Model – A Generalized Framework for Accounting Systems in a Shared Data Environment”. Originally, the REA accounting model was specified in the entity-relationship (ER) language. Later on, other languages – especially generic data models and UML class models (UML language) – were used. Recently, the OntoUML language was developed by Guizzardi and used by Gailly et al. for a metaphysical reengineering of the REA enterprise ontology. Although the REA accounting model originally addressed the accounting domain, it is most successfully applied as a reference framework for the conceptual modeling of enterprise systems. The primary research objective of this article is to anchor the REA-based models more deeply in the accounting domain. In order to achieve this objective, essential primitives of the REA model are identified and conceptualized in the OntoUML language within the Asset Liability Equity (ALE) context of the traditional ALE accounting domain.
Accurate production planning in both the short and long term is very important in cogeneration plants, especially if the cogeneration unit operates under free electricity market conditions, which complicate the decision-making process by adding planning conditions with variable heat, fuel, and CO2 costs. Moreover, when a cogeneration plant uses a heat accumulation system, it is practically impossible to make a production decision without a computer system; the human factor in decision-making can lead to erroneous decisions without traceability. The role of modern computer systems is growing and greatly influences the optimal production planning process in cogeneration plants, regardless of the installed capacity and of operation with heat accumulation. One of the problems solved by the research is the integration of real operating modes and conditions (the applied thermal insulation solution) into the production decision algorithms. The developed methodology allows not only planning the operating modes of the cogeneration plant but also evaluating the efficiency of the accumulator solution. This study presents the developed methodology for calculating heat loss for a heat accumulator depending on the operating mode and the need to introduce a correction coefficient. When determining the total influencing expenses of the cost model of the heat accumulator operating mode, their mutual influence is shown and integrated into the decision-making algorithm for the next day's free-market conditions. The aim of the algorithm is to maximize the total gross revenue for the planned cogeneration operations and to exclude operating modes that may cause losses.
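The decision logic described above can be caricatured as a small mode-selection sketch. All numbers, mode names, and the value of the correction coefficient k are invented for illustration and do not come from the article's cost model:

```python
# Illustrative next-day mode selection for a cogeneration plant:
# pick the operating mode with the highest gross revenue and exclude
# loss-making modes. All figures below are made-up example values.

def mode_revenue(mode, power_price, fuel_price, co2_price, k=1.1):
    """Gross revenue of one operating mode; accumulator heat loss is
    scaled by a correction coefficient k for the real insulation."""
    income = mode["mwh_el"] * power_price
    costs = (mode["fuel_mwh"] * fuel_price
             + mode["co2_t"] * co2_price
             + mode["heat_loss_mwh"] * k * fuel_price)
    return income - costs

def plan_next_day(modes, power_price, fuel_price, co2_price):
    """Choose the best non-loss-making mode, or None (stay idle)."""
    best = max(modes,
               key=lambda m: mode_revenue(m, power_price, fuel_price, co2_price))
    if mode_revenue(best, power_price, fuel_price, co2_price) <= 0:
        return None
    return best["name"]

modes = [
    {"name": "full-load", "mwh_el": 100, "fuel_mwh": 220, "co2_t": 40, "heat_loss_mwh": 5},
    {"name": "part-load", "mwh_el": 60,  "fuel_mwh": 140, "co2_t": 25, "heat_loss_mwh": 3},
]
```

With a high electricity price the sketch selects the full-load mode; when prices drop so that every mode loses money, it returns None, mirroring the article's goal of excluding loss-making operating modes.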
Both researchers and practitioners have recognized the need for well-developed knowledge about enterprise modeling. Therefore, it is necessary to increase the understanding of the various actions that are performed during enterprise modeling, their meaning, and their diversity. This paper proposes a taxonomy with a conceptual structure in two dimensions (hierarchy and process) that could be used to increase the knowledge about enterprise modeling actions. The taxonomy introduces a terminology that enables a better understanding of the modeling actions for a clear purpose. One important aspect of the taxonomy is to create visibility and traceability of the decisions made during enterprise modeling activities. These modeling decisions have previously been of a more tacit nature, and the taxonomy is supposed to make the rationale behind different modeling decisions explicit and understandable.
Situation analytics can be used to recognize the changing behavior, emotional state, cognitive load and environmental context of a user during complex task processing. This article discusses the SitAdapt 2.0 architecture that combines a situation analytics platform with pattern- and model-based user interface construction tools in order to build runtime-adaptive interactive business applications with enhanced user experience and task-accomplishment characteristics. The article focuses on the software components and tools for observing and tracking the user, data types for modeling situations, recognizing situations, and modeling structural changes and actions for generating the dynamic adaptations. The situation recognition capabilities and adaptive functionality of the system are demonstrated for web-applications for long-distance travel booking and a beauty products web-portal.
A user adaptive enterprise application is a software system that adapts its behavior to an individual user on the basis of nontrivial inferences from information about the user. The objective of this paper is to elaborate a conceptual model of user adaptive enterprise applications. In order to conceptualize user adaptive enterprise applications, their main characteristics are analyzed, a meta-model defining the key concepts relevant to these applications is developed, and the user adaptive enterprise application and its components are defined in terms of the meta-model. Modeling of the user adaptive enterprise application incorporates aspects of enterprise modeling, application modeling, and the design of the adaptive characteristics of the application. The end-user and her expectations are identified as two concepts of major importance not sufficiently explored in existing research. Understanding these concepts improves the adaptation result in user adaptive applications.
In a changing competitive business landscape, organizations are challenged by traditional processes and static document-driven business architecture models or artifacts. This marks the need for a more adaptive and analytics-enabled approach to business architecture. This article proposes a framework for adaptive business architecture modeling to address this critical concern. This research is conducted in an Australian business architecture organization using the action design research (ADR) method. The applicability of the proposed approach was demonstrated through its use in a health insurance business architecture case study using the Tableau and Jalapeno business architecture modeling platform. The proposed approach seems feasible to process business architecture data for generating essential insights and actions for adaptation.
Blocking of ads divided into age groups
Respondents' diversity in the use of ad blocking software
User-preferred types of ads
Motivation of adblock usage vs. age, based on [42].
Motivation of adblock usage vs. education. Source: the authors, based on [42].
The article identifies the main factors of adblocking software usage. The study was based on data obtained via a web questionnaire. The research focused on the evaluation of adblocking software usage factors in five categories: (1) gender, age, and education; (2) use of advertising and sources of knowledge about advertising; (3) technical and social reasons for blocking online advertisements; (4) usage of an adblock-wall; and (5) type of online advertisement. The evaluation of adblock usage factors revealed four main technical reasons for adblock usage connected with website technology and web development problems – interruption, amount of ads, speed, and security – and one social reason, namely, the problem of privacy.
Screenshot of the part of the evaluation instrument that supports data collection for APPLIES-preparation
Number of changes in the motivation assessment model (left) and preparation assessment model (right) by type of change
Product lines have emerged in the software industry as an attractive approach to planned reuse of code. Nevertheless, a product line solution is not appropriate in all cases and requires certain conditions to be implemented successfully. The literature offers several contributions regarding the adoption of product lines; however, only a few of them are able to support decision-makers in making informed decisions for or against following this approach. We have proposed APPLIES, a framework for evaluating an organization’s motivation and preparation for adopting product lines. This article presents the second version of the APPLIES framework as well as the second iteration of its evaluation. This evaluation consisted of (i) a workshop with a practitioner who had experience in adopting the product line production approach; and (ii) a review of the content by five product line experts. The results obtained from the evaluation led to modifications of the framework content, mainly to simplify the statements and eliminate redundant elements. We also identified new functionalities and modifications that we expect to address in subsequent evaluation iterations. Further evaluations and improvements are needed to mature the framework. Moreover, we expect to incorporate APPLIES into a process that covers the aspects that a company must consider before deciding to adopt this production paradigm.
The 5th issue of the journal Complex Systems Informatics and Modeling Quarterly (CSIMQ) presents extended versions of five papers selected from the CAiSE Forum 2015. The forum was part of the 27th edition of the International Conference on Advanced Information Systems Engineering (CAiSE 2015), which took place in June 2015 in Stockholm, Sweden. Information systems engineering draws its foundation from various interrelated disciplines including, e.g., conceptual modeling, database systems, business process management, requirements engineering, human computer interaction, and enterprise computing to address various practical challenges in the development and application of information systems. The guiding subjects of CAiSE 2015 were Creativity, Ability, and Integrity. The CAiSE Forum aimed at presenting and discussing new ideas and tools related to information systems engineering.
Modern organizations need to be sustainable in the presence of dynamically changing business conditions, which require Information Systems (IS) to address complex challenges in order to support organizations acting under varying business conditions – changing customer demands, new legislation, new customers, and emerging alliances. From a technical perspective, the gap between business requirements and supporting IS is still present, mostly due to the fact that current IS development approaches operate with artifacts defined at a relatively low abstraction level. To go beyond the state of the art, the IS development frameworks used by enterprises need to be structured for solving emerging problems, and enterprises need efficient methods for the use of these frameworks to deliver the right IS solutions just-in-time and just-enough. The notion of capability emerged in the beginning of the nineties in the context of competence-based management, military frameworks, and the development of an organization’s competitive advantage – linguistically, it means the ability or qualities necessary to do something.
Service value chain framework [22]
This article responds to a need for a socio-technical systems (STS) perspective that fits in a world that has changed greatly over the decades since the socio-technical movement began. This article identifies conditions and paradoxes that limit traditional STS approaches in current business practice. A newer work system perspective (WSP) combines aspects of work system theory (WST), WST extensions, and the work system method (WSM). This WSP frames socio-technical thinking in a straightforward way that helps in describing and discussing socio-technical systems. It also provides many ideas that can help in negotiating and designing improvements. After summarizing WSP and some of its possible applications to work systems, this article uses the various topics in its title to indicate how WSP-based socio-technical thinking might be more suitable for today’s world.
The Design Science research method was employed to develop an artifact that demonstrates the experimental “model-aware” software engineering methodology in the context of PHP Web development – a “low code” development approach with code templates generated from technology-specific models. The proof-of-concept consists of two interacting components: a custom diagrammatic modeling environment and model-driven generated PHP pages. The interaction between the two components constitutes the engineering method labelled “Model-aware software engineering” (MASE) – a flavor of model-driven engineering recently introduced in research projects as a hybridization of the Agile Modeling Method Engineering (AMME) framework and the Resource Description Framework (RDF). The experimental MASE method is employed here to demonstrate its feasibility for the common Model-View-Controller (MVC) website development pattern, thus showing potential to support common Web development work.
Achieving interoperability, i.e., creating identity federations between different Electronic identity (eID) systems, has gained relevance throughout the past years. A serious problem of identity federations is the missing harmonization between various attribute providers (APs). In closed eID systems, ontologies allow a higher degree of automation in the process of aligning and aggregating attributes from different APs. This approach does not work for identity federations, as each eID system uses its own ontology to represent its attributes. Furthermore, providing attributes to the intermediate entities required to align and aggregate attributes potentially violates privacy rules. To tackle these problems, we propose the combined use of ontology-alignment (OA) approaches and locality-sensitive hashing (LSH) functions. We assess existing implementations of these concepts by defining and using criteria specific to identity federations. The obtained results confirm that proper implementations of these concepts exist and that they can be used to achieve interoperability between eID systems at the attribute level. A prototype is implemented showing that combining the two assessment winners (AlignAPI for ontology alignment and Nilsimsa for LSH functions) achieves interoperability between eID systems. In addition, the improvement obtained in the alignment process by combining the two assessment winners does not negatively impact the privacy of the user’s data, since no clear-text data is exchanged in the alignment process.
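As a rough illustration of how locality-sensitive hashing enables privacy-preserving attribute matching, consider a generic trigram-voting digest (this sketch is NOT the actual Nilsimsa algorithm used in the article’s prototype):

```python
import hashlib

def lsh_digest(text, bits=128):
    """Locality-sensitive digest over character trigrams.

    Illustrative only: each trigram votes on every bit position, so
    similar strings produce digests with mostly matching bits.
    This is NOT the actual Nilsimsa algorithm.
    """
    acc = [0] * bits
    for i in range(len(text) - 2):
        h = int(hashlib.md5(text[i:i + 3].encode()).hexdigest(), 16)
        for b in range(bits):
            acc[b] += 1 if (h >> b) & 1 else -1
    return [1 if a > 0 else 0 for a in acc]

def similarity(d1, d2):
    """Fraction of matching digest bits (1.0 = identical digests)."""
    return sum(x == y for x, y in zip(d1, d2)) / len(d1)
```

Similar attribute labels yield digests with many matching bits, so two eID systems can compare digests instead of exchanging clear-text attribute names.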
Information System evolution requires a well-structured Enterprise Architecture (EA) and its rigorous management. The alignment of the elements in the architecture according to various abstraction layers may contribute to this management, but appropriate tools are needed. We propose improvements to the Facet technique and develop accompanying tools to master the difficulties of aligning the models used to structure an Enterprise Architecture. The technique has been tested on many real-life cases to demonstrate the effectiveness of our EA alignment method. The tools are already integrated in the Eclipse EMF Facet project.
Eliminating the gap between business and IT within an enterprise, i.e., solving the problem of Business and IT Alignment (BITA), requires an instrument for the multidimensional analysis of an enterprise. Enterprise Modeling (EM) is a practice that supports such analysis and therefore can be used to facilitate BITA. EM serves as a tool that can capture, visualize and analyze different aspects of enterprises. This article presents a framework that describes the role of EM in the context of BITA and presents recommendations in EM for BITA.
A portion of KMW. From Figure 2, we understand that the category Knowledge has the subcategories “Mechanics” and “Electrotechnics”; “Mechanics” consists of “Mechanical drawing”, “Metallurgy”, and “Resistance of materials”. Skills has the subcategories “Treatment of materials”, “Use of devices”, and “Consultation of norms”. Finally, Attitude has “Rigour in safety regulation”, “Manual skills”, and “Accuracy”. The ellipses indicate the subsets of welder qualities used to construct a Work Order KSA model (shortly, WOKMW), specifically for work order 9. Notice that, using only the KMW, the welder to be chosen is W2 (the best worker overall), while the WOKMW proposes W4. This last choice is coherent with that of the leadership (see Table 1), as it indicates the best worker for the assigned work order.
Number of second choice work teams with more than 50% correspondence using the fusion of KSA models and consensus  
This work proposes a competence-based approach, enriched by consensus, for deciding on team compositions, in particular in time- and quality-dependent contexts where teams have to perform assigned activities. This problem is very relevant for the dynamics of logistic networks whose nodes are warehouses, distribution centers, and small family businesses that deal with work orders that have to be satisfied. The approach consists of models that focus on workers’ competences, using the Knowledge, Skills, and Attitudes (KSA) model for representing workers’ knowledge and competence models for describing the activities to be performed. Consensus strategies among workers are then used to obtain the correct choice of teams to assign to the various work orders. In line with the concept of value co-creation, this paper proposes an original, hybrid approach, based on competences and enriched by consensus, in order to obtain and select the most “suitable” teams for the activities to be performed. The approach is tested, carefully and in depth, on the real case of a small family business inside a sophisticated logistics area, consisting of a fleet of trucks that, by transporting goods from one point to another, underpins the logistics chain in the Campania region (Italy). These logistics areas consider small family businesses, which manage the maintenance and repair of trucks, as highly critical nodes of the system. We show that our approach produces results similar to the decisions made by the leader of such a small family business.
Modeling is one of the most important parts of requirements engineering. Most modeling techniques focus primarily on their pragmatics and pay less attention to their syntax and semantics. Different modeling techniques describe different aspects; for example, Object-Role Modeling (ORM) describes underlying concepts and their relations, while System Dynamics (SD) focuses on the dynamic behavior of relevant objects in the underlying application domain. In this paper we provide an inductive definition of a generic modeling technique. Not only do we describe the underlying data structure, but we also show how states can be introduced for relevant concepts and how their life cycles can be modeled in terms of System Dynamics. We also show how decomposition can be applied to master complex application domains. As a result, we obtain an integrated modeling technique covering both static and dynamic aspects of application domains. The inductive definition of this integrated modeling technique will facilitate the implementation of supporting tools for practitioners.
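The System Dynamics side of such an integrated technique can be illustrated with a minimal stock-and-flow simulation (a generic Euler-integration sketch, not the authors’ formalism):

```python
def simulate_stock(initial, inflow_rate, outflow_rate, dt=0.1, steps=100):
    """Euler integration of one System Dynamics stock:
    d(stock)/dt = (inflow_rate - outflow_rate) * stock."""
    stock = initial
    history = [stock]
    for _ in range(steps):
        stock += (inflow_rate - outflow_rate) * stock * dt
        history.append(stock)
    return history
```

A concept’s life cycle can then be modeled by attaching one such stock to each of its states, with flows moving instances between states.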
Metamodel of the EA description in our EAM scenario  
Comparison of tools we explored
Enterprise Architectures (EA) consist of a multitude of architecture elements that relate to each other in manifold ways. As a change to a single element thus impacts various other elements, mechanisms for architecture analysis are important to stakeholders. The high number of relationships aggravates architecture analysis and makes it a complex yet important task. In practice, EAs are often analyzed using visualizations. This article contributes to the field of visual analytics in enterprise architecture management (EAM) by reviewing how state-of-the-art software platforms in EAM support stakeholders with respect to providing and visualizing the “right” information for decision-making tasks. In a research study designed for this purpose, we investigate the collaborative decision-making process in an experiment in which master students use professional EAM tools. We evaluate the students’ findings by comparing them with the experience of an enterprise architect.
Big data attracts researchers and practitioners around the globe in their desire to effectively manage the data deluge resulting from the ongoing evolution of the information systems domain. Consequently, many decision makers attempt to harness the potential arising from the use of these modern technologies in a multitude of application scenarios. As a result, big data has gained an important role for many businesses. However, as of today, the developed solutions are oftentimes perceived as completed products, without considering that their application in highly dynamic environments might benefit from deviating from this approach. Relevant data sources, as well as the questions that their analysis is supposed to answer, may change rapidly, and so, subsequently, do the requirements regarding the functionalities of the system. To our knowledge, while big data itself is a prominent topic, fields of application that are likely to evolve in a short period of time, and the resulting consequences, have not been specifically investigated until now. Therefore, this research aims to overcome this paucity by clarifying the relation between dynamic business environments and big data analytics (BDA), sensitizing researchers and practitioners for future big data engineering activities. Apart from a thorough literature review, expert interviews are conducted to evaluate the inferences made regarding dynamic and stable influencing factors, the influence of dynamic environments on BDA applications, as well as possible countermeasures. The ascertained insights are condensed into a proposal for decision making, facilitating the alignment of BDA and business needs in dynamic business environments.
If everything is a signal or a combination of signals, everything can be represented with Fourier representations. Is it then possible to represent a signal with a conditional dependency on input data? This research is devoted to the development of Sinusoidal Neural Networks (SNNs). The motivation for developing SNNs is to design an artificial neural network (ANN) algorithm that can learn faster. A short review of the history of biological neurons helps to identify components that should be redesigned in ANNs. After the components are identified, a new neural network algorithm called SNN is proposed. Experiments are conducted to show the practical results of the algorithm. According to the experiments, the proposed neural network can reach high accuracy rates faster than standard neural networks, while an interesting generalization capacity is obtained for the developed algorithm. Even though promising results are achieved, further research is necessary to test whether SNNs are capable of learning faster than existing algorithms in real-life cases.
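Independently of the article’s specific SNN algorithm, the basic appeal of sinusoidal units can be sketched with random sinusoidal features and a linear readout (the feature count and frequency scale below are arbitrary illustrative choices, not values from the article):

```python
import numpy as np

def sinusoidal_features(x, n_features=200, freq_scale=5.0, seed=0):
    """Map scalar inputs to sin(w*x + b) features.

    A random-Fourier-features style sketch of sinusoidal units;
    this is not the SNN training algorithm from the article.
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, freq_scale, n_features)   # random frequencies
    b = rng.uniform(0.0, 2 * np.pi, n_features)   # random phases
    return np.sin(np.outer(x, w) + b)

# Fit y = |x| with a linear readout over the sinusoidal features.
x = np.linspace(-1.0, 1.0, 400)
y = np.abs(x)
phi = sinusoidal_features(x)
coef, *_ = np.linalg.lstsq(phi, y, rcond=None)
mse = float(np.mean((phi @ coef - y) ** 2))
```

Because the readout is linear, fitting reduces to a single least-squares solve, which hints at why sinusoidal representations can converge quickly.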
Technological advancements are often adopted in financial markets to improve their operations and safety. Blockchain technology has been recognized as one of the potential technologies to be utilized in capital markets. The goal of this article is to evaluate the applicability of blockchain technology in the securities settlement process. First, the theoretical background of blockchain technology is reviewed and the current financial market infrastructure is examined. Then the Central Securities Depositories Regulation and the current securities settlement processes are examined. The blockchain applicability framework designed by Gourisetti, Mylrea and Patangia is applied to assess the blockchain technology’s applicability to securities settlement. The results suggest that blockchain technology can be applied to securities settlement, and that the blockchain type used should be a private blockchain with a Proof-of-Authority consensus mechanism. A blockchain architecture model, based on a model provided by Zhuang, Chen, Shae and Shyu, and a potential node structure for securities settlement are developed, taking into account the existing literature on blockchain technology, financial markets, and the Central Securities Depositories Regulation. The proposed blockchain architecture model and node structure are then evaluated against the benefits and drawbacks that scholars expect from using blockchain for securities settlement, as well as against cross-border settlement efficiency. The evaluation reveals that the proposed blockchain technology model can potentially alleviate some of the current securities settlement issues, such as costly reconciliation and difficult cross-border securities settlement. At the same time, using blockchain technology in securities settlement would be challenging, because the practical implementation time would be long and market-wide commitment would be required.
The main artefacts of this article are the proposed blockchain architecture model and node structure that would allow securities settlement processes to be executed using blockchain technology.
Due to the ongoing trend of digitalization, the importance of software for today’s society is continuously increasing. Naturally, there is also a huge interest in improving its quality, which has led to a highly active research community dedicated to this aim. Consequently, a plethora of propositions, tools, and methods has emerged from the corresponding efforts. One approach that has become highly prominent is test-driven development (TDD), which increases the quality of the created software by restructuring the development process. However, such a major change to established procedures is usually accompanied by challenges that pose a risk to achieving the set targets. In order to find ways to overcome them, or at least to mitigate their impact, it is necessary to identify them and to subsequently raise awareness. Furthermore, since the effect of TDD on productivity and quality is already extensively researched, this work focuses on issues besides these aspects. For this purpose, a literature review on the challenges of TDD is presented. In doing so, challenges that can be attributed to the three categories of people, software, and process are identified, and potential avenues for future research are discussed.
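The TDD cycle that restructures the development process can be sketched in a few lines; the `slugify` function and its expected behavior here are hypothetical examples, not taken from the reviewed literature:

```python
import unittest

# Step 1 (red): write the tests first, before any implementation exists.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  Trim me  "), "trim-me")

# Step 2 (green): write just enough code to make the tests pass.
def slugify(text):
    return "-".join(text.strip().lower().split())

# Step 3 (refactor): improve the code while keeping the tests green.
```

The discipline lies in the ordering: the failing test defines the target behavior before the code exists, which is exactly the procedural change whose challenges the review examines.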
Modeling IT architecture is a complex, time-consuming, and error-prone task. However, many systems produce information that can be used to automate modeling. Early studies show that this is a feasible approach if certain obstacles can be overcome. Often more than one source is needed to cover the data requirements of an IT architecture model, and the use of multiple sources means that heterogeneous data needs to be merged. Moreover, the same collection of data might be useful for creating more than one kind of model for decision support. IT architecture is constantly changing, and data sources provide information that can deviate from reality to some degree. There can be problems with varying accuracy (e.g., actuality and coverage), representation (e.g., data syntax and file format), or inconsistent semantics. Thus, the integration of heterogeneous data from different sources needs to handle the data quality problems of the sources. This can be done by using probabilistic models. In the field of truth discovery, such models have been developed to track data source trustworthiness in order to help resolve conflicts while making quality issues manageable for automatic modeling. We build upon previous research in modeling automation and propose a framework for merging data from multiple sources with a truth discovery algorithm to create multiple IT architecture models. The usefulness of the proposed framework is demonstrated in a study in which models are created using three tools, namely Archi, securiCAD, and EMFTA.
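A minimal sketch of the truth-discovery idea, iteratively co-estimating source trust and true values (a simplified generic scheme, not the framework’s actual algorithm):

```python
def truth_discovery(claims, iterations=10):
    """claims: {object: {source: value}}.

    Alternately estimate per-object truths by trust-weighted voting and
    per-source trust as agreement with the current truth estimates.
    A simplified generic truth-discovery sketch.
    """
    sources = {s for votes in claims.values() for s in votes}
    trust = {s: 0.5 for s in sources}      # uniform initial trust
    truths = {}
    for _ in range(iterations):
        # Estimate truths from trust-weighted votes.
        for obj, votes in claims.items():
            scores = {}
            for src, val in votes.items():
                scores[val] = scores.get(val, 0.0) + trust[src]
            truths[obj] = max(scores, key=scores.get)
        # Re-estimate trust as each source's agreement with the truths.
        for src in sources:
            answered = [(o, v) for o, votes in claims.items()
                        for s, v in votes.items() if s == src]
            correct = sum(1 for o, v in answered if truths[o] == v)
            trust[src] = correct / len(answered)
    return truths, trust
```

Sources that often contradict the consensus (e.g., a stale configuration export) end up with lower trust, so their claims carry less weight when the architecture models are merged.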
Main steps of idea life-cycle  
Class distribution
Novel ideas are the key ingredients of innovation processes, and an Idea Management System (IMS) plays a prominent role in managing ideas captured from external stakeholders and internal actors within an Open Innovation process. Considering a specific case study in Lecce, Italy, we have designed and implemented a collaborative environment that provides an ideal platform for government, citizens, and other stakeholders to share their ideas and co-create the value of innovative public services in Lecce. The application of IMS in this study, with six main steps (idea generation, idea improvement, idea selection, refinement, idea implementation, and monitoring), shows that it remarkably helps service providers to exploit the intellectual capital and initiatives of regional stakeholders and citizens, and assists service providers in staying in line with the needs of society.
We use homeostasis, the maintenance of steady states in an organism, to explain some of the decisions made by participants in a business process. We use Vickers’ Appreciative System to model the homeostatic states with Harel’s statecharts. We take the example of a doctoral student recruitment process formally defined between a faculty member, a graduate student candidate and a doctoral school. We analyze some gaps in the process caused by a misfit between norms of the process participants. We present a rationale for the anticipation and resolution of these misfits. We extend the traditional operational model with an appreciative model. This model represents the appreciative systems of process participants. Understanding these appreciative systems is necessary to make explicit the misfit between the model and the observed reality. The operational model represents the “technical” perspective on the business process, the one that can be automated. The appreciative model represents the “social” perspective, the one that explains the participants’ behavior as a result of their individual and collective norms. By combining these two perspectives, we can appreciate the richness of the development of socio-technical systems.
Financial industries are undergoing a digital transformation of their products, services, and overall business models. Part of this digitalization in banking aims at automating most of the manual work in payment handling and integrating the workflows of the involved service providers. The focus of the work presented in this paper is on fraud discovery and steps to fully automate it. Fraud discovery in financial transactions has become an important priority for banks. Fraud is increasing significantly with the expansion of modern technology and global communication, which results in substantial damages for the banks. Instant payment (IP) transactions pose new challenges for fraud detection due to the requirement of short processing time. The paper investigates the possibility of using artificial intelligence in IP fraud detection. The main contributions of our work are (a) an analysis of problem relevance from a business and literature perspective, (b) a proposal for technological support for using AI in fraud detection of instant payment transactions, and (c) a feasibility study of selected fraud detection approaches.
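As a deliberately naive baseline for the kind of transaction screening discussed above (a simple z-score rule, far short of the AI approaches the paper studies):

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=3.0):
    """Return indices of amounts whose z-score exceeds the threshold.

    A naive statistical baseline; real instant-payment fraud detection
    must also meet hard latency limits and use far richer features.
    """
    mu = mean(amounts)
    sigma = stdev(amounts)
    return [i for i, a in enumerate(amounts)
            if sigma > 0 and abs(a - mu) / sigma > threshold]
```

Such a rule is cheap enough for the short processing windows of IP transactions, which is why simple statistical screens are often the first filter before heavier models are applied.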