Manuel Mazzara

Innopolis University · Institute of Technologies and Software Development

RG Score: 14.09
PhD in Computing Science
About
    Introduction
Manuel Mazzara is a professor of Computer Science at Innopolis University (Russia) with a research background in software engineering, service-oriented architectures and programming, concurrency theory, formal methods and software verification. He has cooperated with European and US industry, as well as with governmental and intergovernmental organizations such as the United Nations, always at the intersection of science and software production.
    Current Institution
    Institute of Technologies and Software Development
    Kazan
    Current position
    Head of Department
Research items: 127
Reads: 8,711
Citations: 975
    Research Experience
    Jan 2014
    Head of Service Science and Engineering lab
    Innopolis University · Technologies and Software Development
    Kazan, Tatarstan, Russia
    Jan 2014
    Professor (Associate) in Software Engineering
    Innopolis University · Computer Science
    Kazan, Russia
    Jan 2008 - Apr 2012
    Research Associate
    Newcastle University · School of Computing Science
    Newcastle upon Tyne, United Kingdom
    Followers (173)
    Quan Z. Sheng
    Joanna F. DeFranco
    Silvia Mirri
    Patrizio Pelliccione
    Jean-Michel Bruel
    Raynel Batista
    Airat Khasianov
    Abel Nieva
    Ayk Badalyan
    Florian Galinier
    Following (408)
    Quan Z. Sheng
    Mads Haahr
    Joao Paulo Carvalho Lustosa da Costa
    Brent D. Moyle
    Joanna F. DeFranco
    Younghee Park
    Elis Kulla
    Ewa Ziemba
    Arif Ali Khan
    Ananda Basu
    Current research
    Projects (10)
    Project
NEUCOGAR: validation of the NEUCOGAR architecture of affects, based on the 'Cube of Emotions'
    Project
BioDynaMo (https://biodynamo.web.cern.ch/)
Research Items
In this paper, we investigate how dynamic properties of reputation can influence the quality of users' ranking. Reputation systems should be based on rules that can guarantee a high level of trust and help identify unreliable units. To understand the effectiveness of dynamic properties in the evaluation of reputation, we propose our own model (DIB-RM) that is based on three factors: forgetting, cumulative, and activity period. In order to evaluate the model, we use data from StackOverflow, which also has its own reputation model. We estimate the similarity of ratings between DIB-RM and the StackOverflow model so as to check our hypothesis. We use two values to calculate our metric: DIB-RM reputation and historical reputation. We found that historical reputation gives better metric values. Our preliminary results are presented for different sets of values of the aforementioned factors in order to analyze how effectively the model can be used for modeling reputation systems.
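As a rough illustration of how such factors might interact (not the published DIB-RM equations; the names, constants and formulas below are hypothetical):

```python
class ReputationModel:
    """Toy dynamic reputation model with forgetting, cumulative,
    and activity-period factors (illustrative only)."""

    def __init__(self, forgetting=0.95, activity_bonus=1.1, window_days=30):
        self.forgetting = forgetting          # decay applied per day of inactivity
        self.activity_bonus = activity_bonus  # multiplier while the user stays active
        self.window_days = window_days        # length of an "activity period"
        self.reputation = 0.0
        self.last_event_day = None

    def record_event(self, day, points):
        if self.last_event_day is not None:
            idle = day - self.last_event_day
            # Forgetting: decay reputation over idle time.
            self.reputation *= self.forgetting ** idle
            # Activity period: reward users who return within the window.
            if idle <= self.window_days:
                points *= self.activity_bonus
        # Cumulative: new contributions add on top of the decayed value.
        self.reputation += points
        self.last_event_day = day

model = ReputationModel()
for day, pts in [(0, 10), (5, 10), (100, 10)]:
    model.record_event(day, pts)
print(round(model.reputation, 2))  # long inactivity erodes most of the score
```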
In our project we introduce an open source platform for a Digital Personal Assistant (DPA). We discuss the architecture of the platform and demonstrate the potential of the DPA through an example interaction with a Smart Home manager.
The Internet of Things makes it possible to connect each everyday object to the Internet, making computing pervasive like never before. From a security and privacy perspective, this tsunami of connectivity represents a disaster that makes each object remotely hackable. We claim that, in order to tackle this issue, we need to address a new challenge in security: education.
Cloud computing is steadily growing and, as IaaS vendors have started to offer pay-as-you-go billing policies, it is fundamental to achieve as much elasticity as possible, avoiding over-provisioning that would imply higher costs. In this paper, we briefly analyse the orchestration characteristics of PaaSSOA, a proposed architecture already implemented for Jolie microservices, and Kubernetes, one of the various orchestration plugins for Docker; then, we outline similarities and differences of the two approaches with respect to their own domains of application. Furthermore, we investigate some ideas to achieve a federation of the two technologies, proposing an architectural composition of Jolie microservices on a Docker Container-as-a-Service layer.
In this paper we report on the experience of using AutoProof for the static verification of a small object-oriented program. We identify the problems that emerge from this activity and classify them according to their nature. In particular, we distinguish between tool-related and methodology-related issues, and propose the changes necessary to simplify both the tool and the method.
    In this paper we propose an algorithm, Simple Hebbian PCA, and prove that it is able to calculate the principal component analysis (PCA) in a distributed fashion across nodes. It simplifies existing network structures by removing intralayer weights, essentially cutting the number of weights that need to be trained in half.
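The paper's Simple Hebbian PCA is not reproduced here; as a point of reference, the classical Hebbian route to the first principal component is Oja's rule, sketched below (the paper's algorithm differs, notably by dropping intralayer weights):

```python
import numpy as np

# Oja's rule: a classical Hebbian update that converges to the first
# principal component of the input distribution.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 2)) @ np.array([[3.0, 0.0], [0.0, 1.0]])  # top PC along axis 0

w = rng.normal(size=2)
eta = 0.01
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)   # Hebbian term minus decay keeps ||w|| near 1

# Compare with the top eigenvector of the sample covariance matrix.
evals, evecs = np.linalg.eigh(np.cov(X.T))
print(np.abs(w @ evecs[:, -1]))  # close to 1 => aligned with the top PC
```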
This paper discusses a roadmap to investigate whether Domain Objects are an adequate formalism to capture the peculiarities of microservice architecture and to support software development from the early stages. It provides a survey of both microservices and Domain Objects, and it discusses plans and reflections on how to investigate whether a modeling approach suited to adaptable service-based components can also be applied with success to the microservice scenario.
In this paper we offer an overview of the topic of Microservices Science and Engineering (MSE) and provide a collection of bibliographic references and links relevant to understanding this emerging field. We try to clarify some misunderstandings related to microservices and Service-Oriented Architectures, and we also describe projects and applications our team has been working on in the recent past, regarding both programming language construction and intelligent buildings.
2016 is remembered as the year that showed the world how dangerous Distributed Denial of Service attacks can be. A gauge of the disruptiveness of a DDoS attack is the number of bots involved: the bigger the botnet, the more powerful the attack. This characteristic, along with the increasing availability of connected and insecure IoT devices, makes DDoS and IoT the perfect pair for the malware industry. In this paper we present the main idea behind AntibIoTic, a palliative solution to prevent DDoS attacks perpetrated through IoT devices.
Software development is a very complex activity in which the human factor is of paramount importance. Moreover, since this activity requires collaboration among different stakeholders, coordination problems arise. Different development methodologies address these problems in different ways. Agile Methods address them by embedding coordination mechanisms inside the process itself, rather than defining the development process on one side and then superimposing coordination through additional practices or tools.
This book provides an effective overview of the state of the art in software engineering, with a projection of the future of the discipline. It includes 13 papers, written by leading researchers in the respective fields, on important topics like model-driven software development, programming language design, microservices, software reliability, model checking and simulation. The papers are edited and extended versions of the presentations at the PAUSE symposium, which marked the completion of 14 years of work at the Chair of Software Engineering at ETH Zurich. In this inspiring context, some of the greatest minds in the field extensively discussed the past, present and future of software engineering. It guides readers on a voyage of discovery through the discipline of software engineering today, offering unique food for thought for researchers and professionals, and inspiring future research and development.
Catastrophic forgetting has a significant negative impact in reinforcement learning. The purpose of this study is to investigate how pseudorehearsal can change the performance of an actor-critic agent with neural-network function approximation. We tested the agent in a pole balancing task and compared different pseudorehearsal approaches. We found that pseudorehearsal can assist learning and decrease forgetting.
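The core pseudorehearsal idea can be sketched independently of the agent: label random inputs with the current network's own outputs and replay them alongside new data. A minimal sketch with hypothetical helper names, not the study's code:

```python
import numpy as np

rng = np.random.default_rng(1)

def pseudo_items(net_forward, n_items, input_dim):
    """Pseudorehearsal: sample random inputs and label them with the
    *current* network's outputs, so later training can replay them and
    preserve the old input-output mapping."""
    xs = rng.uniform(-1.0, 1.0, size=(n_items, input_dim))
    ys = np.array([net_forward(x) for x in xs])
    return xs, ys

def mixed_batch(new_x, new_y, pseudo_x, pseudo_y, ratio=0.5):
    """Interleave real transitions with pseudo-items during an update."""
    k = int(len(pseudo_x) * ratio)
    idx = rng.choice(len(pseudo_x), size=k, replace=False)
    return np.vstack([new_x, pseudo_x[idx]]), np.vstack([new_y, pseudo_y[idx]])

# Example with a stand-in "network" (a fixed linear map):
W = rng.normal(size=(4, 2))
forward = lambda x: x @ W
px, py = pseudo_items(forward, n_items=100, input_dim=4)
bx, by = mixed_batch(rng.normal(size=(8, 4)), rng.normal(size=(8, 2)), px, py)
print(bx.shape, by.shape)  # real data plus replayed pseudo-items
```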
BioDynaMo is a biological processes simulator developed by an international community of researchers and software engineers working closely with neuroscientists. The authors have taken part in the development of the physical engine, and they are currently working on gene expression, i.e. the process by which the heritable information in a gene (the sequence of DNA base pairs) is made into a functional gene product, such as protein or RNA. Typically, gene regulatory models employ either statistical or analytical approaches, the former being already well understood and broadly used. In this paper, we utilize analytical approaches, representing the regulatory networks by means of differential equations solved with methods such as Euler and Runge-Kutta. The two solutions are implemented in the BioDynaMo project and are compared for accuracy and performance.
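A minimal illustration of the two integrators compared, applied to a single decay term dy/dt = -ky rather than a real regulatory network (this is not BioDynaMo code):

```python
import math

# Compare explicit Euler with classical RK4 on dy/dt = -k*y, whose exact
# solution y(t) = y0*exp(-k*t) stands in for a one-gene regulatory term.
k, y0, dt, steps = 2.0, 1.0, 0.1, 10
f = lambda y: -k * y

y_euler, y_rk4 = y0, y0
for _ in range(steps):
    # Euler: one slope evaluation per step, first-order accurate.
    y_euler += dt * f(y_euler)
    # RK4: four slope evaluations per step, fourth-order accurate.
    k1 = f(y_rk4)
    k2 = f(y_rk4 + 0.5 * dt * k1)
    k3 = f(y_rk4 + 0.5 * dt * k2)
    k4 = f(y_rk4 + dt * k3)
    y_rk4 += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

exact = y0 * math.exp(-k * dt * steps)
print(f"euler={y_euler:.6f}  rk4={y_rk4:.6f}  exact={exact:.6f}")
```

On this example Euler undershoots noticeably at dt = 0.1 while RK4 agrees with the exact solution to several digits, which is the accuracy/performance trade-off the paper measures.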
Multiplayer computer games play a big role in the ever-growing entertainment industry. Being competitive in this industry means releasing the best possible software, and reliability is a key feature for winning the market. Computer games are also actively used to simulate robotic systems, where reliability is even more important and potentially critical. Traditional software testing approaches can check only a subset of all possible program executions, and they can never guarantee the complete absence of errors in the source code. On the other hand, for more than twenty years Model Checking has proven to be a powerful instrument for the formal verification of large hardware and software components. In this paper, we contribute a novel approach to formally verify computer games. We propose a method of model construction that starts from a computer game description and utilizes the Model Checking technique. We apply the method to a case study: the game Penguin Clash. Finally, an approach to game model reduction (and its implementation) is introduced in order to address the state explosion problem.
    Microservices is an architectural style inspired by service-oriented computing that has recently started gaining popularity. Before presenting the current state-of-the-art in the field, this chapter reviews the history of software architecture, the reasons that led to the diffusion of objects and services first, and microservices later. Finally, open problems and future challenges are introduced. This survey primarily addresses newcomers to the discipline, while offering an academic viewpoint on the topic. In addition, we investigate some practical issues and point out some potential solutions.
Static verification of program source code correctness is an important element of software reliability. Formal verification of software involves proving that a program satisfies a formal specification of its behavior. Many languages use both static and dynamic type checking: the static type checker verifies everything it can at compile time, and dynamic checks cover the remainder. The current state of the Jolie programming language includes a dynamic type system; consequently, it allows avoidable run-time errors. A static type system for the language has been formally defined on paper but still lacks an implementation. In this paper, we describe a prototype of the Jolie Static Type Checker (JSTC), which employs a technique based on an SMT solver. We describe the theory behind it, the implementation, and the process of static analysis.
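The SMT-based discharge step at the heart of such a checker can be sketched with Z3's Python bindings; the constraint below is illustrative, not JSTC's actual encoding:

```python
from z3 import Int, Solver, Not, Implies, unsat

# A static checker can encode "every value the program assigns to x
# satisfies the declared type constraint" and ask the solver whether a
# counterexample exists. Here: does x > 0 guarantee x >= 1 for integers?
x = Int("x")
declared = x >= 1          # constraint the (hypothetical) type demands
inferred = x > 0           # fact the checker derived from the program

s = Solver()
s.add(Not(Implies(inferred, declared)))  # search for a counterexample
if s.check() == unsat:
    print("well-typed: no counterexample exists")
else:
    print("type error, counterexample:", s.model())
```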
This paper summarizes the experience of teaching an introductory programming course using a correctness-by-construction approach at Innopolis University, Russian Federation. We discuss the data supporting the idea that a division into beginner and advanced groups improves the learning outcomes.
Writing requirements for embedded software is pointless unless they reflect actual needs and the final software implements them. In usual approaches, the use of different notations for requirements (often natural language) and code (a programming language) makes both conditions elusive. To address the problem, we propose to write requirements in the programming language itself. The expected advantages of this seamless approach, called AutoReq, include: avoiding the potentially costly mismatches due to the use of different notations; facilitating software change and evolution, by making it easier to update code when requirements change and conversely; benefiting from the remarkable expressive power of modern object-oriented programming languages, while retaining a level of abstraction appropriate for requirements; leveraging, in both requirements and code, the ideas of Design by Contract, including (as the article shows) applying Hoare-style assertions to express temporal-logic-style properties and timing constraints; and taking advantage of the powerful verification tools that have been developed in recent years. The last goal, verification, is a focus of this article. While the idea of verifying requirements is not widely applied, the use of a precise formalism and a modern program prover (in our case, AutoProof for Eiffel) makes it possible at a very early stage to identify errors and inconsistencies which would, if not caught in the requirements, contaminate the final code. Applying the approach to a well-documented industrial example (a landing gear system) allowed a mechanical proof of consistency and uncovered an error in a previously published discussion of the problem.
Microservices have seen their popularity blossom with an explosion of concrete applications in real-life software. Several companies are currently involved in major refactorings of their back-end systems in order to improve scalability. In this paper, we present an experience report of a real-world case study to demonstrate how scalability is positively affected by re-implementing a monolithic architecture as microservices. The case study is based on the FX Core system, a mission-critical system of Danske Bank, the largest bank in Denmark and one of the leading financial institutions in Northern Europe.
Microservices is an emerging development paradigm where software is obtained by composing autonomous entities, called (micro)services. However, microservice systems are currently developed using general-purpose programming languages that do not provide dedicated abstractions for service composition. Instead, current practice is focused on the deployment aspects of microservices, in particular by using containerization. In this chapter, we make the case for a language-based approach to the engineering of microservice architectures, which we believe is complementary to current practice. We discuss the approach in general, and then we instantiate it in terms of the Jolie programming language.
Formal modelling languages play a key role in the development of software, since they enable users to prove the correctness of system properties. However, there is still no clear understanding of how to map a formal model to a specific programming language. In order to propose a solution, this paper presents a source-to-source mapping between Event-B models and Eiffel programs, thereby enabling the proof of correctness of certain system properties via Design-by-Contract (natively supported by Eiffel), while still making use of all features of O-O programming.
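A sketch of the shape of such a mapping, using Python assertions to stand in for Eiffel's require/ensure clauses (illustrative only; the paper's mapping targets Eiffel itself):

```python
# An Event-B event "withdraw" with guard amount <= balance becomes a
# routine whose precondition is the guard and whose postcondition is the
# before-after predicate balance' = balance - amount. Hypothetical model.
class Account:
    def __init__(self, balance: int) -> None:
        assert balance >= 0                      # invariant from the model
        self.balance = balance

    def withdraw(self, amount: int) -> None:
        assert 0 < amount <= self.balance        # require: the event guard
        old_balance = self.balance
        self.balance -= amount
        assert self.balance == old_balance - amount  # ensure: before-after predicate
        assert self.balance >= 0                     # invariant preserved

acc = Account(100)
acc.withdraw(30)
print(acc.balance)  # 70
```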
    Catastrophic forgetting is of special importance in reinforcement learning, as the data distribution is generally non-stationary over time. We study and compare several pseudorehearsal approaches for Q-learning with function approximation in a pole balancing task. We have found that pseudorehearsal seems to assist learning even in such very simple problems, given proper initialization of the rehearsal parameters.
A number of formal methods exist for capturing stimulus-response requirements in a declarative form. Yet someone still needs to translate the resulting declarative statements into imperative programs. The present article describes a method for the specification and verification of stimulus-response requirements in the form of imperative program routines with conditionals and assertions. A program prover then checks a candidate program directly against the stated requirements. The article illustrates the approach by applying it to an ASM model of the Landing Gear System, a widely used realistic example proposed for evaluating specification and verification techniques.
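One hedged reading of "a stimulus-response requirement as an imperative routine", checked here over a recorded trace (a hypothetical encoding, not the article's Landing Gear routines):

```python
# A checkable procedure asserting that every stimulus is followed by a
# response within a bounded number of steps.
def check_stimulus_response(trace, stimulus, response, within):
    for i, event in enumerate(trace):
        if event == stimulus:
            window = trace[i + 1 : i + 1 + within]
            assert response in window, f"no {response} within {within} steps of step {i}"

trace = ["idle", "handle_down", "gear_moving", "gear_extended", "idle"]
check_stimulus_response(trace, "handle_down", "gear_extended", within=3)
print("requirement holds on this trace")
```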
In this paper we report the experience of using AutoProof to statically verify a small object-oriented program. We identified the problems that emerged from this activity and classified them according to their nature. In particular, we distinguish between tool-related and methodology-related issues, and propose the changes necessary to simplify both the tool and the method.
Catastrophic forgetting has a serious impact on reinforcement learning, as the data distribution is generally sparse and non-stationary over time. The purpose of this study is to investigate whether pseudorehearsal can increase the performance of an actor-critic agent with neural-network-based policy selection and function approximation in a pole balancing task, and to compare different pseudorehearsal approaches. We expect that pseudorehearsal assists learning even in such very simple problems, given proper initialization of the rehearsal parameters.
    The microservices paradigm aims at changing the way in which software is perceived, conceived and designed. One of the foundational characteristics of this new promising paradigm, compared for instance to monolithic architectures, is scalability. In this paper, we present a real world case study in order to demonstrate how scalability is positively affected by re-implementing a monolithic architecture into microservices. The case study is based on the FX Core system, a mission critical system of Danske Bank, the largest bank in Denmark and one of the leading financial institutions in Northern Europe.
The top-k shortest path routing problem is an extension of finding the shortest path in a given network. The shortest path is one of the most essential measures, as it reveals the relations between two nodes in a network. However, in many real-world networks whose diameters are small, top-k shortest paths are more interesting, as they contain more information about the network topology. In this paper, we apply an efficient top-k shortest distance routing algorithm to the link prediction problem and test its efficacy. We compare the results with other baseline and state-of-the-art methods, as well as with the shortest path. Our results show that using top-k distances as a similarity measure outperforms classical similarity measures such as Jaccard and Adamic/Adar.
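A sketch of top-k distances as a link-prediction score, using networkx's Yen-style path enumeration; the scoring formula is illustrative, not the paper's exact measure:

```python
from itertools import islice

import networkx as nx

# Shorter (and more numerous) short paths between u and v suggest a
# likelier future link. shortest_simple_paths yields simple paths in
# order of increasing length.
def topk_distance_score(G, u, v, k=3):
    paths = islice(nx.shortest_simple_paths(G, u, v), k)
    lengths = [len(p) - 1 for p in paths]          # hop counts, ascending
    return sum(1.0 / l for l in lengths if l > 0)  # shorter paths weigh more

G = nx.karate_club_graph()
print(topk_distance_score(G, 0, 33))  # higher score => more likely link
```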
Music holds a significant cultural role in social identity and in the encouragement of socialization. Technology, by the destruction of physical and cultural distance, has led to many changes in musical themes and the complete loss of some forms. Yet it also allows for the preservation and distribution of music from societies without a history of written sheet music. This paper presents early work on a tool for musicians and ethnomusicologists to transcribe sheet music from monophonic voiced pieces for preservation and distribution. Using the FFT, the system detects pitch frequencies, while other methods detect note durations, tempo and time signatures, and the system then generates sheet music. The final system can be used on mobile platforms, allowing the user to take recordings and produce sheet music in situ at a performance.
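The FFT step can be sketched in a few lines of numpy on a synthetic tone; real transcription also needs onset/duration detection and octave-error handling:

```python
import numpy as np

# Minimal FFT pitch detector: find the dominant spectral peak of a
# (synthetic) monophonic note.
sr = 44100                                 # sample rate in Hz
t = np.arange(int(0.5 * sr)) / sr
signal = np.sin(2 * np.pi * 440.0 * t)     # a 440 Hz test tone ("A4")

spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
freqs = np.fft.rfftfreq(len(signal), d=1 / sr)
pitch = freqs[np.argmax(spectrum)]
print(f"detected pitch: {pitch:.1f} Hz")   # ~440.0
```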
Jolie is a service-oriented programming language that comes with a formal specification of its type system. However, there is no tool to ensure that programs in Jolie are well-typed. In this paper we provide the results of building a type checker for Jolie as a part of the formal model of its syntax and semantics. We express the type checker as a program with dependent types in the Agda proof assistant, which helps to ascertain that the type checker is correct.
    The microservice architecture is a style inspired by service-oriented computing that has recently started gaining popularity and that promises to change the way in which software is perceived, conceived and designed. In this paper, we describe the main features of microservices and highlight how these features improve scalability.
Static verification of source code correctness is a major milestone towards software reliability. The dynamic type system of the Jolie programming language currently allows avoidable run-time errors. A static type system for the language has been exhaustively and formally defined on paper, but still lacks an implementation. In this paper, we describe our steps toward a prototypical implementation of a static type checker for Jolie, which employs a technique based on an SMT solver.
Analysis of data related to software development helps to increase the quality, control and predictability of software development processes and products. However, collecting such data is a complex task. Non-invasive collection of software metrics is one of the most promising approaches to solving this task. In this paper we present an approach that consists of four parts: collect the data, store all collected data, unify the stored data, and analyze the data to provide the user with insights about the software product or process. We apply the approach to the development of an architecture for a non-invasive software measurement system and explain its advantages and limitations.
    A purely reductionist approach to neuroscience has difficulty in providing intuitive explanations and cost effective methods. Following a different approach, much of the mechanics of the brain can be explained purely by closer study of the relation of the brain to its environment. Starting from the laws of physics, genetics and easily observable properties of the biophysical environment we can deduce the need for dreams and a dopaminergic system. We provide a rough sketch of the various a priori assumptions encoded in the mechanics of the nervous system. This indicates much more can be learnt by studying the statistical priors exploited by the brain rather than its specific mechanics of calculation.
    There are many different approaches to understanding human consciousness. By conducting research to better understand various biological mechanisms, these can be redefined and utilized for technological purposes. Advanced Research on Biologically Inspired Cognitive Architectures is an essential reference source for the latest scholarly research on the biological elements of human cognition and examines the applications of consciousness within computing environments. Featuring exhaustive coverage on a broad range of innovative topics and perspectives, such as artificial intelligence, bio-robotics, and human-computer interaction, this publication is ideally designed for academics, researchers, professionals, graduate students, and practitioners seeking current research on the exploration of the intricacies of consciousness and different approaches of perception.
Spatial organization is a core challenge for all large agent-based models with local interactions. In biological tissue models, spatial search and reinsertion are frequently reported as the most expensive steps of the simulation. One of the main methods for maintaining both favourable algorithmic complexity and accuracy is the use of spatial hierarchies. With this paper we seek to clarify to what extent the choice of spatial tree affects performance, and to identify which families of spatial trees are optimal for such scenarios. We use a prototype of the new BioDynaMo tissue simulator to evaluate the performance as well as the implementation characteristics of several different trees.
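A minimal example of the two hot operations, spatial search and reinsertion, using a k-d tree (one of the spatial-hierarchy families such comparisons cover; not BioDynaMo code):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
cells = rng.uniform(0, 100, size=(10_000, 3))        # 3-D cell positions

tree = cKDTree(cells)
neighbours = tree.query_ball_point(cells[0], r=5.0)  # spatial search
print(len(neighbours), "cells within radius 5 of cell 0")

cells += rng.normal(scale=0.1, size=cells.shape)     # cells move...
tree = cKDTree(cells)                                # ...so reinsert (rebuild)
```

The rebuild on every step is exactly the cost that motivates comparing tree families and update strategies.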
Jolie is a programming language that follows the microservices paradigm. As an open source project, it has built a community of developers worldwide, both in industry and in academia, that has taken care of its development, continuously improved its usability, and thereby broadened its adoption. In this paper, we present some of the most recent results and the work in progress of our research team.
A large percentage of buildings, domestic or special-purpose, is expected to become increasingly "smarter" in the future, due to the immense benefits that relevant new technologies offer in terms of energy saving, safety, flexibility, and comfort. At the hardware, software, or platform level, however, no clearly dominant standards currently exist. Such standards would ideally fulfill a number of important desiderata, which are touched upon in this paper. Here we present a prototype platform for supporting multiple concurrent applications for smart buildings, which utilizes an advanced sensor network as well as a distributed microservices architecture, centrally featuring the Jolie programming language. The architecture and benefits of our system are discussed, as well as a prototype containing a number of nodes and a user interface, deployed in a real-world academic building environment. Our results illustrate the promising nature of our approach, as well as open avenues for future work toward its wider and larger-scale applicability.
It is well known that the software process impacts the quality of the resulting product. There are also anecdotal claims that agile processes result in a higher level of quality than traditional methodologies; however, solid evidence of this is still missing. This work reports on an empirical analysis of the correlation between software process and software quality, with specific reference to agile and traditional processes. More than 100 software developers and engineers from 21 countries were surveyed with an online questionnaire. We used the percentage of satisfied customers, as estimated by the software developers and engineers, as the main dependent variable. The results reveal some interesting patterns: architectural styles may not have a significant influence on quality, agile methodologies might result in happier customers, and larger companies and shorter projects seem to produce better products.
We formalize timed workflows with abnormal behavior management (i.e. recovery) and demonstrate how temporal logics and model checking provide methodologies for iteratively revising the design of a correct-by-construction system. We define a formal semantics by compiling generic workflow patterns into an extension of LTL with dense-time clocks (CLTLoc). CLTLoc allows us to give the first logical formalization of workflows that can be practically employed in verification tools, and to avoid the use of the well-known automata-based formalisms for dealing with real time. We use a bounded model checker to prove the validity of requirements on a business process. The working assumption is that lightweight approaches fit easily into processes that are already in place, so that radical changes of procedures, tools and people's attitudes are not needed. The complexity of formalisms and the invasiveness of methods have been shown to be among the major drawbacks and obstacles for the deployment of formal engineering techniques in mundane projects.
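For flavor, a response-with-deadline property in a CLTLoc-like style, where a clock x is reset on the stimulus and bounds the response; this formula is illustrative, not taken from the paper:

```latex
% "Whenever a task starts, the clock x is reset and the task completes
%  within d time units." Illustrative only.
\[
  \mathbf{G}\bigl(\mathit{start} \rightarrow (x = 0) \wedge
      \mathbf{F}(\mathit{done} \wedge x \le d)\bigr)
\]
```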
    The microservice architecture is a style inspired by service-oriented computing that has recently started gaining popularity and that promises to change the way in which software is perceived, conceived and designed. In this paper we offer a short overview intended as a collection of bibliographic references and links in the field of Microservices Science and Engineering (MSE).
    Computer simulations have become a very powerful tool for scientific research. In order to facilitate research in computational biology, the BioDynaMo project aims at a general platform for biological computer simulations, which should be executable on hybrid cloud computing systems. This paper describes challenges and lessons learnt during the early stages of the software development process, in the context of implementation issues and the international nature of the collaboration.
This paper is a brief update on developments in the BioDynaMo project, a new platform for computer simulations in biological research. We discuss the new capabilities of the simulator, important new concepts in simulation methodology, and its numerous applications in the computational biology and nanoscience communities.
In this paper we present the next step in our approach to a neurobiologically plausible implementation of emotional reactions and behaviors for real-time autonomous robotic systems. The working metaphor we use is the "day" and "night" phases of mammalian life. During the "day" phase a robotic system stores the inbound information and is controlled by a lightweight rule-based system in real time. In contrast, during the "night" phase the stored information is transferred to a supercomputing system to update the realistic neural network: emotional and behavioral strategies.
    Project - BioDynaMo
    Update
The project partners met at Innopolis University in Russia to discuss the main issues of the BioDynaMo project, which focuses on the use of cloud technologies to design a prototype able to simulate biological tissue growth, with a particular focus on human brain development.
    Project - BioDynaMo
    Update
Computer simulations have become a very powerful tool for scientific research. Given the vast complexity that comes with many open scientific questions, a purely analytical or experimental approach is often not viable. For example, biological systems (such as the human brain) comprise an extremely complex organization and heterogeneous interactions across different spatial and temporal scales. In order to facilitate research on such problems, the BioDynaMo project (https://biodynamo.web.cern.ch/) aims at a general platform for computer simulations for biological research. Since the scientific investigations require extensive computer resources, this platform should be executable on hybrid cloud computing systems, allowing for the efficient use of state-of-the-art computing technology. This paper describes challenges during the early stages of the software development process. In particular, we describe issues regarding the implementation and the highly interdisciplinary as well as international nature of the collaboration. Moreover, we explain the methodologies, the approach, and the lessons learnt by the team during these first stages.
    This paper reviews the major lessons learnt during two significant pilot projects by Bosch Research during the DEPLOY project. Principally, the use of a single formalism, even when it comes together with a rigorous refinement methodology like Event-B, cannot offer a complete solution. Unfortunately (but not unexpectedly), we cannot offer a panacea to cover every phase from requirements to code; in fact any specific formalism or language (or tool) should be used only where and when it is really suitable and not necessarily (and somehow forcibly) over the entire lifecycle.
Nowadays, business enterprises often need to dynamically reconfigure their internal processes in order to improve the efficiency of the business flow. However, modifications of the workflow usually lead to several problems in terms of deadlock freedom, completeness and security. A solid solution to these problems consists in the application of model checking techniques to verify whether specific properties of the workflow are preserved by the change in configuration. Our goal in this work is to develop a formal verification procedure to deal with these problems. The first step consists in developing a formal definition of a BPMN model of a business workflow. Then, a given BPMN model is translated into a formal model specified in Promela. Finally, by using the SPIN model checker, the correctness of the reconfigured workflow is verified.
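What the SPIN step verifies can be shown in miniature as explicit-state exploration of a toy workflow, flagging non-final states with no outgoing transition (this is not the paper's BPMN-to-Promela translation):

```python
from collections import deque

# Toy transition system for a workflow; a deadlock is a reachable
# non-final state with no successors.
transitions = {
    "start":   ["approve", "reject"],
    "approve": ["archive"],
    "reject":  [],            # deadlock: not final, no successors
    "archive": [],
}
final_states = {"archive"}

seen, queue = {"start"}, deque(["start"])
while queue:
    state = queue.popleft()
    if not transitions[state] and state not in final_states:
        print("deadlock reachable at:", state)
    for nxt in transitions[state]:
        if nxt not in seen:
            seen.add(nxt)
            queue.append(nxt)
```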
    Project - BioDynaMo
    Update
    BioDynaMo plenary meeting at Innopolis, June 2016
    The microservice architecture is a style inspired by service-oriented computing that has recently started gaining popularity. Before presenting the current state-of-the-art in the field, this chapter reviews the history of software architecture, the reasons that led to the diffusion of objects and services first, and microservices later. Finally, open problems and future challenges are introduced. This survey addresses mostly newcomers to the discipline and offers an academic viewpoint on the topic. In addition, practical aspects are investigated and solutions proposed.
This paper proposes a model whose aim is to provide a more coherent framework for agent design. We identify three closely related anthropocentered domains working on separate functional levels. Abstracting from human physiology, psychology, and philosophy, we create the $P^3$ model to be used as a multi-tier approach for dealing with a complex class of problems. The three layers identified in this model have been named PhysioComputing, MindComputing, and MetaComputing. Several instantiations of this model are finally presented, related to different IT areas such as artificial intelligence, distributed computing, and software and service engineering.
This paper reports on the experience of the authors in quantitatively assessing the development process of an Eastern European software SME (Small or Medium-sized Enterprise). The company produces a very successful workflow and documentation tool, has about 30 full-time developers, and has a customer base of about 40 major organizations. It hired the authors as consultants to address quality and productivity issues raised by the upper management and by customers. The adopted approach is based on systemic analysis: it starts with a comprehensive GQM session with the top managers of the company to fully define the scope of work, and progresses by analysing the documentation, interviewing the managers and the lead developers, and quantitatively analysing the issue tracking system in place. Specific attention is placed on identifying "schismogenesis": situations that may lead to unresolvable conflicts. The approach proved successful in providing a result within the short forecasted timeframe, and systemic analysis was effective in spotting the most critical situations present in the company. The result was a set of prioritized recommendations, centered first on eliminating the schismogenetic situations and then ranging from adopting more quantitative process control, to streamlining the activities, to organizing a product line.
    Many verification tools come out of academic projects, whose natural constraints do not typically lead to a strong focus on usability. For widespread use, however, usability is essential. Using a well-known benchmark, the Tokeneer problem, we evaluate the usability of a recent and promising verification tool: AutoProof. The results show the efficacy of the tool in verifying a real piece of software and automatically discharging nearly two thirds of verification conditions. At the same time, the case study shows the demand for improved documentation and emphasizes the need for improvement in the tool itself and in the Eiffel IDE.
    The recent advent of ambient intelligence is enabled by parallel technological advancements in sensing, context recognition, embedded systems and communications. This paper focuses on the communication issues of embedded systems, particularly the latency Quality of Service (QoS) metric and the multi-hop communications with Bluetooth standard, to examine the viability of communications of embedded systems in AmI environments and applications. Bluetooth is a worldwide radio license-free technology that enables the creation of low-power multi-hop networks interconnecting multiple devices. Bluetooth sets the procedure to establish piconets (point-to-point and point-to-multipoint links) and scatternets (multi-hop communications), and hence, Bluetooth nodes can be interconnected to form wireless networks. This paper presents research on multi-hop latency that was conducted using a custom-built test platform. Moreover, an empirical model is derived to calculate the latency over asynchronous links when links in scatternets are always active or in sniff mode. The designers of ambient intelligent devices and networks can take advantage of the model and the estimation of the delay in Bluetooth multi-hop networks presented in this paper.
In this position paper we present a novel approach to a neurobiologically plausible implementation of emotional reactions and behaviors for real-time autonomous robotic systems. The working metaphor we use is the "day" and "night" phases of mammalian life. During the "day" phase a robotic system stores the inbound information and is controlled by a lightweight rule-based system in real time. In contrast, during the "night" phase the stored information is transferred to the supercomputing system to update the realistic neural network: emotional and behavioral strategies.
It is well known that the software process in place impacts the quality of the resulting product. However, the specific way in which this effect occurs is still mostly unknown and reported only through anecdotes. To gather a better understanding of this relationship, a very large survey was conducted during the last year, completed by more than 100 software developers and engineers from 21 countries. We used the percentage of satisfied customers, as estimated by the software developers and engineers, as the main dependent variable. The results reveal some interesting patterns: the quality attribute with which customers are most satisfied appears to be functionality; architectural styles may not have a significant influence on quality; agile methodologies might result in happier customers; and larger companies and shorter projects seem to produce better products.
Jolie is the first language for microservices and is currently dynamically type checked. This paper considers the opportunity to integrate dynamic and static type checking through the introduction of refinement types, verified via an SMT solver. The integration of the two aspects allows a scenario where the static verification of internal services and the dynamic verification of (potentially malicious) external services cooperate to reduce testing effort and enhance security.
This paper introduces a new model of artificial cognitive architecture for intelligent systems: the Neuromodulating Cognitive Architecture (NEUCOGAR). The model is biomimetically inspired and adapts the role of neuromodulators in the human brain to computational environments. In this way we aim at achieving more efficient Artificial Intelligence solutions based on the biological inspiration of the deep functioning of the human brain, which is highly emotional. The analysis of new data from neurology, psychology, philosophy and anthropology allows us to generate a mapping of monoamine neuromodulators and to apply it to computational system parameters. Artificial cognitive systems can then better perform complex tasks (regarding information selection and discrimination, attention, innovation, creativity, ...) as well as engage in affordable emotional relationships with human users.
    This book constitutes the refereed proceedings of the 10th International Andrei Ershov Informatics Conference, PSI 2015, held in Kazan and Innopolis, Russia, in August 2015. The 2 invited and 23 full papers presented in this volume were carefully reviewed and selected from 56 submissions. The papers cover various topics related to the foundations of program and system development and analysis, programming methodology and software engineering and information technologies.
Microservices is an architectural style inspired by service-oriented computing that has recently started gaining popularity. Jolie is a programming language based on the microservices paradigm: the main building blocks of Jolie systems are services, in contrast to, e.g., functions or objects. The primitives offered by the Jolie language elicit many of the recurring patterns found in microservices, like load balancers and structured processes. However, Jolie still lacks some useful constructs for dealing with message types and data manipulation that are present in service-oriented computing. In this paper, we focus on the possibility of expressing choices at the level of data types, a feature well represented in standards for Web Services, e.g., WSDL. We extend Jolie to support such type choices and show the impact of our implementation on some of the typical scenarios found in microservice systems. This shows how computation can move from a process-driven to a data-driven approach, and leads to the preliminary identification of recurring communication patterns that can be shaped as design patterns.
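A rough Python analogue of a choice at the level of data types, branching on the message variant in the data-driven style described (Jolie's own syntax for type choices differs; this is only an illustration of the idea):

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class CardPayment:
    card_number: str
    amount: float

@dataclass
class InvoicePayment:
    invoice_id: str
    amount: float

# The "type choice": a service accepts either message variant.
Payment = Union[CardPayment, InvoicePayment]

def handle(payment: Payment) -> str:
    # Data-driven dispatch: the shape of the data selects the behavior.
    if isinstance(payment, CardPayment):
        return f"charging card ...{payment.card_number[-4:]}"
    return f"registering invoice {payment.invoice_id}"

print(handle(CardPayment("4111111111111111", 10.0)))
print(handle(InvoicePayment("INV-42", 10.0)))
```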
In this paper we present a new neurobiologically inspired affective cognitive architecture: NEUCOGAR (NEUromodulating COGnitive ARchitecture). The objective of NEUCOGAR is the identification of a mapping from the influence of serotonin, dopamine and noradrenaline to computing processes based on the von Neumann architecture, in order to implement affective phenomena that can operate on the Turing machine model. As the basis of the modeling we use and extend Lövheim's Cube of Emotion with parameters of the von Neumann architecture. Validation is conducted via simulation, on a computing system, of dopamine neuromodulation and its effects on the cortex. In the experimental phase of the project, the increase of computing power and storage redistribution due to an emotional stimulus modulated by the dopamine system confirmed the soundness of the model.
This volume contains the proceedings of the First Workshop on Logics and Model-checking for Self-* Systems (MOD* 2014). The workshop took place in Bertinoro, Italy, on 12 September 2014, as a satellite event of iFM 2014 (the 11th International Conference on Integrated Formal Methods). The workshop focused on demonstrating the applicability of formal methods to modern complex systems with a high degree of self-adaptivity and reconfigurability, bringing together researchers and practitioners with the goal of pushing forward the state of the art on logics and model checking.
We describe a business workflow case study with abnormal behavior management (i.e. recovery) and demonstrate how temporal logics and model checking can provide a methodology to iteratively revise the design and obtain a correct-by-construction system. To do so, we define a formal semantics by giving a compilation of generic workflow patterns into LTL, and we use the bounded model checker Zot to prove specific properties and the validity of requirements. The working assumption is that such a lightweight approach would easily fit into processes that are already in place, without the need for a radical change of procedures, tools and people's attitudes. The complexity of formalisms and the invasiveness of methods have been shown to be among the major drawbacks and obstacles for the deployment of formal engineering techniques in mundane projects.
    Three formalisms of different kinds - VDM, Maude, and basic CCSdp - are evaluated for their suitability for the modelling and verification of dynamic software reconfiguration using as a case study the dynamic reconfiguration of a simple office workflow for order processing. The research is ongoing, and initial results are reported.
Logics and model checking have been successfully used in the last decades for modeling and verifying various types of hardware and software systems. While most languages and techniques emerged in a context of monolithic systems with limited self-adaptability, modern systems require approaches able to cope with dynamically changing requirements and emergent behaviors. The emphasis on system reconfigurability has not been followed by an adequate research effort, and the current state of the art lacks logics and model-checking paradigms that can describe and analyze complex modern systems in a comprehensive way. This paper describes a case study involving the dynamic reconfiguration of an office workflow. We state the requirements on a system implementing the workflow and its reconfiguration, and we prove workflow reconfiguration termination by providing a compilation of generic workflows into LTL, using the bounded model checker Zot. The objective of this paper is to demonstrate how temporal logics and model checking are effective in proving properties of dynamic, reconfigurable and adaptable systems. This simple case study is just a "proof of concept" to demonstrate the feasibility of our ideas.
The success of a number of projects has been shown to be significantly improved by the use of a formalism. However, an open issue remains: to what extent can a development process based on a single formal notation and method succeed? The majority of approaches demonstrate a low level of flexibility by attempting to use a single notation to express all of the different aspects encountered in software development. Often, these approaches leave a number of scalability issues open. We prefer a more eclectic approach. In our experience, the use of a formalism-based toolkit with adequate notations for each development phase is a viable solution. Following this principle, any specific notation is used only where and when it is really suitable, and not necessarily over the entire software lifecycle. The approach explored in this article is perhaps slowly emerging in practice; we hope to accelerate its adoption. However, the major challenge is still finding the best way to instantiate it for each specific application scenario. In this work, we describe a development process and method for automotive applications which consists of five phases. The process recognizes the need for adequate (and tailored) notations (Problem Frames, Requirements State Machine Language, and Event-B) for each development phase, as well as direct traceability between the documents produced during each phase. This allows for a stepwise verification/validation of the system under development. The ideas for the formal development method have evolved over two significant case studies carried out in the DEPLOY project.
Web Services provide interoperable mechanisms for describing, locating and invoking services over the Internet; composition further enables building complex services out of simpler ones for complex B2B applications. While current studies on these topics are mostly focused, from the technical viewpoint, on standards and protocols, this paper investigates the adoption of formal methods, especially for composition. We logically classify and analyze three different (but interconnected) kinds of important issues towards this goal, namely foundations, verification and extensions. The aim of this work is to identify the proper questions on the adoption of formal methods for the dependable composition of Web Services, not necessarily to find the optimal answers. Nevertheless, we still try to propose some tentative answers based on our proposal for a composition calculus, which we hope can animate a proper discussion.
Nowadays, the acquisition of trustable information is increasingly important in both professional and private contexts. However, establishing what information is trustable and what is not is a very challenging task. For example, how can information quality be reliably assessed? How can sources' credibility be fairly assessed? How can gatekeeping processes be found trustworthy when filtering out news and deciding the ranking and priorities of traditional media? An Internet-based solution to an ancient human-based issue is being studied, and it is called Polidoxa, from the Greek "poly", meaning "many" or "several", and "doxa", meaning "common belief" or "popular opinion". This old problem will be solved by means of ancient philosophies and processes combined with truly modern tools and technologies. This is why this work required a collaborative and interdisciplinary joint effort from researchers with very different backgrounds and institutes with significantly different agendas. Polidoxa aims at offering: 1) a trust-based search engine algorithm, which exploits the stigmergic behaviours of the users' network, 2) a trust-based social network, where the notion of trust derives from network activity, and 3) a holonic system for bottom-up self-protection and social privacy. By presenting the Polidoxa solution, this work also describes the current state of traditional media as well as newer ones, providing an accurate analysis of major search engines (e.g., Google) and social networks (e.g., Facebook). The advantages that Polidoxa offers, compared to these, are also clearly detailed and motivated. Finally, a Twitter application (Polidoxa@twitter), which enables experimentation with basic Polidoxa principles, is presented.
The BP-calculus is a formalism based on the π-calculus, encoded in WS-BPEL. The BP-calculus is intended specifically to model and verify Service-Oriented Applications (SOA). One important feature of SOA is the ability to compose services that may evolve dynamically at runtime. Dynamic reconfiguration of services increases their availability but, at the same time, complicates their validation, verification, and evaluation. In this paper, we formally model and analyze dynamic reconfigurations and their requirements in the BP-calculus, and we show how reconfigurable components can be modeled using handlers, which are essential parts of the WS-BPEL language. In addition, we consider security rules and their formal specification as required to implement dynamic reconfiguration.