Essential Software Architecture
Abstract
Job titles like "Technical Architect" and "Chief Architect" nowadays abound in the software industry, yet many people suspect that "architecture" is one of the most overused and least understood terms in professional software development. Gorton's book helps resolve this predicament. It concisely describes the essential elements of knowledge and key skills required to be a software architect. The explanations encompass the essentials of architecture thinking, practices, and supporting technologies. They range from a general understanding of software structure and quality attributes, through technical issues like middleware components and documentation techniques, to emerging technologies like model-driven architecture, software product lines, aspect-oriented design, service-oriented architectures, and the Semantic Web, all of which will influence future software system architectures. All approaches are illustrated by an ongoing real-world example. So if you work as an architect or senior designer (or want to someday), or if you are a student in software engineering, here is a valuable and yet approachable source of knowledge. "Ian's book helps us to head in the right direction through the various techniques and approaches... An essential guide to computer science students as well as developers and IT professionals who aspire to become an IT architect". (Anna Liu, Architect Advisor, Microsoft Australia)
Chapters (11)
The last 15 years have seen a tremendous rise in the prominence of a software engineering subdiscipline known as software
architecture. Technical Architect and Chief Architect are job titles that now abound in the software industry. There’s an International Association of Software Architects, and
even one of the world's wealthiest and best-known geeks used to have "architect" in his job title in his prime. It can't be a bad
gig, then?
This chapter introduces the case study that will be used in subsequent chapters to illustrate some of the design principles
in this book. In essence, the application is a multiuser software system built around a database that shares information
between users and intelligent tools, the latter aiming to help users complete their work tasks more effectively.
Much of a software architect’s life is spent designing software systems to meet a set of quality attribute requirements. General
software quality attributes include scalability, security, performance and reliability. These are often informally called
an application’s “-ilities” (though of course some, like performance, don’t quite fit this lexical specification).
I’m not really a great enthusiast for drawing strong analogies between the role of a software architect and that of a traditional
building architect. There are similarities, but also lots of profound differences. But let’s ignore those differences for
a second, in order to illustrate the role of middleware in software architecture.
The previous three chapters have described the basic middleware building blocks that can be used to implement distributed
systems architectures for large-scale enterprise systems. Sometimes, however, these building blocks are not sufficient to
enable developers to easily design and build complex architectures. In such cases, more powerful middleware technologies are needed to address these architectural issues. This chapter describes two of them, namely message brokers and workflow engines, and analyses the strengths and weaknesses of each approach.
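The publish-subscribe decoupling at the heart of a message broker can be sketched in a few lines (a toy, in-process illustration with hypothetical names; real broker technologies add persistence, message transformation, and routing rules on top of this basic idea):

```python
from collections import defaultdict

class MessageBroker:
    """Toy in-process broker: senders publish to a named topic and never
    see the receivers; the broker routes each message to every handler
    subscribed to that topic."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self._subscribers[topic]:
            handler(message)

# Two independent downstream systems subscribe to the same topic.
received = []
broker = MessageBroker()
broker.subscribe("orders", lambda m: received.append(("billing", m)))
broker.subscribe("orders", lambda m: received.append(("shipping", m)))
broker.publish("orders", {"id": 42})
print(received)  # both subscribers got the message
```

The point of the sketch is the indirection: the publisher knows only the topic name, so subscribers can be added, removed, or replaced without touching the sender.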
The role of an architect is much more than simply carrying out a software design activity. The architect must typically:
- Work with the requirements team: The requirements team focuses on eliciting the functional requirements from the application stakeholders. The architect plays an important role in requirements gathering by understanding the overall system's needs and ensuring that the appropriate quality attributes are explicit and understood.
- Work with various application stakeholders: Architects play a pivotal liaison role, making sure all the application stakeholders' needs are understood and incorporated into the design. For example, in addition to the business users' requirements for an application, system administrators will require that the application can be easily installed, monitored, managed and upgraded.
- Lead the technical design team: Defining the application architecture is a design activity. The architect leads a design team, comprising system designers (or, on large projects, other architects) and technical leads, in order to produce the architecture blueprint.
- Work with project management: The architect works closely with project management, helping with project planning, estimation, and task allocation and scheduling.
In this chapter, a design for the ICDE case study described in Chap. 2 is given. First, some additional technical background on the project is provided, so that the design details are more easily digested. The design description is then presented, structured using the architecture documentation template introduced in the previous chapter. The only section that won't be included in the document is the first, the "Project Context", as this is essentially covered in Chap. 2. So, without further ado, let's dive into the design documentation.
In many application domains in science and engineering, data produced by sensors, instruments, and networks is naturally processed
by software applications structured as a pipeline. Pipelines comprise a sequence of software components that progressively
process discrete units of data to produce a desired outcome. For example, in a Web crawler that is extracting semantics from
text on Web sites, the first stage in the pipeline might be to remove all HTML tags to leave only the raw text of the document.
The second step may parse the raw text to break it down into its constituent grammatical parts, such as nouns, verbs, and
so on. Subsequent steps may look for names of people or places, interesting events or times so documents can be sequenced
on a time line.
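A pipeline of this kind can be written directly as an ordered list of stage functions, each consuming the previous stage's output (a minimal sketch; the regex-based stages and the "capitalized word" heuristic for names are simplifying assumptions, not how a real crawler works):

```python
import re

def strip_html(doc: str) -> str:
    """Stage 1: remove HTML tags, leaving only the raw text."""
    return re.sub(r"<[^>]+>", " ", doc)

def tokenize(text: str):
    """Stage 2: break the raw text into words (a stand-in for full
    grammatical parsing into nouns, verbs, and so on)."""
    return re.findall(r"[A-Za-z']+", text)

def extract_names(tokens):
    """Stage 3: crude named-entity step -- keep capitalized tokens."""
    return [t for t in tokens if t[:1].isupper()]

# The pipeline is just the ordered composition of its stages.
PIPELINE = [strip_html, tokenize, extract_names]

def run_pipeline(doc, stages=PIPELINE):
    for stage in stages:  # each stage processes the previous output
        doc = stage(doc)
    return doc

names = run_pipeline("<p>Alice met Bob in <b>Paris</b> yesterday.</p>")
print(names)  # → ['Alice', 'Bob', 'Paris']
```

Because each stage has the same shape (one unit of data in, one out), stages can be inserted, reordered, or run on separate processes without changing the others, which is exactly the appeal of the pipeline style.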
The world of software technology is a fast-moving and ever-changing place. As our software engineering knowledge, methods
and tools improve, so does our ability to tackle and solve more and more complex problems. This means we create “bigger and
better” applications, while still stressing the limits of our ever-improving software engineering skills. Not surprisingly,
many in the industry feel like they are standing still. They don’t seem to be benefiting from the promised quality and productivity
gains of improved development approaches. I suspect that’s destined to be life for all of us in the software industry for
at least the foreseeable future.
The ICDE system is a platform for capturing and disseminating information that can be used in different application domains.
However, like any generically applicable horizontal technology, its broad appeal is both a strength and a weakness. The weakness
stems from the fact that a user organization will need to tailor the technology to suit its application domain (e.g., finance),
and make it easy for their users to learn and exploit. This takes time and money, and is hence a disincentive to adoption.
... However, these quality attributes affect each other; for example, implementing advanced security features often degrades a software system's performance. Conversely, frequent modification has a negative impact on security [19]. ...
... Security pattern detection can confirm the existence of security patterns and provide knowledge of their structure and location in the software system's source code [19]. The primary research on pattern detection was first carried out in the area of design patterns, so a basic guideline for pattern detection already exists. ...
Security patterns are reusable building blocks of a secure software architecture that provide solutions to particular recurring security problems in given contexts. Incomplete or nonstandard implementations of security patterns may produce vulnerabilities and invite attackers. Therefore, detecting security patterns improves the quality of security features. In this paper, we propose a security pattern detection (SPD) framework and its internal pattern matching techniques. The framework provides a platform for data extraction, pattern matching, and semantic analysis techniques. We implement ordered matrix matching (OMM) and non-uniform distributed matrix matching (NDMM) techniques. The OMM technique detects a security pattern matrix inside the target system matrix (TSM). The NDMM technique determines whether the relationships between all classes of a security pattern are similar to the relationships between some classes of the TSM. Semantic analysis is used to reduce the rate of false positives. We evaluate and compare the performance of the proposed SPD framework with both matching techniques on four independent case studies. The results show that the NDMM technique provides the location of the security patterns and is highly flexible and scalable, with high accuracy and acceptable memory and time consumption for large projects.
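As a rough illustration of the general idea behind matrix-based pattern matching (a naive sketch under stated assumptions, not the paper's OMM or NDMM algorithms), both the pattern and the target system can be encoded as class-relationship adjacency matrices, and detection then asks whether the pattern matrix appears as an order-preserving submatrix of the target system matrix:

```python
from itertools import combinations

def embeds(pattern, target):
    """Return the first order-preserving selection of target classes whose
    relationship submatrix equals the pattern matrix, or None.
    Matrices are lists of lists of relationship labels (0 = none)."""
    k, n = len(pattern), len(target)
    for idx in combinations(range(n), k):  # candidate class mappings
        if all(target[idx[i]][idx[j]] == pattern[i][j]
               for i in range(k) for j in range(k)):
            return idx
    return None

# Hypothetical 2-class pattern: class 0 has a "uses" relationship (1)
# to class 1, and no other relationships.
pattern = [[0, 1],
           [0, 0]]

# Hypothetical 4-class target system; classes 1 and 2 realize the pattern.
target = [[0, 0, 0, 0],
          [0, 0, 1, 0],
          [0, 0, 0, 0],
          [1, 0, 0, 0]]

print(embeds(pattern, target))  # → (1, 2): the pattern's location
```

Real detection frameworks work on matrices extracted from source code, match multiple relationship types, and add semantic analysis to prune false positives; this sketch only shows why encoding class relationships as matrices reduces detection to a submatrix search.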
... Architectural diagrams provide a visual representation of complex systems, aiding stakeholders in understanding the system's structure and relationships between its different parts [3]. These diagrams are designed for various abstraction levels and purposes, covering various areas of software and system design. ...
Architecture diagrams are essential tools for software development, system design, and communication. They facilitate the understanding of complex systems by providing a visual representation of components, relationships, and data flow. However, creating and interpreting these diagrams can be time-consuming and require significant expertise. Generative Artificial Intelligence (AI) offers a potential solution to automate the creation process and improve comprehension. This paper explores how generative AI can be leveraged to automatically generate diverse architectural diagrams from textual descriptions and code repositories. Additionally, the research investigates how AI techniques can assist in understanding and analyzing existing diagrams, thereby easing maintenance, documentation, and stakeholder communication. This paper discusses existing approaches, emerging techniques, challenges, and future directions in this evolving field. Our findings indicate that generative AI can significantly reduce the effort of diagram creation and improve analysis, while also exposing the limitations of current models.
... Quality attributes are part of the non-functional requirements of an application that capture the many perspectives of how the functionalities/functional requirements of an application are achieved (Gorton, 2011). This section discusses the QAs addressed by smart city architectures in the reviewed literature with respect to research question 1 (RQ1). ...
Background: A Smart City leverages Information and Communications Technologies (ICTs) and several other infrastructures to improve citizens' quality of life and the efficiency of managing all aspects of the city's operations and services. Having the right architecture for developing smart city applications is paramount to achieving the minimum set of Quality Attributes (QAs). Several architectures and frameworks have been proposed, aimed at satisfying different sets of QAs. However, there has been little or no effort to develop a product line architecture that satisfies the QAs considered common and essential to smart city applications. Aim: This work reviews existing smart city architectures and frameworks to identify the QAs each of them satisfies, categorizes these QAs into high-level QAs, and proposes key QAs for smart cities. Method: To achieve this objective, a Systematic Literature Review (SLR) was conducted, two research questions (RQs) were defined, and the results were analyzed using descriptive statistics techniques. Results: Sixteen (16) architectures/frameworks were reviewed, and eight (8) high-level QAs were identified, among which four (4) were proposed as key Quality Attributes for smart cities.
... The most common security related requirements for software are shown in Figure 1. They are [4]: ...
There has been rapid growth in the development of mobile applications, resulting from their wide use for online transactions, data storage and information exchange. However, an important issue that has been overlooked is the lack of emphasis on security at the early stages of development. In fact, security issues have been deferred until the later implementation stages of mobile apps. Requirements engineers frequently ignore, or incorrectly elicit, security-related requirements at the early stages of mobile application development. This has led to failures in developing secure and safe mobile applications based on the needs of users. As such, this paper aims to provide further understanding of the real challenges novice requirements engineers face in extracting security attributes for mobile applications. For this purpose, two experiments on eliciting the security attribute requirements of textual requirements scenarios were conducted. Performance, in terms of the correctness of, and time taken to elicit, the security attributes, was measured and recorded. It was found that eliciting correct security attributes for mobile applications requires effort, knowledge and skills. The findings indicate that an automated tool for correctly eliciting security attribute requirements could help overcome these challenges, especially among novice requirements engineers.
The decision-making process for attaining Sustainable Development Goals (SDGs) can be enhanced through the use of predictive modelling. The application of predictive tools like deep neural networks (DNN) empowers stakeholders with quality information and promotes open data policy for curbing corruption. The anti-corruption drive is a cardinal component of SDG 16 which is aimed at strengthening state institutions and promoting social justice for the attainment of all 17 SDGs. This study examined the implementation of the SDGs in Nigeria and modelled the 2017 national corruption survey data using a DNN. We experimentally tested the efficacy of DNN optimizers using a standard image dataset from the Modified National Institute of Standards and Technology (MNIST). The outcomes validated our claims that predictive analytics could enhance decision-making through high-level accuracies as posted by the optimizers: Adam 98.2%; Adadelta 98.4%; SGD 94.9%; RMSProp 98.1%; Adagrad 98.1%.
Classical computing, which gave us the current digital age, is about to be overtaken by a more exciting, powerful, and radically distinct form of computing technology termed quantum computing. Quantum-based computing may eventually be many times faster than the computing capability we all use today in our smartphones, laptop computers, and other devices. This research paper first examines the potential of quantum computing by leveraging the fundamentals of quantum mechanics. A baseline is defined to work through the fundamentals of quantum computing, and currently available quantum computing platforms and environments are described to provide insight. Software quality models are investigated to enumerate detailed software quality attributes and their relevance to different software application types. We present characteristics of quantum computers and quantum processors that may be pertinent to understanding or reasoning about software quality attributes, and finally analyze how these quantum computing characteristics (QCCs) may affect the quality of quantum software.
The tabular structure of legal texts and the principles of their drafting result in the frequent use of various types of references, which has a negative impact on the comprehensibility of the law. As legal texts are nowadays drafted and made available in electronic format, it is reasonable to try to develop automated mechanisms for checking the correctness of references contained in these texts. The paper shows how a particular type of automated and dedicated information management mechanisms, offered by the so-called adaptive hypertexts, can be used for this purpose. The authors focus primarily on describing the specificity of this type of tools and on analyzing the possibilities, principles and prospects of their use in order to improve the quality of legal texts, in particular their comprehensibility.
Assigning developers to highly secure software projects requires identifying each developer's tendency to contribute vulnerable code, called developer-centric security vulnerability, in order to mitigate issues in human resource management, finances and project timelines. Assessing previous codebases to evaluate each developer's developer-centric security vulnerability level is problematic. Thus, this paper suggests a method to evaluate it through the techno-behavioral features of developers' previous projects. We present the results of an exploratory study of developer-centric security vulnerability level prediction using a dataset of 1827 developers and 13 logically selected techno-behavioral features. Our results show a correlation between techno-behavioral features and developer-centric security vulnerability, with 89.46% accuracy. This model enables prediction of the developer-centric security vulnerability level of any developer whose techno-behavioral features are available, avoiding analysis of their previous codebases.
The concept of the Internet of Things (IoT) has become a recurrent view of the physical technological environment, in light of which it is expected that everyday artifacts are connected, enhancing the availability and ubiquity of “smart” services. Higher education institutions can be seen as a privileged ecosystem for the development of intelligent and smart solutions, due to their dynamic and ever-changing environment, which includes not only physical infrastructures and digital services, but also people, i.e., students, researchers, lecturers, and staff. This work introduces an Application-oriented Architecture (AoA) designed to streamline the design and development of “smart” solutions on campus by focusing on the application side and reshaping the concept of “service” into a piece of “functionality” with a clear and objective purpose, rather than the classic, conventional approach that is more focused on the development or technical sides. The proposed approach provides the mechanism to have multiple applications interacting and sharing data and functionalities, ensuring coexistence between the new and legacy systems in use on the campus, and removing the major drawbacks that basic monolithic applications typically entail. The generic AoA model is described and the procedure for creating a new application is systematized. Lastly, three case studies (RnMonitor, Refill_H2O, and BiRa) are presented and elaborated using the AoA procedure for creating a new application.
Research in sustainable development, program design and monitoring, and evaluation requires data analytics so that the Sustainable Development Goals (SDGs) do not suffer the same fate as the Millennium Development Goals (MDGs). The MDGs were poorly implemented, particularly in developing countries. In the SDG era, there is a huge amount of development-related data that needs to be harnessed using predictive analytics models, such as deep neural networks, for timely and unbiased information. The SDGs aim at improving the lives of citizens globally. However, the first six SDGs (SDGs 1-6) are more relevant to developing economies than developed economies, because low-resourced countries are still battling extreme poverty and unacceptable levels of illiteracy occasioned by corruption and poor leadership. Inclusive innovation is a philosophy of the SDGs, as no one should be left behind in the global economy. The focus of this study is the implementation of SDGs 1-6 in less developed countries. Given their peculiar socio-economic challenges, we propose a design for a low-budget deep neural network-based Sustainable Development Goals 1-6 (DNNSDGs 1-6) system. The aim is to empower actors implementing the SDGs in developing countries with data-based information for robust decision making.
In this work, we adopt an engineering problem-solving approach to the open-air defecation health problem. We model social and behaviour change communication intervention among other components of a water-sanitation-hygiene (WASH) system in response to the menace of open defecation in rural and urban communities globally. We also used experimental outcomes to show empirically that patterns in data captured in the WASH process could be learnt for effective decision making using deep learning neural networks as an intelligent software engineering technique. Eradicating open defecation is one of the indicators used for measuring progress made towards the attainment of Sustainable Development Goal 6 (SDG 6). We use the Adum-Aiona community in Nigeria as case study in designing community-based total sanitation programs using software model-driven engineering approaches with the aim of promoting their implementation. This is because even when toilets and other sanitary infrastructure are available, behavior and social change efforts are needed to promote their large-scale use. Also, we demonstrate that besides being used to model software systems, computational models (software architecture) are useful in documenting and promoting understanding of concepts in virtually all fields of human endeavour. Our motivation is that enhancing understanding of open defecation through software modelling would help SDG 6 implementors and actors attain set sanitation goals in both rural and urban communities towards the SDGs target year 2030.
The localization process has always been regarded as a black box that functions on its own, with a team of project managers and translators who never get involved in software development methodologies or technologies. This is mainly due to the way in which the software localization industry has developed. In many cases, and for many of the main software publishers and platform developers, localization is a peripheral process that is only invoked when text strings need translation. Including localization late in the development process not only brings enormous problems when translating strings, on many occasions it becomes impossible to launch the product in other markets which have the need to accommodate languages and cultures that do not fit within the features that were developed and included in the software. This dissertation begins by making a historical account of the development of computers and describes how the three main programming platforms—namely mainframes, minicomputers, and personal computers—came into being. We shall see that, as hardware developed and new features were added, a variety of programming languages and strategies for developing software emerged. As features, application, and use of hardware further expanded, the need for organizing strategies and processes for creating programs became the basis for formulating and establishing software development strategies and methodologies. Along with these developments, the introduction of personal computers eventually promoted the need for creating products that serve not only markets in the United States, but other markets that communicate in different languages and exhibit particular needs of their own. Thus, the localization industry was born. It is at this point that translation and computers begin to come together and interact. Up until now, research regarding software localization has had programmers as a starting point. 
These programmers have researched what multilingual programs are and what needs to be done to create them. Our proposal comes from the other “side,” from the localizer’s point of view; a localizer that approaches programming as an expert in languages and intercultural mediation. This expert knows the problems within localization’s black box, but is also prepared to participate in the processes that take place before translation, the processes that take place in order to create new software. These “internationalizers,” as we refer to them, have all the necessary knowledge and skills to enable them to become part of a software development team from the beginning all the way through to the final stages. Their knowledge and presence will help integrate into this process the necessary requirements that will allow for smooth software localization, when the decision is made to launch the application into other markets that have diverse linguistic, legal, and cultural needs. In this century, mainly guided by communication, it is unthinkable to develop software that only attends to the needs of a single market. The internationalizer can help the software development team to accomplish this.
The promise of AIs that can target, shoot at, and eliminate enemies in the blink of an eye brings about the possibility that such AIs can turn rogue and create an adversarial “Skynet.” The main danger is not that AIs might turn against us because they hate us, but because they think they want to be like us: individuals. The solution might be to treat them like individuals. This should include the right and obligation to do unto others as any AI would want other AIs or humans to do unto them. Technically, this involves an N-version decision-making process that takes into account not how good or efficient an AI's decision is, but how likely the AI is to show algorithmic “respect” to other AIs or to human rules and operators. In this paper, we discuss a possible methodology for deploying AI decision making that uses multiple AI actors to check on each other to prevent “mission individuation,” i.e., the AIs wanting to complete the mission even if the human operators are sacrificed. The solution envisages mechanisms that demand the AIs “do unto others as others would do unto them” in making final decisions. This should encourage AIs to accept critique and censoring in certain situations and, most importantly, it should lead to decisions that protect both the human operators and the final goal of the mission.
In this chapter, the authors describe their experiences in designing, developing, and teaching a course on Software Architecture that they have tested both in an academic context, with their graduate Computer Science students, and in an advanced professional training context, with scores of system engineers in a number of different companies. The course has been taught in several editions over the last five years. The authors describe its rationale, the ways in which they teach it differently in academia and in industry, and how they evaluate the students’ learning in the different contexts. Finally, the authors discuss the lessons learnt and describe how this experience is inspiring the future of the course.
The purpose of this work is to apply the servicization of enterprise information systems to maintenance, particularly the management of the maintenance process for the component parts of trains. Service Oriented Architecture (SOA) is an architectural approach that permits servicization, since it provides a flexible set of design principles used during modeling practices (abstraction and realization). With a view to supporting the model-driven engineering of software systems, Model Driven Architecture (MDA) is a design approach delivering a set of guidelines for configuring specifications in systems development. The combination of these two approaches can therefore be fruitful in addressing the challenging issues enterprise information systems face today. Our study is based on a methodological approach using MDA models for the automatic generation of web services. The case study concerns a Railways Maintenance Workshop (RMW) at Sidi Bel Abbes (Algeria). Finally, the information system for managing the maintenance of the component parts of passenger and baggage railcars, using the generated solution, is realized and deployed. This software supports better management of the RMW through effective planning of interventions, and improves performance by increasing the reliability, traceability, and availability of the equipment (parts).
The role of automation in sustainable development is not in doubt. Computerization in particular has permeated every facet of human endeavour, enhancing the provision of information for decision-making that reduces the cost of operations and promotes productivity, socioeconomic prosperity and cohesion. Hence, a new field called information and communication technology for development (ICT4D) has emerged. Nonetheless, the need to ensure environmentally friendly computing has led to this research study, with particular focus on green computing in Africa. This is against the backdrop that the continent is feared to suffer most from the vulnerability of climate change and the impact of environmental risk. Using Nigeria as a test case, this paper gauges the green computing awareness level of Africans via a sample survey. It also attempts to institutionalize a green computing maturity model with a view to optimizing citizens' level of awareness amid inherent uncertainties like low bandwidth, poor networks and erratic power in an emerging African market. Consequently, we classified the problem as a stochastic optimization problem and applied a metaheuristic search algorithm to determine the best sensitization strategy. Although there are alternative ways of promoting green computing education, the metaheuristic search we conducted indicated that an online real-time solution that both drives and preserves timely conversations on electronic waste (e-waste) management and energy-saving techniques among the citizenry is cutting edge. The authors therefore reviewed the literature, gathered requirements, modelled the proposed solution using the Unified Modeling Language (UML) and developed a prototype. The proposed solution is a web-based multi-tier e-Green computing system that educates computer users on innovative techniques for managing computers and accessories in an environmentally friendly way.
We found that such a real-time web-based interactive forum not only stimulates the interest of the common man in environment-related issues, but also raises awareness of the impact his computer-related activities have on mother earth. This way, he willingly becomes part of the solution to environmental degradation in his circle of influence.
This work compares two technologies for developing mobile applications, with the aim of determining which of them is better with respect to performance. The comparison was carried out by means of an application that was built with both technologies; to organize and monitor the comparison, the evaluation process proposed in the ISO/IEC 14598-1 standard was used together with the product quality model of ISO/IEC 9126-1. The results led to the conclusion that an application built with native development technology performs better than one developed with web-based technology.
A functional perspective of software systems at the architectural level allows developers to maintain a consistent understanding of the relationships between the different functionalities of their system as it evolves, and allows them to analyze the system at the level of functional chunks rather than at the traditional structural levels more typically presented by IDEs.
This paper describes the derivation, implementation and evaluation of a prototype tool built to obtain this functional perspective from existing systems. The tool supports developers as they first attempt to locate specific functionalities in the source code. This support is based on preliminary design principles identified by observing experienced software developers in vivo as they performed this task manually. Once the code associated with several such functionalities has been located, a graphical view allows the developer to assess the source code dependencies between the identified features and with the rest of the system. This helps developers understand the inter-functional interfaces, and it can be reviewed over time, as features are added and removed, to ensure ongoing consistency between the architect's perspective of the features in the system and the code base.
Driven by constantly growing challenges, the flexible and cost-effective development of information systems is of great importance for enterprises today. The systematic derivation and design of a system's software building blocks is indispensable in this context. Against the background of component orientation, the BCI method promises to ensure the identification of suitable software building blocks. Its applicability independent of the kind of underlying conceptual models proves to be an advantage of the method. Since, however, the quality of the solution found is largely determined by the quality of the models, the question arises as to the suitability of different model types. The SOM methodology offers a holistic and established approach to the systematic description of the business concept; this contribution examines it in combination with the BCI method with respect to obtaining valid results. The focus is in particular on which models should be considered and which relationships should be distinguished.
The research focuses on investigating the efficiency and performance of an e-Service Information System in facilitating students during registration and examination periods, and on identifying its impacting factors. There is a gap in the published research regarding universities' understanding of the factors that affect the interconnection of administration with academic specifics through a management information system. The University for Business and Technology (UBT), where the system was designed, tailored, implemented, tested and evaluated, was chosen as the case study. The study seeks to identify the impacting factors and to determine whether the developed model improves student services, data centralization, data security and the entire student e-services process, and whether it can be generalized and reused by others. Several impacting factors were investigated through the case study. The usability and user-friendliness of the developed UBT model and its e-Service Information System solutions were then evaluated, and regression analyses were used to determine the impact. The study contributes identified impacting factors for such systems and analyses of the improvements in data centralization, data security, efficiency, performance and usability. Insights and recommendations are provided.
This paper reports on an interview-based study of 18 authors of different chapters of the two-volume book "Architecture of Open-Source Applications". The main contributions are a synthesis of the process of authoring essay-style documents (ESDs) on software architecture, a series of observations on important factors that influence the content and presentation of architectural knowledge in this documentation form, and a set of recommendations for readers and writers of ESDs on software architecture. We analyzed the influence of three factors in particular: the evolution of a system, the community involvement in the project, and the personal characteristics of the author. This study provides the first systematic investigation of the creation of ESDs on software architecture. The observations we collected have implications for both readers and writers of ESDs, and for architecture documentation in general.
What is the role of a software architecture when building a large software system, for instance a command and control system for defense purposes? How is it created? How is it managed? How can we train software architects for this kind of system? We describe an experience of designing and teaching several editions of a course on software architecture in the context of a large system integrator of defense mission-critical systems—ranging from air traffic control to vessel traffic control systems—namely SELEX Sistemi Integrati, a company of the Finmeccanica group. In recent years, the company has been engaged in a comprehensive restructuring initiative to valorize existing software assets and products and to enhance its software development productivity. The course was intended as a baseline introduction to a School of Complex Software Systems for the many software engineers working in the company.
In this chapter, the authors describe their experiences in designing, developing, and teaching a course on Software Architecture that they tested both in an academic context with their graduate Computer Science students and in an advanced context of professional updating and training with scores of system engineers in a number of different companies. The course has been taught in several editions over the last five years. The authors describe its rationale, the way in which they teach it differently in academia and in industry, and how they evaluate the students' learning in the different contexts. Finally, the authors discuss the lessons learnt and describe how this experience is shaping the future of the course.
Component-based software engineering (CBSE) has attracted research interest in recent times in different industrial sectors, including the educational domain, because of its perceived advantages over traditional development approaches. However, there is a need to empirically justify this claim through case study reports from several industrial domains. A university, as a complex enterprise, needs an Enterprise Resource Planning (ERP) system to automate its complex operational and administrative procedures for efficiency and effectiveness. However, the peculiarity of each university makes it difficult to obtain a commercial off-the-shelf ERP system that perfectly suits its requirements. This paper reports the application of the CBSE paradigm to the development of a university ERP, specifically an e-Administration system. The research provides a basis to empirically compare the merits of CBSE and traditional development approaches. The case study yielded a usable ERP for a Nigerian university, and concrete empirical data that confirmed the superiority of CBSE over traditional software development.
Many large-scale software systems must service thousands or millions of concurrent requests. These systems must be load tested to ensure that they can function correctly under load (i.e., the rate of the incoming requests). In this paper, we survey the state of load testing research and practice. We compare and contrast current techniques that are used in the three phases of a load test: (1) designing a proper load, (2) executing a load test, and (3) analyzing the results of a load test. This survey will be useful for load testing practitioners and software engineering researchers with interest in the load testing of large-scale software systems.
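The three phases this survey distinguishes can be sketched in miniature: designing a load (here just a rate and a duration), executing it against a stubbed request function, and summarizing the results for analysis. This is an illustrative toy under those assumptions, not a real load-testing harness; `request_fn` is a stand-in for an actual service call.

```python
import threading
import time

def run_load_test(request_fn, rate_per_sec, duration_sec):
    """Phase 2: fire requests at a fixed rate and record per-request latencies."""
    latencies, errors = [], []
    lock = threading.Lock()

    def worker():
        start = time.perf_counter()
        try:
            request_fn()
            with lock:
                latencies.append(time.perf_counter() - start)
        except Exception as exc:
            with lock:
                errors.append(exc)

    threads = []
    interval = 1.0 / rate_per_sec
    end = time.monotonic() + duration_sec
    while time.monotonic() < end:
        t = threading.Thread(target=worker)
        t.start()
        threads.append(t)
        time.sleep(interval)   # open-loop pacing: next request regardless of replies
    for t in threads:
        t.join()

    # Phase 3: reduce raw measurements to a small report for analysis.
    latencies.sort()
    return {
        "requests": len(latencies) + len(errors),
        "errors": len(errors),
        "p50": latencies[len(latencies) // 2] if latencies else None,
        "max": latencies[-1] if latencies else None,
    }

# Phase 1: the "load design" here is simply (rate, duration) against a stub service.
report = run_load_test(lambda: time.sleep(0.001), rate_per_sec=50, duration_sec=1)
print(report)
```

Real tools separate these phases much more carefully (realistic workload models, distributed load generators, anomaly detection on the results), which is precisely the design space the survey maps out.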
The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-22389-6_23
As computer literacy spreads among public servants, the communication between system analysts and users increasingly centers on specification forms that appear as formal and semi-formal documents and spreadsheet-like descriptions. These documents are planned to serve both clients and government officers, and both external and internal processing. For system and business analysts, this new situation requires polishing and improving the readily available methods and methodologies. For accurate interpretation of valid requirements, the system analyst needs approaches that are grounded in formal methods. In an e-government environment, requirements are specified through calculation spreadsheets and/or office documents, so there is a need for a systematic and at least semi-formal approach that focuses on these ubiquitous documents. The models deduced from this document-centric point of view should be placed into an overall Information Systems Architecture. The linkage between the models provides the opportunity for cross-validation and verification to maintain integrity and consistency. The convoluted relationships among the models can be adequately represented by generalized hypergraphs, which offer the chance of a disciplined and correct systems analysis and design procedure.
The modeling of information systems in general, and of Web Information Systems (WIS) in particular, is a perennial issue, so there have already been several attempts and proposals for representing various facets of WIS. In our proposed approach, we focus on organizational and business activity modeling, and we concentrate on documents that represent the information of enterprises in the form of unstructured and semi-structured documents. The compilation of documents mirrors, implicitly or explicitly, the structure of enterprises, the interrelationship of business processes, and the activities and tasks within processes. The documents represent, at the same time, the system roles along with tasks and activities. Our modeling approach concentrates on the co-existence and co-operation of documents and business activities. The Story Algebra, or more generally the process algebra approach, provides a formal framework that promises a method for precisely modeling event-triggered processes coupled with data in document format within an Enterprise Architecture Framework.
Many software systems today have system-of-systems (SoS) architectures comprising interrelated and heterogeneous systems developed by multiple teams and companies. Such systems emerge gradually, and it is hard to analyze or predict their behavior due to their scale and complexity. In particular, certain behavior emerges only at runtime due to complex interactions between the involved systems and their environment. Monitoring the behavior of an SoS at runtime is thus essential during development and evolution. However, existing monitoring approaches are often limited to particular architectural styles or technologies and are thus hard to apply in SoS architectures. In this paper, we first analyze the challenges of monitoring SoS based on an industrial SoS for the automation of metallurgical plants. We then propose a flexible framework for monitoring heterogeneous systems within an SoS. We demonstrate its feasibility by applying it to two systems of an industrial SoS. We also report the results of an evaluation assessing the framework's performance and scalability.
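A monitoring framework of the kind described might, in its simplest form, accept events from heterogeneous systems through a uniform interface and check runtime constraints against them. The sketch below is a minimal, hypothetical reading of that idea; the class, the event shape and the furnace-temperature constraint are invented for illustration and are not the paper's actual framework.

```python
from collections import defaultdict

class SoSMonitor:
    """Technology-agnostic monitor: per-system adapters push events through a
    uniform emit() call, and registered runtime constraints are checked."""

    def __init__(self):
        self.constraints = defaultdict(list)  # event type -> list of predicates
        self.violations = []                  # recorded (event_type, event) pairs

    def add_constraint(self, event_type, predicate):
        """Register a runtime constraint (a boolean predicate) for an event type."""
        self.constraints[event_type].append(predicate)

    def emit(self, source, event_type, payload):
        """Called by adapters of heterogeneous systems; records violations."""
        event = {"source": source, **payload}
        for check in self.constraints[event_type]:
            if not check(event):
                self.violations.append((event_type, event))

monitor = SoSMonitor()
# Hypothetical constraint: reported furnace temperatures must stay below 1600.
monitor.add_constraint("temperature", lambda e: e["value"] < 1600)
monitor.emit("plant_plc", "temperature", {"value": 1450})      # within bounds
monitor.emit("scada_gateway", "temperature", {"value": 1720})  # violation recorded
print(monitor.violations)
```

The adapter idea is what makes such a design style-agnostic: each constituent system only needs a thin shim that translates its native events into `emit()` calls.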
Presented is a mobile-health (m-health) system architecture utilizing Bluetooth and web technologies for remote health data acquisition and monitoring. The proposed system aims at improving chronic disease management, and diabetes management in particular, using a combination of existing patient medical sensors, PDAs and PDA-based Bluetooth technology. This offers a relatively low-cost solution compared with equivalent customized health data acquisition systems. At the patient end, health data are acquired serially from the medical sensors and sent to the mobile devices over Bluetooth connectivity. The acquired data are then transmitted to a remote health hub over the core IP network. The design and implementation of the Wireless Data Acquisition Module (WDAM), the wireless connectivity protocols, and the architecture of the remote health hub applications are presented in this paper. The distributed nature of the proposed system allows for continuous acquisition and monitoring of patients' data anytime, anywhere and by anyone with a modest technological background. Likewise, patients' health providers can continuously monitor the acquired data for better disease management. The performance of the developed data acquisition system is assessed experimentally, and seamless data acquisition and monitoring have been demonstrated.
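To make the serial acquisition step concrete, the sketch below parses framed sensor readings as they might arrive over a Bluetooth serial link. The frame layout assumed here (start byte, sensor id, 2-byte big-endian value, additive checksum) is an illustration only, not the WDAM's actual protocol.

```python
START = 0x7E  # assumed frame delimiter

def parse_frames(stream: bytes):
    """Return (sensor_id, value) tuples recovered from a stream of 5-byte frames."""
    readings = []
    i = 0
    while i + 5 <= len(stream):
        if stream[i] != START:
            i += 1          # resynchronize on the next start byte
            continue
        sensor_id = stream[i + 1]
        value = int.from_bytes(stream[i + 2:i + 4], "big")
        checksum = stream[i + 4]
        # Accept the frame only if the additive checksum matches.
        if checksum == (sensor_id + stream[i + 2] + stream[i + 3]) & 0xFF:
            readings.append((sensor_id, value))
        i += 5
    return readings

# Two well-formed frames: sensor 1 reading 110, sensor 2 reading 72.
frame1 = bytes([START, 1, 0, 110, (1 + 0 + 110) & 0xFF])
frame2 = bytes([START, 2, 0, 72, (2 + 0 + 72) & 0xFF])
print(parse_frames(frame1 + frame2))  # [(1, 110), (2, 72)]
```

Resynchronizing on a start byte and validating a checksum are the standard defenses against the byte loss and corruption that serial Bluetooth links can exhibit.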
This paper presents the design, implementation and overall lifecycle of a software system comprising mobile and web components, which evolved with the following aspects in mind: (1) System Requirements and Architectural Design, (2) System Implementation and Deployment, and (3) System Assessment and Usability Testing. Over three years of development effort, three software prototypes were implemented using service-oriented approaches and were tested with more than 200 users during this period. The outcomes of these activities led to the design and implementation of a system architecture that relies on service-oriented approaches and open standards. Moreover, extensive prototyping with incremental development stages helped to find the balance between the design and implementation of the system while responding to rapid changes in software and web-based technologies. Finally, user testing for assessment of the software system was employed in order to cope with dynamic user requirements. The main outcomes of the efforts described in this paper are presented and summarized in the form of Architectural Concepts that pave the way towards an open, extensible architecture.
To successfully compete in today’s volatile business environments, enterprises need to consolidate, flexibly adapt, and extend their information systems (IS) with new functionality. Component-based development approaches can help solve these challenges, as they support the structuring of IS landscapes into business components with loosely coupled business functionality. However, the structuring process continues to pose research challenges and is not yet adequately supported. Current approaches to support the structuring process typically rely on procedures that cannot be customized to the designer’s situational preferences. Furthermore, they do not allow the designer to identify and reflect on emerging conflicts during the structuring. In this paper, we therefore propose a new method that introduces a rational, reflective procedure to systematically derive an optimized structuring according to situational preferences. Using a design science approach, we show (i) how the derivation of business components can be formulated as a customizable multi-criteria decision making problem and (ii) how conceptual models can be used to derive business components with an optimized functional scope. To evaluate the feasibility of the proposed method, we describe its application in a complex case taken from a German DAX-30 automobile manufacturer.
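The idea of casting component derivation as a customizable multi-criteria decision problem can be sketched as weighted scoring over candidate structurings. The candidate names, criteria and weights below are purely illustrative assumptions, not taken from the proposed method.

```python
# Hypothetical candidate structurings of an IS landscape, each pre-scored
# against three criteria (all names and numbers invented for illustration).
CANDIDATE_STRUCTURINGS = {
    "by_process":  {"cohesion": 0.8, "low_coupling": 0.5, "reuse": 0.6},
    "by_data":     {"cohesion": 0.6, "low_coupling": 0.7, "reuse": 0.5},
    "by_org_unit": {"cohesion": 0.4, "low_coupling": 0.3, "reuse": 0.8},
}

def best_structuring(weights):
    """Pick the candidate with the highest weighted score; the weights encode
    the designer's situational preferences."""
    def score(criteria):
        return sum(weights[name] * value for name, value in criteria.items())
    return max(CANDIDATE_STRUCTURINGS,
               key=lambda cand: score(CANDIDATE_STRUCTURINGS[cand]))

# A designer who prioritizes cohesion over coupling and reuse:
print(best_structuring({"cohesion": 0.6, "low_coupling": 0.3, "reuse": 0.1}))
```

Changing the weight vector changes the winner, which is the sense in which such a procedure is "customizable to situational preferences"; real methods also derive the candidate scores from conceptual models rather than assuming them.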
Based on an analysis and comparison of C/S and B/S architectures, a browser/server/database architecture is presented as the software architecture of a manufacturing integrated service platform (MISP) based on SOA; it follows a multi-level distributed web computing model. Service providers, service consumers and service supporters gather around the platform operators of the MISP by means of the web browser on the client side. Applications from multiple cross-boundary and uncertain organizations are deployed on the web server; these include the function modules of the MISP, such as a preponderant manufacturing resource sharing subsystem, an intelligent and independent design subsystem for the industry, a virtual design and manufacture center, an electronic business and logistics service subsystem, and public service and management for the platform. Large amounts of data can be dynamically integrated and managed in the database server. Using this model to build the platform, partners and applications located in different places can be integrated effectively; the platform is easy to learn, use and extend in the future; and its operators can provide excellent services without interruption.
Dealing with non-functional requirements (NFRs) has posed a challenge to software engineers for many years. Over the years, many methods and techniques have been proposed to improve their elicitation, documentation, and validation. Knowing more about the state of the practice on these topics may benefit both practitioners' and researchers' daily work. A few empirical studies have been conducted in the past, but none from the perspective of software architects, in spite of the great influence that NFRs have on architects' daily practices. This paper presents some of the findings of an empirical study based on 13 interviews with software architects. It addresses questions such as: who decides the NFRs, what types of NFRs matter to architects, how NFRs are documented, and how NFRs are validated. The results are contextualized with existing previous work.
This paper describes our experience with providing architecture as a service to application developers. The approach is an effective way to implement the architecture process, especially, but not only, in the context of agile development. In their role as stakeholders of non-functional system qualities, architects prepare and support the developers by participating in coding activities, and they play a key role in communicating the architecture throughout the lifetime of the project. Especially in an agile context, it is important to build up trust, which is encouraged by the idea of the architects offering their practice-oriented “services” to the development team. Within our agile projects we found that, in addition to a customer representative, it is important to have an “architecture representative” counterpart who is responsible for the system qualities, to keep an adequate balance between the functional and non-functional requirements over the course of the project.