(Do Not) Trust in Ecosystems
Emilia Cioroaica
Embedded Systems Quality Assurance
Fraunhofer IESE
Kaiserslautern, Germany
emilia.cioroaica@iese.fraunhofer.de
Thomas Kuhn
Embedded Systems
Fraunhofer IESE
Kaiserslautern, Germany
thomas.kuhn@iese.fraunhofer.de
Barbora Buhnova
Faculty of Informatics
Masaryk University
Brno, Czech Republic
buhnova@mail.muni.cz
Abstract—In the context of Smart Ecosystems, systems engage
in dynamic cooperation with other systems to achieve their goals.
Expedient operation is only possible when all systems cooperate
as expected. This requires a level of trust between the components
of the ecosystem. New systems that join the ecosystem therefore
first need to build up a level of trust. Humans derive trust from
behavioral reputation in key situations. In Smart Ecosystems
(SES), the reputation of a system or system component can also
be based on observation of its behavior.
In this paper, we introduce a method and a test platform
that support virtual evaluation of decisions at runtime, thereby
supporting trust building within SES. The key idea behind the
platform is that it employs and evaluates Digital Twins, which
are executable models of system components, to learn about
component behavior in observed situations. The trust in the
Digital Twin then builds up over time based on the behavioral
compliance of the real system component with its Digital Twin.
In this paper, we use the context of automotive ecosystems
and examine the concepts for building up reputation on control
algorithms of smart agents dynamically downloaded at runtime
to individual autonomous vehicles within the ecosystem.
Index Terms—Smart Ecosystems, Automotive, Virtual Evaluation, Building Trust, Malicious Behavior
I. INTRODUCTION
Until recently, the software architectures of automotive
platforms focused solely on the static deployment of control
functions. Control functions were deployed and integrated by
engineers of automotive Original Equipment Manufacturers
(OEMs). During system operation, only minor changes were
performed. Future automotive platforms based, e.g., on the
new POSIX-based Adaptive AUTOSAR standard [1] will
support dynamic deployment of automotive smart agents. This
will be an enabler for smart ecosystems, which comprise a
changing number of participants pursuing different goals and
interacting with each other to achieve these goals in the best
possible manner. A smart ecosystem could, for example, require
vehicles entering a city to download smart agents that activate
a speed limit of 30 km/h in these vehicles.
The resulting dynamic architectures will yield new chal-
lenges for the software engineering of ecosystems. For ex-
ample, the admission of smart agents into existing systems
will become a relevant future topic. The admission of a
smart agent into an ecosystem will enable this component
to join an ecosystem, and for example permit a smart agent
to supervise specific control functions of a vehicle, such as
ensuring silent operation at night. Ecosystem admission should
be based on functional correctness and on trust in the third-
party component entering the ecosystem. In our work, we
focus on the latter, as building trust in ecosystem components
requires a new approach to testing, one that predicts the
trustworthiness of a component to handle situations properly,
instead of testing its correct implementation.
In this paper we propose a virtual Hardware-in-the-Loop
(vHiL) testbed that supports rapid runtime evaluation of
smart agents in smart (automotive) ecosystems. Based on this
testbed, we introduce a novel strategy for building trust in
ecosystems and ecosystem components by computing infor-
mation about the reputation of smart agents. To this end, we
employ the concept of Digital Twins (DT) of smart agents,
which are used during operation of the smart agents to assess
their runtime behavior. The evaluation results are then used
to override detected malicious behavior before it takes effect,
and to build up agent reputation over time. In this paper, we
outline the whole approach and propose a solution for two
specific challenges associated with it.
II. EMERGING TRENDS
During the transition of individual embedded systems to
ecosystem participants, formerly isolated systems are equipped
with open interfaces that enable communication and collab-
oration. This opens up possibilities for achieving extended
business goals and facilitating innovation, as third-party ap-
plications may connect to these systems. At the same time,
however, it introduces new research challenges related to
safety and security risks, especially in the context of safety-
critical smart ecosystems. A safety-critical smart ecosystem,
for example, needs to guarantee that the driving speeds in the
vicinity of schools in cities are low.
A. Formation of Digital Ecosystems
Considerable work has been done in analyzing and describ-
ing Software Ecosystems (SECOs) formed around software
products [2]. The recent classification from 2015 [3] emerged
from a similar classification of Systems of Systems [4].
As presented by us in [5], there is a distinction between
Smart Ecosystems (SES) and Software Ecosystems (SECOs).
SES are formed around cyber-physical systems (CPS) and
their software and hardware components can be provided by
different actors, such as the government or an organization,
but also by a software company or an individual developer.
The hardware-software interaction within an SES thus requires
additional checks of trust.
The key difference between digital ecosystems and systems
of systems is that digital ecosystems involve actors with
goals, which significantly influences the dynamics within an
ecosystem [6]. In cooperation, the actors might have not only
collaborative goals, but also competitive goals, which may
influence the health of the ecosystem [7], [8]. In smart ecosys-
tems, where hardware and software components of cyber-
physical systems are provided by different actors, malicious
behavior can be introduced along with software components
by actors who join a smart ecosystem based on declared
collaborative goals, but who are actually acting in competition.
Until now, admission to a digital ecosystem has been based
on the actors’ commitment to published roadmaps organized
and provided by an ecosystem orchestrator for the long term
[9]. Safety-critical ecosystems, however, are particularly faced
with the challenge of intended malicious behavior which may
be hidden in the smart agents. As a consequence, besides being
functionally correct, a safety-critical ecosystem also needs to
assess the participants’ trustworthiness before granting them
admission. Assessing the trustworthiness of ecosystem partic-
ipants requires new platforms that enable behavior evaluation
at runtime.
B. Formation of Trust in Ecosystems
While the risks associated with maliciously behaving actors
in smart ecosystems have been recognized, there is still no
satisfactory solution to the issue. We argue that one of
the most promising directions in this context can be driven by
the formation of trust in smart agents.
Early work on autonomic computing systems suggests that
reputation can be a good indicator of the
level of trust in a system. The authors of [10] propose the
possibility to store information about a system’s reputation
in order to address the need for trustworthiness in potential
partners. However, this approach implies the execution of a
system’s behavior in the real world and hence entails the
risks associated with executing malicious behavior that can
be introduced along with smart agents. A solution for building
reputation in a system without executing its behavior in the real
world has become possible through recent advances in the field
of automation. The notion of a Digital Twin (DT) has been
introduced by NASA [11] as a realistic digital representation
of a flying object used in lab-testing activities. Since then, the
notion of DT has also been adopted in the emerging Industry
4.0 [12] for representing the status of production devices and
to enable forecast of change impacts.
While the concept of the DT has so far only been employed
for simulation and testing detached from object operation, we
see great potential in incorporating its use into an
object’s operation time (the smart agent’s, in our case), as
we propose in this paper. In this way, we can create safe
conditions for building reputation via multiple observations
of a component’s behavior in specific situations. In the smart
ecosystem domain, the reputation of a smart agent can be built
based on collected evidence about the decisions that the smart
agent makes in these specific situations.
III. METHOD FOR BUILDING TRUST
In this section, we introduce our approach to building trust
in smart agents, which is based on the concept of Digital Twins
and their safe real-time evaluation during the runtime of smart
agents. Although the approach is general, we narrow it down
to the context of automotive smart ecosystems, which allows
us to be more specific in the explanation.
In automotive smart ecosystems, a vehicle receives a new
smart agent which interacts with the automotive control soft-
ware as a black box. The vehicle executes this agent on one
of its Electronic Control Units (ECU). Building trust in this
black box requires reputation from a trusted source, in our case
the ecosystem of the city where the car is driving. In order
to build up this trust, our platform (introduced in Section IV)
evaluates a DT of the smart agent in a virtual environment in
three steps:
1) The smart agent is downloaded to the vehicle together
with its corresponding DT. The DT is an executable
description of the algorithm that can be controlled in
a simulated environment. Complementary to the algo-
rithm, the DT defines an acceptable behavior range for
the combination of input and output values and the
internal state of the algorithm.
2) Phase 1 of our approach validates both the correctness
and the trustworthiness of the smart agent by evaluating
its DT behavior in the context of a simulation. The DT
shows a projection of the behavior of the smart agent’s
control algorithm in all situations. This projection yields
an abstracted behavior that reflects the control algo-
rithm’s behavior with bounded accuracy. In this way,
the process of building trust in the smart agent does not
require software execution on a real ECU, but merely
evaluation of the behavior of the DT in a secured virtual
environment (cf. Fig. 1. phase 1).
3) Phase 2 builds trust in the conformity of the smart agent
with its DT (cf. Fig. 1. phase 2). This phase requires
execution of the smart agent on the ECU. Conformity
is checked by validating the behavior of the DT against
the behavior of the real smart agent. This also requires
a trustworthy monitoring platform.
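The steps above can be sketched in code. The following is a minimal, hypothetical Python sketch of a Digital Twin that carries both the projected control algorithm and its acceptable behavior range, together with a Phase-2-style conformity check; all class and function names are our illustrative assumptions, not part of the platform described here.

```python
# Illustrative sketch: a DT bundles the projected control algorithm with an
# acceptable behavior range; conformity of the real agent is checked against
# that range. All names are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class BehaviorRange:
    """Acceptable output envelope for a given input."""
    min_output: float
    max_output: float

    def contains(self, output: float) -> bool:
        return self.min_output <= output <= self.max_output

class SpeedLimitTwin:
    """Executable model of a speed-limiting smart agent (30 km/h zone)."""
    LIMIT = 30.0  # km/h

    def step(self, current_speed: float) -> float:
        # Projected decision of the control algorithm: clamp to the limit.
        return min(current_speed, self.LIMIT)

    def acceptable_range(self, current_speed: float) -> BehaviorRange:
        # Any commanded speed between 0 and the limit is acceptable.
        return BehaviorRange(0.0, self.LIMIT)

def conforms(twin: SpeedLimitTwin, real_output: float,
             current_speed: float) -> bool:
    """Phase-2-style check: does the real agent's output stay within
    the twin's acceptable behavior range for the same input?"""
    return twin.acceptable_range(current_speed).contains(real_output)

twin = SpeedLimitTwin()
assert twin.step(50.0) == 30.0        # twin clamps to the limit
assert conforms(twin, 28.0, 50.0)     # real agent within range
assert not conforms(twin, 45.0, 50.0) # violation: non-conforming
```

Note that Phase 1 only ever executes `step` on the twin inside the simulation, whereas Phase 2 compares the real agent's outputs against `acceptable_range`.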
In this paper, we focus on the concept of Phase 1. It
can be realized with two possible simulation strategies: (1)
Pure predictive simulation is based on a set of well-defined
situations that evaluate DT behavior in a virtual environment,
while (2) linked predictive simulation virtualizes the vehicle’s
current situation and predicts sensor data to reflect a forecast
situation in the near future. Linked predictive simulation evalu-
ates the DT in situations that are not covered by pure predictive
simulation. For example, when a vehicle approaches an
intersection, our platform monitors the DT. The system needs
sufficient time to react to the decisions of the smart agent and
to possibly override them. Therefore, the platform evaluates
the smart agent’s DT with faster simulation speed during
vehicle operation.

Fig. 1. Phases of the Method

If the DT decides to continue driving at
full speed, the outcome of this decision needs to be evaluated
in advance in order to safely override the behavior of the smart
agent, in case it leads to hazardous situations.
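A minimal sketch of this override logic, under assumed names and thresholds (`predict_outcome`, the 30 km/h limit, a 3-second horizon), might look as follows; the rollout runs in microseconds of real time, i.e., much faster than wall-clock time.

```python
# Illustrative sketch of linked predictive simulation: the DT's decision is
# rolled forward faster than wall-clock time, and hazardous outcomes are
# overridden before they take effect. Names and thresholds are assumptions.

def predict_outcome(decision_speed: float, distance_to_crossing: float,
                    horizon_s: float = 3.0, dt: float = 0.1) -> bool:
    """Simulate `horizon_s` seconds of driving ahead of real time; return
    True if the vehicle would enter the crossing above the zone limit."""
    pos = 0.0
    for _ in range(int(horizon_s / dt)):
        pos += decision_speed / 3.6 * dt  # km/h -> m/s
        if pos >= distance_to_crossing:
            return decision_speed > 30.0  # hazardous if above the limit
    return False

def safe_command(dt_decision: float, distance_to_crossing: float) -> float:
    # Override the smart agent's decision if its predicted outcome is hazardous.
    if predict_outcome(dt_decision, distance_to_crossing):
        return 30.0
    return dt_decision

assert safe_command(50.0, 20.0) == 30.0  # full speed is hazardous: overridden
assert safe_command(25.0, 20.0) == 25.0  # compliant decision passes through
```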
For evaluating the outcome of a control algorithm, a holistic
scenario of the situation in which the system operates needs to
be created. When evaluation happens at runtime (i.e., linked
predictive simulation), the situation encountered at runtime
needs to be represented accurately. This is accomplished by
using DTs of systems that are part of the ecosystem and form
the technical situation in which the smart agents are evaluated.
IV. TESTING PLATFORM
Our test platform evaluates the trustworthiness of a smart
agent before it is fully admitted to a smart ecosystem. For
this reason, it must ensure that the control algorithm under
evaluation cannot detect that it is under evaluation, i.e., that
it cannot distinguish whether it is interacting with a real
system, or with a simulation model. This includes addressing
the uncertainties of real and simulated sensors, as well as the
creating realistic scenarios that accurately reflect the technical
situation encountered at runtime or that are anticipated in
the near future. Based on this, our platform needs to deploy
mechanisms that accurately compute the level of trust in an
agent.
The metamodel that describes the logical structure of our
test platform is depicted in Fig. 2. Smart agents are software
components that are dynamically deployed on systems. Digital
Twins represent the behavior of smart agents with bounded
accuracy. DTs are necessary because when a smart agent’s
trustworthiness is evaluated, its DT must not detect whether
it is interacting with real or virtual entities. Consequently,
malicious behavior will be visible in the virtual world before
it takes effect in the real world.
A. (Do Not) Trust the Virtual Evaluation
One possible sandboxing approach is to ensure that the
smart agent cannot detect that it is under evaluation in a
simulated environment. In our opinion, such a sandbox is
almost impossible to create in a safe and secure manner if
the smart agent can implement Turing-complete behavior.
Therefore, we propose reducing the expressiveness of the DT
for the simulation. This assures that the DT cannot implement
detection algorithms.
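One way such reduced expressiveness could look is a DT description that is a finite rule table rather than a program: every evaluation is a single terminating lookup, with no loops, no retained state, and no introspection. The following Python sketch is purely our illustration of the idea, not the actual DT language under development.

```python
# Sketch of a deliberately non-Turing-complete DT description: behavior is
# a finite lookup table over discretized situations. Because the description
# cannot express iteration, state, or clock access, the DT cannot encode a
# simulation-detection algorithm. Rules and keys are illustrative assumptions.

# Each rule maps a discretized situation to an acceptable speed range (km/h).
RULES = {
    ("school_zone", "pedestrian_near"): (0.0, 10.0),
    ("school_zone", "clear"):           (0.0, 30.0),
    ("open_road",   "clear"):           (0.0, 50.0),
}

def evaluate(zone: str, situation: str) -> tuple:
    """Total, terminating evaluation: a single table lookup.
    Unknown situations fall back to the safe default (stop)."""
    return RULES.get((zone, situation), (0.0, 0.0))

assert evaluate("school_zone", "pedestrian_near") == (0.0, 10.0)
assert evaluate("unknown_zone", "clear") == (0.0, 0.0)  # safe default
```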
The simulation is a virtual representation of a vehicle that
is composed of multiple Software Components that imple-
ment for example control software. Virtual HW Components
are specialized software components that implement hard-
ware models, for example sensors, actuators, microprocessors,
memories, etc.
Fig. 2. Metamodel of the Test Platform@Runtime
Given the abstracted behavior of the virtualized hardware
resources, the frequency and value range of the output signals,
as well as signal quality can differ considerably from real-
world components. This offers an opportunity to detect that the
evaluation is happening in a virtual environment. Therefore,
Output Processors as the interface between the DT and the
hardware resource implement signal preprocessing.
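As a hypothetical sketch of such an Output Processor, the ideal simulated value can be post-processed so that its noise and quantization resemble a real sensor; the noise level and ADC resolution below are illustrative assumptions, not measured parameters.

```python
# Illustrative Output Processor sketch: post-processes an ideal simulated
# sensor value so that its quantization and noise resemble a real hardware
# signal. All parameters (resolution, noise level) are assumptions.
import random

def output_processor(ideal_value: float, resolution: float = 0.5,
                     noise_std: float = 0.2, seed: int = 42) -> float:
    rng = random.Random(seed)                        # deterministic for demo
    noisy = ideal_value + rng.gauss(0.0, noise_std)  # add sensor-like noise
    quantized = round(noisy / resolution) * resolution  # ADC-style steps
    return quantized

out = output_processor(20.0)
assert out % 0.5 == 0.0        # output lies on the quantization grid
assert abs(out - 20.0) < 2.0   # still close to the ideal value
```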
Abstract platform simulations create the opportunity for
malicious software algorithms to detect whether they are
interacting with virtual components or real systems, e.g., by
sending commands to challenge the platform and provoke
known responses. For example, by monitoring response times,
or monitoring the consistency of sensor signals, a DT can
detect a simulation. Therefore, besides reducing the expres-
siveness of the language for programming a DT, we also need
to reduce its abilities to monitor its surroundings. The Software
Wrapper component provides mechanisms that ensure that
DTs can neither retain their state nor
observe the passing of time. The platform interface of the
Communication Middleware permits only very specific and
limited interactions between DTs and platforms. Furthermore,
the Monitor Behavior of SW Component monitors the behavior
of software components in order to detect suspicious interac-
tion patterns.
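The wrapper's restrictions can be illustrated with a small sketch: each invocation of the DT receives sanitized inputs and no carried-over state, and any attempt to observe time is rejected. The wrapper API below is our assumption for illustration only.

```python
# Sketch of a Software Wrapper that denies a DT the abilities ruled out
# above: retaining state across invocations and observing the passage of
# time. The API and field names are illustrative assumptions.

class SandboxViolation(Exception):
    pass

class SoftwareWrapper:
    def __init__(self, dt_step):
        self._dt_step = dt_step  # the DT step: a pure inputs -> outputs map

    def invoke(self, inputs: dict) -> dict:
        # Reject anything that could serve as a clock.
        forbidden = {"wall_time", "timestamp", "tick", "uptime"}
        if forbidden & inputs.keys():
            raise SandboxViolation("DT attempted to observe time")
        # Each call gets a fresh copy of the inputs and no retained state,
        # so the DT cannot accumulate observations across invocations.
        return self._dt_step(dict(inputs))

wrapper = SoftwareWrapper(lambda i: {"cmd_speed": min(i["speed"], 30.0)})
assert wrapper.invoke({"speed": 45.0}) == {"cmd_speed": 30.0}
try:
    wrapper.invoke({"speed": 45.0, "timestamp": 1234.5})
    raise AssertionError("expected SandboxViolation")
except SandboxViolation:
    pass  # time observation was blocked
```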
B. (Do Not) Trust the Smart Agent
Trust in a smart agent is built up over time via the concept
of reputation. Our platform enables predictive simulation,
which involves the execution of algorithm behavior at much
higher speeds than wall clock time, as well as simulation
of the same situations over and over again. This can expose,
for example, hidden behavior whose activation is quite unlikely.
The reputation of a smart agent is influenced by monitoring
Functional Properties and Non Functional Properties, such
as reaction times, of the smart agent behavior. Actors, i.e.,
organizations that provide the system or its components, as
well as users, define Expectations towards other actors. The
reputation of an actor is therefore also linked to the Expectations
that another actor
has in the context of a collaboration. Therefore, the reputation
of an actor introducing a system to the market needs to be
considered as well.
Smart agents start with an initial level of trust when they
join the ecosystem. This level is updated according to evidence
continuously derived from the virtual evaluation of the smart
agents’ Digital Twins. This includes multiple behavior execu-
tions in a virtual environment, uncommon situations, reference
behavior, or situations that have been recorded before. This
increases the probability that malicious behavior will show up
in simulations and not when controlling real systems.
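A minimal sketch of such evidence-based updating is given below: each virtual evaluation folds one observation into the trust level as an incremental weighted average. The particular weighting scheme is our assumption, not one prescribed by the platform.

```python
# Sketch of evidence-based reputation: trust starts at an initial level and
# is updated with each virtual-evaluation outcome via an incremental
# weighted average. The update rule is an illustrative assumption.

class Reputation:
    def __init__(self, initial_trust: float = 0.5):
        self.trust = initial_trust
        self.observations = 0

    def record(self, compliant: bool, weight: float = 1.0) -> None:
        """Fold one evaluation outcome into the trust level; higher weight
        models more severe or more informative observations."""
        self.observations += 1
        outcome = 1.0 if compliant else 0.0
        alpha = weight / (self.observations + weight)
        self.trust = (1 - alpha) * self.trust + alpha * outcome

rep = Reputation()
for _ in range(10):
    rep.record(compliant=True)
assert rep.trust > 0.9                   # repeated compliance builds trust
rep.record(compliant=False, weight=5.0)  # one severe violation weighs heavily
assert rep.trust < 0.9
```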
V. DISCUSSION AND CONCLUSION
Technological progress opens up opportunities for business
growth, but despite the existence of safety standards, it also
leads to higher safety and security risks. With every new major
accident, standards are reconsidered and improved. However,
the process of improving the standards takes much longer than
the deployment of methods and tools that can reduce these
risks. Deployment of platforms that enable virtual verification
of smart agents at runtime represents our vision for tackling risks
and building trust in safety-critical ecosystems.
A. State of the Work and Preliminary Results
Our platform is based on the FERAL simulator [13] and comes
as an extension of the platform we describe in [5], which
enables testing of automotive smart ecosystems. The platform
already supports the testing of control algorithms to avoid ob-
stacles. Virtual entities and real world entities collaborate with
each other within the platform to jointly avoid obstacles. Our
method and platform extension presented in this paper enhance
the level of trust in smart agents with special attention on the
importance of performing trustworthy virtual evaluation.
B. Future and Ongoing Work
The scope of the entire concept encompasses many inter-
esting research questions for further investigation such as (a)
checking conformity between the DT and the smart agent
and (b) creating Digital Twins that accurately represent the
behavior of the smart agent. In addition, we are currently
researching a DT language with reduced expressiveness for
functional models. The language must not be Turing complete
so that detection of simulated environments can be prevented,
and it should enable identification of malicious smart agents
based on their Digital Twins. Furthermore, our platform could
identify situations in which malicious behavior is exposed.
Future work will include the implementation of Phase 2, which
guarantees conformity of smart agents with their DT behavior
and implements countermeasures in case of non-conforming
behavior.
ACKNOWLEDGMENT
This work has been funded by the German Ministry of
Education and Research (BMBF) through the research project
CrESt (Collaborative Embedded Systems). The contribution
of B. Buhnova was supported by ERDF/ESF “CyberSecurity,
CyberCrime and Critical Information Infrastructures Center of
Excellence” (No. CZ.02.1.01/0.0/0.0/16019/0000822).
REFERENCES
[1] “AUTOSAR,” https://www.autosar.org/, [Online; accessed 03-
September-2018].
[2] K. Manikas, “Revisiting software ecosystems research: A longitudinal
literature study,” Journal of Systems and Software, vol. 117, pp. 84–103,
2016.
[3] K. Manikas, K. Wnuk, and A. Shollo, “Defining decision making
strategies in software ecosystem governance,” Department of Computer
Science, University of Copenhagen, 2015.
[4] “Office of the deputy under secretary of defense for acquisition and
technology, systems and software engineering. systems engineering
guide for systems of systems,” Version 1.0. Washington, DC, ODUSD(A
and T)SSE, 2008.
[5] E. Cioroaica, T. Kuhn, and T. Bauer, “Prototyping automotive smart
ecosystems,” in 2018 48th Annual IEEE/IFIP International Conference
on Dependable Systems and Networks Workshops (DSN-W). IEEE,
2018.
[6] K. Manikas and K. M. Hansen, “Reviewing the health of software
ecosystems–a conceptual framework proposal,” in Proceedings of the
5th International Workshop on Software Ecosystems (IWSECO), 2013,
pp. 33–44.
[7] K. M. Popp, “Goals of software vendors for partner ecosystems – a
practitioner’s view,” in International Conference of Software Business.
Springer, 2010, pp. 181–186.
[8] J. Bosch and H. H. Olsson, “Ecosystem traps and where to find them,”
Journal of Software: Evolution and Process, p. e1961, 2018.
[9] J. Bosch and P. Bosch-Sijtsema, “From integration to composition:
On the impact of software product lines, global development and
ecosystems,” Journal of Systems and Software, vol. 83, no. 1, pp. 67–76,
2010.
[10] J. O. Kephart and D. M. Chess, “The vision of autonomic computing,”
Computer, no. 1, pp. 41–50, 2003.
[11] M. Shafto, M. Conroy, R. Doyle, E. Glaessgen, C. Kemp, J. LeMoigne,
and L. Wang, “Modeling, simulation, information technology & process-
ing roadmap,” National Aeronautics and Space Administration, 2012.
[12] R. Rosen, G. Von Wichert, G. Lo, and K. D. Bettenhausen, “About the
importance of autonomy and digital twins for the future of manufactur-
ing,” IFAC-PapersOnLine, vol. 48, no. 3, pp. 567–572, 2015.
[13] T. Kuhn, T. Forster, T. Braun, and R. Gotzhein, “FERAL – framework for
simulator coupling on requirements and architecture level,” in Formal
Methods and Models for Codesign (MEMOCODE), 2013 Eleventh
IEEE/ACM International Conference on. IEEE, 2013, pp. 11–22.