Americas Conference on Information Systems
Partner Programs and Complementor
Assessment in Platform Ecosystems:
A Multiple-Case Study
Completed Research
Martin Engert
fortiss GmbH &
Technical University of Munich
martin.engert@tum.de
Andreas Hein
Technical University of Munich
andreas.hein@tum.de
Helmut Krcmar
Technical University of Munich
helmut.krcmar@tum.de
Abstract
Digital platform ecosystems are an omnipresent phenomenon. Compared to traditional modes of
interaction, digital platforms rely on complementary products and services that autonomous partners provide. However, adequate measures to assess the output of complementors are not readily available and
lack theoretical grounding. Thus, the goal of this paper is to explore and organize criteria and related metrics
for the assessment of complementor outputs. We conduct a multiple-case study on 14 partner programs of
B2B software platforms. Then, we develop a taxonomy comprising different complementor outputs in
digital platform ecosystems. The taxonomy comprises 26 criteria for two complementor roles and respective
metrics applied by platform owners for their evaluation. Furthermore, we describe characteristics of partner
programs such as variations in assessment modes and intervals. Our findings support platform owners
when creating and updating their partner programs and provide the basis for future work on the assessment
of complementor output.
Keywords
Digital Platform Ecosystem, Partner Management, Governance, Assessment, Multiple Case Study
Introduction
Digital platforms have fundamentally changed the way we interact and conduct business. Through the
digitization of business processes and increased availability, the trend toward a software-driven economy
has been further accelerated. Companies with a digital platform such as Apple, Google, Microsoft, and
Amazon are dominating the global economy, a circumstance referred to as the platform economy (Evans
and Gawer 2016). The main reason for this dominating role is the utilization of network effects through the
orchestration of interactions among multiple parties (Tiwana et al. 2010). In this role, the platform owner
develops, oversees, and grows an ecosystem of autonomous actors around a stable and reliable core
(Staykova 2018). The platform core provides a key functionality, which is consumed by users and extended
through complementary services and applications (summarized as complements). Third-party developers
(subsumed as complementors or partners) create those complements based on a focal value proposition
(Hein et al. 2018; Manner et al. 2013; Tiwana et al. 2010). For instance, Salesforce provides a Customer-
Relationship-Management (CRM) tool as the platform core functionality. In turn, third-party developers
extend the CRM tool with consumable add-on services such as accounting, billing, and task monitoring
applications. To facilitate value-creating mechanisms in the platform ecosystem, the platform owner
implements governance mechanisms (Hein et al. 2019a). While governance mechanisms support value co-
creation within the ecosystem, platform owners are challenged with limited transparency of complementor
activities and knowledge on the status quo of the ecosystem (Fotrousi et al. 2014; Plakidas et al. 2017;
Tiwana 2014). In this regard, partner programs are a governance mechanism that is being applied in the
majority of software ecosystems, which creates transparency on complementors’ contributions and
performance levels through the assessment of complementors. Based on predefined requirements, including
revenue thresholds and platform certifications, platform owners can segment complementors individually
into different partner levels. Each level is then associated with certain benefits for complementors such as
rebates, closer collaboration opportunities, and conference invitations (Avila and Terzidis 2016; Wareham
et al. 2014). Thus, partner programs categorize complementors based on an individual assessment.
However, while metrics-based approaches to ecosystem governance yield great potential for platform
owners to make well-informed decisions, these approaches lack theoretical grounding (Fotrousi et al. 2014;
Plakidas et al. 2017). Prior work on ecosystem governance mainly focused on qualitative aspects of
governance, such as identification of mechanisms and processes (Hein et al. 2019c; Weiß et al. 2018). A
platform owner experiencing a decline in app users, for instance, needs information on changes in app and service quality, such as customer satisfaction scores. Thus, the assessment of complementors as a basis for
monitoring and decision-making is an important activity for platform owners, receiving modest attention
in IS research. Therefore, we pose the following research question: What are metrics-based approaches for
the governance of complementors in digital platform ecosystems in practice?
To answer this research question, we conduct a multiple case study of digital business-to-business (B2B)
ecosystems and analyze their respective partner management programs to identify requirements and
subsequent metrics for their partner assessment strategies. We present our findings in a taxonomy for
complementor assessment in digital platform ecosystems.
Background
Digital Platform Ecosystems and Governance of Complementors
The concept of digital platforms and their ecosystems has been studied for more than a decade. In our
understanding of digital platform ecosystems, we adhere to the recent definition of Hein et al. (2019a), who
define digital platform ecosystems to comprise “[..] a platform owner that implements governance
mechanisms to facilitate value-creating mechanisms on a digital platform between the platform owner and
an ecosystem of autonomous complementors and consumers.” This definition highlights governance
mechanisms and their evolution as a central aspect to digital platform ecosystems from a platform owner’s
perspective.
Along the prototypical platform lifecycle following Tan et al. (2015), platform owners need different sets of
capabilities for managing complementors. In a platform’s early stages, owners must attract complementors
to join the ecosystem and enable them to interact with the demand side to initiate network effects (Engert
et al. 2019; Tan et al. 2015). Prominent examples of platforms failing to coordinate a sufficient level of
interactions are Google Video and Yahoo Video (Schirrmacher et al. 2017). In the formative stage that
follows, providing and refining boundary resources for complementors such as Software Development Kits
(SDKs) and Application Programming Interfaces (APIs) is one of the main tasks for platform owners to
align participants and steer platform evolution to be more open (Ghazawneh and Henfridsson 2013; Tan et
al. 2015). At the same time, platform owners must focus on balancing different upcoming tensions in
platform governance. Central tensions occur in the context of individual and standardized governance
modes (Huber et al. 2017), competitive as opposed to cooperative approaches (Foerderer et al. 2018), and
higher levels of autonomy compared to strict control (Boudreau 2010). An example of a platform failing to
balance competitive and cooperative approaches was Sega in the videogame industry. It provided technical
support and programming tools to internal developers before making them generally available, giving in-house production studios a significant head start. This made external developers leave the ecosystem
because of an uneven playing field and ultimately fueled Sega’s demise (Cennamo 2018). Finally, the
maturity stage challenges owners with strengthening relationships among ecosystem participants and
promoting collectivism within the platform, increasing dependability, and fostering lock-in (Tan et al.
2015).
Although this cycle does not apply to every platform setting, establishing and adjusting ecosystem-wide
rules and norms along the evolution of a platform is essential for ecosystem governance of native and
incumbent companies (Hein et al. 2019b; Schreieck et al. 2018).
Partner Programs for Governing Complementors
Partner management programs, often divided into partnership levels comprising specific entry requirements and benefits, are one of the core mechanisms for platform owners to manage complementors
(Wareham et al. 2014). These programs describe the rules for complementors to join ecosystems and
participate in the transactions across the platform. Structuring and tailoring individual rights and duties
within partner management programs is a common practice in digital ecosystems such as Salesforce,
Microsoft Azure, Magento, and many others. Furthermore, these programs explicitly state the desired
activities and contributions of complementors toward the ecosystem, thus being a rule-book for individual
ecosystem behaviors as the basis for value co-creation (Sarker et al. 2012). These third-party activities
include the development of new applications, selling and implementing products and services, as well as
co-marketing activities (Hein et al. 2019a). Partner programs distinguish between different groups of partners based on two key characteristics: their contributions to the ecosystem and their performance.
For instance, partners with specific achievements may obtain prime access to new platform features, code
libraries, or priority listing in a marketplace (Wareham et al. 2014). To be able to distribute complementors
into these different partner levels, partner programs define entry requirements and thresholds for
complementor performance. This poses a challenge to define meaningful criteria and metrics to assess
complementors.
Assessment and Evaluation in Digital Platform Ecosystems
Research on digital platform ecosystems to date has focused on ecosystem-wide evaluation instead of
individual assessments as shown in Table 1.
| Concept | Measures | Description / Metrics | Source |
|---|---|---|---|
| Ecosystem Health | Productivity | Total Factor Productivity; Productivity Improvement Over Time; Delivery of Innovations | Iansiti and Levien (2004) |
| | Robustness | Survival Rates of Participants; Persistence of Ecosystem Structure; Limited Obsolescence; Continuity of Use Experience & Use Cases | |
| | Niche Creation | Variety; Value Creation | |
| Ecosystem Evolution | Resilience | Recovery Time After Outside Failure | Tiwana (2014) |
| | Scalability | Subsystem Latency; Responsiveness; Shift of Subsystem Financial Break-Even Point per 1,000 Users | |
| | Composability | Integration Effort [h] per Internal Change | |
| | Stickiness | Change in Hours per End-User Session; Change in Averaged End-User Sessions per Week Over Time; Change in API Calls Made by an App on Avg. Over Time | |
| | Platform Synergy | Change in Number of Functions Called by App to APIs Unique to Platform | |
| | Plasticity | Avg. Count of Major Features Added per Release Over Lifetime | |
| | Envelopment | Count of Successful Envelopment Moves; Count of Envelopment Attacks Rebuffed; Percentage of New Subsystem Adopters Using Enveloped Functionality | |
| | Durability | Change [%] of a Subsystem's Initial Adopters Remaining Active Users; Change [%] of Apps Released that are Subsequently Updated at Least Once a Year | |
| | Mutation | Number of Unrelated Derivative Platforms Relative to Rival Platforms; Carryover Users [%] at Outset of Derivative Subsystems; Growth of an App Into a Platform | |
| Complementor Assessment | | Engagement Level; Customer Satisfaction; Service Quality; Lead Conversion Rate; Continuity; Sustainability of Partner Activities; Training Participation | Avila and Terzidis (2016) |

Table 1 Prior Work on Assessment and Evaluation in Digital Ecosystems
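To make one of the metrics in Table 1 concrete, the following sketch computes the share of released apps that are subsequently updated within a year, a durability measure in the spirit of Tiwana (2014). All release data here are invented for illustration.

```python
from datetime import date

# Hypothetical app records: (release date, date of first subsequent update or None).
apps = [
    (date(2018, 1, 15), date(2018, 9, 1)),
    (date(2018, 6, 1), None),               # never updated
    (date(2019, 2, 10), date(2019, 8, 20)),
    (date(2019, 5, 5), date(2021, 1, 2)),   # updated, but more than a year later
]

def durability_share(apps, max_days=365):
    """Percentage of released apps that received an update within `max_days` of release."""
    updated = sum(
        1 for released, first_update in apps
        if first_update is not None and (first_update - released).days <= max_days
    )
    return updated / len(apps) * 100

print(durability_share(apps))  # 50.0
```

Two of the four hypothetical apps are updated within a year, yielding a durability share of 50%.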
Ecosystems are particularly difficult to assess because of their complexity and the level of signals and noise,
which platform owners must make sense of (Tiwana 2014). However, metrics remain crucial for tracking
and steering ecosystem evolution. Thus, Tiwana (2014) proposes nine criteria of evolution in platform
ecosystems. Further, building on the early work of Iansiti and Levien (2004), several contributions have
dealt with the assessment of ecosystem health as an overarching concept (e.g., den Hartigh et al. 2013
and Jansen 2014). Ecosystem health is assessed via productivity, robustness, and niche creation (Iansiti
and Levien 2004). Despite the mentioned contributions, Hyrynsalmi and Mäntymäki (2018) note that the
measures used to assess ecosystem health are mainly based on easy-to-collect metrics, and the fuzzy
terminology around “ecosystems” leads to problems when comparing the results. Moreover, Fotrousi et al.
(2014) conducted a literature analysis and found different KPIs for software ecosystems, clustering them
along with their objectives, measured entities, and measurement attributes. They found that ecosystem
actors — i.e., complementors as one measured entity — are regularly evaluated regarding their
fulfilled tasks, decisions, and financial performance. This aspect has been stressed by Avila and Terzidis
(2016), who emphasize the importance of continuous performance measurement as a core task in partner
management of digital platform ecosystems. Furthermore, they highlight that a partner evaluation must be
comprehensive and include an assessment of various criteria.
In sum, the literature on evaluation and assessment is still scarce and focuses primarily on evaluating
the overall ecosystem and its health or evolution. Further, work on individual assessment of complementors
is in its infancy, with Avila and Terzidis (2016) being one of the few contributions mentioning metrics
suitable for complementor assessment. Thus, this research aims at extending our understanding by
examining assessment strategies for complementors from practice.
Research Design
To better understand the strategies applied by digital platform owners when assessing their complementors
within partner programs, we conducted a multiple case analysis on partner programs of digital platform
ecosystems with 14 cases, as shown in Table 2 (Eisenhardt 1989; Yin 2018). We used 11 publicly accessible partner programs of B2B software platforms, augmented with three partner programs that require complementors to request information on partnering possibilities; we therefore pseudonymized the names of these three companies. The cases selected are digital platform ecosystems in the B2B domain; more precisely,
all of them are software platforms. The cases differ, however, regarding their specific industry, overall size,
type of partners, and the number of partners. These differences allow us to draw important cross-case
results, leading to generalizable conclusions.
| # | Company Name | # | Company Name | # | Company Name |
|---|---|---|---|---|---|
| 1 | Dell Technologies | 6 | ServiceNow | 11 | Magento |
| 2 | Proofpoint | 7 | Snow Software | 12 | Commerce Corp. |
| 3 | Red Hat | 8 | Vidyo | 13 | Pricing Corp. |
| 4 | Salesforce | 9 | VMware | 14 | Security Corp. |
| 5 | SAS | 10 | Zuora | | |

Table 2 Companies included in the Multiple Case Study
A multiple-case analysis is suitable for this inquiry because we aim to describe complementor assessment
criteria and associated metrics from practice to extend research on the governance of digital platform
ecosystems (Benbasat et al. 1987). Based on the multiple case study, we develop a taxonomy for
complementor assessment in three iterations, applying an empirical-to-conceptual approach as proposed
by Nickerson et al. (2013). As the meta-characteristic, we chose the criteria for partner management
programs for assessing complementors. In addition to the objective ending conditions proposed by
Nickerson et al. (2013), we defined three subjective ending conditions. First, the final taxonomy should be
comprehensive for all partner programs we examined. Second, the taxonomy must be extendible for future
work on complementor assessment. Lastly, the final taxonomy must be concise regarding its single items.
The objective ending conditions ensure the generalizability and completeness of our findings from all cases.
Partner programs are a rich source of information on complementor governance and the rules and
measures applied by platform owners. Information on partner programs is communicated via company
websites using explicit partner program guides or partner program presentations. We followed the
guidelines of Yin (2018) regarding sampling strategy, data collection, and analysis. Further, we augmented
this approach with selected procedures for coding from grounded theory, according to Corbin and Strauss
(1990). Thus, we applied open, axial, and selective coding when deriving the taxonomy from the available
data, as shown in Table 3, iterating until we arrived at an exhaustive taxonomy that met our initial ending criteria.
| Excerpt from Partner Programs | Concepts | Categories |
|---|---|---|
| "Platinum partners must name an executive sponsor to discuss partnership status and the joint business plan on a regular basis with their SAS executive sponsor." | 1. Executive sponsor [y/n]; 2. Joint business plan [y/n] | Assignment of executive sponsor [y/n]; Joint business planning [y/n; quarterly/annually] |
| "Gold and Platinum partners are expected to participate in quarterly business reviews jointly with Proofpoint." | 3. Business review [quarterly] | Joint business planning [y/n; quarterly/annually] |

Table 3 Illustration of Coding Scheme
Results
Characteristics of Partner Programs in Digital Platform Ecosystems
Based on the analysis of the partner management programs, we first identified four general characteristics
of partner management programs in the domain of B2B software platforms. First, all partner management
programs comprise several partner levels, most commonly structured as three- or four-level systems. Usually, there is a Basic or Registered level for newly registered entrants with only
minimal requirements such as signing an agreement, choosing a partner category, and creating a partner
profile. Further, the programs comprise two or three partner levels beyond the basic level with similar
activities but different requirements regarding their performance levels. These levels are usually labeled
Bronze, Silver, and Gold or Platinum. Overall, programs differ little in structure.
Second, most partner management programs we studied differentiate between two partner roles that complementors can take. On the one hand, there are Sales and Implementation partners, which are characteristic of B2B
contexts. Complementors in this role are often technical consultancies, which approach potential customer
firms to sell the platform core product and some additional features to them. Additionally, they implement
and fit the product sold to these customers’ needs and the current IT landscape. On the other hand, there
are Development partners. These partners provide applications and digital services to the customers via the
platform. Salesforce AppExchange is a well-known example of such a marketplace. While development
partners significantly broaden the scope of the platform value proposition, not all platforms have this kind
of partnership in their partner programs. However, every partner program we studied provided for sales
and implementation partners.
Third, the assessment interval is one critical differentiator between partner programs. The basic
assessment period for partners in all partner programs is one year, at the end of which platform owners evaluate whether a partner may stay in their assigned level or must move up or down. Many programs have a fixed annual assessment interval with a predetermined date for partner evaluation. As opposed to fixed
assessment, rolling assessment is a dynamic approach to partner performance evaluation. Rolling
assessment is based either on a quarterly or a daily performance measure, considering the performance of
the last four quarters or 365 days of the partnership, respectively. Only two of our studied cases applied
daily rolling assessment intervals within their partner programs, reflecting the increased complexity of daily performance assessment.
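As a minimal sketch of the difference between fixed and rolling assessment intervals, the following snippet contrasts a calendar-year revenue check with a trailing 365-day window. All transaction data are invented for illustration.

```python
from datetime import date, timedelta

# Hypothetical partner transaction records: (transaction date, revenue in US$).
transactions = [
    (date(2019, 3, 10), 40_000),
    (date(2019, 11, 2), 25_000),
    (date(2020, 2, 20), 30_000),
]

def fixed_year_revenue(txns, year):
    """Fixed assessment: sum revenue booked in one calendar year."""
    return sum(amount for d, amount in txns if d.year == year)

def rolling_revenue(txns, as_of, window_days=365):
    """Rolling assessment: sum revenue in the trailing window ending at `as_of`."""
    start = as_of - timedelta(days=window_days)
    return sum(amount for d, amount in txns if start < d <= as_of)

print(fixed_year_revenue(transactions, 2019))           # 65000: calendar-year 2019
print(rolling_revenue(transactions, date(2020, 3, 1)))  # 95000: trailing 365 days
```

The rolling window credits the partner for recent performance regardless of calendar boundaries, which is why daily rolling assessment demands continuously updated performance data.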
Lastly, we found that partner programs differ in their assessment mode. Most platforms apply a checklist
approach, meaning a static check of whether all requirements for a certain partner level have been met. If one requirement is not met, the complementor fails to move up or remains in their current level.
Further, we found that four of the examined cases applied aggregated assessment modes. The most
prominent case performing an aggregated complementor performance assessment is Salesforce with its
“Consulting Partner Trailblazer Score.” Salesforce uses a scoring system with predefined and weighted
categories and sub-categories as well as maximum points to be achieved in these categories. This leaves
complementors the choice of specialization to accumulate points in different areas, increasing
complementor heterogeneity within the ecosystem.
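The aggregated assessment mode can be sketched as a weighted, capped scoring scheme. The categories, weights, and caps below are purely illustrative and are not taken from Salesforce's actual Consulting Partner Trailblazer Score.

```python
# Hypothetical weighted scoring sketch for an aggregated assessment mode.
CATEGORIES = {
    # category: (weight, maximum points achievable in that category)
    "customer_success": (0.4, 40),
    "innovation":       (0.3, 30),
    "engagement":       (0.3, 30),
}

def aggregated_score(raw_points: dict) -> float:
    """Cap raw points per category, then combine them using category weights."""
    total = 0.0
    for category, (weight, cap) in CATEGORIES.items():
        points = min(raw_points.get(category, 0), cap)  # cap limits over-specialization
        total += weight * points
    return total

# A partner specializing in customer success can compensate for weaker
# engagement, but only up to the category cap:
print(aggregated_score({"customer_success": 55, "innovation": 20, "engagement": 10}))  # 25.0
```

Unlike the checklist mode, a shortfall in one category can be offset elsewhere, which is what allows complementors to specialize and increases heterogeneity in the ecosystem.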
Evaluation Criteria and their Metrics in Performance Assessment
The evaluation criteria and related metrics within partner management programs to assess complementors
differ from platform to platform and depend on the chosen partner role of the respective complementor. At
the same time, these criteria are evaluated in every assessment interval using a predefined assessment
mode. Some criteria are assessed only for complementors in more advanced partner levels. Table 4 depicts
the taxonomy of the complementor assessment derived from the multiple case study.
General Criteria
As general criteria, we identified measures that are important to establish a close, collaborative
relationship between platform owner and complementor. As such, almost all platforms required
complementors of higher partner levels to engage in annual or quarterly joint business planning for close
strategic alignment. For example, SAS uses joint business planning to set revenue goals, marketing
activities, and support activities to get there. Another criterion is the assignment of dedicated employees to
coordinate the partnership efforts, which are assessed via the number and positions of the assigned
employees. Higher-level partners must further assign an executive-level sponsor, who joins business
planning and discuss partnership status to force higher levels of partner commitment. Snow Software,
which provides software for software asset management, includes a Platinum Plus Partnership that requires partners to work with Snow Software exclusively. This criterion is unique to Snow Software's
Partner Program.
Criteria for Sales and Implementation Partners
For partners in the sales and implementation role, we distinguish between three categories of criteria, as
shown in Table 4: expertise-, performance-, and marketing-related criteria.
| Category | Criteria | Metric / Example |
|---|---|---|
| Relationship | Joint Business Planning | y/n [quarterly / annually] |
| | Assignment of Employees | # of Employees Assigned; Types of Employees Assigned |
| | Exclusivity to Platform | y/n |
| | Assignment of Executive Sponsor | y/n |
| Sales & Implementation: Expertise | Certification of Organization | # of Certifications; Types of Certifications |
| | Certification of Employees | # of Certified Employees; # of Certifications of Employees [overall or individually]; Types of Certifications |
| | Growth in Certified Employees | % Growth-Rate of # of Certified Employees |
| | Training for Employees | # of Trainings Taken; Type of Training |
| Sales & Implementation: Performance | Successful Implementations | # of Successful Implementations |
| | Referenceable Customers | # of Referenceable Customers |
| | Basic Revenue | in US$ |
| | Annual Contract Value (ACV) | in US$ |
| | New Business Proportion of Revenue | in US$ |
| | Deal Volume of Referrals/Deals | in US$ |
| | Growth of Revenue / ACV | % Growth-Rate of Revenue/ACV |
| | Customer Success Stories Submitted | # of Customer Success Stories Submitted |
| | Customer Satisfaction Score (CSAT) | Min. Points within Predefined Scale |
| Sales & Implementation: Marketing | Provision of Marketing Material | Sales Battlecard; Data Sheet; Presentation; Service Catalog |
| | Co-Marketing Activities | Co-Branding on Website; # of Co-Marketing Activities/Campaigns |
| | Financial Marketing Commitment | in US$ |
| Development: Expertise | Use of Latest Platform Technology | y/n |
| | Certification of Application | Types of Certifications; Show Self-Validation Test Plan and Results |
| Development: Performance | Provision of Application | y/n |
| | Users of Application | # of Application Users |
| | Installations by New Customers | # of New Customer Installations |
| | Provision of Customer Support | y/n |

Table 4 Taxonomy of Complementor Assessment
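As an illustration of the checklist assessment mode applied to criteria of this kind, the following sketch evaluates a hypothetical partner against a handful of requirements. All level names, thresholds, and partner data are invented and do not reflect any specific program studied.

```python
# Minimal checklist-mode sketch: a partner holds a level only if every
# requirement for that level is met. Thresholds are hypothetical examples.
GOLD_REQUIREMENTS = {
    "certified_employees":      lambda p: p["certified_employees"] >= 10,
    "annual_revenue_usd":       lambda p: p["annual_revenue_usd"] >= 500_000,
    "customer_success_stories": lambda p: p["customer_success_stories"] >= 2,
    "joint_business_planning":  lambda p: p["joint_business_planning"],
}

def meets_level(partner: dict, requirements: dict) -> bool:
    """Static check: a single unmet requirement fails the whole level."""
    return all(check(partner) for check in requirements.values())

partner = {
    "certified_employees": 12,
    "annual_revenue_usd": 450_000,
    "customer_success_stories": 3,
    "joint_business_planning": True,
}
print(meets_level(partner, GOLD_REQUIREMENTS))  # False: revenue threshold unmet
```

This all-or-nothing logic is what distinguishes the checklist mode from the aggregated scoring mode, in which strong performance in one criterion can compensate for weaker performance in another.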
For one, complementors engaging in sales and implementation activities are assessed regarding their
respective expertise in different areas. Every platform we examined provided specific certification
programs for complementors to acquire and demonstrate their expertise in technical and sales-related
fields. Certifications can usually be acquired for individual employees or the complementor as an
organization. Metrics for evaluation of these criteria are based on the number and type of certification
and/or the number of newly certified employees. Notably, Salesforce used the growth in the number of
certified employees as a separate criterion, providing one of the few relative measures found in our analysis.
However, the core assessment for sales and implementation complementors is related to performance
measures. Typically, the total annual revenue created or annual contract value is evaluated, often in
combination with a particular growth-related assessment component. This urges partners to onboard new
customers to the ecosystem, thus fueling ecosystem growth. Additionally, two criteria related to customer
success were identified. First, partners must submit a certain number of customer success stories, often to
be published or used as references in marketing by the platform owner. Second, a minimum level of
customer satisfaction ratings based on customers’ reviews of complementors is a prominent measure in
complementor performance assessment and an important tool for the platform owner to ensure platform
quality. Besides expertise and performance-related measures, platform owners included marketing-
related activities in their assessments. These range from a fixed financial contribution of complementors to
the platform owner for marketing activities to organizing co-marketing activities via a predefined number
of coordinated campaigns. Further, the provision of marketing material to the platform owner, such as product and service catalogs, presentations, and other documents, is a common criterion assessed in checklist format.
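The relative growth measures noted above, such as the growth in the number of certified employees, reduce to a simple percentage computation over two assessment dates; the figures below are illustrative.

```python
def growth_rate(previous: int, current: int) -> float:
    """Percentage growth between two assessment dates, e.g. of certified employees."""
    if previous == 0:
        raise ValueError("growth is undefined without a prior baseline")
    return (current - previous) / previous * 100

# A partner growing from 8 to 10 certified employees between assessments:
print(growth_rate(8, 10))  # 25.0
```

Because the metric is relative, it rewards smaller partners for expanding their expertise, whereas absolute counts would favor already-large partners.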
Criteria for Development Partners
Complementors that chose the development role are assessed along expertise and performance categories. The first category centers on expertise in development and comprises whether a developing complementor uses the latest platform
technology. This is important to control the compatibility of complements and the provision of the latest
platform functionalities to consumers via the complements. Additionally, development expertise is assessed
via the certification of applications, either via certain types of certifications such as app performance and
security or provision of a self-validated test plan and its results. Certification of applications in the
marketplace helps platform owners secure platform quality.
The second category refers to development performance. Development performance is assessed using
four criteria. Partner programs first require a complementor to provide an application as a first prerequisite
for entering the development partner tier. Further, we find that performance is assessed through the
number of users of a complementor’s applications and the number of new customer installations. The
fourth criterion used to assess the performance of developing complementors listed in the partner programs
we examined was whether a complementor provided customer support for their applications and services.
Discussion
Integration with Measures for Ecosystem Health
The results of our empirical study show a strong focus on individual complementor assessment compared
to existing evaluation approaches for ecosystem health and ecosystem evolution. In particular, we find
similarities in the evaluation of productivity in the context of ecosystem health (Iansiti and Levien 2004),
which is equivalent to the assessment of performance in our findings. However, our results found no
suitable measure for robustness, which is the second subset of ecosystem health assessment (see Table 1).
This aspect may be added to partner programs of digital platform ecosystems to account for the robustness
of individual relationships. Still, suitable metrics need to be defined to assess the robustness of individual
relationships. Further, prior work has applied ‘variety of projects’ in open source communities as a metric
for variety within Niche Creation (Jansen 2014), which is the third subset of ecosystem health (see Table 1).
We propose the use of measures of expertise such as certifications to assess the variety of resources available
to the ecosystem (see Table 4), instead of the variety of ongoing activities, such as projects. This new
perspective will advance our understanding of ecosystem health as a measure of the resources available
instead of the activities at a certain point in time.
Internal and External Evaluation in Digital Platform Ecosystems
Prior work on the assessment of digital platform ecosystems has focused on internal evaluation criteria,
solely accounting for interactions within the ecosystem. For instance, research on ecosystems used
ecosystem health as an internal measure (e.g., Jansen 2014 and den Hartigh et al. 2013), excluding factors
external to the ecosystem, such as competition. Tiwana (2014) proposes measures to track ecosystem
evolution using only internal ecosystem characteristics as measures (see Table 1). Research on assessing
individual complementors faces a similar constraint. When consolidating KPIs for software ecosystems,
Fotrousi et al. (2014) identify only three criteria for complementors: fulfilled tasks, decisions, and profits.
However, these criteria again focus on ecosystem-internal activities. Finally, Avila and Terzidis (2016) posit that comprehensive partner management must evaluate complementors' engagement level, new customer acquisitions through complementors, lead conversion rate, continuity, sustainability, and participation in trainings. While these criteria hint at an external perspective, they still focus on internal ecosystem interactions. Our results showed that, except for marketing
campaigns, complementors’ assessment criteria mainly focus on internal ecosystem interactions. External
interactions, which might create value for the ecosystem, are not included in these evaluations. Thus, we
recommend extending the focus of complementor evaluation to include additional external engagement
behaviors, which bears opportunities for a comprehensive evaluation of complementor engagement.
Interactions of Complementor Roles
Partner programs largely distinguish between two general roles for complementors. Sales and
implementation partners focus on new customer acquisition and subsequent implementation and
customization. In contrast, development partners are tasked with the creation of complements for the core
product such as applications, analyses, or related services. Complements are either easily integrated into the
system implementation of users via download and installation or must be integrated into these systems by
specialists. This situation creates dependencies and interactions between complementors of both roles.
First, sales and implementation partners need the complements of development partners when creating
sales leads and highlighting the value proposition of the software product. Second, development partners
need sales and implementation partners to advocate their solutions as important platform features and,
possibly, their integration in customer systems. Following Avila and Terzidis (2016), assessing the
engagement level of complementors is key for effective partner management. Therefore, platform owners
must include the interactions of complementors with each other, particularly with complementors taking
other roles. Interactions among complementors can be evaluated via collaboration-related measures such
as documentation or training offered by developing complementors to selling and implementing
complementors to support their activities. In turn, selling and implementing complementors’ interactions
with developing complementors can be assessed, for instance, using feature or app requests made via a
central forum to developing complementors. Enabling and controlling these exchanges through the
provision of tools for open communication between the groups is a key priority for the platform owner.
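One conceivable way to quantify such cross-role exchanges, assuming the platform owner logs interactions such as documentation, training, and feature requests between complementors, is to count per complementor the interactions directed at complementors in a different role. The role labels, IDs, and interaction kinds below are illustrative assumptions:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Interaction:
    source: str  # ID of the complementor initiating the exchange
    target: str  # ID of the complementor receiving it
    kind: str    # e.g., "documentation", "training", "feature_request"

def cross_role_engagement(interactions, roles):
    """Count, per complementor, interactions directed at complementors in a
    different role. `roles` maps a complementor ID to its role label."""
    scores = Counter()
    for i in interactions:
        if roles[i.source] != roles[i.target]:
            scores[i.source] += 1
    return dict(scores)

roles = {
    "dev1": "development", "dev2": "development",
    "sales1": "sales_implementation",
}
log = [
    Interaction("dev1", "sales1", "documentation"),    # crosses roles
    Interaction("dev1", "sales1", "training"),         # crosses roles
    Interaction("sales1", "dev1", "feature_request"),  # crosses roles
    Interaction("dev1", "dev2", "code_review"),        # same role, ignored
]
print(cross_role_engagement(log, roles))  # {'dev1': 2, 'sales1': 1}
```

Such a count is only a first approximation; a platform owner would likely also weight interaction kinds differently, as a training session and a forum post hardly represent equal engagement.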
Conclusion, Opportunities for Future Work and Limitations
Applying a multiple case study approach, this work investigated criteria and metrics for the assessment of
complementors in digital platform ecosystems based on an analysis of requirements within partner
programs of B2B software platforms. By following the guidelines of Nickerson et al. (2013), we developed a
taxonomy for complementor assessment. We identified characteristics of partner programs and their
respective manifestations. Furthermore, we found and organized criteria for complementor assessment and
their respective metrics.
Our insights have important implications for platform owners and complementors alike. First, platform
owners must produce suitable partner management programs when creating new platforms. Further,
continuous evolution and regular updates to the program’s policies and structure are important to engage
complementors. Thus, building on our taxonomy of requirements and possible metrics for their assessment
is greatly helpful for creating and innovating partner programs. Second, complementors can use these
metrics, among others, to self-track their performance before and after entering digital platform ecosystems.
Providing measures for complementors based on the metrics that are used in a diverse set of digital
ecosystems helps complementors assess suitable ecosystems to join. We contribute to research on digital
platform ecosystems through an analysis of partner programs as a mechanism for the governance of third
parties. Particularly, this study contributes to ongoing work on assessment and KPIs in software ecosystems
through the analysis of 14 cases, organizing the criteria and metrics used in practice. Thus, this contribution
serves as the basis for future work on the assessment of complementors and governance of digital platform
ecosystems.
Based on our findings, we propose two opportunities for future research. First, the current assessment
approaches focus on evaluating internal ecosystem activities. Future work may evaluate the potential of
extending this focus to complementor engagement behaviors outside ecosystems, such as knowledge
sharing and its value to the ecosystem. Building on research on engagement, researchers may collect and
systematize possible complementor engagement behaviors and evaluate their value to ecosystems.
Relevant activities could be integrated into existing practices of complementor assessment. Second,
research on the management of digital platform ecosystems based on different metrics and KPIs remains
scarce. Platform owners need tools to assess and analyze complementors individually and collectively to
monitor their ecosystems and draft effective strategies. Therefore, future work should investigate data and
metrics available to platform owners and how they can be used and combined to provide valuable
information and knowledge on digital platform ecosystems and the individual complementors within them.
Nonetheless, this work has several limitations. First, the sampling of our multiple case study was limited
by the restricted access to partner programs of some platforms and, thus, may be subject to a sampling
bias. We mitigated this drawback by increasing the number of cases to adjust for possible sampling errors.
Second, platform owners may use additional metrics for assessing complementors in their ecosystems
beyond those stated in their partner programs. Future research may extend this case study with more data for further
validation.
Acknowledgments
This work was supported by the Bavarian Ministry of Economic Affairs, Regional Development and Energy
under the BayernCloud III project (grant no. 20-13-3410-I.01A-2017).
References
Avila, A., and Terzidis, O. 2016. “Management of Partner Ecosystems in the Enterprise Software
Industry,” in Eighth International Workshop on Software Ecosystems co-located with Tenth
International Conference on Information Systems (ICIS 2016), Dublin, Ireland, 2016.
Benbasat, I., Goldstein, D. K., and Mead, M. 1987. “The Case Research Strategy in Studies of Information
Systems,” MIS Quarterly (11:3), pp. 369–386.
Boudreau, K. J. 2010. “Open platform strategies and innovation: Granting access vs. devolving control,”
Management Science (56:10), pp. 1849–1872.
Cennamo, C. 2018. “Building the Value of Next-Generation Platforms: The Paradox of Diminishing
Returns,” Journal of Management (44:8), pp. 3038–3069.
Corbin, J. M., and Strauss, A. 1990. “Grounded theory research: Procedures, canons, and evaluative
criteria,” Qualitative Sociology (13:1), pp. 3–21.
den Hartigh, E., Visscher, W., Tol, M., and Salas, A. J. 2013. “Measuring the health of a business
ecosystem,” in Software Ecosystems: Analyzing and Managing Business Networks in the Software
Industry, Slinger Jansen, M. A. Cusumano and S. Brinkkemper (eds.): Edward Elgar Publishing, pp.
221–246.
Eisenhardt, K. M. 1989. “Building Theories from Case Study Research,” The Academy of Management
Review (14:4), pp. 532–550.
Engert, M., Pfaff, M., and Krcmar, H. 2019. “Adoption of Software Platforms: Reviewing Influencing
Factors and Outlining Future Research,” in Twenty-Third Pacific Asia Conference on Information
Systems, Xi'An, China.
Evans, P. C., and Gawer, A. 2016. The Rise of the Platform Enterprise A Global Survey. Technical Report,
Center for Global Enterprise. https://www.thecge.net/app/uploads/2016/01/PDF-WEB-Platform-
Survey_01_12.pdf. Accessed 21 April 2020.
Foerderer, J., Kude, T., Mithas, S., and Heinzl, A. 2018. “Does Platform Owner’s Entry Crowd Out
Innovation? Evidence from Google Photos,” Information Systems Research (29:2), pp. 444–460.
Fotrousi, F., Fricker, S. A., Fiedler, M., and Le-Gall, F. 2014. “KPIs for Software Ecosystems: A Systematic
Mapping Study,” in Fifth International Conference on Software Business, C. Lassenius and K.
Smolander (eds.), Paphos, Cyprus, pp. 194–211.
Ghazawneh, A., and Henfridsson, O. 2013. “Balancing platform control and external contribution in third-
party development: The boundary resources model,” Information Systems Journal (23:2), pp. 173–
192.
Hein, A., Böhm, M., and Krcmar, H. 2018. “Tight and Loose Coupling in Evolving Platform Ecosystems:
The Cases of Airbnb and Uber,” in Business Information Systems: BIS 2018, W. Abramowicz and A.
Paschke (eds.), Cham, Switzerland: Springer, pp. 295–306.
Hein, A., Schreieck, M., Riasanow, T., Setzke, D. S., Wiesche, M., Böhm, M., and Krcmar, H. 2019a.
“Digital platform ecosystems,” Electronic Markets (30:1), pp. 87–98.
Hein, A., Schreieck, M., Wiesche, M., Böhm, M., and Krcmar, H. 2019b. “The emergence of native multi-
sided platforms and their influence on incumbents,” Electronic Markets (29:4), pp. 631–647.
Hein, A., Weking, J., Schreieck, M., Wiesche, M., Böhm, M., and Krcmar, H. 2019c. “Value co-creation
practices in business-to-business platform ecosystems,” Electronic Markets (29:3), pp. 503–518.
Huber, T. L., Kude, T., and Dibbern, J. 2017. “Governance practices in platform ecosystems: Navigating
tensions between cocreated value and governance costs,” Information Systems Research (28:3), pp.
563–584.
Hyrynsalmi, S., and Mäntymäki, M. 2018. “Is Ecosystem Health a Useful Metaphor? Towards a Research
Agenda for Ecosystem Health Research,” in Challenges and Opportunities in the Digital Era - IFIP
I3E 2018, S. A. Al-Sharhan, A. C. Simintiras, Y. K. Dwivedi, M. Janssen, M. Mäntymäki, L. Tahat, I.
Moughrabi, T. M. Ali and N. P. Rana (eds.), Cham, Switzerland: Springer, pp. 141–149.
Iansiti, M., and Levien, R. 2004. “Strategy as Ecology,” Harvard Business Review (82:3), pp. 68–78.
Jansen, S. 2014. “Measuring the health of open source software ecosystems: Beyond the scope of project
health,” Information and Software Technology (56:11), pp. 1508–1519.
Manner, J., Nienaber, D., Schermann, M., and Krcmar, H. 2013. “Six Principles for Governing Mobile
Platforms,” in Eleventh International Conference on Wirtschaftsinformatik, Leipzig, Germany.
Nickerson, R. C., Varshney, U., and Muntermann, J. 2013. “A method for taxonomy development and its
application in information systems,” European Journal of Information Systems (22:3), pp. 336–359.
Plakidas, K., Schall, D., and Zdun, U. 2017. “Evolution of the R software ecosystem: Metrics, relationships,
and their impact on qualities,” Journal of Systems and Software (132), pp. 119–146.
Sarker, S., Sarker, S., Sahaym, A., and Bjørn-Andersen, N. 2012. “Exploring Value Cocreation in
Relationships Between an ERP Vendor and its Partners: A Revelatory Case Study,” MIS Quarterly
(36:1), pp. 317–338.
Schirrmacher, N.-B., Ondrus, J., and Kude, T. 2017. “Launch Strategies of Digital Platforms: Platforms
With Switching and Non-Switching Users,” in Twenty-Fifth European Conference on Information
Systems, Guimarães, Portugal.
Schreieck, M., Wiesche, M., and Krcmar, H. 2018. “Multi-Layer Governance in Platform Ecosystems of
Established Companies,” in Academy of Management Proceedings, Chicago, Illinois, USA.
Staykova, K. S. 2018. “Managing Platform Ecosystem Evolution through the Emergence of Micro-
strategies and Microstructures,” in Twenty-Sixth European Conference on Information Systems,
Portsmouth, United Kingdom.
Tan, B., Pan, S., Lu, X., and Huang, L. 2015. “The Role of IS Capabilities in the Development of Multi-
Sided Platforms: The Digital Ecosystem Strategy of Alibaba.com,” Journal of the Association for
Information Systems (16:4), pp. 248–280.
Tiwana, A. 2014. Platform ecosystems: Aligning architecture, governance, and strategy, Amsterdam,
Netherlands: Morgan Kaufmann.
Tiwana, A., Konsynski, B., and Bush, A. A. 2010. “Platform evolution: Coevolution of platform
architecture, governance, and environmental dynamics,” Information Systems Research (21:4), pp.
675–687.
Wareham, J., Fox, P. B., and Giner, J.L.C. 2014. “Technology ecosystem governance,” Organization
Science (25:4), pp. 1195–1215.
Weiß, N., Schreieck, M., Wiesche, M., and Krcmar, H. 2018. “Setting Up a Platform Ecosystem - How to
integrate app developer experience,” in IEEE International Conference on Engineering, Technology
and Innovation (ICE/ITMC), Stuttgart, Germany.
Yin, R. K. 2018. Case study research and applications: Design and methods, Los Angeles, London, New
Delhi, Singapore, Washington DC, Melbourne: SAGE.