Electronic copy available at: https://ssrn.com/abstract=2735465
How Do Accelerators Select Startups? Shifting Decision Criteria Across Stages
Bangqi Yin, Jianxi Luo*
Massachusetts Institute of Technology &
Singapore University of Technology & Design
* luo@sutd.edu.sg
(Version as of December 2017)
Abstract
Competitive accelerators can help startups overcome resource constraints and the liability of
newness, but they are also highly selective when choosing startups. However, little is known about
how a top accelerator selects startups and what the decision criteria are in that process. Here, we
analyze a unique dataset of the real profiles of the startups that applied to the first seed accelerator
in Southeast Asia and the accelerator’s decisions to uncover the selection process and latent criteria.
We used a scoreboard of 30 criteria based on the Real-Win-Worth (Is it real? Can it win? Is it worth
doing?) framework to compare the profiles of the selected versus the rejected startups. Our analyses
revealed that accelerator managers’ implicit decision criteria shifted from eight Real or Win criteria
in the initial screening of many startups to another four Win or Worth criteria in the final selection
of a small number of startups. These critical criteria were further used to build regression models
that predict screening and selection results. Understanding the shifting decision criteria may inform
accelerator managers of their own subconscious preferences and help them improve the decision
process; it may also give entrepreneurs greater empathy toward accelerator managers and thereby
help them sharpen their applications.
Keywords: accelerator, incubator, startups, entrepreneurship, innovation
Acknowledgement
We thank Kevin Otto, Katja Otto, Chaoyang Song and Aditya Ranjan for their insights and help at
the early stage of this research, as well as the participants at DRUID Asia Conference in 2016 for
their comments and suggestions. We particularly thank JFDI for data access that enabled this
research. SUTD-MIT International Design Centre and SUTD-MIT Dual Masters Programme
provided financial support. We are especially grateful to two anonymous referees and co-editors for
detailed criticism, comments and suggestions that greatly improved the article. The authors alone
are responsible for any errors and oversights.
I. INTRODUCTION
Seed-stage startups are challenged by both resource constraints and the liability of newness
[1]. Such startups usually lack not only the necessary financial, social and human capital required to
pursue perceived opportunities [2] but also the business experience and the legitimacy to provide
viable products or services [3]. Traditionally, incubators have been used as an instrument to help
startups overcome these challenges during the most vulnerable starting-up period and grow their
legitimacy, competitiveness and maturity [4]. Incubators usually provide a shared working space,
facilities, administrative support, legal services and many networking and mentoring opportunities
with seasoned entrepreneurs, venture capitalists, industry veterans, incubator alumni and peers [5-
9].
Over the past decade, a special type of incubator called an “accelerator” or “seed
accelerator” proliferated rapidly and emerged as an integral part of the entrepreneurship
ecosystem. An accelerator intensifies the incubation services and accelerates startup development
through “a fixed-term, cohort-based program, including mentorship and educational components
that culminate in a public pitch event or demo-day” [10]. Accelerators make equity investments in
every startup in a cohort on the same financial terms. By their nature, accelerators are selective
when choosing startups because they need those startups to raise follow-on funding at an increased
valuation upon graduating from their short stay at the accelerator. The fixed-duration
(normally 3 months) cohort structure and the seed capital investment differentiate accelerators from
traditional incubators.
Despite the growing number of accelerators, most are of questionable efficacy. Recent
empirical studies [11, 12] have suggested that only the top accelerators (e.g., Y Combinator and
TechStars) can actually accelerate startups. A top accelerator is one that has repeatedly graduated
notably successful companies and is associated with high-profile mentors, e.g., successful serial
entrepreneurs and renowned venture capitalists, who provide reputation, credibility and valuable
mentorship and networking opportunities. Participation in a top accelerator might certify the quality
and potential of an early-stage startup and jump-start its brand [13], thus helping the startup
overcome the liability of newness and attract more and better investors. In turn, a top accelerator
will attract applications from numerous startups [14, 15], some of which have high potential even
prior to the acceleration program. In other words, an accelerator’s value to a startup is associated
with the accelerator’s status and the level of competition to enter it.
The accelerator’s success largely relies on the quality of the startups it selects. Thus, a top
accelerator must be highly selective and admit only a small number of high-potential startups to
keep the community elite and reputable in the hope of large financial returns from the startups later
on. For example, Y Combinator and TechStars admit fewer than 3% of all applying startups.
However, filtering out the small number of top-quality startups from a large number of applicants is
a challenge for accelerators. Taken together, the selection process is fundamentally crucial for the
successes of both the accelerator and the startups that apply to it, but it is also challenging for both
parties. However, little is known about how such accelerators select startups and what criteria have
a critical effect on the decisions in the process.
Our research aims to fill these gaps of understanding by addressing two related research
questions. First, what critical criteria differentiate the selected startups from the rejected ones?
Prior studies of traditional incubators and angel investors have suggested that they seldom explicitly
use a balanced and compulsory set of selection criteria and instead rely on subconscious preferences
[7, 16, 17]. We aim to identify the implicit criteria that are critical for the rejection or selection
decisions of accelerator managers. Second, do the critical criteria vary across different decision
stages of the startup selection process? And if so, how? Prior studies of traditional incubators and
angel investors have suggested that evaluation criteria may change during the decision process from
initial screening to final selection [18-20]. Thus, we aim to identify the shifts of implicit decision
criteria that are critical in different decision stages and explore possible reasons for such shifts, in
the accelerator context.
To identify accelerator managers’ implicit decision criteria, we adopted the Real-Win-Worth
framework, which was initially developed to evaluate innovation projects [21], to categorize 30
potential criteria in three main categories: Is it real? Can it win? Is it worth doing? The framework
was applied to assessing the profiles and results of the startups that applied to the accelerator
program of Singapore-based JFDI, which stands for “Joyful Frog Digital Incubator” and piloted the
first seed accelerator in Southeast Asia. Our unique dataset from the actual decision process,
together with the Real-Win-Worth framework, has allowed us to identify accelerator managers’
specific decision criteria that shift across screening and selection stages. Findings are reported in
detail in Section IV.
Therefore, our research aims to make a theoretical contribution to decision making in the
entrepreneurial process, with a focus on the accelerator context and startup selection decisions. For
practice, our findings can potentially make accelerator managers aware of their own subconscious
preferences, rationales and bias as well as the associated risks and benefits, and guide them to
improve their decision process as well as data collection. Meanwhile, understanding accelerator
managers’ implicit decision criteria may also give entrepreneurs empathy toward accelerator
managers and thus help them improve their applications.
II. LITERATURE REVIEW
Because accelerators are a form of incubator and, like angel investors, invest in the equity of their
“tenants,” we review the literature on accelerators, traditional incubators and angel investors in
terms of how they select startups for incubation, investment, or both.
A. Accelerators
Y Combinator is often considered the first accelerator. It was founded by Paul Graham in
2005 in Cambridge, Massachusetts, and later moved to Silicon Valley. TechStars was founded by
David Cohen and Brad Feld in Boulder, Colorado, in 2006 and popularized the accelerator model
through the Global Accelerator Network, a selective international organization for accelerators that
follow the TechStars model. Today, the network has 50 accelerators in 63 cities on 6 continents,
including JFDI, which is the context of this study. Given that it is a recent phenomenon, academic
literature on accelerators is still scant despite the publication of descriptive studies [10, 22, 23]. In
contrast, there is an extant literature on traditional incubators.
An accelerator is a special type of incubator, whose general goals are to improve the
chances of startups’ survival [6, 24] and accelerate their growth [9, 25, 26] through an array of
business support resources and services [5, 6, 9]. The European Commission reported that
incubated startups have a survival rate of 80-90% five years after graduation, significantly higher than
those in the wider startup community [7]. NBIA reported that incubated startups have a survival
rate of 87% after five years compared to 44% of non-incubated startups [27]. However, Amezcua
[8] found that incubation does not necessarily help startups avoid failures but may allow weak
startups to fail sooner, indicating that survival rate is an improper measure of incubator performance
[28]. In addition, the better performance of the startups in incubators might be in part a result of the
screening of weak startups and the selection of high quality ones, rather than a result of the
incubation [29].
The varied types and processes of incubators may also affect incubation performance [6,
28]. For instance, Aernoudt [9] found that the survival rate and employment growth of technology
incubators are higher than those of social, basic and mixed incubators. Amezcua [8] found that for-
profit incubators have higher employment and sales growth in their incubated startups than their
non-profit counterparts. Barbero et al. [30, 31] found that basic research incubators generate more
product and technological process innovation than university, economic development and private
incubators. A few authors have advocated for the benefits of incubators that specialize in a limited
number of industry sectors, e.g., biotechnology, energy and information technology [7, 14, 32].
Interested readers may refer to Barbero et al. [30] for a detailed review of various types of
incubators. The empirical context of our study is a private digital technology incubator and its 100-
day accelerator program.
Accelerators differ from traditional incubators in several ways. First, accelerators offer
cohort-based short-duration programs. Batches of startups enter, grow and graduate together,
whereas incubators’ services are normally continuous. The admission of startups is cyclical for
accelerators but continuous for incubators. During an acceleration period, which normally lasts
approximately three months, the accelerator offers structured and intensive networking, educational
and coaching opportunities either with mentors in residence or with successful entrepreneurs,
alumni, venture capitalists, and industry veterans.
Second, accelerators make a small equity investment in the selected startups, similar to
angel investors, but they invest in the entire cohort of admitted startups on the same financial terms
instead of investing in one venture at a time. Traditional incubators seldom make equity investments;
instead, they often collect rents from the incubated startups for the shared space and resources. The
Seed Accelerator Ranking Project (http://www.seedrankings.com; [10]) reported that in the United
States, startups admitted into accelerators received an average of $23,000 for 6% of their equity,
with 41% of them going on to receive subsequent venture funding of $350,000 or more within one
year of graduation. For returns on equity investments, accelerators must be more selective with
startups than incubators but are similar to angel investors in this regard.
In addition, most acceleration programs are concluded with a “demo day” during which the
startups pitch to external investors. At that point, the startups are expected to be ready to raise round
A or pre-A venture capital funding. Cohen and Hochberg [10] have provided a general definition of
accelerators as “a fixed-term, cohort-based program, including mentorship and educational
components that culminates in a public pitch event or demo-day.” Therefore, one can view the
accelerator as a special incubator that provides startups with both seed capital investment (like an
angel investor) and intensified incubation services. For instance, as an accelerator, JFDI is viewed
as an incubator that focuses on offering two 100-day acceleration programs per year. Accelerators
may also vary in terms of the equity stake taken, program length, resources, industry focus, and
affiliations with venture capital firms, corporations, universities and local governments.
Recently, several studies have reported empirical evidence on the general impact of
accelerators on seed-stage startups. Hallen et al. [11] compared accelerated and non-accelerated
startups that eventually raised venture capital and found that only top accelerators can actually
accelerate startups in terms of gaining customer traction, raising venture capital and exiting,
whereas many other accelerators do not speed up startup development. Kim and Wagman [13]
suggested that participating in a top-rated competitive accelerator can signal the viability or certify
the quality of the seed startups. Smith and Hannigan [12] investigated the startups going through Y
Combinator and TechStars, the two leading accelerators, and found that these startups are often
founded by entrepreneurs from elite universities and receive subsequent external VC funding and
exit (through acquisition or quitting) sooner than outside startups that also raised venture capital.
B. Startup Selection Criteria
Accelerators normally call for startup applications, evaluate and screen weak applications,
and admit a small number of startups to accelerate [29, 33].1 The selection process and the
characteristics of the selected startups influence the success of the accelerator itself [18], but to the
best of our knowledge have not been investigated and reported in the academic literature on
accelerators. Meanwhile, we found an extant literature on the selection process and a diverse set of
startup selection criteria considered by incubator managers [6, 18, 26, 34, 35] (see Table 1 for a
summary); these criteria were primarily identified from interviews or surveys with incubator
managers. Next, we will draw on the incubator literature to study the selection criteria involved in
accelerators’ startup selection process.
For instance, Smilor [36] surveyed and interviewed the managers of 50 incubators in the
United States to reveal a few general selection criteria, including the ability to create jobs, the
uniqueness of the opportunity, and the potential for rapid growth. Merrifield [18] identified a broad
set of selection criteria, such as profit potential, growth potential, competition, risk, capital
availability, manufacturing competence, marketing and distribution, technical support, materials
availability and management, and then divided them into three groups: startups, incubators, and the
fit between startups and incubators. Mian’s [26] comparative review of six university incubators
suggested selection criteria such as technology, growth potential, business plan, management team,
cash flow, manufacturing competence, capital availability, and fit with the incubator mission.
1 This is essential for for-profit private incubators, which normally make equity investments in the incubated startups
with the hope of harvesting huge financial returns from the eventual success of these startups. In contrast, government,
social and non-profit incubators may be less demanding of the startups they select and incubate because their main
objective is to reduce regional disparities or create local jobs for people with low employment capacities.
Based on a survey of 41 incubator managers in the U.S., Lumpkin and Ireland [34]
identified three groups of screening criteria, including the team’s experience (management,
marketing, technical and financial, etc.), financial strength (profitability, liquidity, debt and asset
ratio, assets, etc.), and market and personal factors (uniqueness and marketability of
product/services, age, creativity, persistence of the startup team). Hackett and Dilts [6] grouped
selection criteria by managerial, market, product, and financial aspects. Using the criteria suggested
by Hackett and Dilts [6] for screening, Aerts et al. [7] found that financial performance,
management team, market size, and growth rate are the primary criteria based on a survey of 140
European incubator managers. Later, Hackett and Dilts [37] re-categorized their original proposed
set of criteria into star characteristics, market characteristics, differentiation characteristics, and
manager characteristics. They also added new criteria, such as the ability to attract capital
investment, patent protection, defendable competitive positioning, and prior work experience.
Wulung et al. [38] proposed a mathematical multi-objective selection model addressing
profitability, survivability, worker absorption, and employment growth.
Bergek and Norrman [39] suggested that startup evaluation criteria can be divided into idea-
focused (i.e., market and profit potential of the idea) versus entrepreneur-focused (i.e., the
characteristics, experiences, skills of the entrepreneurs) criteria. Aerts et al. [7] found that European
incubators focus more on the criteria related to the entrepreneurs and startup team, whereas
American incubators concentrate more on financial- or market-related criteria. Bruneel et al. [40]
found that incubators seldom explicitly use a structured set of selection criteria. However, criteria
such as technology focus, product innovativeness, and growth potential are commonly mentioned.
Meanwhile, Aerts et al. [7] found that although most incubators screen candidates on an unbalanced set
of criteria, the incubators that use a balanced set of criteria to screen startups have a higher survival
rate of their incubated startups than those using an unbalanced set of criteria. A few scholars have
argued for the use of a balanced set of criteria to screen startups [6, 18].
C. Selection Process
In addition to selection criteria, prior research has also explored the process by which those
criteria are or are not considered. For example, Merrifield [18] described a three-step decision
process for startup selection. In the first phase, six criteria are used: sales profit potential, political
and social constraints, growth potential, competitor analysis, risk distribution and industry
restructure. In the second phase, the criteria address the fit between the startup and incubator. The
final phase focuses on criteria such as management, capital, manufacturing competence, marketing
and distribution, technical support, and availability of materials or components.
The multi-step decision process has also been reported in studies on angel investors’ choices
of startups. Landström [41] first suggested that investment decision criteria may change as the
decision process unfolds over time. Mitteness et al. [19] found that angel investors focus more on
evaluating the strength of entrepreneurs initially in the screening stage and then focus relatively
more on the business opportunity at the later stages. Based on the observation of 150 interactions
between entrepreneurs and potential investors on a Canadian reality TV show, Maxwell et al. [20]
found that angel investors consider different criteria during two decision stages, i.e., initial
screening and final decision. In the first stage, angel investors tend to use the “elimination-by-
aspects” [42] heuristic and screen startups that have a fatal flaw instead of startups that outperform
others. As suggested by Shafir et al. [16], when one finds it difficult to make a selection, the
decision rationale will be “first eliminate those options that we do not want.”
Maxwell et al. [20] also found that angel investors implicitly considered a parsimonious set
of criteria rather than applying a compensatory decision model that systematically weights and scores a large
number of criteria. In addition, the criteria critical for initial screening are not necessarily critical in
the final decision of whether to fund a startup. Since the investors tend to “reject” during the initial
screening stage and then “choose” in the final stage, the reasons for decisions in the screening and
final funding stages should differ [16]. Shafir [43] suggested that advantages and strengths are
weighted more heavily in choosing than in rejecting, and disadvantages and weaknesses are
weighed more heavily in rejecting than in choosing. These preferences given to different kinds of
decisions are normally implicit to the decision makers themselves.
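The contrast between the non-compensatory “elimination-by-aspects” heuristic and a compensatory weighted-scoring model discussed above can be sketched as follows. This is a hypothetical illustration; the criteria names, thresholds, and weights are our own assumptions, not values reported in the cited studies.

```python
# Hypothetical illustration of the two decision models discussed in the text.
# Criteria names, thresholds, and weights are invented for this sketch.

def eliminate_by_aspects(startup, fatal_thresholds):
    """Non-compensatory screening: reject as soon as any single criterion
    falls below its threshold ("first eliminate those options we do not want")."""
    for criterion, threshold in fatal_thresholds.items():
        if startup[criterion] < threshold:
            return False  # one fatal flaw is enough to reject
    return True  # survives the screening

def compensatory_score(startup, weights):
    """Compensatory evaluation: a weakness on one criterion can be offset
    by strengths on others via a weighted sum over all criteria."""
    return sum(weights[c] * startup[c] for c in weights)

startup = {"team": 4, "market": 2, "product": 5}     # scores on a 1-5 scale
thresholds = {"team": 3, "market": 3, "product": 3}  # minimum acceptable levels
weights = {"team": 0.4, "market": 0.3, "product": 0.3}

print(eliminate_by_aspects(startup, thresholds))  # False: weak market is a fatal flaw
print(compensatory_score(startup, weights))       # 3.7: strengths offset the weak market
```

The sketch shows why the heuristic conserves cognitive effort: screening stops at the first fatal flaw, whereas the compensatory model must score every criterion of every candidate.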
Jeffrey et al. [17] further suggested that the “elimination-by-aspects” decision heuristic and
non-compensatory decision model require less cognitive effort [44], so they are preferable when
investors need to evaluate a large number of investment targets but have time constraints and
limited cognitive capacity. To conserve cognitive efforts, investors implicitly used a parsimonious
set of criteria to reject startups as quickly as possible and trim the number of investment alternatives
that they need to evaluate for funding. However, investors often are not conscious of their
preferences for certain evaluation criteria [45]. The managers of a top accelerator are likely to
experience similar cognitive-capacity challenges when attracting many applications [20], and thus
may adopt a similar decision heuristic and process.
In brief, the literature on startup selection by traditional incubators and angel investors has
shed light on the decision process and criteria that may also govern the startup selection decisions
of accelerators. Our research empirically investigates the accelerator context and explicates
accelerator managers’ implicit decision heuristics and criteria across stages in the startup selection
process.
III. DATA AND METHODS
A. Empirical Context and Data
The Singapore-based JFDI provided the data for this research. JFDI was founded in 2010
and piloted the first seed accelerator in Southeast Asia (SEA). It is focused on running twice-yearly 100-day
accelerator programs. It is modeled after TechStars and is a member of TechStars’ “Global
Accelerator Network.” JFDI offers selected startups SG$25,000 for 8.88% equity, mentorship, and
facilities to build and grow their startups over 100 days. The co-founders, Hugh Mason (British)
and Meng Weng Wong (Singaporean), were both successful serial entrepreneurs with extensive
global business experience and networks in information technology, media, and marketing sectors.
JFDI was funded by an international consortium of investors, including Infocomm Investments
(Singapore’s government investment arm in the information and communication technology sector),
SpinUp Partners (Russia) and Fenox Venture Capital (Silicon Valley), along with private investors
from the Philippines, Vijay Saraff (Thailand), Paul Burmester (UK) and Thomas Gorissen
(Germany).2 Below are some facts about JFDI for the period 2010 to 2015, retrieved from JFDI’s
website and the technology media.3
• $3 million was raised and deployed into 70 startups through a structured 100-day
program, creating a portfolio that is now independently valued at >$60 million.
• JFDI admitted 8-12 startups in each batch, approximately 4% of all teams that applied.
• The startups entered with a valuation between $200,000 and $500,000, and 50% of them went on to
secure seed funding averaging approximately $500,000 at valuations of $1.5-3.5 million.
• Two years after acceleration, approximately 15-20% of the startups that secured seed funding
grew into successful businesses. The hit rate is approximately 10% of all the teams accelerated.
• JFDI’s pre-accelerator program supported more than 400 startups and 1500+ entrepreneurs from
40+ countries.
At the point of our data analysis, the accelerator had selected and incubated four batches of
startups with founders from 12 countries (primarily from SEA and India). The data analyzed in this
paper are complete digital profiles of the startups that they submitted to JFDI to compete for
entrance into the accelerator program from 2014 to 2015. The total dataset contains 1,003 startup
application profiles in four different batches. JFDI made two calls for applications per year. Among
the 1,003 startups that applied, only 40 were chosen, indicating a success rate of 4%. In brief,
JFDI’s reputation in the region, the fierce competition among startups to enter its small cohorts, and
its low selection rate make JFDI a suitable empirical context in which to investigate the decision
process and criteria for the startup selection of a top-rated accelerator.
The accelerator requires each applying startup to register an account on a website platform
by providing its basic information and optional information such as website address, co-founder
picture, social media link, and an introduction video. Most importantly, startups are required to
2 Information retrieved from JFDI website: http://www.jfdi.asia/blog/jfdi-announces-sgd-2-7-million-usd-2-1-million-
fundraising-to-accelerate-tech-startups-in-south-east-asia/
3 TechCrunch: https://techcrunch.com/2015/05/19/jfdi-asia-remote-work-and-double-investment/
Tech in Asia: https://www.techinasia.com/jfdi-story-secret-sauce-challenges;
e27: https://e27.co/jfdi-pioneer-singapores-startup-ecosystem-closes-bootcamp-programme-20160914/;
https://e27.co/24-singaporean-accelerators-incubators-know-20150128/
answer a long list of questions about their team, product, operations, markets, competitors, and
future plans online (Table 2). Their answers to these questions profile them and are the raw data for
our analysis. To succeed in the competition for selection, the startups tend to provide information
that is as detailed as possible. The accelerator did not explicitly emphasize or focus on any criteria
for evaluating startups. But some implicit criteria might become critical as a result of the managers
and mentors’ decisions, as suggested by the studies on the investment decision making of angel
investors [17, 20].
JFDI’s selection process consists of two stages: an initial profile screening and an interview.
In the first screening stage, JFDI managers and mentors reviewed all the startup profiles that were
submitted online for each call for applications to screen and trim the startup candidates. The
majority of applications were rejected quickly, but a small number of startups proceeded to an
interview. After the interview, an even smaller number of startups were accepted into the
accelerator program. Following the two-stage selection process, i.e., profile screening and the final
interview/decision, we divide the total population of 1,003 startup applicants into different groups
and subgroups (Figure 1). The first group was categorized as “Filtered” and includes 841 startups
that were rejected in the initial screening stage. The second group was categorized as “Interviewed”
and includes the 162 startups that passed the screening and were invited for interviews. Within the
“Interviewed” group, 40 startups were selected into the accelerator. This subgroup was called
“Interviewed and Successful.” The rest of the “Interviewed” group was categorized in the
“Interviewed but Unsuccessful” subgroup.
The startup profiles need to be manually read and coded. To conserve time and effort while
ensuring a large enough sample size for the analysis, 100 startup profiles were randomly selected
from the “Filtered” group of 841 startups and the “Interviewed” group of 162 startups in the first
stage of the selection process. In the second stage, the “Interviewed and Successful” subgroup had a
total of 40 startups. The workload required to read and code all of these profiles was acceptable,
and thus all of them were analyzed. To match the size of the “Interviewed and Successful” subgroup
for a comparative analysis, 40 startup profiles were randomly selected from the “Interviewed but
Unsuccessful” subgroup, which has 122 startups.
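The grouping and sub-sampling procedure described above can be summarized in a short sketch. The variable names and the fixed random seed are ours for illustration; the authors' actual sampling implementation is not reported.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Group sizes reported in the text (startups represented by index IDs)
filtered = list(range(841))           # rejected at the initial screening stage
interviewed = list(range(841, 1003))  # 162 invited to interviews
successful = interviewed[:40]         # 40 admitted ("Interviewed and Successful")
unsuccessful = interviewed[40:]       # 122 "Interviewed but Unsuccessful"

assert len(filtered) + len(interviewed) == 1003
assert len(unsuccessful) == 122

# Random samples coded for the comparative analysis
sample_filtered = random.sample(filtered, 100)
sample_interviewed = random.sample(interviewed, 100)
sample_unsuccessful = random.sample(unsuccessful, 40)
# All 40 "Interviewed and Successful" profiles were coded, so no sampling there.

print(len(sample_filtered), len(sample_interviewed), len(sample_unsuccessful))
# 100 100 40
```

Sampling without replacement (`random.sample`) matches the design intent: each coded profile is a distinct startup, and the 40-profile sample from the unsuccessful interviewees matches the size of the successful subgroup for comparison.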
B. Critical Criteria
We assumed that JFDI managers unconsciously applied only a few criteria in their rejection
or selection decisions, and that these criteria would emerge as making a significant difference between the
startups that succeeded and the startups that failed during the process. Therefore, we assessed and
compared different groups and subgroups of startups against a scoreboard of a comprehensive list
of potential criteria (Table 3) for factor screening to identify the parsimonious set of implicit critical
criteria that result in the decisions. Our scoreboard includes 30 potential criteria, most of which
were chosen from the previously reported startup selection criteria in the incubator literature (Table
1).
In Table 4, we map the criteria in our scoreboard (Table 3) to the references that previously
reported them. Note that some previously reported criteria, such as startup status, technology-based
business and having a business plan, are true by default for all the startups applying to JFDI. Other
criteria, such as the persistence of the management team, are impossible to assess because such
information was not collected in the online application form (Table 2). Thus, our scoreboard
excludes such criteria (listed at the bottom of Table 4) but includes the rest summarized in Table 1.
On this basis, we added a few criteria related to the success or failure of new product development
[46].
Specifically, we adopted the Real-Win-Worth framework, which was initially developed to
evaluate innovation projects [21] and later crowd-funding projects [47], to categorize the 30 criteria.
The Real-Win-Worth screen framework is a systematic synthesis of the success or failure factors in
the new product development literature, and allows one to evaluate the risks and potential of
individual projects by answering questions in three main aspects [21]:
o “Is it real?” explores both market potential and the feasibility of developing the product.
o “Can it win?”4 considers whether the innovation and the company can be competitive.
o “Is it worth doing?” examines the profit potential and whether the innovation makes strategic
sense in the long term.
One can dig deeper for the answers to six more specific questions in the real, win and worth
categories: Is the market real? Is the product real? Can the product be competitive? Can our
company be competitive? Will the product be profitable at an acceptable risk? Does launching the
product make strategic sense? To answer these six queries, one can explore an even more
fundamental set of supporting questions. For example, one can answer the query “Is the market
real?” by answering the following detailed questions: “Is there a need or desire for the product?
Does the consumer understand the benefits of the innovation? Can the customer afford to buy it? Is
the size of the potential market big enough to be worth pursuing? Will the customer have subjective
barriers to buying the product?”
In brief, the Real-Win-Worth framework is built on a series of questions about the product,
its market, the competition and the team’s capabilities to expose problems, potential sources of risk,
areas for improvement, and reasons for termination. George Day presented 17 such fundamental
questions [21], whereas Song et al. created 26 questions [47], which belong to the respective Real,
Win and Worth main categories and 6 subcategories. Versions of the Real-Win-Worth questions
have been developed and used by companies, including General Electric, Honeywell and Novartis,
to assess the business potential and risk exposure of their innovation projects. 3M has used it to
evaluate more than 1,500 projects [21].
To assess the startups, we framed 30 fundamental questions corresponding to the 30
potentially critical criteria into the respective Real-Win-Worth categories (see Table 3), i.e., “Are
the product and market real? Can the product and entrepreneur team win? Is the startup
worthwhile?” These 30 questions were well aligned with the startup selection criteria regarding the
product [6, 18, 26]; market [6, 7, 18, 26, 34, 37, 39, 48]; entrepreneur or team [7, 18, 26, 34, 37,
39]; protectability [37, 48]; and finance [6, 7, 18, 34, 37, 39, 48] from the incubator literature, and
they also covered the investment opportunity evaluation criteria of angel investors. For example,
the eight criteria of Maxwell et al. [20] for angel investors' startup evaluation, including market
potential, product adoption, protectability, entrepreneur experience, product status, route to market,
customer engagement, and financial projections, are all covered by the questions in our Real-Win-
Worth categories.

Footnote 4: In the original Real-Win-Worth framework, developed for a company to assess its internal innovation projects, the second question was “Can we win?” Here, we use “Can it win?” to indicate that the assessment of a startup is done not by the startup itself but by the accelerator or any third party.
These 30 questions were designed so that they can be answered objectively with “full,”
“none” or “partial” evidence found in the startups' application data, regardless of which person
reads the data to answer the question. These three levels of evidence availability (none / partial /
full) were further translated into 0 / 0.5 / 1 for our statistical analysis.5 One simply needs to look for
evidence and facts in the application documents. We also provided specific guidance to the coder
for reading and coding the startup profiles to answer each of the 30 questions. One example is given
in Table 5. The descriptions of such guidance for all questions are available upon request.
Despite the objectivity involved in assessing the startup profiles by answering the questions,
we ran a test to ensure inter-rater reliability. We invited two researchers who have business
backgrounds but have not been exposed to the questions and descriptive guidance to use them to
code a set of five startup profiles from our database. Each researcher read the five profiles
independently and highlighted the evidence in the startup profile to support his answer to each of
the 30 questions. A third researcher subsequently orchestrated an intensive discussion to reconcile
different interpretations and further benchmark the coding. In this manner, the inter-rater
repeatability reached a Cohen's Kappa of 80%,6 indicating a high degree of consensus. Through
the test, we also found that the objectivity of the startup profile data and the three-level rating
scheme leave little room for inter-rater variability.

Footnote 5: The three-level rating scheme was preferable for our dataset. First, in some cases, the evidence in a startup profile for answering a question is neither non-existent nor substantial; it falls in the middle. A rating with only two extremes (0 and 1) is thus insufficient. Second, a more fine-grained rating scale for the middle ground is cognitively challenging for the researcher deciding a score. In our inter-rater reliability tests, the ratings of different researchers could not converge easily when more gradual rating levels were allowed. The three-level rating (0, 0.5, and 1) enabled a Kappa ratio higher than 0.8, which further ensured that the coding was reliable regardless of the researcher.
C. Prediction Models
After the critical criteria were identified from the comparative analysis between the filtered
and interviewed groups in the screening stage and between the rejected and selected subgroups in
the final stage, we used them to predict the screening and selection results of the additional sets of
startups to explore whether the critical criteria in the respective decision stages were more
explanatory in terms of rejection than acceptance decisions, or the opposite. To do so, we
incorporated the critical criteria as predictor variables in a stepwise regression procedure to build
the regression model that achieves the highest predictability on the screening or selection results of
respective stages. The stepwise regression procedure adds candidate predictor variables to, or
removes them from, the trial regression model step by step to improve the model's statistical fit,
i.e., R2. The resulting most predictive regression model might therefore include only a subset, not
all, of the candidate predictor variables.
In each decision stage, we used a binary dependent variable to indicate the screening or
selection result in the logistic regression analyses.
• Profile screening stage: dependent variable is 1 if the startup passed screening and was
invited to the interview or 0 if the startup was rejected.
• Final selection stage: dependent variable is 1 if the startup passed the interview and was
successfully admitted into the accelerator or 0 if the startup was rejected.
Additional information about the startups, such as a website, social media link and founder’s
photos, was also collected via the online application system and was visible to accelerator
managers. Such information is extrinsic to the people, products, operations, markets, strategies and
business of the startups but might influence the perceptions of accelerator managers and thus their
decisions. To account for the effects of such extrinsic factors, we incorporated the following binary
control variables in the regression analysis. These variables can be assessed using the information
collected in the application system.

Footnote 6: We also tested using more than three rating levels and found that it was challenging for the raters to achieve a high Kappa ratio. Rating on three levels (1, 0.5 and 0) was the most practical way to ensure high inter-rater reliability.
• Website: Variable is 1 (or 0 otherwise) if the startup provides a working company website
address in the application.
• Social media: Variable is 1 (or 0 otherwise) if the startup provides a working social media (e.g.,
Facebook, Twitter) link in the application.
• Media: Variable is 1 (or 0 otherwise) if the startup provides an introduction video in the
application.
• Profile picture: Variable is 1 (or 0, otherwise) if the co-founders of the startup upload their
profile pictures in the application.
• Location: Variable is 1 (or 0 otherwise) if the startup’s headquarters is in Singapore.
• Recommend: Variable is 1 (or 0 otherwise) if the startup identifies an internal referee from the
accelerator.
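For illustration, the six binary controls could be derived from an application record roughly as follows. This is a hypothetical sketch; the field names (e.g., website_url, internal_referee) are placeholders, not the actual JFDI application schema:

```python
def control_vector(app):
    """Binary extrinsic control variables for one application record.

    The dictionary keys mirror the six controls described above; the
    record field names are hypothetical placeholders.
    """
    return {
        "website": int(bool(app.get("website_url"))),
        "social_media": int(bool(app.get("social_media_link"))),
        "media": int(bool(app.get("intro_video"))),
        "profile_picture": int(bool(app.get("founder_photos"))),
        "location": int(app.get("hq_country") == "Singapore"),
        "recommend": int(bool(app.get("internal_referee"))),
    }

example = {"website_url": "http://example.com", "hq_country": "Singapore"}
print(control_vector(example))
# website and location are 1; the other four controls are 0
```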
During the stepwise regression, although the candidate predictor variables (i.e., the critical
criteria) were removed or added in a stepwise manner, the control variables above were always
included in all intermediate regression models in the search for the best model. The baseline model
in the stepwise regression included only the control variables. Such regression models use the
critical criteria as well as the control variables to explain the screening and selection outcomes.
After the most predictive regression models were built from the stepwise regression procedure, we
further used them to “predict” the successes or failures of additional samples of startups in the
respective stages and compared the predicted results with the actual results to uncover accelerator
managers' subconscious decision preferences or rationales behind the critical criteria identified in
different decision stages.
Specifically, if the model using the critical criteria as predictor variables was more predictive
of failures than of successes, then the critical criteria were more likely reasons to reject than to
choose the startups; that is, the managers were more likely rejecting startups because of weaknesses
in these criteria. Conversely, if the model was more predictive of successes than of failures, then
the critical criteria were more likely reasons to choose than to reject; that is, the managers were
more likely choosing startups for their strengths in these criteria.
IV. RESULTS
A. Critical Criteria
1) Initial screening
We first compare the groups of startups that are “filtered” and “interviewed” in the initial
screening stage. The mean ratings of all 30 criteria for the “Filtered” and “Interviewed” groups are
reported in Table 6. Because the ratings are not normally distributed, a nonparametric Wilcoxon
rank-sum test is performed to compare the two groups. For criteria Q1, Q2, Q3, Q6, Q7, Q8, Q12, and
Q21, the “Interviewed” group presents a much higher mean rating than the “Filtered” group, and the
differences are statistically significant based on nonparametric tests. These 8 criteria were critical in
the initial screening stage.
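The group comparison above can be illustrated with the Wilcoxon rank-sum (Mann-Whitney U) statistic, using midranks for tied 0 / 0.5 / 1 ratings. A minimal sketch on made-up ratings, not our data (the statistic only, omitting the p-value computation):

```python
def rank_sum_u(group_a, group_b):
    """Mann-Whitney U statistic (equivalent to the Wilcoxon rank-sum test)
    for two independent samples, using midranks for tied values."""
    combined = sorted(group_a + group_b)
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        ranks[combined[i]] = (i + 1 + j) / 2.0   # average of 1-based ranks i+1 .. j
        i = j
    r_a = sum(ranks[v] for v in group_a)
    n_a = len(group_a)
    return r_a - n_a * (n_a + 1) / 2.0   # U for group_a; max is n_a * len(group_b)

# Illustrative 0 / 0.5 / 1 ratings on one criterion for two groups
interviewed = [1, 1, 0.5, 1]
filtered = [0, 0.5, 0, 0]
print(rank_sum_u(interviewed, filtered))   # 15.5 of a maximum of 16: the groups differ strongly
```

A U near the maximum (the product of the two group sizes) indicates that the "interviewed" ratings almost always outrank the "filtered" ones.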
Among the criteria that make significant differences, Q1, Q2 and Q3 are in the “Real” main
category and “Market Attractiveness” subcategory. Q1, “demand validation,” asks whether there is
any demand for the startup’s product. Q2, “customer affordability,” asks whether customers can
afford to buy the product. Q3, “market demographics,” relates to the size and growth potential of
the targeted market. Q1, Q2 and Q3 together indicate whether the startup has presented evidence of
the real existence of potential customers and markets.
Q6, Q7 and Q8 are in the “Real” category and “Product Feasibility” subcategory. Q6,
“concept maturity,” asks whether the concept has enough details and development to allow it to
evolve into a real product. Q7, “sales and distribution,” asks whether existing sales and distribution
channels have been established. Q8, “product maturity,” addresses the stage of the product’s
development. Q6, Q7 and Q8 together indicate that startups need to present evidence that their
products can be realistically made, sold, and distributed.
Q12, “value proposition,” is in the “Win” category and “Product Advantage” subcategory. It
focuses on the benefits that the product can provide to customers. Q21, “technology expertise,”
belongs to the “Win” category and the “Team Competence” subcategory. It considers the startup’s
technical ability to develop the product. The criticality of Q12 and Q21 in the “Win” category
suggests that startups need to present evidence of the competitiveness of their products and their
own relevant technical capabilities. Notably, other criteria related to team competencies in
marketing, sales, management and finance within the “Win” category appear insignificant, implying
that accelerator managers have focused more on technology expertise than on non-technical team
competencies in initial screening. Taken together, these eight critical criteria in the screening stage
are in the “Real” and “Win” categories, and none of them falls into the “Worth” category.
2) Final selection
Next, we focus on those startups that passed initial screening and compare the mean ratings
of the subgroups that are “successful” and “unsuccessful” in being eventually selected into the
accelerator program. The mean rating differences of the 30 criteria between the two subgroups are
reported in Table 6 with the corresponding nonparametric Wilcoxon test statistics. For Q13, Q24, Q25 and
Q29, the successful subgroup of startups presents a much higher average rating than the
unsuccessful subgroup, with statistical significance based on Wilcoxon tests. These four criteria are
critical in the final selection stage. None of them was found critical in the initial screening stage.
Q13, “sustainable advantage,” is in the “Win” main category and “Product Advantage”
subcategory. It concerns whether the startup has unique assets or capabilities to sustain its
advantages. Q24 and Q25 are in the “Win” main category and “Team Competence” subcategory.
Q24, “prior startup experience,” addresses the relevant experience of the entrepreneurs. Q25,
“feedback mechanism,” addresses whether the startup has an adequate mechanism to consistently
listen to customers and respond to the market. The criticality of Q13, Q24 and Q25 in the “Win”
category suggests again that startups need to present evidence of the competitiveness of their
products and their teams’ ability to defeat the competition and sustain the business. Finally, Q29,
“growth strategy,” is in the “Worth” main category and “Growth Potential” subcategory. It asks
whether the startup presents viable strategies for long-term growth. Its criticality suggests the
importance of presenting information about how the startup is prepared for long-term growth.
Notably, none of the four critical criteria in the second stage falls into the “Real” category.
In the later stage of the selection process, accelerator managers subconsciously shifted their focus
from “Real” to “Worth” criteria, and specifically shifted from assessing how real the product and
market are based on a short-term perspective to assessing the potential of the people and strategy
based on a long-term perspective.
B. Prediction Models
Now, we further explore the possible shifts in accelerator managers’ decision rationales
behind the shifting decision criteria across stages. We first use stepwise regression to sift the
identified critical criteria as candidate predictor variables to identify the most predictive regression
model regarding the screening or selection result of each stage and then apply the prediction model
to “predict” the successes or failures of an additional sample of startups in each stage. On this basis,
we compare the predicted results with the actual results in each stage.
1) Initial screening
We first incorporate the eight critical criteria in the initial screening stage together with all
the control variables in a stepwise regression procedure to explore the regression model that is the
most predictive of the results of the screening stage.7 The critical criteria are sifted in the stepwise
regression procedure to maximize the R2 of the regression model. The resulting regression model is
reported in Table 7. This model has an R2 of 0.6307 and includes Q1, Q3, Q6, Q7 and Q21 as
predictors, all of which have statistically significant effects on the screening results (as evidenced
by the small p-values for their coefficients). In other words, this model with just a subset of five
critical criteria achieves a higher R2 than the regression model that includes all eight critical criteria
identified from the pairwise comparison between the “Filtered” and “Interviewed” groups. This
model is also significantly more predictive than the baseline model, which includes only the control
variables and has an R2 of 0.3898. Among the control variables, Table 7 also shows that a working
website and a reference from inside the accelerator significantly increase the chance of passing
profile screening.

Footnote 7: Our pairwise correlation analysis shows that these critical criteria are only weakly correlated with one another or with the control variables, thus supporting their incorporation as independent variables in the regression analysis.
We further apply the model that was optimized for the initial screening stage to “predict” the
results of an additional set of 50 startups randomly sampled from the “Filtered” and “Interviewed”
groups (25 from each). These 50 startups were independent from those used in the stepwise
regression analysis that built the prediction model. The results are presented in Table 8. For both the
“Interviewed” and “Filtered” groups, prediction accuracies are higher than 65%. For the 18 startups
that the model predicted as “Filtered,” the prediction accuracy is 78%, which is much higher than
the accuracy of 66% for the predicted “Interviewed” group.
Therefore, the regression model in the initial screening stage is more predictive of failures
than of successes, and the critical criteria in the model are more explanatory of the reasons to reject
than to accept. This result implies that accelerator managers were more likely to be “rejecting”
startups because of their weaknesses in the critical criteria in the screening stage. This result is
aligned with prior studies on the decision process of business angels [20]. Following the
argument of Shafir et al. [16] that humans look for weaknesses when rejecting, entrepreneurs should
avoid weaknesses in the critical criteria identified here to reduce the likelihood of being screened
out. However, given the limited prediction accuracies (66%~78%) of the model, such conclusions
should be drawn with caution.
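The per-group accuracies reported above are, in effect, the precision within each predicted class. A minimal sketch with illustrative labels (not the actual Table 8 data):

```python
def accuracy_by_predicted_class(predicted, actual):
    """For each predicted class c, the fraction of startups predicted as c
    that actually belong to c (precision within the predicted group)."""
    result = {}
    for c in sorted(set(predicted)):
        idx = [i for i, p in enumerate(predicted) if p == c]
        result[c] = sum(actual[i] == c for i in idx) / len(idx)
    return result

# Illustrative: 0 = filtered (rejected), 1 = interviewed (accepted)
predicted = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
actual = [0, 0, 0, 1, 1, 1, 1, 1, 0, 0]
print(accuracy_by_predicted_class(predicted, actual))
# predicted-0 accuracy is 0.75 vs predicted-1 accuracy of about 0.67
```

When the accuracy for the predicted rejections exceeds that for the predicted acceptances, as in the initial screening stage, the model is more reliable at explaining why startups were filtered out.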
2) Final selection
For the final selection stage, we incorporate the four critical criteria with the control
variables in stepwise regression to explore the most predictive model on the results of the final
selection stage.8 The critical criteria are sifted in the stepwise regression procedure to maximize the
R2 of the regression model. As a result, the model that has the largest R2 (0.5771) incorporates all
four critical criteria (Q13, Q24, Q25 and Q29), all of which have a statistically significant effect on
the selection result (as evidenced by the small p-values for their coefficients). The model is also
reported in Table 7. This model’s predictive power is also much higher than the baseline model that
only includes the control variables and has an R2 of 0.1383.
Again, we applied the prediction model optimized for the final selection stage to an
additional set of 50 randomly selected startup profiles from the “Interviewed and Successful” and
“Interviewed but Unsuccessful” subgroups (25 from each). The results are shown in Table 8. For
both subgroups, the prediction accuracies are higher than 60%. Of the 18 startups that the model
predicted as accepted, the prediction accuracy is 72%, which is much higher than the accuracy of
63% for the subgroup predicted as rejected.
Therefore, the regression model in the final selection stage is more predictive of successes
than of failures in the final selection stage, and the critical criteria incorporated in the model are
more explanatory of the reasons to accept than to reject. This result suggests that accelerator
managers are more likely to be making “choosing” decisions than “rejecting” decisions in the final
selection stage, and the startups that present advantages in the identified critical criteria are more
likely to be chosen. Following Shafir et al.'s [16] argument that humans normally weigh advantages
when choosing, entrepreneurs should develop strengths in, and present relevant information about,
the critical criteria identified in the final selection stage. However, given the limited prediction
accuracies (63%~72%) of the model, one cannot draw a firm conclusion.
V. DISCUSSION
The analyses above have unveiled a parsimonious set of decision criteria that shift between the
initial screening and final selection stages in accelerator managers' startup selection process.
Specifically, demand validation, customer affordability, market demographics, concept maturity,
sales and distribution, product maturity, value proposition, and technology expertise were critical
when screening the large number of startups in the initial stage. Sustainable advantage, prior startup
experience, feedback mechanism and growth strategy were critical when selecting the small number
of startups in the final stage.

Footnote 8: Our pairwise correlation analysis of these critical criteria and the control variables finds only weak correlations, which supports the incorporation of these critical criteria as independent variables in the regression model.
These specific shifting critical criteria across decision stages in the accelerator context differ
from those previously reported in studies of incubators and angel investors. For instance, Merrifield
[18] suggested that incubators’ evaluation criteria shift from the business opportunity to the
entrepreneurs, management and operations during the decision-making process. Mitteness et al.
[19] found that angel investors focus more on evaluating the entrepreneurs in the screening stage
and then on the business opportunity at the later stage. In our findings about the accelerator, some
of the critical criteria in initial screening, such as demand validation, market demographics and
concept maturity, are related to the business opportunity, and two criteria regarding team
competence (prior startup experiences and feedback mechanism) are critical for the final stage.
Such differences might result from differences in the incentives and nature of accelerators relative
to traditional incubators and angel investors.
Additionally, we found a shift in critical criteria across the Real-Win-Worth categories—
specifically, from Real and Win criteria in the initial screening stage to Win and Worth criteria in
the final selection stage. In the screening stage, no criterion regarding “is it worth doing” is found to
be critical, whereas no criterion regarding “is it real” makes a critical difference in the final
selection. At the same time, there is also a shift of criteria from the technical to non-technical
capabilities of the entrepreneurs in the “can it win” category. Therefore, the Real-Win-Worth
framework provides additional insights into how real, how competitive and how worthwhile a
startup is, compared to the prior startup evaluation frameworks (see section 2.2), and has allowed us
to identify a different shift heuristic in the decision-making process.
This shift of critical criteria might be a result of the managers’ decision rationale change
from “rejecting” in the screening stage because of the need to trim a large number of startups to
“accepting” a small number of startups in the final selection stage. In other words, the criteria
critical for rejections versus acceptances are different. We found preliminary evidence in this regard
by applying the best regression models using the critical criteria as predictor variables to predict the
screening and selection results of an additional set of startups in respective stages. Such a shift in
implicit decision rationales of accelerator managers might be their natural response to the large
number of applicants and their limited time and cognitive capacity to make choices.
Note that in the second stage, accelerator managers may gain additional non-written
information via the interviews that was not in the online application data but that might be related to
additional criteria that are critical for the decision. This suggests that by analyzing only the
application profile data, we might miss some critical criteria for final selection decisions. Moreover,
the critical criteria identified in the initial screening stage might still be critical for the final
selection, but the differences in such criteria are no longer sufficient to distinguish among the
startups that passed the screening stage. Therefore, the four critical criteria are likely only a subset
of all the critical criteria for the final selection stage.
It is also noteworthy that the JFDI managers did not purposefully or explicitly prioritize a
parsimonious set of critical criteria in their decision process. The criticality of these criteria
emerged from collective human behaviors and has been uncovered by our empirical analysis of the
profiles of the startups that have been selected or rejected by the managers, rather than by surveying
or interviewing the managers. As suggested by the psychological studies of decision making [16],
decision makers often do not make a decision with clearly ranked preferences because of the
complexity of choices but instead determine the preferences as a result of having to decide. Many
venture capitalists also do not understand their own decision rationales and biases [45]. This seems
to be true in the case of JFDI and its competitive startup selection process. Our data-driven
identification of critical decision criteria may inform accelerator managers of their own
subconscious decision preferences, rationales and biases.
We presented our results and findings to the JFDI managers, who provided the data and
context for this research. One JFDI manager made the following comment –
“The findings of the paper are insightful in the sense that it would help us to be more conscious
about the shift in key factors at different stage of the selection. This realization would help us think
about how we could improve the efficiency of our selection process. In addition, this paper would
help first-time founders understand what kind of business idea is worth doing and reject the weak
ideas as quickly as possible to conserve resources.”
VI. CONCLUSION
To summarize, our analyses have identified a small number of accelerator managers' implicit
decision criteria and a heuristic shift of these criteria across the initial screening and final
selection stages in the decision-making process. According to the Real-Win-Worth framework,
eight Real or Win criteria (i.e., how real and competitive the product is) were critical in the initial
screening decisions of a large number of startups, and another four Win or Worth criteria (i.e., the
competitiveness and potential of the people and strategy) were critical in the final selection
decisions of a small number of startups. Using the identified critical criteria to predict the results of
additional startups, we provide preliminary evidence that the critical criteria in the initial stage
are more explanatory of “rejection” decisions, whereas the critical criteria in the final stage are
more explanatory of “selection” decisions.
This research has contributed to the growing literature on the accelerator phenomenon [10-
13, 22, 23, 49] by developing a nuanced understanding of the shifting decision criteria across stages
in the startup selection process. Our research also extends the earlier studies of the investment
decision-making of angel investors [17, 20, 45, 50, 51] by not only showing the shift heuristic but
also identifying the specific shift from Real and Win criteria in the initial screening stage to Win
and Worth criteria in the final selection stage, based on the Real-Win-Worth framework. Therefore,
we believe our findings have made a theoretical contribution to decision making in the
entrepreneurial process, particularly in the new accelerator context.
For practice, our findings may help accelerator managers be more conscious of their own
subconscious preferences, rationales and biases and thus improve the decision process.
Understanding their own implicit and shifting decision criteria across stages could potentially be
useful in refining the web-based application data collection system and developing data analytics
(e.g., using prediction models) to make more informed decisions. Meanwhile, these findings may
also help entrepreneurs be more empathetic toward accelerator managers and guide them to better
align their businesses with the critical criteria.
Our findings and contributions are grounded in a unique dataset. Our startups’ profile data
were not generated for this research but were rather submitted by the startups themselves to the
JFDI accelerator for the competition into the accelerator program. Our dataset allowed us to take a
data-driven approach to empirically identify the shifting decision criteria in the accelerator decision
process and reveal the subconscious decision preferences, rationales and biases of accelerator
managers. Therefore, our research complements the majority of the prior research that was based on
interviews or surveys with incubator managers and sought to identify the selection criteria from
their opinions and recollections.
A few limitations are worth mentioning. First, the startup profiles used in this study were
obtained from a Singapore-based accelerator that specializes in software and mobile applications.
Thus, the results might not be directly applicable to accelerators in other regions or industries. This
suggests a future research opportunity to develop a contingent understanding related to the traits,
processes and performances of accelerators in different geographic, industry and socio-economic
contexts. Second, the utility of the prediction models for the different stages is limited by their low
accuracy (below 80%); our analysis using these regression models can only be considered
preliminary. We hope that the preliminary prediction models here can be viewed as an invitation for
more comprehensive and powerful data-driven prediction models. Third, accelerator managers may
gain additional non-written information via the interviews that could be critical for their decisions.
Thus, it is possible that by analyzing only the startup profile data, we have overlooked some critical
criteria in the second stage, and the four critical criteria we identified are likely to be a subset.
Future research may involve videotaping such interviews and conducting a verbal protocol analysis
to interpret behaviors and information exchanges during the interviews, as previously done by
Maxwell et al. [20].
Moreover, a natural future direction for accelerator research would involve exploring the
implications of different startup selection processes and criteria as well as the aggregate
characteristics of the startup application pool on the performance of both startups and accelerators.
This approach would require the collection of performance data at the accelerator level, such as the
accelerated startups’ returns on investment. For example, given the importance of the quantity and
quality of the startups for the future success of an accelerator, it will be interesting to investigate the
effects of the size and heterogeneity of the pool of applicant startups on the later performances of
accelerators. In general, the accelerator phenomenon represents an interesting avenue for further
research and understanding to aid startups in overcoming their challenges during the infancy period
of the entrepreneurship process.
REFERENCES
[1] D. A. Shepherd, E. J. Douglas, and M. Shanley, "New venture survival: Ignorance, external
shocks, and risk reduction strategies," Journal of Business Venturing, vol. 15, pp. 393-410,
2000.
[2] E. Ries, "The lean startup," New York: Crown Business, 2011.
[3] H. Aldrich, Organizations Evolving: Sage, 1999.
[4] K. Chan and T. Lau, "Assessing technology incubator programs in the science park: the
good, the bad and the ugly," Technovation, vol. 25, pp. 1215-1228, 2005.
[5] M. Erlewine and E. Gerl, A comprehensive guide to business incubation: National Business
Incubation Association, 2004.
[6] S. M. Hackett and D. M. Dilts, "A real options-driven theory of business incubation," The
Journal of Technology Transfer, vol. 29, pp. 41-54, 2004.
[7] K. Aerts, P. Matthyssens, and K. Vandenbempt, "Critical role and screening practices of
European business incubators," Technovation, vol. 27, pp. 254-267, 2007.
[8] A. S. Amezcua, Boon or Boondoggle? Business incubation as entrepreneurship policy:
Syracuse University, 2010.
[9] R. Aernoudt, "Incubators: tool for entrepreneurship?," Small Business Economics, vol. 23,
pp. 127-135, 2004.
[10] S. Cohen and Y. V. Hochberg, "Accelerating startups: The seed accelerator phenomenon,"
2014.
[11] B. L. Hallen, C. B. Bingham, and S. Cohen, "Do Accelerators Accelerate? A Study of
Venture Accelerators as a Path to Success?" in Academy of Management Proceedings, 2014,
p. 12955.
[12] S. W. Smith and T. J. Hannigan, "Swinging for the fences: How do top accelerators impact
the trajectories of new ventures," paper presented at the DRUID Conference, Rome, Italy, 2015.
[13] J.-H. Kim and L. Wagman, "Portfolio size and information disclosure: An analysis of
startup accelerators," Journal of Corporate Finance, vol. 29, pp. 520-534, 2014.
[14] M. T. Hansen, H. W. Chesbrough, N. Nohria, and D. N. Sull, "Networked incubators,"
Harvard Business Review, vol. 78, pp. 74-84, 2000.
[15] L. Rothschild and A. Darr, "Technological incubators and the social construction of
innovation networks: an Israeli case study," Technovation, vol. 25, pp. 59-67, 2005.
[16] E. Shafir, I. Simonson, and A. Tversky, "Reason-based choice," Cognition, vol. 49, pp. 11-
36, 1993.
[17] S. A. Jeffrey, M. Lévesque, and A. L. Maxwell, "The non-compensatory relationship
between risk and return in business angel investment decision making," Venture Capital,
vol. 18, pp. 189-209, 2016.
[18] D. B. Merrifield, "New business incubators," Journal of Business Venturing, vol. 2, pp. 277-
284, 1987.
[19] C. R. Mitteness, M. S. Baucus, and R. Sudek, "Horse vs. jockey? How stage of funding
process and industry experience affect the evaluations of angel investors," Venture Capital,
vol. 14, pp. 241-267, 2012.
[20] A. L. Maxwell, S. A. Jeffrey, and M. Lévesque, "Business angel early stage decision
making," Journal of Business Venturing, vol. 26, pp. 212-225, 2011.
[21] G. S. Day, "Is it real? Can we win? Is it worth doing?," Harvard Business Review, vol. 85,
pp. 110-120, 2007.
[22] P. Miller and K. Bound, The Startup Factories: The Rise of Accelerator Programmes to
Support New Technology Ventures: NESTA, 2011.
[23] D. A. Isabelle, "Key factors affecting a technology entrepreneur's choice of incubator or
accelerator," Technology Innovation Management Review, vol. 3, p. 16, 2013.
[24] M. Schwartz, "Beyond incubation: an analysis of firm survival and exit dynamics in the
post-graduation period," The Journal of Technology Transfer, vol. 34, pp. 403-421, 2009.
[25] R. W. Smilor and M. D. Gill Jr, The New Business Incubator: Linking Talent, Technology,
Capital, and Know-How, 1986.
[26] S. A. Mian, "US university-sponsored technology incubators: an overview of management,
policies and performance," Technovation, vol. 14, pp. 515-528, 1994.
[27] K. Grifantini, "Incubating Innovation: A standard model for nurturing new businesses, the
incubator gains prominence in the world of biotech," IEEE Pulse, vol. 6, p. 27, 2015.
[28] P. H. Phan, D. S. Siegel, and M. Wright, "Science parks and incubators: observations,
synthesis and future research," Journal of Business Venturing, vol. 20, pp. 165-182, 2005.
[29] A. S. Amezcua, M. G. Grimes, S. W. Bradley, and J. Wiklund, "Organizational sponsorship
and founding environments: A contingency view on the survival of business-incubated
firms, 1994–2007," Academy of Management Journal, vol. 56, pp. 1628-1654, 2013.
[30] J. L. Barbero, J. C. Casillas, A. Ramos, and S. Guitar, "Revisiting incubation performance:
How incubator typology affects results," Technological Forecasting and Social Change,
vol. 79, pp. 888-902, 2012.
[31] J. L. Barbero, J. C. Casillas, M. Wright, and A. R. Garcia, "Do different types of incubators
produce different types of innovations?," The Journal of Technology Transfer, vol. 39, pp.
151-168, 2014.
[32] M. Schwartz and C. Hornych, "Specialization as strategy for business incubators: An
assessment of the Central German Multimedia Center," Technovation, vol. 28, pp. 436-449,
2008.
[33] S. Linder, 2002 State of the Business Incubation Industry: NBIA Publications, 2003.
[34] J. R. Lumpkin and R. D. Ireland, "Screening practices of new business incubators: the
evaluation of critical success factors," American Journal of Small Business, vol. 12, pp. 59-
81, 1988.
[35] L. Peters, M. Rice, and M. Sundararajan, "The role of incubators in the entrepreneurial
process," The Journal of Technology Transfer, vol. 29, pp. 83-91, 2004.
[36] R. W. Smilor, "Managing the incubator system: critical success factors to accelerate new
company development," IEEE Transactions on Engineering Management, pp. 146-155, 1987.
[37] S. M. Hackett and D. M. Dilts, "Inside the black box of business incubation: Study B—scale
assessment, model refinement, and incubation outcomes," The Journal of Technology
Transfer, vol. 33, pp. 439-471, 2008.
[38] R. S. Wulung, K. Takahashi, and K. Morikawa, "An interactive multi-objective incubatee
selection model incorporating incubator manager orientation," Operational Research, vol.
14, pp. 409-438, 2014.
[39] A. Bergek and C. Norrman, "Incubator best practice: A framework," Technovation, vol. 28,
pp. 20-28, 2008.
[40] J. Bruneel, T. Ratinho, B. Clarysse, and A. Groen, "The evolution of business incubators:
Comparing demand and supply of business incubation services across different incubator
generations," Technovation, vol. 32, pp. 110-121, 2012.
[41] H. Landström, "Informal investors as entrepreneurs: Decision-making criteria used by
informal investors in their assessment of new investment proposals," Technovation, vol. 18,
pp. 321-333, 1998.
[42] A. Tversky, "Elimination by aspects: A theory of choice," Psychological Review, vol. 79, p.
281, 1972.
[43] E. Shafir, "Choosing versus rejecting: Why some options are both better and worse than
others," Memory & Cognition, vol. 21, pp. 546-556, 1993.
[44] J. W. Payne, J. R. Bettman, and E. J. Johnson, "Adaptive strategy selection in decision
making," Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 14,
p. 534, 1988.
[45] A. L. Zacharakis and G. D. Meyer, "A lack of insight: do venture capitalists really
understand their own decision process?," Journal of Business Venturing, vol. 13, pp. 57-76,
1998.
[46] R. G. Cooper and E. J. Kleinschmidt, "New products: what separates winners from losers?,"
Journal of Product Innovation management, vol. 4, pp. 169-184, 1987.
[47] C. Song, J. Luo, K. Hölttä-Otto, K. Otto, and W. Seering, "The design of crowd-funded
products," in ASME 2015 International Design Engineering Technical Conferences &
Computers and Information in Engineering Conference (IDETC/CIE 2015), Boston,
Massachusetts, 2015.
[48] F. A. Khalid, D. Gilbert, and A. Huq, "Investigating the underlying components in business
incubation process in Malaysian ICT incubators," Asian Journal of Social Sciences and
Humanities, vol. 1, pp. 88-102, 2012.
[49] N. Radojevich-Kelley and D. L. Hoffman, "Analysis of accelerator companies: An
exploratory case study of their programs, processes, and early results," Small Business
Institute Journal, vol. 8, pp. 54-70, 2012.
[50] C. Mason and R. Harrison, "Why 'business angels' say no: a case study of opportunities
rejected by an informal investor syndicate," International Small Business Journal, vol. 14,
pp. 35-51, 1996.
[51] R. Sudek, "Angel investment criteria," Journal of Small Business Strategy, vol. 17, p. 89,
2006.
Figure 1. Groups of startups according to the two decision stages
Table 1. Previously Reported Startup Selection Criteria of Incubators
# | Criteria | Prior Studies
1 | Ability of Job Creation | Smilor [36]; Wulung et al. [38]
2 | Capital Availability | Smilor [36]; Merrifield [18]; Lumpkin and Ireland [34]; Mian [26]; Hackett and Dilts [37]; Khalid et al. [48]
3 | Competitive Advantage | Merrifield [18]; Hackett and Dilts [37]; Khalid et al. [48]
4 | Company Age | Bruneel et al. [40]; Wulung et al. [38]
5 | Company is Locally Owned | Smilor [36]
6 | Company is Startup | Smilor [36]; Mian [26]
7 | Company Size | Lumpkin and Ireland [34]; Aerts et al. [7]
8 | Company Survivability | Wulung et al. [38]
9 | Exit Options | Hackett and Dilts [37]; Khalid et al. [48]
10 | Financials: Liquidity, Price Earnings, Debt, Asset Utilization | Lumpkin and Ireland [34]; Aerts et al. [7]
11 | Growth Potential | Smilor [36]; Merrifield [18]; Lumpkin and Ireland [34]; Mian [26]; Aerts et al. [7]; Hackett and Dilts [37]; Bruneel et al. [40]
12 | Team's Age | Lumpkin and Ireland [34]; Aerts et al. [7]
13 | Team's Gender | Aerts et al. [7]
14 | Team's Finance Expertise | Lumpkin and Ireland [34]; Aerts et al. [7]; Bergek and Norrman [39]
15 | Team's Management Expertise | Merrifield [18]; Hackett and Dilts [37]; Khalid et al. [48]
16 | Team's Marketing Expertise | Lumpkin and Ireland [34]; Aerts et al. [7]; Bergek and Norrman [39]
17 | Team's Persistence | Lumpkin and Ireland [34]; Aerts et al. [7]
18 | Team's Prior Startup Experience | Bergek and Norrman [39]; Hackett and Dilts [37]; Bruneel et al. [40]; Khalid et al. [48]
19 | Team's Technical Expertise | Merrifield [18]; Lumpkin and Ireland [34]; Hackett and Dilts [6]; Aerts et al. [7]; Bergek and Norrman [39]; Khalid et al. [48]
20 | Manufacturing Competence | Merrifield [18]; Mian [26]
21 | Market Size and Growth | Hackett and Dilts [6]; Aerts et al. [7]; Bergek and Norrman [39]; Hackett and Dilts [37]; Khalid et al. [48]
22 | Marketing & Distribution | Merrifield [18]; Lumpkin and Ireland [34]; Aerts et al. [7]
23 | Supply Chain Availability | Merrifield [18]
24 | Patent Protection | Hackett and Dilts [37]; Khalid et al. [48]
25 | Political and Social Constraints | Merrifield [18]
26 | Profitability | Merrifield [18]; Lumpkin and Ireland [34]; Hackett and Dilts [6]; Aerts et al. [7]; Bergek and Norrman [39]; Hackett and Dilts [37]; Khalid et al. [48]; Wulung et al. [38]
27 | Reference from Others | Lumpkin and Ireland [34]; Aerts et al. [7]
28 | Risk Distribution | Merrifield [18]
29 | Technology Related | Smilor [36]; Mian [26]; Bruneel et al. [40]
30 | Unique Opportunity | Smilor [36]; Lumpkin and Ireland [34]; Hackett and Dilts [6]; Aerts et al. [7]; Hackett and Dilts [37]; Bruneel et al. [40]; Khalid et al. [48]
31 | Written Business Plan | Smilor [36]; Lumpkin and Ireland [34]; Mian [26]; Aerts et al. [7]
Table 2. JFDI Online Questionnaire
General
1) The JFDI.2015A application is now live. We ask the founder/co-founders to fill out the form (no need to ask employees, contractors, or
advisors). You can invite your other co-founders via the invite button on this form.
2) Tell us about your idea, in 140 characters or less.
Founding Team
3) How many founders are there?
4) How long have all founders (not including employees) worked together as a team?
5) What have you as a team already achieved together? It doesn't have to be examples from this particular project/startup.
6) Please record a short video (2 minutes) where all founders introduce themselves and explain why you are building a team and startup together.
7) Can all founders attend and be physically present throughout the JFDI.2015A program + at least 1 month post program (early April to early
August 2015) to meet with investors?
8) If you, as a co-founder, cannot attend the program in person through the entire duration of the program (early April to early August 2015),
please explain.
9) Which city makes sense for the startup to physically be set up in, after the program?
Your Startup
10) Are you already incorporated?
11) What date did you start this company?
12) What is the total amount of cash invested in this startup to date?
13) How is equity divided amongst founders and team? If you have other shareholders or employee option pool, please list the details.
14) How many person-months has the team (including founders, employees, contractors) worked on this startup?
15) How many full-time employees are there on your team?
16) How many full-time developers/engineers are there on the team?
17) Please provide any GitHub URLs for technical employees and LinkedIn URLs for business side employees
Individual Founders
18) How did you meet your co-founders?
19) Which of the following skills do you personally have?
• Hacker: I can build software. I will write code for this startup
• Hipster: I can do web and graphic design. I will build the UI / UX
• Hustler: I can sell to customers and talk to investors. I will do biz dev.
20) What is your role in this business?
21) How many years in your career have you led the development of new products, services, or technologies?
22) How much work experience do you have – at a "real company", not a startup?
23) What is your experience with startups?
24) Describe your work experience, include your GitHub or LinkedIn URLs.
25) What is the approximate monthly salary in Singapore Dollar you would expect working for a medium sized tech company where you live?
26) Tell us all educational milestones you have reached.
27) Have you made any unusual lifestyle choices? Tell us about your strange food choices, weird hobbies, or bizarre behaviors which mainstream
humans just don't get. Or if you have something impressive you have personally built or achieved, please share links or stories with us.
28) If you own more than 5% of any other business, whether incorporated, a partnership, or family business, please describe your relationship with
that other business.
29) Do you have any commitments, for example, a job offer, military service, or study that will prevent you from giving 100% commitment to this
business over the next two years?
Customers
30) Who are you selling to/do you plan to sell to in the next year?
31) Explain how you intend to (or already do) find customers?
Product
32) Please provide a 1-minute video demo of your product. Please *only* post a video demo of your product or prototype. Videos longer than 1
minute will not be viewed
33) What is the URL for your website/demo/mockup etc.?
34) What kinds of products are you selling/do you plan to sell to your customers?
35) What kind of traction milestone does your product enjoy?
36) Please describe in details what evidence and metrics you have to support the traction milestone you picked above.
37) What is your next traction milestone for this business and what are the steps you need to take to reach it?
38) What monetization models are you using/do you plan to test during the program and beyond?
39) Please connect your stats tracking account(s) to help us understand your product or service usage
Market & Domain
40) Why did your team choose this particular idea to work on?
41) Who are your competitors? What differentiates you? Include URLs
42) What is different/interesting/new about your business?
43) Imagine we sent you and your team back in time. Could your idea have been successful five years ago? Please explain why.
Financials
44) What is the current monthly cash required to pay all founders, employees and expenses (gross burn) in your home country?
45) How much total revenue has your startup had in its lifetime?
46) How much revenue has your startup had in the last month?
47) Do you plan to raise money in the future? If so how much and when will it make sense?
JFDI
48) Name a JFDI alumni/mentor that you know and any notable mentors, investors or advisors that you want to tell us about.
Table 3. Screening Questions Addressing Potential Critical Criteria in Startup Selection
Real: Market Attractiveness
Q01 Demand Validation | Is there voice-of-customer type evidence or demand validation?
Q02 Customer Affordability | Is there evidence that customers can afford to buy the product?
Q03 Market Demographics | Is there a market size and demographic analysis?
Q04 Benefit Understanding | Is there evidence that customers understand the product's benefits?
Q05 Subjective Constraint | Is there a subjective barrier that constrains the customer?
Real: Product Feasibility
Q06 Concept Maturity | Is there evidence that the concept can be realized as a product?
Q07 Sales & Distribution | Is there evidence of existing sales and distribution channels?
Q08 Product Maturity | Is there evidence of the functional feasibility of the product?
Q09 Manufacturability | Is there evidence of manufacturability with efficiency and low cost?
Q10 Clarified Tradeoffs | Is there clarification of trade-offs in performance, cost, etc.?
Win: Product Advantage
Q11 Competition Validation | Is there validation of the product's competitiveness in the market?
Q12 Value Proposition | Is there evidence of tangible or intangible benefits for customers?
Q13 Sustainable Advantage | Is there evidence of advantages not easily available to competitors?
Q14 Patent Strategy | Is there a patent strategy for existing or circumventing patents?
Q15 Patent Protection | Is there capability to maintain and protect patents?
Q16 Competitor Response | Is there an evaluation of potential competitor responses?
Q17 Competition Strategy | Is there a strategy prepared for competition?
Q18 Marketing Effort | Is there evidence of marketing efforts to enhance customer perception?
Win: Team Competence
Q19 Team Size | Is there adequate manpower in the startup?
Q20 Marketing/Sales Expertise | Is there marketing/sales experience in the startup team?
Q21 Technology Expertise | Is there a product development skill set in the startup team?
Q22 Management Expertise | Is there management experience in the startup team?
Q23 Financial Expertise | Is there a financial skill set in the startup team?
Q24 Prior Startup Experience | Is there prior entrepreneurship experience in the startup team?
Q25 Feedback Mechanism | Is there a team mechanism to listen and respond to customers?
Worth: Expected Return
Q26 Profitability | Is there evidence of adequate profitability?
Q27 Risk Assessment | Is there evidence of risk assessment?
Q28 Risk Mitigation | Is there evidence of risk mitigation measures?
Worth: Growth Potential
Q29 Growth Strategy | Is there evidence of strategies and potential for future growth?
Q30 Capital Availability | Is there evidence of adequate capital for growth?
Table 4. Mapping Criteria in the Scoreboard to References
Smilor
[36]
Merrifield
[18]
Lumpkin and
Ireland [34]
Mian
[26]
Hackett and
Dilts [6]
Aerts et al.
[7]
Bergek and
Norrman [39]
Hackett and
Dilts [37]
Bruneel et al.
[40]
Khalid et al.
[48]
Wulung et al.
[38]
Independent Variables (Potential Critical Criteria)
Real
Market
Attractiveness
Q01 Demand Validation
X
X
Q02 Customer Affordability
X
X
Q03 Market Demographics
X
X
X
X
X
Q04 Benefit Understanding
Q05 Subjective Constraint
X
Product
Feasibility
Q06 Concept Maturity
Q07 Sales & Distribution
X
X
Q08 Product Maturity
X
Q09 Manufacturability
X
X
Q10 Clarified Tradeoffs
Win
Product
Advantage
Q11 Competition Validation
X
X
X
Q12 Value Proposition
X
X
X
X
X
X
X
X
Q13 Sustainable Advantage
X
X
X
Q14 Patent Strategy
X
X
Q15 Patent Protection
X
X
Q16 Competitor Response
X
X
X
Q17 Competition Strategy
X
X
X
Q18 Marketing Effort
X
X
Team
Competence
Q19 Team Size
X
X
Q20 Marketing/Sales Expertise
X
X
X
X
Q21 Technology Expertise
X
X
X
X
X
X
Q22 Management Expertise
X
X
X
Q23 Financial Expertise
X
X
X
Q24 Prior Startup Experience
X
X
X
X
Q25 Feedback Mechanism
Worth
Expected
Return
Q26 Profitability
X
X
X
X
X
X
X
X
Q27 Risk Assessment
X
Q28 Risk Mitigation
X
Growth
Potential
Q29 Growth Strategy
X
X
X
X
X
X
X
Q30 Capital Availability
X
X
X
X
X
X
Control Variables
Website
Social Profile
Media
Profile Pictures
Recommendation
X
X
Location
X
Variables Excluded due to Lack of Relevant Information in Profile Data
Ability to create jobs
X
X
Financial ratios: liquidity, price earnings, debt and asset utilization
X
X
Management team persistence
X
X
Age of the management
X
X
Exit options
X
X
Company age
X
X
Company survivability
X
Management team gender
X
Variables Excluded due to Being Default for All Applicants
Startup status
X
X
Tech-related
X
X
X
Business plan
X
X
X
X
Table 5. Guidance to Answer Q2
Q2. Is there evidence that customers can afford to buy the product?
Guidance
Full: Data that customers are willing to pay, surveys or benchmarking data (table, competing and
complementary data)
Partial: Single customer quote or summary customer statements on price (all our customers we talked to
said xxx)
None: No points if they did not communicate with any customers about price (e.g. everybody wants a low
cost product)
Sample Information Provided by a Startup
The company made over $6,000 in revenue in the last 3 months and enjoys a 100% subscriber growth from July to
August. We currently have over 100 subscribers, who are providing recurring revenue. Our net promoter score is at
100% when we last surveyed 30 customers & current retention rate is at 80%.
Evidence Level: Full
Table 6. Mean Differences of the Criteria in the Initial Screening Stage and Final Selection Stage
Criterion | Critical at Which Stage | Screening Mean Diff. (Continued - Filtered) | Screening Wilcoxon p | Selection Mean Diff. (Accepted - Rejected) | Selection Wilcoxon p
Real
Q1 | Screening | 0.370 | <0.0001 | 0.0125 | 0.9533
Q2 | Screening | 0.337 | <0.0001 | 0.0375 | 0.6673
Q3 | Screening | 0.280 | <0.0001 | 0.025 | 0.8123
Q4 | - | 0.059 | 0.2597 | 0.0875 | 0.2890
Q5 | - | 0.050 | 0.9644 | 0.075 | 0.3674
Q6 | Screening | 0.272 | <0.0001 | 0.0375 | 0.6634
Q7 | Screening | 0.335 | <0.0001 | 0.0625 | 0.5101
Q8 | Screening | 0.257 | <0.0001 | 0.05 | 0.5484
Q9 | - | 0.040 | 0.0556 | 0.1375 | 0.0195
Q10 | - | 0.051 | 0.3767 | 0.0875 | 0.3912
Win
Q11 | - | 0.007 | 0.8868 | 0.025 | 0.8165
Q12 | Screening | 0.230 | <0.0001 | 0.075 | 0.4489
Q13 | Selection | 0.009 | 0.8471 | 0.4125 | <0.0001
Q14 | - | 0.030 | 0.2257 | 0.075 | 0.3717
Q15 | - | 0.060 | 0.1707 | 0.025 | 0.7760
Q16 | - | 0.017 | 0.8528 | 0.05 | 0.6879
Q17 | - | 0.069 | 0.3195 | 0.0375 | 0.7009
Q18 | - | 0.004 | 0.9886 | 0.1125 | 0.2015
Q19 | - | 0.076 | 0.1975 | 0.125 | 0.1818
Q20 | - | 0.042 | 0.4781 | 0.0375 | 0.7349
Q21 | Screening | 0.400 | <0.0001 | 0.0625 | 0.4026
Q22 | - | 0.110 | 0.0724 | 0.05 | 0.4475
Q23 | - | 0.006 | 0.9128 | 0.1 | 0.0955
Q24 | Selection | 0.060 | 0.7607 | 0.375 | <0.0001
Q25 | Selection | 0.004 | 0.9510 | 0.425 | <0.0001
Worth
Q26 | - | 0.015 | 0.8214 | 0.0125 | 0.8817
Q27 | - | 0.021 | 0.7394 | 0.025 | 0.7894
Q28 | - | 0.051 | 0.3954 | 0.025 | 0.7810
Q29 | Selection | 0.015 | 0.5494 | 0.45 | <0.0001
Q30 | - | 0.021 | 0.0650 | 0.0375 | 0.7237
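The group comparisons behind Table 6 pair a mean score difference with a Wilcoxon rank-sum test. A minimal sketch of that computation for one criterion, assuming hypothetical 0/0.5/1 evidence-level scores (the study's actual data are not reproduced here); the U statistic below is what a standard implementation such as scipy.stats.mannwhitneyu would convert to a p-value:

```python
# Sketch of the comparison behind Table 6: mean score difference and a
# Mann-Whitney (Wilcoxon rank-sum) U statistic between continued and
# filtered startups on one criterion. Scores below are hypothetical.

def mann_whitney_u(xs, ys):
    """U statistic for xs vs. ys, assigning average ranks to ties."""
    combined = sorted((v, i) for i, v in enumerate(xs + ys))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1                      # extend the block of tied values
        avg_rank = (i + j) / 2 + 1      # average 1-based rank of the block
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg_rank
        i = j + 1
    r1 = sum(ranks[: len(xs)])          # rank sum of the first sample
    return r1 - len(xs) * (len(xs) + 1) / 2

continued = [1.0, 1.0, 0.5, 1.0, 0.5]   # hypothetical Q1 scores, continued group
filtered_ = [0.0, 0.5, 0.0, 0.0, 0.5]   # hypothetical Q1 scores, filtered group

mean_diff = sum(continued) / len(continued) - sum(filtered_) / len(filtered_)
u = mann_whitney_u(continued, filtered_)
print(round(mean_diff, 3), u)  # prints: 0.6 23.0
```

A large U relative to its maximum (here 25) corresponds to the small Wilcoxon p-values reported for the screening-critical criteria.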
Table 7. The Best Prediction Models
Variable | Screening Coef. | Screening p | Screening VIF | Selection Coef. | Selection p | Selection VIF
Control Variables
Website | 1.7 | <0.0001 | 1.107 | -0.25 | 0.5412 | 1.779
Social profile | 0.05 | 0.8458 | 1.111 | -0.23 | 0.5929 | 1.370
Media | 0.57 | 0.0543 | 1.049 | -0.002 | 0.9951 | 1.086
Profile Pictures | -0.05 | 0.4201 | 1.036 | -0.57 | 0.2215 | 1.074
Location | 0.35 | 0.1711 | 1.100 | 0.49 | 0.2479 | 1.039
Recommend | 1.66 | <0.0001 | 1.100 | 0.65 | 0.1146 | 1.075
Independent Predictor Variables
Q1 Demand Validation | 2.51 | 0.0004 | 1.069 | - | - | -
Q3 Market Demographics | 2.63 | 0.0003 | 1.094 | - | - | -
Q6 Concept Maturity | 1.65 | 0.0124 | 1.078 | - | - | -
Q7 Sales & Distribution | 1.56 | 0.0212 | 1.096 | - | - | -
Q21 Technology Expertise | 1.99 | 0.0075 | 1.114 | - | - | -
Q13 Sustainable Advantage | - | - | - | 2.93 | 0.0130 | 1.381
Q24 Prior Startup Experience | - | - | - | 4.37 | 0.0047 | 1.069
Q25 Feedback Mechanism | - | - | - | 2.07 | 0.0347 | 1.036
Q29 Growth Strategy | - | - | - | 2.08 | 0.0468 | 1.057
Regression Statistics
Log-likelihood | -51 (screening) | -23 (selection)
Wald chi-square (p) | <0.0001 (screening) | <0.0001 (selection)
Pseudo R2 | 0.6307 (screening) | 0.5771 (selection)
Number of Observations | 200 (screening) | 80 (selection)
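The models in Table 7 are logit regressions, so an applicant's predicted outcome is the sum of coefficient-times-feature products pushed through a logistic function. A minimal scoring sketch for the screening-stage model, assuming a hypothetical intercept and hypothetical binary evidence profiles (Table 7 does not report an intercept, so the value below is illustrative only):

```python
# Sketch of scoring an applicant with Table 7's screening-stage logit model.
# Coefficients are taken from Table 7 (significant terms only); the intercept
# and the example applicant profiles are hypothetical.
import math

SCREENING_COEFS = {
    "website": 1.70, "recommend": 1.66,     # significant control variables
    "q1_demand_validation": 2.51,
    "q3_market_demographics": 2.63,
    "q6_concept_maturity": 1.65,
    "q7_sales_distribution": 1.56,
    "q21_technology_expertise": 1.99,
}
INTERCEPT = -6.0  # hypothetical value, for illustration only

def screening_probability(features):
    """Estimated probability that an applicant passes initial screening."""
    z = INTERCEPT + sum(SCREENING_COEFS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

strong = {k: 1.0 for k in SCREENING_COEFS}  # evidence present on every criterion
weak = {k: 0.0 for k in SCREENING_COEFS}    # no evidence on any criterion
p_strong = screening_probability(strong)
p_weak = screening_probability(weak)
print(round(p_strong, 3), round(p_weak, 3))
```

With the positive coefficients of Table 7, adding evidence on any critical criterion can only raise the predicted pass probability, which is what makes the model usable as a classifier at a 0.5 threshold.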
Table 8. Predicted versus Actual Results
Initial Screening Stage
 | Actual "Filtered" | Actual "Interviewed" | Total
Predicted "Filtered" | 14 (77.8%) | 4 (22.2%) | 18
Predicted "Interviewed" | 11 (34.4%) | 21 (65.6%) | 32
Total | 25 | 25 | 50
Final Selection Stage
 | Actual Rejection | Actual Acceptance | Total
Predicted Rejection | 20 (62.5%) | 12 (37.5%) | 32
Predicted Acceptance | 5 (27.8%) | 13 (72.2%) | 18
Total | 25 | 25 | 50
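The percentages in Table 8 follow directly from the confusion-matrix counts. A short check using the initial-screening counts from the table:

```python
# Recomputing Table 8's initial-screening hit rates from its confusion-matrix
# counts: 14 correctly predicted "Filtered" and 21 correctly predicted
# "Interviewed" out of 50 startups.
tn, fp = 14, 4    # predicted "Filtered": actually filtered vs. interviewed
fn, tp = 11, 21   # predicted "Interviewed": actually filtered vs. interviewed

total = tn + fp + fn + tp
accuracy = (tn + tp) / total               # overall hit rate
filtered_hit_rate = tn / (tn + fp)         # 77.8% row in Table 8
interviewed_hit_rate = tp / (fn + tp)      # 65.6% row in Table 8
print(round(accuracy, 3),
      round(filtered_hit_rate, 3),
      round(interviewed_hit_rate, 3))  # prints: 0.7 0.778 0.656
```

The same arithmetic applied to the final-selection counts (20, 12, 5, 13) reproduces that stage's 62.5% and 72.2% row percentages.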