How Do Accelerators Select Startups? Shifting Decision Criteria Across Stages
Bangqi Yin, Jianxi Luo*
Massachusetts Institute of Technology &
Singapore University of Technology and Design
(Version as of December 2017)
Competitive accelerators can help startups overcome resource constraints and the liability of
newness, but they are also highly selective when choosing startups. However, little is known about
how a top accelerator selects startups and what the decision criteria are in that process. Here, we
analyze a unique dataset of the real profiles of the startups that applied to the first seed accelerator
in Southeast Asia and the accelerator’s decisions to uncover the selection process and latent criteria.
We used a scoreboard of 30 criteria based on the Real-Win-Worth (Is it real? Can it win? Is it worth
doing?) framework to compare the profiles of the selected versus the rejected startups. Our analyses
revealed that accelerator managers’ implicit decision criteria shifted from eight Real or Win criteria
in the initial screening of many startups to another four Win or Worth criteria in the final selection
of a small number of startups. These critical criteria were further used to build regression models
that predict screening and selection results. Understanding the shifting decision criteria may inform
accelerator managers of their own subconscious preferences to improve the decision process, and as
a result also increase entrepreneurs’ empathy toward accelerator managers to sharpen their
applications.

Keywords: accelerator, incubator, startups, entrepreneurship, innovation
We thank Kevin Otto, Katja Otto, Chaoyang Song and Aditya Ranjan for their insights and help at
the early stage of this research, as well as the participants at DRUID Asia Conference in 2016 for
their comments and suggestions. We particularly thank JFDI for data access that enabled this
research. SUTD-MIT International Design Centre and SUTD-MIT Dual Masters Programme
provided financial support. We are especially grateful to two anonymous referees and co-editors for
detailed criticism, comments and suggestions that greatly improved the article. The authors alone
are responsible for any errors and oversights.
Seed-stage startups are challenged by both resource constraints and the liability of newness
[1]. Such startups usually lack not only the necessary financial, social and human capital required to
pursue perceived opportunities [2] but also the business experience and the legitimacy to provide
viable products or services [3]. Traditionally, incubators have been used as an instrument to help
startups overcome these challenges during the most vulnerable starting-up period and grow their
legitimacy, competitiveness and maturity [4]. Incubators usually provide a shared working space,
facilities, administrative support, legal services and many networking and mentoring opportunities
with seasoned entrepreneurs, venture capitalists, industry veterans, incubator alumni and peers [5-
Over the past decade, a special type of incubator called an “accelerator” or “seed
accelerator” proliferated rapidly and emerged as an integrated part of the entrepreneurship
ecosystem. An accelerator intensifies the incubation services and accelerates startup development
through “a fixed-term, cohort-based program, including mentorship and educational components
that culminate in a public pitch event or demo-day” [10]. Accelerators make equity investments in
every startup in a cohort on the same financial terms. By their nature, accelerators are selective
when choosing startups, because they need those startups to raise funding based on an increased
valuation upon their graduation from their short time at the accelerator. The fixed-duration
(normally 3 months) cohort structure and the seed capital investment differentiate accelerators from
traditional incubators.
Despite the growing number of accelerators, most are of questionable efficacy. Recent
empirical studies [11, 12] have suggested that only the top accelerators (e.g., Y Combinator and
TechStars) can actually accelerate startups. A top accelerator is one that has often graduated
noticeably successful companies and is associated with high-profile mentors, e.g., successful serial
entrepreneurs and renowned venture capitalists, who provide reputation, credibility and valuable
mentorship and networking opportunities. Participation in a top accelerator might certify the quality
and potential of an early-stage startup and jump-start its brand [13], thus helping the startup
overcome the liability of newness and attract more and better investors. In turn, a top accelerator
will attract applications from numerous startups [14, 15], some of which have high potential even
prior to the acceleration program. In other words, an accelerator’s value to a startup is associated
with the accelerator’s status and the level of competition to enter it.
The accelerator’s success largely relies on the quality of the startups it selects. Thus, a top
accelerator must be highly selective and admit only a small number of high-potential startups to
keep the community elite and reputable in the hope of large financial returns from the startups later
on. For example, Y Combinator and TechStars admit fewer than 3% of all applying startups.
However, filtering out the small number of top-quality startups from a large number of applicants is
a challenge for accelerators. Taken together, the selection process is fundamentally crucial for the
successes of both the accelerator and the startups that apply to it, but it is also challenging for both
parties. However, little is known about how such accelerators select startups and what criteria have
a critical effect on the decisions in the process.
Our research aims to fill these gaps of understanding by addressing two related research
questions. First, what critical criteria differentiate the selected startups from the rejected ones?
Prior studies of traditional incubators and angel investors have suggested that they seldom explicitly
use a balanced and compulsory set of selection criteria and instead rely on subconscious preferences
[7, 16, 17]. We aim to identify the implicit criteria that are critical for the rejection or selection
decisions of accelerator managers. Second, do the critical criteria vary across different decision
stages of the startup selection process? And if so, how? Prior studies of traditional incubators and
angel investors have suggested that evaluation criteria may change during the decision process from
initial screening to final selection [18-20]. Thus, we aim to identify the shifts of implicit decision
criteria that are critical in different decision stages and explore possible reasons for such shifts, in
the accelerator context.
To identify accelerator managers’ implicit decision criteria, we adopted the Real-Win-Worth
framework, which was initially developed to evaluate innovation projects [21], to categorize 30
potential criteria in three main categories: Is it real? Can it win? Is it worth doing? The framework
was applied to assessing the profiles and results of the startups that applied to the accelerator
program of Singapore-based JFDI, which stands for “Joyful Frog Digital Incubator” and piloted the
first seed accelerator in Southeast Asia. Our unique dataset from the actual decision process,
together with the Real-Win-Worth framework, has allowed us to identify accelerator managers’
specific decision criteria that shift across screening and selection stages. Findings are reported in
detail in Section IV.
Therefore, our research aims to make a theoretical contribution to decision making in the
entrepreneurial process, with a focus on the accelerator context and startup selection decisions. For
practice, our findings can potentially make accelerator managers aware of their own subconscious
preferences, rationales and bias as well as the associated risks and benefits, and guide them to
improve their decision process as well as data collection. Meanwhile, understanding accelerator
managers’ implicit decision criteria may also give entrepreneurs the empathy toward accelerator
managers, and thus improve their applications.
Because accelerators are a form of incubator and invest in the equity of their “tenants” like
angel investors, we relate the literature on accelerators, traditional incubators and angel investors in
terms of how they select startups for incubation, investment, or both.
A. Accelerators
Y Combinator is often considered the first accelerator. It was founded by Paul Graham in
2005 in Cambridge, Massachusetts, and later moved to Silicon Valley. TechStars was founded by
David Cohen and Brad Feld in Boulder, Colorado, in 2006 and popularized the accelerator model
through the Global Accelerator Network, a selective international organization for accelerators that
follow the TechStars model. Today, the network has 50 accelerators in 63 cities on 6 continents,
including JFDI, which is the context of this study. Given that it is a recent phenomenon, academic
literature on accelerators is still scant despite the publication of descriptive studies [10, 22, 23]. In
contrast, there is an extant literature on traditional incubators.
An accelerator is a special type of incubator, whose general goals are to improve the
chances of startups’ survival [6, 24] and accelerate their growth [9, 25, 26] through an array of
business support resources and services [5, 6, 9]. The European Commission reported that
incubated startups have a survival rate of 80-90% 5 years after graduation, significantly higher than
those in the wider startup community [7]. NBIA reported that incubated startups have a survival
rate of 87% after five years compared to 44% of non-incubated startups [27]. However, Amezcua
[8] found that incubation does not necessarily help startups avoid failures but may allow weak
startups to fail sooner, indicating that survival rate is an improper measure of incubator performance
[28]. In addition, the better performance of the startups in incubators might be in part a result of the
screening of weak startups and the selection of high quality ones, rather than a result of the
incubation [29].
The varied types and processes of incubators may also affect incubation performance [6,
28]. For instance, Aernoudt [9] found that the survival rate and employment growth of technology
incubators are higher than those of social, basic and mixed incubators. Amezcua [8] found that for-
profit incubators have higher employment and sales growth in their incubated startups than their
non-profit counterparts. Barbero et al. [30, 31] found that basic research incubators generate more
product and technological process innovation than university, economic development and private
incubators. A few authors have advocated for the benefits of incubators that specialize in a limited
number of industry sectors, e.g., biotechnology, energy and information technology [7, 14, 32].
Interested readers may refer to Barbero et al. [30] for a detailed review of various types of
incubators. The empirical context of our study is a private digital technology incubator and its 100-
day accelerator program.
Accelerators differ from traditional incubators in several ways. First, accelerators offer
cohort-based short-duration programs. Batches of startups enter, grow and graduate together,
whereas incubators’ services are normally continuous. The admission of startups is cyclical for
accelerators but continuous for incubators. During an acceleration period, which normally lasts
approximately three months, the accelerator offers structured and intensive networking, educational
and coaching opportunities either with mentors in residence or with successful entrepreneurs,
alumni, venture capitalists, and industry veterans.
Second, accelerators make a small equity investment in the selected startups, similar to
angel investors, but they invest in the entire cohort of admitted startups on the same financial terms
instead of investing in one venture at a time. Traditional incubators seldom make equity investments;
instead, they often collect rents from the incubated startups for the shared space and resources. The
Seed Accelerator Ranking Project [10] reported that in the United
States, startups admitted into accelerators received an average of $23,000 for 6% of their equity,
with 41% of them going on to receive subsequent venture funding of $350,000 or more within one
year of graduation. For returns on equity investments, accelerators must be more selective with
startups than incubators but are similar to angel investors in this regard.
In addition, most acceleration programs are concluded with a “demo day” during which the
startups pitch to external investors. At that point, the startups are expected to be ready to raise round
A or pre-A venture capital funding. Cohen and Hochberg [10] have provided a general definition of
accelerators as “a fixed-term, cohort-based program, including mentorship and educational
components that culminates in a public pitch event or demo-day.” Therefore, one can view the
accelerator as a special incubator that provides startups with both seed capital investment (like an
angel investor) and intensified incubation services. For instance, as an accelerator, JFDI is viewed
as an incubator that focuses on offering two 100-day acceleration programs per year. Accelerators
may also vary in terms of the equity stake taken, program length, resources, industry focus, and
affiliations with venture capital firms, corporations, universities and local governments.
Recently, several studies have reported empirical evidence on the general impact of
accelerators on seed-stage startups. Hallen et al. [11] compared accelerated and non-accelerated
startups that eventually raised venture capital and found that only top accelerators can actually
accelerate startups in terms of gaining customer traction, raising venture capital and exiting,
whereas many other accelerators do not speed up startup development. Kim and Wagman [13]
suggested that participating in a top-rated competitive accelerator can signal the viability or certify
the quality of the seed startups. Smith and Hannigan [12] investigated the startups going through Y
Combinator and TechStars, the two leading accelerators, and found these startups are often founded
by entrepreneurs from elite universities and receive subsequent external VC funding and exit (either
acquisition or quitting) sooner than outside startups that also raised venture capital.
B. Startup Selection Criteria
Accelerators normally call for startup applications, evaluate and screen weak applications,
and admit a small number of startups to accelerate [29, 33].1 The selection process and the
characteristics of the selected startups influence the success of the accelerator itself [18], but to the
best of our knowledge have not been investigated and reported in the academic literature on
accelerators. Meanwhile, we found an extant literature on the selection process and a diverse set of
startup selection criteria considered by incubator managers [6, 18, 26, 34, 35] (see Table 1 for a
summary); these criteria were primarily identified from interviews or surveys with incubator
managers. Next, we will draw on the incubator literature to study the selection criteria involved in
accelerators’ startup selection process.
For instance, Smilor [36] surveyed and interviewed the managers of 50 incubators in the
United States to reveal a few general selection criteria, including the ability to create jobs, the
uniqueness of the opportunity, and the potential for rapid growth. Merrifield [18] identified a broad
set of selection criteria, such as profit potential, growth potential, competition, risk, capital
availability, manufacturing competence, marketing and distribution, technical support, materials
availability and management, and then divided them into three groups: startups, incubators, and the
fit between startups and incubators. Mian’s [26] comparative review of six university incubators
suggested selection criteria such as technology, growth potential, business plan, management team,
cash flow, manufacturing competence, capital availability, and fit with the incubator mission.
1 This is essential for for-profit private incubators, which normally make equity investments in the incubated startups
with the hope of harvesting huge financial returns from the eventual success of these startups. In contrast, government,
social and non-profit incubators may be less demanding of the startups they select and incubate because their main
objective is to reduce regional disparities or create local jobs for people with low employment capacities.
Based on a survey of 41 incubator managers in the U.S., Lumpkin and Ireland [34]
identified three groups of screening criteria, including the team’s experience (management,
marketing, technical and financial, etc.), financial strength (profitability, liquidity, debt and asset
ratio, assets, etc.), and market and personal factors (uniqueness and marketability of
product/services, age, creativity, persistence of the startup team). Hackett and Dilts [6] grouped
selection criteria by managerial, market, product, and financial aspects. Using the criteria suggested
by Hackett and Dilts [6] for screening, Aerts et al. [7] found that financial performance,
management team, market size, and growth rate are the primary criteria based on a survey of 140
European incubator managers. Later, Hackett and Dilts [37] re-categorized their original proposed
set of criteria into star characteristics, market characteristics, differentiation characteristics, and
manager characteristics. They also added new criteria, such as the ability to attract capital
investment, patent protection, defendable competitive positioning, and prior work experience.
Wulung et al. [38] proposed a mathematical multi-objective selection model addressing
profitability, survivability, worker absorption, and employment growth.
Bergek and Norrman [39] suggested that startup evaluation criteria can be divided into idea-
focused (i.e., market and profit potential of the idea) versus entrepreneur-focused (i.e., the
characteristics, experiences, skills of the entrepreneurs) criteria. Aerts et al. [7] found that European
incubators focus more on the criteria related to the entrepreneurs and startup team, whereas
American incubators concentrate more on financial- or market-related criteria. Bruneel et al. [40]
found that incubators seldom explicitly use a structured set of selection criteria. However, criteria
such as technology focus, product innovativeness, and growth potential are commonly mentioned.
Meanwhile, Aerts et al. [7] found that although most incubators screen candidates on an unbalanced set
of criteria, the incubators that use a balanced set of criteria to screen startups have a higher survival
rate of their incubated startups than those using an unbalanced set of criteria. A few scholars have
argued for the use of a balanced set of criteria to screen startups [6, 18].
C. Selection Process
In addition to selection criteria, prior research has also explored the process by which those
criteria are or are not considered. For example, Merrifield [18] described a three-step decision
process for startup selection. In the first phase, six criteria are used: sales profit potential, political
and social constraints, growth potential, competitor analysis, risk distribution and industry
restructure. In the second phase, the criteria address the fit between the startup and incubator. The
final phase focuses on criteria such as management, capital, manufacturing competence, marketing
and distribution, technical support, and availability of materials or components.
The multi-step decision process has also been reported in studies on angel investors’ choices
of startups. Landström [41] first suggested that investment decision criteria may change as the
decision process unfolds over time. Mitteness et al. [19] found that angel investors focus more on
evaluating the strength of entrepreneurs initially in the screening stage and then focus relatively
more on the business opportunity at the later stages. Based on the observation of 150 interactions
between entrepreneurs and potential investors on a Canadian reality TV show, Maxwell et al. [20]
found that angel investors consider different criteria during two decision stages, i.e., initial
screening and final decision. In the first stage, angel investors tend to use the “elimination-by-
aspects” [42] heuristic to screen out startups that have a fatal flaw rather than to select startups that outperform
others. As suggested by Shafir et al. [16], when one finds it difficult to make a selection, the
decision rationale will be “first eliminate those options that we do not want.”
Maxwell et al. [20] also found that angel investors implicitly considered a parsimonious set
of criteria instead of a compensatory decision model that systematically weights and scores a large
number of criteria. In addition, the criteria critical for initial screening are not necessarily critical in
the final decision of whether to fund a startup. Since the investors tend to “reject” during the initial
screening stage and then “choose” in the final stage, the reasons for decisions in the screening and
final funding stages should differ [16]. Shafir [43] suggested that advantages and strengths are
weighted more heavily in choosing than in rejecting, and disadvantages and weaknesses are
weighed more heavily in rejecting than in choosing. These preferences given to different kinds of
decisions are normally implicit to the decision makers themselves.
Jeffrey et al. [17] further suggested that the “elimination-by-aspects” decision heuristic and
non-compensatory decision model require less cognitive effort [44], so they are preferable when
investors need to evaluate a large number of investment targets but have time constraints and
limited cognitive capacity. To conserve cognitive effort, investors implicitly used a parsimonious
set of criteria to reject startups as quickly as possible and trim the number of investment alternatives
that they need to evaluate for funding. However, investors often are not conscious of their
preferences for certain evaluation criteria [45]. The managers of a top accelerator are likely to
experience similar cognitive-capacity challenges when attracting many applications [20] and are
thus likely to adopt a similar decision heuristic and process.
In brief, the literature on startup selection by traditional incubators and angel investors has
shed light on the decision process and criteria that may also govern the startup selection decisions
of accelerators. Our research empirically investigates the accelerator context and explicates
accelerator managers’ implicit decision heuristics and criteria across stages in the startup selection process.
A. Empirical Context and Data
The Singapore-based JFDI provided the data for this research. JFDI was founded in 2010
and piloted the first seed accelerator in Southeast Asia (SEA). It is focused on running twice-yearly 100-day
accelerator programs. It is modeled after TechStars and is a member of TechStars’ “Global
Accelerator Network.” JFDI offers selected startups SG$25,000 for 8.88% equity, mentorship, and
facilities to build and grow their startups over 100 days. The co-founders, Hugh Mason (British)
and Meng Weng Wong (Singaporean), were both successful serial entrepreneurs with extensive
global business experience and networks in information technology, media, and marketing sectors.
JFDI was funded by an international consortium of investors, including Infocomm Investments
(Singapore’s government investment arm in the information and communication technology sector),
SpinUp Partners (Russia) and Fenox Venture Capital (Silicon Valley), along with private investors
from the Philippines, Vijay Saraff (Thailand), Paul Burmester (UK) and Thomas Gorissen
(Germany).2 Below are some facts about JFDI for the period 2010 to 2015, retrieved from JFDI’s
website and the technology media.3
- $3 million was raised and deployed into 70 startups through a structured 100-day program, creating a portfolio that is now independently valued at >$60 million.
- JFDI admitted 8-12 startups in each batch, approximately 4% of all teams that applied.
- The startups entered with a value between $200,000 and $500,000, and 50% of them went on to secure seed funding averaging approximately $500,000 at valuations of $1.5-3.5 million.
- Two years after acceleration, approximately 15-20% of the startups that secured seed funding grew into successful businesses. The hit rate is approximately 10% of all the teams accelerated.
- JFDI’s pre-accelerator program supported more than 400 startups and 1,500+ entrepreneurs from 40+ countries.
At the point of our data analysis, the accelerator had selected and incubated four batches of
startups with founders from 12 countries (primarily from SEA and India). The data analyzed in this
paper are complete digital profiles of the startups that they submitted to JFDI to compete for
entrance into the accelerator program from 2014 to 2015. The total dataset contains 1,003 startup
application profiles in four different batches. JFDI made two calls for applications per year. Among
the 1,003 startups that applied, only 40 were chosen, indicating a success rate of 4%. In brief,
JFDI’s reputation in the region, the fierce competition among startups to enter its small cohort, and
its low selection rate make JFDI a suitable empirical context in which to investigate the decision
process and criteria for the startup selection of a top-rated accelerator.
The accelerator requires each applying startup to register an account on a website platform
by providing its basic information and optional information such as website address, co-founder
picture, social media link, and an introduction video. Most importantly, startups are required to
answer a long list of questions about their team, product, operations, markets, competitors, and
future plans online (Table 2). Their answers to these questions profile them and are the raw data for
our analysis. To succeed in the competition for selection, the startups tend to provide information
that is as detailed as possible. The accelerator did not explicitly emphasize or focus on any criteria
for evaluating startups. But some implicit criteria might become critical as a result of the managers’
and mentors’ decisions, as suggested by the studies on the investment decision making of angel
investors [17, 20].

2 Information retrieved from JFDI website:
3 TechCrunch:
JFDI’s selection process consists of two stages: an initial profile screening and an interview.
In the first screening stage, JFDI managers and mentors reviewed all the startup profiles that were
submitted online for each call for applications to screen and trim the startup candidates. The
majority of applications were rejected quickly, but a small number of startups proceeded to an
interview. After the interview, an even smaller number of startups were accepted into the
accelerator program. Following the two-stage selection process, i.e., profile screening and the final
interview/decision, we divide the total population of 1,003 startup applicants into different groups
and subgroups (Figure 1). The first group was categorized as “Filtered” and includes 841 startups
that were rejected in the initial screening stage. The second group was categorized as “Interviewed”
and includes the 162 startups that passed the screening and were invited for interviews. Within the
“Interviewed” group, 40 startups were selected into the accelerator. This subgroup was called
“Interviewed and Successful.” The rest of the “Interviewed” group was categorized in the
“Interviewed but Unsuccessful” subgroup.
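The two-stage funnel and the resulting group sizes can be sketched as a simple partition of the applicant records. This is a minimal illustration only; the record fields (`passed_screening`, `accepted`) are hypothetical stand-ins, since the actual platform schema is not reported.

```python
# Partitioning the 1,003 applicant profiles into the groups used in the study.
# Field names are hypothetical stand-ins for the real application data.

def partition(applicants):
    """Split applicants into the Filtered / Interviewed groups and subgroups."""
    filtered = [a for a in applicants if not a["passed_screening"]]
    interviewed = [a for a in applicants if a["passed_screening"]]
    successful = [a for a in interviewed if a["accepted"]]
    unsuccessful = [a for a in interviewed if not a["accepted"]]
    return filtered, interviewed, successful, unsuccessful

# Group sizes reported in the text: 841 filtered, 162 interviewed,
# of which 40 were accepted and 122 were not.
applicants = (
    [{"passed_screening": False, "accepted": False}] * 841
    + [{"passed_screening": True, "accepted": True}] * 40
    + [{"passed_screening": True, "accepted": False}] * 122
)
f, i, s, u = partition(applicants)
print(len(f), len(i), len(s), len(u))  # 841 162 40 122
```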
The startup profiles need to be manually read and coded. To conserve time and effort while
ensuring a large enough sample size for the analysis, 100 startup profiles were randomly selected
from the “Filtered” group of 841 startups and the “Interviewed” group of 162 startups in the first
stage of the selection process. In the second stage, the “Interviewed and Successful” subgroup had a
total of 40 startups. The workload required to read and code all the profiles was acceptable and thus,
all of the profiles were analyzed. To match the size of the “Interviewed and Successful” subgroup
for a comparative analysis, 40 startup profiles were randomly selected from the “Interviewed but
Unsuccessful” subgroup, which has 122 startups.
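The sub-sampling scheme above amounts to simple random sampling without replacement from each group. A brief sketch, with group sizes taken from the text (the seed and the integer identifiers are illustrative only):

```python
import random

random.seed(0)  # fixed seed for a reproducible illustration only

filtered_ids = list(range(841))      # "Filtered" group
interviewed_ids = list(range(162))   # "Interviewed" group
unsuccessful_ids = list(range(122))  # "Interviewed but Unsuccessful" subgroup

# First stage: 100 random profiles from each of the two first-stage groups.
filtered_sample = random.sample(filtered_ids, 100)
interviewed_sample = random.sample(interviewed_ids, 100)

# Second stage: all 40 "Interviewed and Successful" profiles are coded, and a
# size-matched random sample of 40 is drawn from the unsuccessful subgroup.
unsuccessful_sample = random.sample(unsuccessful_ids, 40)

print(len(filtered_sample), len(interviewed_sample), len(unsuccessful_sample))
```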
B. Critical Criteria
We assumed that JFDI managers unconsciously applied only a few criteria in their rejection
or selection decisions, and that these criteria would emerge as making a significant difference between the
startups that succeeded and the startups that failed during the process. Therefore, we assessed and
compared different groups and subgroups of startups against a scoreboard of a comprehensive list
of potential criteria (Table 3) for factor screening to identify the parsimonious set of implicit critical
criteria that result in the decisions. Our scoreboard includes 30 potential criteria, most of which
were chosen from the previously reported startup selection criteria in the incubator literature (Table
1). In Table 4, we map the criteria in our scoreboard (Table 3) to the references that previously
reported them. Note that some previously reported criteria, such as startup status, technology-based
business and having a business plan, are true by default for all the startups applying to JFDI. Other
criteria, such as the persistence of the management team, are impossible to assess because such
information was not collected in the online application form (Table 2). Thus, our scoreboard
excludes such criteria (listed at the bottom of Table 4) but includes the rest summarized in Table 1.
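The factor-screening idea — flag a criterion as critical when the share of startups satisfying it differs significantly between the advanced and the rejected groups — can be illustrated with a plain two-proportion z-test. The paper does not specify its exact statistical test, so this is only an assumed stand-in, and the counts in the example are hypothetical.

```python
from math import sqrt, erf

def two_proportion_z(k1, n1, k2, n2):
    """Two-sided two-proportion z-test; returns (z statistic, p-value)."""
    p1, p2 = k1 / n1, k2 / n2
    p_pool = (k1 + k2) / (n1 + n2)           # pooled proportion under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Normal CDF via the error function: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical criterion met by 70 of 100 sampled interviewed startups
# but only 40 of 100 sampled filtered ones.
z, p = two_proportion_z(70, 100, 40, 100)
print(z, p)  # a gap this large would be flagged as a critical criterion
```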
On this basis, we added a few criteria related to the success or failure of new product development.
Specifically, we adopted the Real-Win-Worth framework, which was initially developed to
evaluate innovation projects [21] and later crowd-funding projects [47], to categorize the 30 criteria.
The Real-Win-Worth screen framework is a systematic synthesis of the success or failure factors in
the new product development literature, and allows one to evaluate the risks and potential of
individual projects by answering questions in three main aspects [21]:
o “Is it real?” explores both market potential and the feasibility of developing the product.
o “Can it win?”4 considers whether the innovation and the company can be competitive.
o “Is it worth doing?” examines the profit potential and whether the innovation makes strategic
sense in the long term.
One can dig deeper for the answers to six more specific questions in the real, win and worth
categories: Is the market real? Is the product real? Can the product be competitive? Can our
company be competitive? Will the product be profitable at an acceptable risk? Does launching the
product make strategic sense? To answer these six queries, one can explore an even more
fundamental set of supporting questions. For example, one can answer the query “Is the market
real?” by answering the following detailed questions: “Is there a need or desire for the product?
Does the consumer understand the benefits of the innovation? Can the customer afford to buy it? Is
the size of the potential market big enough to be worth pursuing? Will the customer have subjective
barriers to buying the product?”
In brief, the Real-Win-Worth framework is built on a series of questions about the product,
its market, the competition and the team’s capabilities to expose problems, potential sources of risk,
areas for improvement, and reasons for termination. George Day presented 17 such fundamental
questions [21], whereas Song et al. created 26 questions [47], which belong to the respective Real,
Win and Worth main categories and 6 subcategories. Versions of the Real-Win-Worth questions
have been developed and used by companies, including General Electric, Honeywell and Novartis
to assess business potential and risk exposure of their innovation projects. 3M has used it to
evaluate more than 1,500 projects [21].
To assess the startups, we framed 30 fundamental questions corresponding to the 30
potentially critical criteria into the respective Real-Win-Worth categories (see Table 3), i.e., “Are
the product and market real? Can the product and entrepreneur team win? Is the startup
worthwhile?” These 30 questions were well aligned with the startup selection criteria regarding the
4 In the original Real-Win-Worth framework developed for a company to assess its internal innovation projects, the
second question was “can we win”. Herein, we use “can it win” instead to indicate that the assessment of a startup was
not done by the startup itself but the accelerator or any third party.
product [6, 18, 26]; market [6, 7, 18, 26, 34, 37, 39, 48]; entrepreneur or team [7, 18, 26, 34, 37,
39]; protectability [37, 48]; and finance [6, 7, 18, 34, 37, 39, 48] from the incubator literature, and
they also covered the investment opportunity evaluation criteria of the angel investors. For example,
the eight criteria of Maxwell et al. [20] for angel investors’ startup evaluation, including market
potential, product adoption, protectability, entrepreneur experience, product status, route to market,
customer engagement, and financial projections, are all covered by the questions in our Real-Win-
Worth categories.
These 30 questions were designed so that they can be answered objectively with “full,”
“none” or “partial” evidence found in the application data of the startups, regardless of who reads
the data to answer the question. These three levels of availability of evidence (none / partial /
full) were further translated into 0 / 0.5 / 1 for our statistical analysis.5 One simply needs to look for
evidence and facts in the application documents. We also provided specific guidance to the coder
for reading and coding the startup profiles to answer each of the 30 questions. One example is given
in Table 5. The descriptions of such guidance for all questions are available upon request.
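As an illustration, the translation of evidence levels into numerical ratings can be sketched as follows; the criterion labels and the coder's judgments in the example are hypothetical, not data from the study.

```python
# Minimal sketch of the three-level evidence coding scheme described above.
# Criterion names and judgments are hypothetical illustrations.
EVIDENCE_SCORES = {"none": 0.0, "partial": 0.5, "full": 1.0}

def score_profile(evidence_by_criterion):
    """Translate a coder's evidence judgments (none/partial/full)
    into the 0 / 0.5 / 1 ratings used in the statistical analysis."""
    return {criterion: EVIDENCE_SCORES[level]
            for criterion, level in evidence_by_criterion.items()}

# Example: a coder's judgments for three of the 30 questions.
ratings = score_profile({
    "Q1_demand_validation": "full",
    "Q2_customer_affordability": "partial",
    "Q3_market_demographics": "none",
})
```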
Despite the objectivity involved in assessing the startup profiles by answering the questions,
we ran a test to ensure inter-rater reliability. We invited two researchers who have business
backgrounds but had not been exposed to the questions and descriptive guidance to use them to
code a set of five startup profiles from our database. Each researcher read the five profiles
independently and highlighted the evidence in the startup profile supporting his or her answer to each of
the 30 questions. A third researcher then moderated an intensive discussion to reconcile
different interpretations and further benchmark the coding. In this manner, the inter-rater
5 The three-level rating scheme was preferable for our dataset. First, in some cases, the evidence in a startup profile for
answering a question is neither non-existent nor substantial; it falls in the middle. Thus, rating with only two extremes
(0 and 1) is insufficient. Second, a more fine-grained or gradual rating scale for the middle ground is cognitively
challenging for the researcher deciding a score. In our inter-rater reliability tests, the ratings of different researchers
could not converge easily when more gradual ratings were allowed. The three-level rating (0, 0.5, and 1) enabled a
Kappa ratio higher than 0.8, which further ensured the reliability of each researcher in the test.
repeatability reached a Cohen’s Kappa of 0.80,6 indicating a high degree of consensus. Through
the test, we also found that the objectivity of the startup profile data and the three-level rating
scheme leave little room for inter-rater variability.
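For reference, Cohen's Kappa for two raters can be computed directly from the paired ratings, as in this minimal sketch; the ratings shown are synthetic illustrations, not the study's actual coding data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's Kappa: observed agreement corrected for the agreement
    expected by chance from each rater's marginal rating frequencies."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(freq_a) | set(freq_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)  # assumes expected < 1

# Synthetic three-level (0 / 0.5 / 1) ratings from two hypothetical coders.
a = [0, 0.5, 1, 1, 0, 0.5, 1, 0]
b = [0, 0.5, 1, 0.5, 0, 0.5, 1, 0]
kappa = cohens_kappa(a, b)  # high agreement: kappa above 0.8
```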
C. Prediction Models
After the critical criteria were identified from the comparative analyses between the filtered
and interviewed groups in the screening stage and between the rejected and selected subgroups in
the final stage, we used them to predict the screening and selection results of additional sets of
startups, to explore whether the critical criteria in the respective decision stages were more
explanatory of rejection decisions or of acceptance decisions. To do so, we
incorporated the critical criteria as predictor variables in a stepwise regression procedure to build
the regression model that achieves the highest predictability of the screening or selection results of
the respective stage. The stepwise regression procedure inserts candidate predictor variables into, or
removes them from, the trial regression model in a stepwise manner to fine-tune the model's
statistical fit, i.e., its R2. The resulting most predictive regression model might include only a
subset, not all, of the candidate predictor variables.
In each decision stage, we used a binary dependent variable to indicate the screening or
selection result in the logistic regression analyses.
o Profile screening stage: The dependent variable is 1 if the startup passed the screening and was
invited to the interview, or 0 if the startup was rejected.
o Final selection stage: The dependent variable is 1 if the startup passed the interview and was
successfully admitted into the accelerator, or 0 if the startup was rejected.
Additional information about the startups, such as a website, social media link and founder’s
photos, was also collected via the online application system and was visible to accelerator
managers. Such information is extrinsic to the people, products, operations, markets, strategies and
6 We also tested using more than three rating levels and found it challenging for the raters to achieve a high Kappa
ratio. Rating with three levels (1, 0.5 and 0) was the most practical way to ensure high inter-rater reliability.
business of the startups but might influence the perception of accelerator managers and thus their
decisions. To consider the effects of such extrinsic factors, we incorporated the following binary
control variables in the regression analysis. These variables can be assessed using the information
collected in the application system.
o Website: Variable is 1 (or 0 otherwise) if the startup provides a working company website
address in the application.
o Social media: Variable is 1 (or 0 otherwise) if the startup provides a working social media (e.g.,
Facebook, Twitter) link in the application.
o Media: Variable is 1 (or 0 otherwise) if the startup provides an introduction video in the
application.
o Profile picture: Variable is 1 (or 0 otherwise) if the co-founders of the startup upload their
profile pictures in the application.
o Location: Variable is 1 (or 0 otherwise) if the startup’s headquarters is in Singapore.
o Recommend: Variable is 1 (or 0 otherwise) if the startup identifies an internal referee from the
accelerator in the application.
During the stepwise regression, although the candidate predictor variables (i.e., the critical
criteria) were removed or added in a stepwise manner, the control variables above were always
included in all intermediate regression models in the search for the best model. The baseline model
in the stepwise regression included only the control variables. Such regression models use the
critical criteria as well as the control variables to explain the screening and selection outcomes.
After the most predictive regression models were built from the stepwise regression procedure, we
further used them to “predict” the successes or failures of additional samples of startups in the
respective stages and compared the predicted results with the actual results to uncover accelerator
managers’ subconscious decision preferences or rationales behind the critical criteria identified in
different decision stages.
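The stepwise procedure with always-retained control variables can be sketched generically as follows. Here `fit_score` stands in for fitting a logistic regression on a variable set and returning its R2; the function, variable names, and the toy scorer in the example are illustrative assumptions, not the authors' actual statistical software.

```python
def stepwise_select(candidates, controls, fit_score):
    """Greedy stepwise selection: starting from the control variables only,
    repeatedly add or drop a candidate criterion whenever doing so improves
    the fit returned by fit_score (e.g., the model's R^2). The control
    variables are always kept in every trial model."""
    selected = []
    best = fit_score(controls + selected)
    improved = True
    while improved:
        improved = False
        # Try adding each remaining candidate criterion.
        for q in [q for q in candidates if q not in selected]:
            score = fit_score(controls + selected + [q])
            if score > best:
                best, selected, improved = score, selected + [q], True
        # Try dropping each currently selected criterion.
        for q in list(selected):
            trial = [v for v in selected if v != q]
            score = fit_score(controls + trial)
            if score > best:
                best, selected, improved = score, trial, True
    return selected, best

# Toy scorer: two criteria genuinely improve fit, two do not.
def toy_score(variables):
    weights = {"Q1": 0.2, "Q3": 0.15, "Q6": -0.05, "Q7": -0.02}
    return 0.39 + sum(weights.get(v, 0.0) for v in variables)

selected, best = stepwise_select(["Q1", "Q3", "Q6", "Q7"],
                                 ["website", "recommend"], toy_score)
```

With the toy scorer, the procedure keeps only the subset of candidates that raises the fit, mirroring how the best model may include fewer variables than the full set of critical criteria.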
Specifically, if the model using the critical criteria as predictor variables was more predictive
of failures than of successes, the critical criteria were more likely to be reasons to reject than to
choose the startups; that is, the managers were more likely to reject startups because of their weaknesses
in these criteria than to choose startups for their strengths. Conversely, if the best regression model
based on the critical criteria was more predictive of successes than of failures, the critical criteria
were more likely to be reasons to choose than to reject the startups, and the managers were more
likely to choose startups for their strengths in these criteria than to reject startups for their weaknesses.
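This comparison amounts to computing the accuracy within each predicted class, as in the following sketch; the labels are synthetic illustrations, not the study's data.

```python
def accuracy_by_predicted_class(predicted, actual):
    """Share of correct predictions within each predicted class
    (e.g., 1 = accepted/interviewed, 0 = rejected/filtered), so the
    model's predictiveness of successes vs. failures can be compared."""
    stats = {}
    for label in set(predicted):
        idx = [i for i, p in enumerate(predicted) if p == label]
        stats[label] = sum(actual[i] == label for i in idx) / len(idx)
    return stats

# Synthetic example: the model is right for 2 of 3 predicted rejections
# and for 1 of 2 predicted acceptances.
stats = accuracy_by_predicted_class([0, 0, 0, 1, 1], [0, 0, 1, 1, 0])
```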
A. Critical Criteria
1) Initial screening
We first compare the groups of startups that are “filtered” and “interviewed” in the initial
screening stage. The mean ratings of all 30 criteria for the “Filtered” and “Interviewed” groups are
reported in Table 6. Because the ratings are not normally distributed, a nonparametric Wilcoxon test
is performed on the differences between the two groups. For criteria Q1, Q2, Q3, Q6, Q7, Q8, Q12, and
Q21, the “Interviewed” group presents a much higher mean rating than the “Filtered” group, and the
differences are statistically significant based on the nonparametric tests. These eight criteria were critical in
the initial screening stage.
Among the criteria that make significant differences, Q1, Q2 and Q3 are in the “Real” main
category and “Market Attractiveness” subcategory. Q1, “demand validation,” asks whether there is
any demand for the startup’s product. Q2, “customer affordability,” asks whether customers can
afford to buy the product. Q3, “market demographics,” relates to the size and growth potential of
the targeted market. Q1, Q2 and Q3 together indicate whether the startup has presented evidence of
the real existence of potential customers and markets.
Q6, Q7 and Q8 are in the “Real” category and “Product Feasibility” subcategory. Q6,
“concept maturity,” asks whether the concept has enough details and development to allow it to
evolve into a real product. Q7, “sales and distribution,” asks whether existing sales and distribution
channels have been established. Q8, “product maturity,” addresses the stage of the product’s
development. Q6, Q7 and Q8 together indicate that startups need to present evidence that their
products can be realistically made, sold, and distributed.
Q12, “value proposition,” is in the “Win” category and “Product Advantage” subcategory. It
focuses on the benefits that the product can provide to customers. Q21, “technology expertise,”
belongs to the “Win” category and the “Team Competence” subcategory. It considers the startup’s
technical ability to develop the product. The criticality of Q12 and Q21 in the “Win” category
suggests that startups need to present evidence of the competitiveness of their products and their
own relevant technical capabilities. Notably, other criteria related to team competencies in
marketing, sales, management and finance within the “Win” category appear insignificant, implying
that accelerator managers have focused more on technology expertise than on non-technical team
competencies in initial screening. Taken together, these eight critical criteria in the screening stage
are in the “Real” and “Win” categories, and none of them falls into the “Worth” category.
2) Final selection
Next, we focus on those startups that passed initial screening and compare the mean ratings
of the subgroups that are “successful” and “unsuccessful” in being eventually selected into the
accelerator program. The mean rating differences of 30 criteria of the two subgroups are reported in
Table 6 with the test statistics from the nonparametric Wilcoxon tests. For Q13, Q24, Q25 and
Q29, the successful subgroup of startups presents a much higher average rating than the
unsuccessful subgroup, with statistical significance based on Wilcoxon tests. These four criteria are
critical in the final selection stage. None of them was found critical in the initial screening stage.
Q13, “sustainable advantage,” is in the “Win” main category and “Product Advantage”
subcategory. It concerns whether the startup has unique assets or capabilities to sustain its
advantages. Q24 and Q25 are in the “Win” main category and “Team Competence” subcategory.
Q24, “prior startup experience,” addresses the relevant experience of the entrepreneurs. Q25,
“feedback mechanism,” addresses whether the startup has an adequate mechanism to consistently
listen to customers and respond to the market. The criticality of Q13, Q24 and Q25 in the “Win”
category suggests again that startups need to present evidence of the competitiveness of their
products and their teams’ ability to defeat the competition and sustain the business. Finally, Q29,
“growth strategy,” is in the “Worth” main category and “Growth Potential” subcategory. It asks
whether the startup presents viable strategies for long-term growth. Its criticality suggests the
importance of presenting information about how the startup is prepared for long-term growth.
Notably, none of the four critical criteria in the second stage falls into the “Real” category.
In the later stage of the selection process, accelerator managers subconsciously shifted their focus
from “Real” to “Worth” criteria, and specifically shifted from assessing how real the product and
market are based on a short-term perspective to assessing the potential of the people and strategy
based on a long-term perspective.
B. Prediction Models
Now, we further explore the possible shifts in accelerator managers’ decision rationales
behind the shifting decision criteria across stages. We first use stepwise regression to sift the
identified critical criteria as candidate predictor variables to identify the most predictive regression
model regarding the screening or selection result of each stage and then apply the prediction model
to “predict” the successes or failures of an additional sample of startups in each stage. On this basis,
we compare the predicted results with the actual results in each stage.
1) Initial screening
We first incorporate the eight critical criteria in the initial screening stage together with all
the control variables in a stepwise regression procedure to explore the regression model that is the
most predictive of the results of the screening stage.7 The critical criteria are sifted in the stepwise
regression procedure to maximize the R2 of the regression model. The resulting regression model is
reported in Table 7. This model has an R2 of 0.6307 and includes Q1, Q3, Q6, Q7 and Q21 as
predictors, all of which have statistically significant effects on the screening results (as evidenced
by the small p-values for their coefficients). In other words, this model with just a subset of five
7 Our pairwise correlation analysis shows that these critical criteria are only weakly correlated with one another or with
the control variables, thus ensuring their incorporation as independent variables in the regression analysis.
critical criteria achieves a higher R2 than the regression model that includes all eight critical criteria
identified from the pairwise comparison between the “Filtered” and “Interviewed” groups. This
model is also significantly more predictive than the baseline model that includes only the control
variables and has an R2 of 0.3898. Among the control variables, Table 7 also shows that providing a
working website and having a reference from inside the accelerator significantly increase the chance
of passing profile screening.
We further apply the model that was optimized for the initial screening stage to “predict” the
results of an additional set of 50 startups randomly sampled from the “Filtered” and “Interviewed”
groups (25 from each). These 50 startups were independent from those used in the stepwise
regression analysis that built the prediction model. The results are presented in Table 8. For both the
“Interviewed” and “Filtered” groups, prediction accuracies are higher than 65%. For the 18 startups
that the model predicted as “Filtered,” the prediction accuracy is 78%, which is much higher than
the accuracy of 66% for the predicted “Interviewed” group.
Therefore, the regression model in the initial screening stage is more predictive of failures
than of successes, and the critical criteria in the model are more explanatory of the reasons to reject
than to accept. This result implies that accelerator managers were more likely to be “rejecting”
startups because of their weaknesses in the critical criteria in the screening stage. This result is
aligned with the prior studies on the decision process of business angels [20]. Following the
argument of Shafir et al. [16] that humans look for weaknesses to reject, entrepreneurs should avoid
weaknesses in the critical criteria identified here to reduce the likelihood of being screened out.
However, given the limited prediction accuracies (66%~78%) of the model, we need to draw such
conclusions with caution.
2) Final selection
For the final selection stage, we incorporate the four critical criteria with the control
variables in stepwise regression to explore the most predictive model on the results of the final
selection stage.8 The critical criteria are sifted in the stepwise regression procedure to maximize the
R2 of the regression model. As a result, the model that has the largest R2 (0.5771) incorporates all
four critical criteria (Q13, Q24, Q25 and Q29), all of which have a statistically significant effect on
the selection result (as evidenced by the small p-values for their coefficients). The model is also
reported in Table 7. This model’s predictive power is also much higher than the baseline model that
only includes the control variables and has an R2 of 0.1383.
Again, we applied the prediction model optimized for the final selection stage to an
additional set of 50 randomly selected startup profiles from the “Interviewed and Successful” and
“Interviewed but Unsuccessful” subgroups (25 from each). The results are shown in Table 8. For
both subgroups, the prediction accuracies are higher than 60%. Of the 18 startups that the model
predicted as successfully accepted, the prediction accuracy rate is 72%, which is much higher than
the accuracy of 63% for the predicted unsuccessful (rejected) subgroup.
Therefore, the regression model in the final selection stage is more predictive of successes
than of failures, and the critical criteria incorporated in the model are
more explanatory of the reasons to accept than to reject. This result suggests that accelerator
managers are more likely to be making “choosing” decisions than “rejecting” decisions in the final
selection stage, and the startups that present advantages in the identified critical criteria are more
likely to be chosen. Following Shafir et al.’s [16] argument that humans normally weigh advantages
for choosing, entrepreneurs are suggested to develop strengths and present relevant information
about the critical criteria identified in the final selection stage. However, given the limited
prediction accuracies (63%~72%) of the model, one cannot draw a firm conclusion.
The analyses above have unveiled a parsimonious set of decision criteria that shift across the
initial screening and final selection stages in accelerator managers’ startup selection process.
8 Our pairwise correlation analysis of these critical criteria and the control variables find only weak correlations, which
ensure the incorporation of these critical criteria as independent variables in the regression model.
Specifically, demand validation, customer affordability, market demographics, concept maturity,
sales and distribution, product maturity, value proposition, and technology expertise were critical in
the decisions of screening a large number of startups in the initial stage. Sustainable advantage,
prior startup experience, feedback mechanism and growth strategy were critical in the decisions of
selecting a small number of startups in the final stage.
These specific shifting critical criteria across decision stages in the accelerator context differ
from those previously reported in studies of incubators and angel investors. For instance, Merrifield
[18] suggested that incubators’ evaluation criteria shift from the business opportunity to the
entrepreneurs, management and operations during the decision-making process. Mitteness et al.
[19] found that angel investors focus more on evaluating the entrepreneurs in the screening stage
and then on the business opportunity at the later stage. In our findings about the accelerator, some
of the critical criteria in initial screening, such as demand validation, market demographics and
concept maturity, are related to the business opportunity, and two criteria regarding team
competence (prior startup experiences and feedback mechanism) are critical for the final stage.
Such differences might result from the differences in incentives and nature between accelerators and
traditional incubators or angel investors.
Additionally, we found a shift in critical criteria across the Real-Win-Worth categories—
specifically, from Real and Win criteria in the initial screening stage to Win and Worth criteria in
the final selection stage. In the screening stage, no criterion regarding “is it worth doing” is found to
be critical, whereas no criterion regarding “is it real” makes a critical difference in the final
selection. At the same time, there is also a shift of criteria from the technical to non-technical
capabilities of the entrepreneurs in the “can it win” category. Therefore, the Real-Win-Worth
framework provides additional insights into how real, how competitive and how worthwhile a
startup is, compared to the prior startup evaluation frameworks (see section 2.2), and has allowed us
to identify a different shift heuristic in the decision-making process.
This shift of critical criteria might be a result of the managers’ decision rationale change
from “rejecting” in the screening stage because of the need to trim a large number of startups to
“accepting” a small number of startups in the final selection stage. In other words, the criteria
critical for rejections versus acceptances are different. We found preliminary evidence in this regard
by applying the best regression models using the critical criteria as predictor variables to predict the
screening and selection results of an additional set of startups in respective stages. Such a shift in
implicit decision rationales of accelerator managers might be their natural response to the large
number of applicants and their limited time and cognitive capacity to make choices.
Note that in the second stage, accelerator managers may gain additional non-written
information via the interviews that was not in the online application data but that might be related to
additional criteria that are critical for the decision. This suggests that by analyzing only the
application profile data, we might miss some critical criteria for the final selection decisions. Moreover,
the critical criteria in the initial screening stage might still be critical for the final selection,
but the differences in such criteria are no longer sufficient to distinguish among the startups that passed the
screening stage. Therefore, the four critical criteria are likely to be only a subset of all the critical
criteria for the final selection stage.
It is also noteworthy that the JFDI managers did not purposefully or explicitly prioritize a
parsimonious set of critical criteria in their decision process. The criticality of these criteria
emerged from collective human behaviors and has been uncovered by our empirical analysis of the
profiles of the startups that have been selected or rejected by the managers, rather than by surveying
or interviewing the managers. As suggested by the psychological studies of decision making [16],
decision makers often do not make a decision with clearly ranked preferences because of the
complexity of choices but instead determine the preferences as a result of having to decide. Many
venture capitalists also do not understand their own decision rationales and biases [45]. This seems
to be true in the case of JFDI and its competitive startup selection process. Our data-driven
identification of critical decision criteria may inform accelerator managers of their own
subconscious decision preferences, rationales and biases.
We presented our results and findings to the JFDI managers, who provided the data and
context for this research. One JFDI manager made the following comment:
“The findings of the paper are insightful in the sense that it would help us to be more conscious
about the shift in key factors at different stage of the selection. This realization would help us think
about how we could improve the efficiency of our selection process. In addition, this paper would
help first-time founders understand what kind of business idea is worth doing and reject the weak
ideas as quickly as possible to conserve resources.”
To summarize, our analyses have identified a small number of implicit decision criteria of
accelerator managers and a heuristic shift of these criteria across the initial screening and final
selection stages in the decision-making process. According to the Real-Win-Worth framework,
eight Real or Win criteria (i.e., how real and competitive the product is) were critical in the initial
screening decisions of a large number of startups, and another four Win or Worth criteria (i.e., the
competitiveness and potential of the people and strategy) were critical in the final selection
decisions of a small number of startups. Using the identified critical criteria to predict the results of
additional startups, we provide preliminary evidence that the critical criteria in the initial stage
are more explanatory of “rejection” decisions, and the critical criteria in the final stage are more
explanatory of “selection” decisions.
This research has contributed to the growing literature on the accelerator phenomenon [10-
13, 22, 23, 49] by developing a nuanced understanding of the shifting decision criteria across stages
in the startup selection process. Our research also extends the earlier studies of the investment
decision-making of angel investors [17, 20, 45, 50, 51] by not only showing the shift heuristic but
also identifying the specific shift from Real and Win criteria in the initial screening stage to Win
and Worth criteria in the final selection stage, based on the Real-Win-Worth framework. Therefore,
we believe our findings have made a theoretical contribution to decision making in the
entrepreneurial process, particularly in the new accelerator context.
For practice, our findings may help accelerator managers be more conscious of their own
subconscious preferences, rationales and biases and thus improve the decision process.
Understanding their own implicit and shifting decision criteria across stages could potentially be
useful in refining the web-based application data collection system and developing data analytics
(e.g., using prediction models) to make more informed decisions. Meanwhile, these findings may
also help entrepreneurs be more empathetic toward accelerator managers and guide them to better
relate their businesses to the critical criteria.
Our findings and contributions are grounded in a unique dataset. Our startup profile data
were not generated for this research but were instead submitted by the startups themselves to the
JFDI accelerator when competing for admission into the accelerator program. Our dataset allowed us to take a
data-driven approach to empirically identify the shifting decision criteria in the accelerator decision
process and reveal the subconscious decision preferences, rationales and biases of accelerator
managers. Therefore, our research complements the majority of the prior research that was based on
interviews or surveys with incubator managers and sought to identify the selection criteria from
their opinions and recollections.
A few limitations are worth mentioning. First, the startup profiles used in this study were
obtained from a Singapore-based accelerator that specializes in software and mobile applications.
Thus, the results might not be directly applicable to accelerators in other regions or industries. This
suggests a future research opportunity to develop a contingent understanding related to the traits,
processes and performances of accelerators in different geographic, industry and socio-economic
contexts. Second, the utility of the prediction models for the different stages is limited by their low
accuracy (below 80%). Our analyses using these regression models can only be claimed to be
preliminary. We hope that the preliminary prediction models here can be viewed as an invitation for
more comprehensive and powerful data-driven prediction models. Third, accelerator managers may
gain additional non-written information via the interviews that could be critical for their decisions.
Thus, it is possible that by analyzing only the startup profile data, we have overlooked some critical
criteria in the second stage, and the four critical criteria we identified are likely to be a subset.
Future research may involve videotaping such interviews and conducting a verbal protocol analysis
to interpret behaviors and information exchanges during the interviews, as previously done by
Maxwell et al. [20].
Moreover, a natural future direction for accelerator research would involve exploring the
implications of different startup selection processes and criteria as well as the aggregate
characteristics of the startup application pool on the performance of both startups and accelerators.
This approach would require the collection of performance data at the accelerator level, such as the
accelerated startups’ returns on investment. For example, given the importance of the quantity and
quality of the startups for the future success of an accelerator, it will be interesting to investigate the
effects of the size and heterogeneity of the pool of applicant startups on the later performances of
accelerators. In general, the accelerator phenomenon represents an interesting avenue for further
research and understanding to aid startups in overcoming their challenges during the infancy period
of the entrepreneurship process.
[1] D. A. Shepherd, E. J. Douglas, and M. Shanley, "New venture survival: Ignorance, external
shocks, and risk reduction strategies," Journal of Business Venturing, vol. 15, pp. 393-410,
[2] E. Ries, The Lean Startup. New York: Crown Business, 2011.
[3] H. Aldrich, Organizations Evolving: Sage, 1999.
[4] K. Chan and T. Lau, "Assessing technology incubator programs in the science park: the
good, the bad and the ugly," Technovation, vol. 25, pp. 1215-1228, 2005.
[5] M. Erlewine and E. Gerl, A comprehensive guide to business incubation: National Business
Incubation Association, 2004.
[6] S. M. Hackett and D. M. Dilts, "A real options-driven theory of business incubation," The
Journal of Technology Transfer, vol. 29, pp. 41-54, 2004.
[7] K. Aerts, P. Matthyssens, and K. Vandenbempt, "Critical role and screening practices of
European business incubators," Technovation, vol. 27, pp. 254-267, 2007.
[8] A. S. Amezcua, Boon or Boondoggle? Business incubation as entrepreneurship policy:
Syracuse University, 2010.
[9] R. Aernoudt, "Incubators: tool for entrepreneurship?," Small Business Economics, vol. 23,
pp. 127-135, 2004.
[10] S. Cohen and Y. V. Hochberg, "Accelerating startups: The seed accelerator phenomenon," SSRN working paper, 2014.
[11] B. L. Hallen, C. B. Bingham, and S. Cohen, "Do Accelerators Accelerate? A Study of
Venture Accelerators as a Path to Success?" in Academy of Management Proceedings, 2014,
p. 12955.
[12] S. W. Smith and T. J. Hannigan, "Swinging for the fences: How do top accelerators impact
the trajectories of new ventures," paper presented at the DRUID Conference, Rome, Italy, 2015.
[13] J.-H. Kim and L. Wagman, "Portfolio size and information disclosure: An analysis of
startup accelerators," Journal of Corporate Finance, vol. 29, pp. 520-534, 2014.
[14] M. T. Hansen, H. W. Chesbrough, N. Nohria, and D. N. Sull, "Networked incubators,"
Harvard Business Review, vol. 78, pp. 74-84, 2000.
[15] L. Rothschild and A. Darr, "Technological incubators and the social construction of
innovation networks: an Israeli case study," Technovation, vol. 25, pp. 59-67, 2005.
[16] E. Shafir, I. Simonson, and A. Tversky, "Reason-based choice," Cognition, vol. 49, pp. 11-
36, 1993.
[17] S. A. Jeffrey, M. Lévesque, and A. L. Maxwell, "The non-compensatory relationship
between risk and return in business angel investment decision making," Venture Capital,
vol. 18, pp. 189-209, 2016.
[18] D. B. Merrifield, "New business incubators," Journal of Business Venturing, vol. 2, pp. 277-
284, 1987.
[19] C. R. Mitteness, M. S. Baucus, and R. Sudek, "Horse vs. jockey? How stage of funding
process and industry experience affect the evaluations of angel investors," Venture Capital,
vol. 14, pp. 241-267, 2012.
[20] A. L. Maxwell, S. A. Jeffrey, and M. Lévesque, "Business angel early stage decision
making," Journal of Business Venturing, vol. 26, pp. 212-225, 2011.
[21] G. S. Day, "Is it real? Can we win? Is it worth doing," Harvard Business Review, vol. 85,
pp. 110-120, 2007.
[22] P. Miller and K. Bound, The Startup Factories: The Rise of Accelerator Programmes to
Support New Technology Ventures: NESTA, 2011.
[23] D. A. Isabelle, "Key factors affecting a technology entrepreneur's choice of incubator or
accelerator," Technology Innovation Management Review, vol. 3, p. 16, 2013.
[24] M. Schwartz, "Beyond incubation: an analysis of firm survival and exit dynamics in the
post-graduation period," The Journal of Technology Transfer, vol. 34, pp. 403-421, 2009.
[25] R. W. Smilor and M. D. Gill Jr., The New Business Incubator: Linking Talent, Technology,
Capital, and Know-How, 1986.
[26] S. A. Mian, "US university-sponsored technology incubators: an overview of management,
policies and performance," Technovation, vol. 14, pp. 515-528, 1994.
[27] K. Grifantini, "Incubating Innovation: A standard model for nurturing new businesses, the
incubator gains prominence in the world of biotech," IEEE Pulse, vol. 6, p. 27, 2015.
[28] P. H. Phan, D. S. Siegel, and M. Wright, "Science parks and incubators: observations,
synthesis and future research," Journal of Business Venturing, vol. 20, pp. 165-182, 2005.
[29] A. S. Amezcua, M. G. Grimes, S. W. Bradley, and J. Wiklund, "Organizational sponsorship
and founding environments: A contingency view on the survival of business-incubated
firms, 1994–2007," Academy of Management Journal, vol. 56, pp. 1628-1654, 2013.
[30] J. L. Barbero, J. C. Casillas, A. Ramos, and S. Guitar, "Revisiting incubation performance:
How incubator typology affects results," Technological Forecasting and Social Change,
vol. 79, pp. 888-902, 2012.
[31] J. L. Barbero, J. C. Casillas, M. Wright, and A. R. Garcia, "Do different types of incubators
produce different types of innovations?," The Journal of Technology Transfer, vol. 39, pp.
151-168, 2014.
[32] M. Schwartz and C. Hornych, "Specialization as strategy for business incubators: An
assessment of the Central German Multimedia Center," Technovation, vol. 28, pp. 436-449, 2008.
[33] S. Linder, 2002 State of the Business Incubation Industry: NBIA Publications, 2003.
[34] J. R. Lumpkin and R. D. Ireland, "Screening practices of new business incubators: the
evaluation of critical success factors," American Journal of Small Business, vol. 12, pp. 59-
81, 1988.
[35] L. Peters, M. Rice, and M. Sundararajan, "The role of incubators in the entrepreneurial
process," The Journal of Technology Transfer, vol. 29, pp. 83-91, 2004.
[36] R. W. Smilor, "Managing the incubator system: critical success factors to accelerate new
company development," IEEE Transactions on Engineering Management, pp. 146-155, 1987.
[37] S. M. Hackett and D. M. Dilts, "Inside the black box of business incubation: Study B—scale
assessment, model refinement, and incubation outcomes," The Journal of Technology
Transfer, vol. 33, pp. 439-471, 2008.
[38] R. S. Wulung, K. Takahashi, and K. Morikawa, "An interactive multi-objective incubatee
selection model incorporating incubator manager orientation," Operational Research, vol.
14, pp. 409-438, 2014.
[39] A. Bergek and C. Norrman, "Incubator best practice: A framework," Technovation, vol. 28,
pp. 20-28, 2008.
[40] J. Bruneel, T. Ratinho, B. Clarysse, and A. Groen, "The evolution of business incubators:
Comparing demand and supply of business incubation services across different incubator
generations," Technovation, vol. 32, pp. 110-121, 2012.
[41] H. Landström, "Informal investors as entrepreneurs: Decision-making criteria used by
informal investors in their assessment of new investment proposals," Technovation, vol. 18,
pp. 321-333, 1998.
[42] A. Tversky, "Elimination by aspects: A theory of choice," Psychological Review, vol. 79, p.
281, 1972.
[43] E. Shafir, "Choosing versus rejecting: Why some options are both better and worse than
others," Memory & Cognition, vol. 21, pp. 546-556, 1993.
[44] J. W. Payne, J. R. Bettman, and E. J. Johnson, "Adaptive strategy selection in decision
making," Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 14,
p. 534, 1988.
[45] A. L. Zacharakis and G. D. Meyer, "A lack of insight: do venture capitalists really
understand their own decision process?," Journal of Business Venturing, vol. 13, pp. 57-76, 1998.
[46] R. G. Cooper and E. J. Kleinschmidt, "New products: what separates winners from losers?,"
Journal of Product Innovation management, vol. 4, pp. 169-184, 1987.
[47] C. Song, J. Luo, K. Hölttä-Otto, K. Otto, and W. Seering, "The design of crowd-funded
products," in ASME 2015 International Design Engineering Technical Conferences &
Computers and Information in Engineering Conference (IDETC/CIE 2015), Boston,
Massachusetts, 2015.
[48] F. A. Khalid, D. Gilbert, and A. Huq, "Investigating the underlying components in business
incubation process in Malaysian ICT incubators," Asian Journal of Social Sciences and
Humanities, vol. 1, pp. 88-102, 2012.
[49] N. Radojevich-Kelley and D. L. Hoffman, "Analysis of accelerator companies: An
exploratory case study of their programs, processes, and early results," Small Business
Institute Journal, vol. 8, pp. 54-70, 2012.
[50] C. Mason and R. Harrison, "Why 'business angels' say no: a case study of opportunities
rejected by an informal investor syndicate," International Small Business Journal, vol. 14,
pp. 35-51, 1996.
[51] R. Sudek, "Angel investment criteria," Journal of Small Business Strategy, vol. 17, p. 89, 2006.
Figure 1. Groups of startups according to the two decision stages
Table 1. Previously Reported Startup Selection Criteria of Incubators (Criterion: Prior Studies)
Ability of Job Creation: Smilor [36]; Wulung et al. [38]
Capital Availability: Smilor [36]; Merrifield [18]; Lumpkin and Ireland [34]; Mian [26]; Hackett and Dilts [37]; Khalid et al. [48]
Competitive Advantage: Merrifield [18]; Hackett and Dilts [37]; Khalid et al. [48]
Company Age: Bruneel et al. [40]; Wulung et al. [38]
Company is Locally Owned: Smilor [36]
Company is Startup: Smilor [36]; Mian [26]
Company Size: Lumpkin and Ireland [34]; Aerts et al. [7]
Company Survivability: Wulung et al. [38]
Exit Options: Hackett and Dilts [37]; Khalid et al. [48]
Financials (Liquidity, Price Earnings, Debt, Asset Utilization): Lumpkin and Ireland [34]; Aerts et al. [7]
Growth Potential: Smilor [36]; Merrifield [18]; Lumpkin and Ireland [34]; Mian [26]; Aerts et al. [7]; Hackett and Dilts [37]; Bruneel et al. [40]
Team’s Age: Lumpkin and Ireland [34]; Aerts et al. [7]
Team’s Gender: Aerts et al. [7]
Team’s Finance Expertise: Lumpkin and Ireland [34]; Aerts et al. [7]; Bergek and Norrman [39]
Team’s Management Expertise: Merrifield [18]; Hackett and Dilts [37]; Khalid et al. [48]
Team’s Marketing Expertise: Lumpkin and Ireland [34]; Aerts et al. [7]; Bergek and Norrman [39]
Team’s Persistence: Lumpkin and Ireland [34]; Aerts et al. [7]
Team’s Prior Startup Experience: Bergek and Norrman [39]; Hackett and Dilts [37]; Bruneel et al. [40]; Khalid et al. [48]
Team’s Technical Expertise: Merrifield [18]; Lumpkin and Ireland [34]; Hackett and Dilts [6]; Aerts et al. [7]; Bergek and Norrman [39]; Khalid et al. [48]
Manufacturing Competence: Merrifield [18]; Mian [26]
Market Size and Growth: Hackett and Dilts [6]; Aerts et al. [7]; Bergek and Norrman [39]; Hackett and Dilts [37]; Khalid et al. [48]
Marketing & Distribution: Merrifield [18]; Lumpkin and Ireland [34]; Aerts et al. [7]
Supply Chain Availability: Merrifield [18]
Patent Protection: Hackett and Dilts [37]; Khalid et al. [48]
Political and Social Constraints: Merrifield [18]
Merrifield [18]; Lumpkin and Ireland [34]; Hackett and Dilts [6]; Aerts et al. [7]; Bergek and Norrman [39]; Hackett and Dilts [37]; Khalid et al. [48]; Wulung et al. [38]
Reference from Others: Lumpkin and Ireland [34]; Aerts et al. [7]
Risk Distribution: Merrifield [18]
Technology Related: Smilor [36]; Mian [26]; Bruneel et al. [40]
Unique Opportunity: Smilor [36]; Lumpkin and Ireland [34]; Hackett and Dilts [6]; Aerts et al. [7]; Hackett and Dilts [37]; Bruneel et al. [40]; Khalid et al. [48]
Written Business Plan: Smilor [36]; Lumpkin and Ireland [34]; Mian [26]; Aerts et al. [7]
Table 2. JFDI Online Questionnaire
1) The JFDI.2015A application is now live. We ask the founder/co-founders to fill out the form (no need to ask employees, contractors, or
advisors). You can invite your other co-founders via the invite button on this form.
2) Tell us about your idea, in 140 characters or less.
Founding Team
3) How many founders are there?
4) How long have all founders (not including employees) worked together as a team?
5) What have you as a team already achieved together? It doesn't have to be examples from this particular project/startup.
6) Please record a short video (2 minutes) where all founders introduce themselves and explain why you are building a team and startup together.
7) Can all founders attend and be physically present throughout the JFDI.2015A program + at least 1 month post program (early April to early
August 2015) to meet with investors?
8) If you, as a co-founder, cannot attend the program in person through the entire duration of the program (early April to early August 2015),
please explain.
9) Which city makes sense for the startup to physically be set up in, after the program?
Your Startup
10) Are you already incorporated?
11) What date did you start this company?
12) What is the total amount of cash invested in this startup to date?
13) How is equity divided amongst founders and team? If you have other shareholders or employee option pool, please list the details.
14) How many person-months has the team (including founders, employees, contractors) worked on this startup?
15) How many full-time employees are there on your team?
16) How many full-time developers/engineers are there on the team?
17) Please provide any GitHub URLs for technical employees and LinkedIn URLs for business side employees
Individual Founders
18) How did you meet your co-founders?
19) Which of the following skills do you personally have?
• Hacker: I can build software. I will write code for this startup
• Hipster: I can do web and graphic design. I will build the UI / UX
• Hustler: I can sell to customers and talk to investors. I will do biz dev.
20) What is your role in this business?
21) How many years in your career have you led the development of new products, services, or technologies?
22) How much work experience do you have at a "real company", not a startup?
23) What is your experience with startups?
24) Describe your work experience, include your GitHub or LinkedIn URLs.
25) What is the approximate monthly salary in Singapore Dollar you would expect working for a medium sized tech company where you live?
26) Tell us all educational milestones you have reached.
27) Have you made any unusual lifestyle choices? Tell us about your strange food choices, weird hobbies, or bizarre behaviors which mainstream
humans just don't get. Or if you have something impressive you have personally built or achieved, please share links or stories with us.
28) If you own more than 5% of any other business, whether incorporated, a partnership, or family business, please describe your relationship with
that other business.
29) Do you have any commitments, for example, a job offer, military service, or study that will prevent you from giving 100% commitment to this
business over the next two years?
30) Who are you selling to/do you plan to sell to in the next year?
31) Explain how you intend to (or already do) find customers?
32) Please provide a 1-minute video demo of your product. Please *only* post a video demo of your product or prototype. Videos longer than 1
minute will not be viewed
33) What is the URL for your website/demo/mockup etc.?
34) What kinds of products are you selling/do you plan to sell to your customers?
35) What kind of traction milestone does your product enjoy?
36) Please describe in details what evidence and metrics you have to support the traction milestone you picked above.
37) What is your next traction milestone for this business and what are the steps you need to take to reach it?
38) What monetization models are you using/do you plan to test during the program and beyond?
39) Please connect your stats tracking account(s) to help us understand your product or service usage
Market & Domain
40) Why did your team choose this particular idea to work on?
41) Who are your competitors? What differentiates you? Include URLs
42) What is different/interesting/new about your business?
43) Imagine we sent you and your team back in time. Could your idea have been successful five years ago? Please explain why.
44) What is the current monthly cash required to pay all founders, employees and expenses (gross burn) in your home country?
45) How much total revenue has your startup had in its lifetime?
46) How much revenue has your startup had in the last month?
47) Do you plan to raise money in the future? If so how much and when will it make sense?
48) Name a JFDI alumni/mentor that you know and any notable mentors, investors or advisors that you want to tell us about.
Table 3. Screening Questions Addressing Potential Critical Criteria in Startup Selection (Criterion: Detailed Question)
Q01 Demand Validation: Is there voice-of-customer-type evidence or demand validation?
Q02 Customer Affordability: Is there evidence that customers can afford to buy the product?
Q03 Market Demographics: Is there market size and demographic analysis?
Q04 Benefit Understanding: Is there evidence that customers understand the product’s benefits?
Q05 Subjective Constraint: Is there a subjective barrier that constrains the customer?
Q06 Concept Maturity: Is there evidence that the concept can be realized as a product?
Q07 Sales & Distribution: Is there evidence of existing sales and distribution channels?
Q08 Product Maturity: Is there evidence of the functional feasibility of the product?
Q09 Manufacturability: Is there evidence of manufacturability with efficiency and low cost?
Q10 Clarified Tradeoffs: Is there clarification of trade-offs in performance, cost, etc.?
Q11 Competition Validation: Is there validation of the product’s competitiveness in the market?
Q12 Value Proposition: Is there evidence of tangible or intangible benefits for customers?
Q13 Sustainable Advantage: Is there evidence of advantages not easily available to competitors?
Q14 Patent Strategy: Is there a patent strategy for existing patents or for circumventing patents?
Q15 Patent Protection: Is there capability to maintain and protect patents?
Q16 Competitor Response: Is there evaluation of potential competitor responses?
Q17 Competition Strategy: Is there a strategy prepared for competition?
Q18 Marketing Effort: Is there evidence of marketing efforts to enhance customer perception?
Q19 Team Size: Is there adequate manpower in the startup?
Q20 Marketing/Sales Expertise: Is there marketing/sales experience in the startup team?
Q21 Technology Expertise: Is there a product development skill set in the startup team?
Q22 Management Expertise: Is there management experience in the startup team?
Q23 Financial Expertise: Is there a financial skill set in the startup team?
Q24 Prior Startup Experience: Is there prior entrepreneurship experience in the startup team?
Q25 Feedback Mechanism: Is there a team mechanism to listen and respond to customers?
Q26 Profitability: Is there evidence of adequate profitability?
Q27 Risk Assessment: Is there evidence of risk assessment?
Q28 Risk Mitigation: Is there evidence of risk mitigation measures?
Q29 Growth Strategy: Is there evidence of strategies and potential for future growth?
Q30 Capital Availability: Is there evidence of adequate capital for growth?
Table 4. Mapping Criteria in the Scoreboard to References
Reference columns: Lumpkin and Ireland [34]; Hackett and Dilts [6]; Aerts et al. [7]; Bergek and Norrman [39]; Hackett and Dilts [37]; Bruneel et al. [40]; Khalid et al. [48]; Wulung et al. [38]
Independent Variables (Potential Critical Criteria)
Q01 Demand Validation
Q02 Customer Affordability
Q03 Market Demographics
Q04 Benefit Understanding
Q05 Subjective Constraint
Q06 Concept Maturity
Q07 Sales & Distribution
Q08 Product Maturity
Q09 Manufacturability
Q10 Clarified Tradeoffs
Q11 Competition Validation
Q12 Value Proposition
Q13 Sustainable Advantage
Q14 Patent Strategy
Q15 Patent Protection
Q16 Competitor Response
Q17 Competition Strategy
Q18 Marketing Effort
Q19 Team Size
Q20 Marketing/Sales Expertise
Q21 Technology Expertise
Q22 Management Expertise
Q23 Financial Expertise
Q24 Prior Startup Experience
Q25 Feedback Mechanism
Q26 Profitability
Q27 Risk Assessment
Q28 Risk Mitigation
Q29 Growth Strategy
Q30 Capital Availability
Control Variables
Social Profile
Profile Pictures
Variables Excluded due to Lack of Relevant Information in Profile Data
Ability to create jobs
Financial ratios: liquidity, price earnings, debt and asset utilization
Management team persistence
Age of the management
Exit options
Company age
Company survivability
Management team gender
Variables Excluded due to Being Default for All Applicants
Startup status
Business plan
Table 5. Guidance to Answer Q2
Q2. Is there evidence that customers can afford to buy the product?
Full: Data that customers are willing to pay, surveys, or benchmarking data (tables, competing and
complementary data)
Partial: A single customer quote or summary customer statements on price (e.g., all the customers we talked to
said xxx)
None: No points if they did not communicate with any customers about price (e.g., everybody wants a low-
cost product)
Sample Information Provided by a Startup
The company made over $6,000 in revenue in the last 3 months and enjoys a 100% subscriber growth from July to
August. We currently have over 100 subscribers, who are providing recurring revenue. Our net promoter score is at
100% when we last surveyed 30 customers & current retention rate is at 80%.
Evidence Level: Full
Table 6. Mean Differences of the Criteria in the Initial Screening Stage and Final Selection Stage
Columns: Critical at Which Stage; Mean Difference at the Initial Screening Stage (Continued minus Filtered); Mean Difference at the Final Selection Stage (Accepted minus Rejected)
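The group comparison behind Table 6 can be sketched as follows. This is a toy illustration, not the authors' code or data: for one criterion, it compares the mean evidence scores of startups that continued past a stage against those filtered out, using a Welch t-statistic; the scores, group sizes, and random seed are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy evidence levels per startup for one criterion: 0 = none, 1 = partial, 2 = full.
continued = rng.integers(0, 3, size=30).astype(float)  # passed the screening
filtered = rng.integers(0, 3, size=20).astype(float)   # filtered out

mean_diff = continued.mean() - filtered.mean()

# Welch t-statistic (allows unequal variances and group sizes).
se = np.sqrt(continued.var(ddof=1) / len(continued)
             + filtered.var(ddof=1) / len(filtered))
t_stat = mean_diff / se

print(f"mean difference (continued minus filtered): {mean_diff:.2f}")
print(f"Welch t-statistic: {t_stat:.2f}")
```

Repeating this comparison per criterion, and per stage, yields the two mean-difference columns of Table 6.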
Table 7. The Best Prediction Models
Initial Screening
Final Selection
Social profile
Profile Pictures
Q01 Demand Validation
Q03 Market Demographics
Q06 Concept Maturity
Q07 Sales & Distribution
Q21 Technology Expertise
Q13 Sustainable Advantage
Q24 Prior Startup Experience
Q25 Feedback Mechanism
Q29 Growth Strategy
Wald chi-square (p)
Pseudo R2
Number of Observations
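A minimal sketch (not the authors' model) of how a logistic regression like the Table 7 screening model could be fit: a binary outcome (interviewed vs. filtered) is regressed on criterion evidence scores. All data here are synthetic, and the three predictors merely stand in for scoreboard criteria such as Q01, Q03, and Q06.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50

# Synthetic evidence scores (0 = none, 1 = partial, 2 = full).
scores = rng.integers(0, 3, size=(n, 3)).astype(float)
X = np.hstack([np.ones((n, 1)), scores])  # prepend an intercept column

# Synthetic outcome: more evidence raises the odds of being interviewed.
true_logit = -2.0 + 0.8 * scores.sum(axis=1)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

# Maximum-likelihood fit by plain gradient ascent on the log-likelihood.
beta = np.zeros(X.shape[1])
for _ in range(5000):
    z = np.clip(X @ beta, -30.0, 30.0)  # guard against overflow in exp
    p = 1.0 / (1.0 + np.exp(-z))
    beta += 0.05 * X.T @ (y - p) / n    # gradient of the log-likelihood

pred = 1.0 / (1.0 + np.exp(-np.clip(X @ beta, -30.0, 30.0))) >= 0.5
accuracy = (pred == y.astype(bool)).mean()
print("coefficients (intercept first):", np.round(beta, 2))
print("in-sample accuracy:", accuracy)
```

In practice one would use a standard statistics package, which also reports the Wald chi-square and pseudo R-squared shown in Table 7.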
Table 8. Predicted versus Actual Results

Initial Screening Stage
                          Actual “Filtered”    Actual “Interviewed”
Predicted “Filtered”      14 (77.8%)           4 (22.2%)
Predicted “Interviewed”   11 (34.4%)           21 (65.6%)

Final Selection Stage
                          Actual Rejection     Actual Acceptance
Predicted Rejection       20 (62.5%)           12 (37.5%)
Predicted Acceptance      5 (27.8%)            13 (72.2%)
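The counts reported in Table 8 imply the overall accuracies described as "below 80%" in the limitations discussion, since accuracy is simply the share of cases where the predicted label matches the actual one:

```python
def accuracy(tn, fp, fn, tp):
    """Fraction of cases on the confusion-matrix diagonal."""
    return (tn + tp) / (tn + fp + fn + tp)

# Initial screening: predicted/actual Filtered vs. Interviewed (Table 8).
screening_acc = accuracy(tn=14, fp=4, fn=11, tp=21)   # (14 + 21) / 50
# Final selection: predicted/actual Rejection vs. Acceptance (Table 8).
selection_acc = accuracy(tn=20, fp=12, fn=5, tp=13)   # (20 + 13) / 50

print(f"screening accuracy: {screening_acc:.0%}")   # 70%
print(f"selection accuracy: {selection_acc:.0%}")   # 66%
```

Both figures fall well short of 80%, consistent with the paper's characterization of the prediction models as preliminary.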
... However, appropriate skills are critical to intergenerational collaboration, and researchers have emphasized the importance for global start-up founders to identify both competency and success characteristics (Giardino et al., 2014;Massis et al., 2018;Pirkkalainen & Pawlowski, 2014;Rasmussen & Tanev, 2015;Tanev, 2012;van der Westhuizen & Goyayi, 2020;Yin & Luo, 2018), especially in the early stages of business development, when strategic organizational decisions are often urgently needed (Basly, 2007;Giardino et al., 2014;Rasmussen & Tanev, 2015;Tanev, 2012). Given the variety of perspectives on innovation, in this study, we understand the innovation as processes and activities that add strategic value to the current status quo. ...
... At the group level, individuals' competencies were combined as human resources that complement each other to form a specific group of expertise or organizational capabilities (Saa-Perez & Garcia-Falcon, 2002). In the context of start-ups, the competency set is one of the most important prerequisites for assessing the potential success of start-up development (Colombo & Piva, 2008;Hafeez et al., 2002;Yin & Luo, 2018). ...
... The result can be used as a self-assessment tool to understand target competencies and help start-ups find a qualified partner to complement current competencies. Investors can also use the competencies to assess start-ups' readiness to collaborate and leverage cross-generational collaboration for innovation (Kleine & Yoder, 2011;Yin & Luo, 2018). ...
Full-text available
In this study, we looked at the competencies and changes in the competency spectrum required for global start-ups in the digital age. Specifically, we explored intergenerational collaboration as an intervention in which experienced business-people from senior adult groups support young entrepreneurs. We conducted a Delphi study with 20 experts from different disciplines, considering the study context. The results of this study shed light on understanding the necessary competencies of entrepreneurs for intergenerationally supported start-up innovation by providing 27 competencies categorized as follows: intergenerational safety facilitation, cultural awareness, virtues for growth, effectual creativity, technical expertise, responsive teamwork, values-based organization, and sustainable network development. In addition, the study results also reveal the competency priorities and the minimum requirements for each competency group based on the global innovation process and can be used to develop a readiness assessment for start-up entrepreneurs.
... Although the research on selection criteria applied by commercial as well as socialimpact accelerators has recently increased [4][5][6][7], there are still many gaps to be addressed in the literature. Our understanding of the factors that affect the probability of a sustainability startup being selected by accelerators is very limited [3,4]. ...
... Recently, the research on the selection and screening processes of accelerators has been growing, providing a more concise review of the criteria applied by accelerators to select startups [4][5][6][7]. These studies reveal that accelerators consider both business-project-related factors and the characteristics, skills, and competences of startup teams [4,5,7]. ...
... Recently, the research on the selection and screening processes of accelerators has been growing, providing a more concise review of the criteria applied by accelerators to select startups [4][5][6][7]. These studies reveal that accelerators consider both business-project-related factors and the characteristics, skills, and competences of startup teams [4,5,7]. ...
Full-text available
Accelerators are specially designed entrepreneurship programs that enable startups to scale up at a fast pace through mentoring, intense consulting, training, and provision of access to business networks. To cope with the challenges of the entrepreneurial process and to access resources to achieve a quick scale-up, sustainability startups need a great deal of support from intermediary organizations. In this study, we examined 7358 social-sustainability startups and 2671 environmental-sustainability startups to understand the factors that influence the probability of a sustainability startup being selected by accelerators. Our main research question was whether previous funding (in the form of equity funding or philanthropic support) received by sustainability startups affects the selection decisions of accelerators. We also investigated how team-related characteristics such as work experience diversity, female startup teams, a team’s passion or commitment, and entrepreneurial experience influence the chances of startups being selected by accelerators. Our data were drawn from the Global Accelerator Learning Initiative (GALI), which was cocreated by the Aspen Network of Development Entrepreneurs and Emory University. The data have been collected from entrepreneurs around the world since 2013. The wave we used included a dataset covering the years 2013–2019. Our results indicate that for both social-sustainability and environmental-sustainability startups, the amount of previous equity funding and philanthropic support received from external funding providers is of critical importance for the startup to be selected by accelerators. We also found that previous funding mediates the relationship between various team-related characteristics and the probability of a startup being selected by accelerators.
... The company profiles along with the selection results were compared using real-win-worth criteria, and regression models predicting the selection results were subsequently constructed. The models developed can be used to help accelerator managers improve their own decision-making processes [Yin and Luo, 2018]. ...
Full-text available
Accelerators have been becoming increasingly popular among young entrepreneurs interested in developing products, attracting investors, or establishing relations with industry represented by large companies. The focus of the studies is to conduct literature review due to the small number of scientific articles are available on this topic. The article aims to show the current state of knowledge about startup accelerators and the support they provide. It outlines what added value accelerators offer in their programs for young innovative companies. To achieve the stated aim, the authors combine a systematic literature review with a bibliometric analysis. The results of this research will be helpful in better matching the developed project with existing accelerator programs on the market. It can contribute to a better understanding of the principles governing the programs, program expectations of the accelerator and its partners with respect to the proposed solutions (corporations, business angels, and venture capital funds).
... The second study from that period investigates the manner in which the best accelerators select start-ups, taking the example of the first seed accelerator in South-East Asia and a group of few enterprises applying for its programs. Their profiles along with the selection results are compared by means of the real-winworth criteria and on this basis regression models are built to predict selection results, which is expected to help accelerator managers to improve their own decision-making processes (Yin et al., 2018). ...
Full-text available
Purpose: The paper presents a review of the literature concerning start-up accelerators and a classification of related research untill August 2021. Approach/Methodology/Design: While elaborating the classification, the authors coded works according to the type of accelerator and implemented acceleration program. Furthermore, the paper identifies the countries, research bodies and authors who focus on research on the functioning of accelerators. The authors present how various accelerator forms operate and how they perform. Findings: The paper systematizes knowledge related to start-up accelerators available in the Scopus base and suggests directions for future research. Practical Implications: Recently a clear phenomenon is shown that is the development of a start-up ecosystem, in particular creation and professionalization of the new form of organisation that is a start-up accelerator. This entity acts as a bridge between start-ups and corporations and big enterprises, promoting success of both sides-conclusion of business contracts. More start-ups and corporations decide to collaborate with accelerators that, with their acceleration programs involving big companies, support them both. By monitoring the corporate-start-up collaboration, accelerators actively promote both parties, also in terms of generating necessary innovations to support, for instance, production, sales or service processes in big companies. An evergrowing number of accelerators and accelerator programs worldwide translates into more interest in research in this field. Originality/Value: Despite the increasing research trend related to start-up accelerators, no precise research classification has been available to date.
... Time inconsistency (Kim and Wagman, 2014) Open innovation (Kohler, 2016;Prexl et al., 2019;Pustovrh, Rangus and Drnovšek, 2020) Startup selection criteria; selection process (Yin and Luo, 2018) Dynamic socially situated cognition; expert information processing theory (Goswami, Mitchell and Bhagavatula, 2018) Entrepreneurship: Speed in entrepreneurship; time compression diseconomies; and entrepreneurial resource acquisition (Qin, Wright and Gao, 2019) Growth of new ventures, the operation of university accelerators, and the entrepreneurial ecosystem (Breznitz and Zhang, 2019) The knowledge spillover theory of entrepreneurship (KSTE); absorptive capacity. (Cuvero et al., 2019) Community capital framework; entrepreneurial clusters (Bliemel et al., 2019) Bounded rationality (Cohen, Bingham and Hallen, 2019) Interorganizational learning (Hallen, Cohen and Bingham, 2020) Signaling theory; gender role congruity theory (GRCT) (Yang, Kher and Newbert, 2020) High-growth entrepreneurship; business accelerators (González-Uribe and Reyes, 2021) Sociomaterial practice theory and literature on practice-based learning (Katila, Kuismin and Valtonen, 2020) Source: authors. ...
Conference Paper
Practical and theoretical interest in startups grown significantly in recent years, and this phenomenon brings accelerators as a suitable option for the development of these firms. Our study aims to analyze the research field's development on accelerators and present perspectives and opportunities for future research. We use a bibliometric analysis, a systematic literature review, and a content analysis to present the principal studies and references in the area and highlight that entrepreneurship and open innovation are frequently associated with accelerators. Accelerators can be viewed as organizations or as acceleration processes, meaning that acceleration can be used by firm to open innovate and accelerators will need to develop unique characteristics. Finally, we developed four research streams for future studies that have the potential of a better comprehension of the field.
University business incubators are important drivers of entrepreneurial innovation ecosystems. The current study examines how digital tools—especially social, mobile, analytics and cloud (SMAC) technologies—facilitate internal and external interactions among university incubators and various actors in the entrepreneurial innovation ecosystem. This research uses a comparative study of multiple cases of Canadian university incubators in a longitudinal manner to explore the role of these digital technologies in the three primary incubation process activities: incubatee search and selection, business support, and networking. Findings suggest SMAC technologies are important for facilitating the incubation processes of university incubators but are currently underutilized. Digital technologies can be used further to help incubator managers and entrepreneurs develop innovative ideas, foster incubatees, and contribute to entrepreneurial ecosystems.
This study explores the use of machine learning methods to forecast the likelihood of firm birth and firm abandonment during the first five years of a new business gestation. The predictability of traditional logistic regression is compared with several machine learning methods, including k-nearest neighbors, random forest, extreme gradient boosting, support vector machines, and artificial neural networks. While extreme gradient boosting shows the best overall model performance, neural networks provide good results by correctly classifying entrepreneurs who have not abandoned their business venture in the early stage of the gestation process. In addition, this study provides valuable insights in relation to the start-up activities leading to firm emergence. Entrepreneurs who perform a greater number of activities and who can orchestrate them at the right rate, concentration, and time are more likely to successfully launch a new business venture.
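As a rough illustration of the classification task described above, the sketch below implements one of the compared methods, k-nearest neighbors, in pure Python on synthetic gestation data. The feature set (activity count, months active), the labeling rule, and all parameter values are invented for illustration and do not come from the study.

```python
import random

def knn_predict(train, x, k=5):
    """Classify x by majority vote among its k nearest training points."""
    nearest = sorted(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))[:k]
    votes = sum(label for _, label in nearest)
    return 1 if 2 * votes > k else 0

random.seed(0)

def make_point():
    # Hypothetical gestation record: enough activities at a fast enough
    # pace -> firm birth (label 1); otherwise abandonment (label 0).
    activities = random.randint(0, 10)
    months = random.uniform(1, 60)
    label = 1 if activities >= 5 and months <= 36 else 0  # invented rule
    return ((activities, months / 6.0), label)            # crude feature scaling

train = [make_point() for _ in range(200)]
test = [make_point() for _ in range(50)]
accuracy = sum(knn_predict(train, x) == y for x, y in test) / len(test)
```

In a fuller replication one would compare this against logistic regression, gradient boosting, and the other methods the study benchmarks, using proper cross-validation.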
We investigate the role of entrepreneurs’ human capital on the potential of newly created ventures to receive equity funding from Accelerators and Business Angels using a resource-based approach to entrepreneurship theory. Using data from 10,563 for-profit innovative ventures, we find significant differences between those two groups. More specifically, formal education and founding experience of the entrepreneurial team is positively associated with the likelihood of the team to receive equity from Angels but negatively associated with the likelihood of the team to receive equity from Accelerators. Overall, our results are in line with the theoretical argument that human capital signals are important in reducing the information asymmetries faced by angels and ultimately driving entrepreneurs’ success in securing angel funding, but our results also suggest that some aspects of human capital signals do not contribute to the entrepreneur’s success in receiving accelerator funding. Our findings have important repercussions for the quality of design and operation of both private and state-supported programmes and for accelerator managers.
Technology entrepreneurship rarely succeeds in isolation; increasingly, it occurs in interconnected networks of business partners and other organizations. For entrepreneurs lacking access to an established business ecosystem, incubators and accelerators provide a possible support mechanism for access to partners and resources. Yet, these relatively recent approaches to supporting entrepreneurship are still evolving. Therefore, it can be challenging for entrepreneurs to assess these mechanisms and to make insightful decisions on whether or not to join an incubator or accelerator, and which incubator or accelerator best meets their needs. In this article, five key factors that entrepreneurs should take into consideration about incubators and accelerators are offered. Insights are drawn from two surveys of managers and users of incubators and accelerators. An understanding of these five key success factors (stage of venture, fit with incubator’s mission, selection and graduation policies, services provided, and network of partners) and potential pitfalls will help entrepreneurs confidently enter into a relationship with an incubator or accelerator.
Crowdfunding is an emerging phenomenon where entrepreneurs publicize their product concepts to raise development funding and collect design feedback directly from potential supporters. Many innovative products have raised a significant amount of crowdfunding. This paper analyzes the crowd-funded products to develop design guidelines for crowdfunding success. A database of 127 samples is collected in two different product categories from two different crowdfunding websites. They are evaluated using a design project assessment scorecard, the Real-Win-Worth framework, which focuses on the state of maturity on various customer, technical and supply chain dimensions. Our analysis identified key RWW factors that characterize successful design for crowd-funded products. For example, success at crowdfunding is attained through clear explanation of how the design operates technically and meets customer needs. Another recommendation is to not emphasize patent protection, for which crowd-funders are less concerned. Also, evidence of a strong startup financial plan is not necessary for crowdfunding success. These key RWW factors provide guidelines for designers and engineers to improve their design and validate their concepts early to improve their chances for success on crowdfunding platforms.
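The scorecard logic of a Real-Win-Worth assessment can be sketched as follows. The criterion names, the 1-5 rating scale, and the min-gated aggregation are illustrative assumptions, not the actual instrument applied to the 127 products.

```python
# Hypothetical RWW scorecard: each criterion is rated 1 (weak) to 5 (strong).
RWW_CRITERIA = {
    "real":  ["customer need is articulated", "product concept is demonstrated"],
    "win":   ["technical explanation is clear", "differentiation from alternatives"],
    "worth": ["funding goal is justified", "reward tiers match costs"],
}

def rww_score(ratings):
    """Average each dimension; the weakest dimension caps the overall score,
    reflecting RWW's gating logic (a project must pass all three questions)."""
    dims = {d: sum(ratings[c] for c in cs) / len(cs) for d, cs in RWW_CRITERIA.items()}
    return dims, min(dims.values())

ratings = {"customer need is articulated": 5, "product concept is demonstrated": 4,
           "technical explanation is clear": 5, "differentiation from alternatives": 3,
           "funding goal is justified": 2, "reward tiers match costs": 2}
dims, overall = rww_score(ratings)
# overall = 2.0 -> "worth" is the binding constraint for this project
```

Comparing such per-dimension scores between funded and unfunded campaigns is one way to surface which RWW factors discriminate success.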
The role of effort and accuracy in the adaptive use of decision processes is examined. A computer simulation using the concept of elementary information processes identified heuristic choice strategies that approximate the accuracy of normative procedures while saving substantial effort. However, no single heuristic did well across all task and context conditions. Of particular interest was the finding that under time constraints, several heuristics were more accurate than a truncated normative procedure. Using a process-tracing technique that monitors information acquisition behaviors, two experiments tested how closely the efficient processing patterns for a given decision problem identified by the simulation correspond to the actual processing behavior exhibited by subjects. People appear highly adaptive in responding to changes in the structure of the available alternatives and to the presence of time pressure. In general, actual behavior corresponded to the general patterns of efficient processing identified by the simulation. Finally, learning of effort and accuracy trade-offs are discussed.
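The effort-accuracy trade-off the simulation examines can be reproduced in miniature: below, a normative weighted additive rule is compared with a lexicographic heuristic, with effort counted as a crude number of elementary information processes (EIPs). The EIP counts and task parameters are simplified assumptions, not the paper's simulation.

```python
import random

def wadd(alts, weights):
    """Weighted additive rule: uses all cue values (accuracy baseline)."""
    eips = len(alts) * len(weights) * 2   # one read + one multiply-add per cue
    best = max(range(len(alts)),
               key=lambda i: sum(w * a for w, a in zip(weights, alts[i])))
    return best, eips

def lex(alts, weights):
    """Lexicographic heuristic: choose the best alternative on the single
    most important cue, ignoring all other information."""
    top = max(range(len(weights)), key=lambda j: weights[j])
    eips = len(alts)                      # one read per alternative
    best = max(range(len(alts)), key=lambda i: alts[i][top])
    return best, eips

random.seed(1)
trials, hits, eff_wadd, eff_lex = 500, 0, 0, 0
for _ in range(trials):
    weights = [random.random() for _ in range(4)]
    alts = [[random.random() for _ in range(4)] for _ in range(5)]
    w_choice, e_w = wadd(alts, weights)
    l_choice, e_l = lex(alts, weights)
    hits += (w_choice == l_choice)
    eff_wadd += e_w
    eff_lex += e_l

relative_accuracy = hits / trials        # how often the heuristic agrees with WADD
effort_saving = 1 - eff_lex / eff_wadd   # fraction of EIPs the heuristic avoids
```

Even this toy version shows the core pattern: the heuristic discards most of the effort while retaining a substantial share of the normative rule's choices.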
By analyzing observed interactions between entrepreneurs and business angels (BAs) on the Canadian reality TV show Dragons’ Den, we find that BAs use a non-compensatory decision-making process when evaluating anticipated risk and return. This is consistent with our hypotheses that BAs use decision heuristics (shortcuts) to conserve cognitive effort when deciding whether or not to invest in business opportunities proposed by entrepreneurs. Our results further our understanding of how and when behavioral decision theory can inform real-life BA investment decision processes. Additionally, the results offer practical implications for entrepreneurs interested in pitching proposals to BAs.
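The contrast between compensatory and non-compensatory evaluation can be sketched as below; the pitch attributes, weights, and cutoff values are made-up numbers for illustration, not coded data from the show.

```python
def compensatory(pitch, weights):
    """Weighted sum: strength on one attribute can offset weakness on another."""
    return sum(weights[a] * v for a, v in pitch.items())

def non_compensatory(pitch, cutoffs):
    """Conjunctive rule: reject as soon as any attribute falls below its cutoff."""
    return all(pitch[a] >= c for a, c in cutoffs.items())

# Hypothetical pitch attributes rated 0-10 by a business angel
pitch   = {"market_size": 9, "team": 8, "valuation_realism": 2}
weights = {"market_size": 0.5, "team": 0.25, "valuation_realism": 0.25}
cutoffs = {"market_size": 5, "team": 5, "valuation_realism": 5}

score  = compensatory(pitch, weights)      # 7.0: fundable under a weighted sum
invest = non_compensatory(pitch, cutoffs)  # False: one deal-breaker sinks the pitch
```

The divergence between the two rules on the same pitch is exactly the behavior the study attributes to effort-conserving heuristics.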
New firms are an important mechanism through which new jobs are created. However, the new venture failure rate is greater than the rate of creation. Business incubators have been organized to bring new businesses together to increase the probability of success. Incubators do not guarantee success; however, evaluating potential clients on Critical Success Factors can minimize failures once the firm joins an incubator. This research investigates the screening practices of incubators and identifies unique groups of incubators. The screening practices were found to relate to sponsorship but not to physical characteristics or objectives.
Incubators, accelerators, innovation centers, launch pads. Everyone defines the idea a bit differently, but, generally, these infrastructures refer to a subsidized space where fledgling companies get support: a combination of mentorship, funding, low rent, networking opportunities, and other training, with the goal of propelling early businesses to success.
This paper proposes an incubatee selection model as an important tool for technology incubators. Previous studies have determined that incubator managers who use multi-criterion screening or selection factors realize lower incubatee failure rates. Despite the importance of the incubatee selection process, there have been no efforts to date to formulate a mathematical model that addresses multi-criterion incubatee selection. Therefore, only a small number of incubator managers use multiple criteria to select the most promising incubatees. Our selection model uses multiple criteria in a multi-objective optimization based on the incubator’s goal. The criteria include profitability, survivability, and worker absorption. Because different ideological orientations of the incubator managers acting as decision makers (DMs) can influence the incubatee selection process, an interactive Tchebycheff method is used to provide a set of alternative solutions. Using a set of alternative solutions, we provide a degree of freedom in the analysis to accommodate DM orientation. Using the proposed model, a decision maker can optimize incubator goals, thereby ensuring the survivability of the incubatee and the success of the technology transfer process. Furthermore, the model also incorporates incubator specialization and the advantages of diversification.
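A minimal version of a weighted Tchebycheff scalarization, the core step that the interactive method iterates over, might look like the following. The candidate scores, objective names, and weight vectors are hypothetical, and the full interactive procedure (which refines weights with the decision maker) is not reproduced.

```python
def tchebycheff(candidates, ideal, weights):
    """Pick the candidate minimizing the weighted Tchebycheff distance to the
    ideal point; all objectives are to be maximized."""
    def distance(obj):
        return max(w * (z - v) for w, z, v in zip(weights, ideal, obj))
    return min(candidates, key=lambda c: distance(c[1]))

# Hypothetical incubatee candidates: (name, (profitability, survivability, jobs)),
# each objective normalized to [0, 1].
candidates = [
    ("A", (0.9, 0.4, 0.5)),
    ("B", (0.6, 0.7, 0.6)),
    ("C", (0.3, 0.9, 0.9)),
]
ideal = (1.0, 1.0, 1.0)

# Different weight vectors encode different incubator-manager orientations
profit_first = tchebycheff(candidates, ideal, (0.6, 0.2, 0.2))
balanced     = tchebycheff(candidates, ideal, (1 / 3, 1 / 3, 1 / 3))
```

Varying the weight vector and re-solving is how the scalarization yields the set of alternative solutions the paper uses to accommodate decision-maker orientation.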
A fundamental challenge for new ventures is overcoming liabilities of newness, particularly lack of business knowledge and lack of social embeddedness. Accelerators, intense, time-compressed entrepreneurial programs, attempt to alleviate these liabilities and accelerate venture development by facilitating learning and network development in new ventures. However, because of time compression diseconomies and the potential for inappropriate standardization, the literature suggests that such attempts at acceleration may be ineffective or even counterproductive. We test these competing ideas by comparing performance effects of accelerator-backed new ventures to a matched set of non-accelerator new ventures. Compared to the non-accelerator new ventures, we find that ventures backed by top accelerators are faster in raising venture capital and gaining customer traction. Intriguingly, our results also indicate that prior founder experience (e.g., prior entrepreneurial experience, formal education) is not a substitute for accelerator participation, suggesting that top accelerators provide a unique form of entrepreneurial learning and networks. Key contributions are uncovering evidence for time compression economies (vs. diseconomies) in the new venture gestation process, and unpacking forms of learning and networks that may aid venture development.