Toward a project portfolio management evaluation framework
As projects constitute a major part of organizational budgets and strategic development,
practitioners become dependent on project portfolio management (PPM). However, the existing
knowledge on how to evaluate and improve PPM is rather fragmented and lacks empirical
grounding. We ask: How can we develop a holistic and empirically validated PPM evaluation
framework? Drawing on evaluation theory, we structure contributions from 20 PPM publications
into a framework with four evaluation areas. Together with a large company, we develop, apply,
and refine the framework. As a result, we offer two contributions: (1) a theory-ingrained artifact
that structures a fragmented body of knowledge into four related PPM evaluation areas, and (2) a
demonstration of how a theory-ingrained evaluation artifact can serve as an evaluation framework
that helps practitioners identify strengths and improvement potential in PPM. In conclusion, we
discuss how our results may inform future research and help organizations evaluate PPM.
Keywords: Project portfolio management, Evaluation, Action design research
As Western societies become increasingly projectified (Jensen et al., 2016), organizations experience
ongoing challenges in managing their project portfolios, since projects constitute a major part of
the organizational budgets and strategic development (Schoper et al., 2018). Recent research
defines Project Portfolio Management (PPM) as the overall organizational ability to manage project
portfolios strategically and holistically to support the success of the organization (Clegg et al.,
2018). However, our review of the literature finds that there is a lack of frameworks enabling us to
evaluate how well the PPM arrangements of contemporary organizations support this endeavor.
PPM research has a long tradition. Early publications from the 1950s and 1960s start out by
applying a narrow focus to the project selection processes (Baker and Pound, 1964, Rosen, 1956).
Gradually, the scope expands to also include the processes before and after project selection
(Archer and Ghasemzadeh, 1999). After the turn of the millennium, PPM research began to shed
light on a wider range of problem areas (Elonen and Artto, 2003). In particular, issues regarding lack
of resources relative to the number of ideas seem to be a focal point for highly cited contributions
(Engwall and Jerbrant, 2003, Cooper et al., 2000). In the same era, a stream of research adopted
the maturity concept from the fast evolving field of software development (Paulk et al., 1993).
Soon thereafter, maturity assessments became central to the way organizations evaluate PPM,
and a rich variety of maturity models was developed by PPM
researchers (Reyck et al., 2005, Jeffery and Leliveld, 2004, Andersen and Jessen, 2003).
Maturity models are found useful because they allow individuals and organizations to evaluate the
maturity of various aspects of their procedures against benchmarks and to prioritize improvement
actions (Nikkhou et al., 2016). Here, maturity is understood as a state where the organization is in
a perfect condition to achieve its objectives (Andersen and Jessen, 2003). Maturity is mostly
described in stages, e.g. (0) Ad hoc, (1) Initial, (2) Repeatable, (3) Defined, (4) Managed and
optimized (Paulk et al., 1993). Research provides some empirical support for a connection between
the concept of PPM maturity and positive organizational effects (Jeffery and Leliveld, 2004, Reyck
et al., 2005), but no clear causation that can be statistically generalized (Hansen and
Furthermore, scholars argue that PPM maturity models have drawbacks (Nikkhou et al., 2016).
One substantial criticism is the implicit and dubious notion that one universal and static maturity
model fits all organizational settings across time and space (Drazin and Van de Ven, 1985). This
argument seems important, as many of these maturity models were developed more than a decade
ago. This inspires scholars to consider the generic question: What is the core PPM process leading
to organizational success (Padovani and Carvalho, 2016)? Patanakul (2015) forcefully asks: How
do we know that our PPM arrangement supports organizational success? He suggests the concept
of PPM effectiveness, which goes beyond evaluating the classical operational attribute of PPM
efficiency (Martinsuo and Lehtonen, 2007) to include a broader evaluation of the ability of
PPM to attain strategic goals (Patanakul, 2015). Related, but still distinct, recent scholars put
forward the concept of effectuation, which enables us to evaluate organizations’ ability to use
available resources and to foster partnerships and useful networks (Nguyen et al., 2018). Overall,
we see a development in the concepts used to evaluate PPM moving toward a broader
understanding of value (Laursen and Svejvig, 2016); this includes new evaluation criteria such as
sustainability (Martinsuo and Killen, 2014, Schipper and Silvius, 2018) and preparedness for the
future (Rank et al., 2015).
Despite the importance of previous contributions, we find that the literature on PPM evaluation is
rather fragmented and has no integrated and empirically tested framework enabling us to evaluate
PPM in contemporary organizations. On this backdrop, we formulate the research question: How
can we develop a holistic and empirically validated PPM evaluation framework?
We structure our response to the research question as follows. In the next section we present the
theoretical background, followed by the methodology section. Fourth, we present the developed
PPM evaluation framework, and in the fifth section we show how the evaluation framework can
be applied in a real-life organization. Sixth, we summarize key learning and reflection points,
before we finalize the article with a discussion of the practical and theoretical contributions of the study.
2 Theoretical background
To understand the construct of evaluation, we start this section by discussing its origin and later
development. We then suggest four approaches to PPM evaluation, which we utilize to organize
our review of literature on PPM evaluation.
Evaluation is the “action of appraising or valuing” [something] (Oxford English Dictionary).
Following Rode and Svejvig (2018b), evaluation can be conceived of as an integral part of our
basic human cognitive processes and a natural element in our everyday life.
Although there are documented evaluations of human interventions dating back to 2200 B.C.
(Shadish et al., 1991), the issue of program evaluation became especially important in the USA in
the 1960s (Chen, 2015), when Kennedy’s and Johnson’s administrations invested heavily in social
programs (Linzalone and Schiuma, 2015). Today, some talk about the evaluation society (Dahler-
Larsen, 2013) and consider evaluation a profession with a community of evaluators (Stufflebeam
and Coryn, 2014). Within this community, the evaluated object is a program. We are aware of the
distinction between project, program, and portfolio within the project management domain, but
according to Dahler-Larsen (2013), “program evaluation is “just” evaluation” – and therefore this
stream of research is considered relevant for PPM evaluation.
Within this stream of evaluation research, there is no one agreed-upon definition of evaluation –
but a broad range of evaluation paradigms, classifications, typologies, and models (Dahler-Larsen,
2013, Mertens and Wilson, 2012, Linzalone and Schiuma, 2015). For instance, recent research has
identified more than 50 evaluation models and 20 evaluation typologies (Linzalone and Schiuma,
2015). Dahler-Larsen (2013) presents a collection of 10 evaluation definitions – conceptualizing
evaluation as a process of systematic assessment, examination, investigation, and determination.
Based on this list, he describes how the definition of evaluation has developed over time – from a
method-centered approach in the early 1980s to an approach where context and conditions play a more prominent role.
One of the earliest evaluation classifications was made by Scriven (1967 in Chen, 2015) who
distinguished between formative and summative evaluations. The goal of formative evaluation is
to improve, while the goal of summative evaluation is to judge merit. Consequently, formative
evaluations are done early, whereas summative evaluations are done later in a program’s life cycle.
Scriven (1991, p. 19, in Chen, 2015, p. 8) explains the distinction by referring to a cook: “When the
cook tastes the soup, that’s formative evaluation; when the guests taste it, that’s summative
evaluation”. Later, Chen (2015) develops Scriven’s (1991)
dichotomy and proposes a matrix based on two distinctions: the evaluation’s function and the stages
of a program. He describes four distinct evaluation types and advocates for a hybrid of two or more
of these evaluation types. Hybrid evaluations are recommendable because they shed light on
different aspects and serve multiple purposes.
On this backdrop, Rode and Svejvig (2018a) present a multidimensional approach to project
evaluation consisting of four approaches: process, benchmarking, outcome, and learning. In this
framework, projects are the evaluand: the unit of analysis (Dahler-Larsen, 2013). As this
framework draws on general evaluation theory, we do however expect the four approaches to be
generic to such an extent that they can apply to different levels of analysis – beyond the meso
project level and including the macro PPM level (Geraldi and Söderlund, 2018). To test this
assumption, we structure the following review of PPM literature on evaluation around these four
approaches and develop a PPM evaluation framework that we instantiate in a real-life organization.
Following Rode and Svejvig (2018a), we structure the following discussion of PPM literature on
evaluation around the four evaluation approaches: process, benchmarking, outcome, and learning.
Drawing on references from an unstructured review of the PPM literature, we utilize the four
approaches to structure a historical account of how the PPM field of research has developed
throughout the years.
The first approach – PPM process evaluation – has a long and rich tradition (e.g. Rubenstein, 1957).
The early literature focused primarily on the selection process (e.g. Rosen, 1956, Hitchcock, 1963).
Later, the aforementioned seminal paper by Archer and Ghasemzadeh (1999) integrated the
existing techniques and tools from the literature in a coherent framework, enabling organizations
to evaluate a broad repertoire of their PPM selection processes. After the turn of the millennium,
scholars introduced the concept of maturity as a quantified measure for evaluating PPM processes
(e.g. Reyck et al., 2005, Jeffery and Leliveld, 2004, Pennypacker, 2005).
This leads to our next approach in PPM evaluation, as the concepts of maturity enable organizations
to benchmark their portfolio processes against internal and external organizational entities. As the
project management standards also put considerable effort into this development, the level of
sophistication in PPM benchmark models increases (e.g. OGC, 2006, PMI, 2013).
Whereas the maturity models are well suited for benchmarking PPM processes, they focus less
on benchmarking the results of the portfolio. Recent research points in a promising direction,
namely toward inclusion of an outcome perspective as well. Examples are concepts of PPM
effectiveness (Patanakul, 2015) and PPM effectuation (Nguyen et al., 2018), which transcend the
traditional focus on efficiency and emphasize the consequences of PPM.
In parallel, recent research has begun to explore how evaluations can emphasize learning. In that
regard, Stettina and Hörz (2015) discuss the concept of continuous improvement through repeated PPM
routines, and scholars such as Sweetman and Conboy (2018) discuss how portfolio management
may be adapted based on feedback and learning.
On the backdrop of our discussion of the PPM evaluation literature, we find several contributions
that advance PPM evaluation. However, we find no integrated and empirically validated PPM
evaluation model embracing all four approaches – but rather a fragmented literature on PPM
evaluation and little knowledge of how PPM evaluation is conducted in its organizational context.
3 Research methodology
This paper adopts the Action Design Research (ADR) approach as described by Sein et al. (2011a)
and utilized by Rode and Svejvig (2018b) to develop and demonstrate a project evaluation
framework. ADR has elements of action research (intervention) and design research (artifact
building) (Goldkuhl, 2012). ADR is a research method for generating prescriptive design
knowledge by building and evaluating (intervening) an artifact in its organizational setting. ADR
consists of four interleaved stages: (1) problem formulation, (2) building, intervention, and
evaluation, (3) reflection and learning, and (4) formalization of learning (Sein et al., 2011b). Each
stage has one or more guiding principles, as shown in table 1.
Table 1: ADR methodology
Stage 1: Problem formulation
Principle 1: Practice-inspired research
Principle 2: Theory-ingrained artifact
Stage 2: Building, intervention, and evaluation
Principle 3: Reciprocal shaping
Principle 4: Mutually influential roles
Principle 5: Authentic and concurrent evaluation
Stage 3: Reflection and learning
Principle 6: Guided emergence
Stage 4: Formalization of learning
Principle 7: Generalized outcomes
Stage 1: Problem formulation
Following the first principle of practice-inspired research, we aim not only to solve field problems
per se but to generate knowledge that can be applied to the class of problems that the specific
problem exemplifies (Sein et al., 2011b). In that regard, we engaged in a one-day workshop focusing on agile
project portfolio management. It was held in November 2018 and involved participants from five
international companies. The purpose of the workshop was to identify the main challenges facing
the participating organizations. We used these challenges to guide our problem formulation, and
we thus let practice inspire our research, following the first principle of stage 1. Specifically, one of
our key findings of this workshop was that the organizations had little common ground for
evaluating their PPM. This practical finding inspired a research process leading to the design of a
PPM evaluation framework.
Following the second principle of developing a theory-ingrained artifact (Sein et al., 2011b), we
used theory to inform the design of the evaluation artifact. Here, we understand theory to be
systems of statements allowing generalization and abstraction (Gregor, 2006). By adapting the
project evaluation framework proposed by Rode and Svejvig (2018a), we developed the first
version of our PPM evaluation artifact. To leverage the building process (Sein et al., 2011a) we
utilized general evaluation theory (Dahler-Larsen, 2013, Chen, 2015, Mertens and Wilson, 2012)
and an unstructured review of the literature on PPM. The latter includes publications from
recognized international peer-reviewed journals on project management-related issues as well as
practitioner-oriented literature developed by the two large project management standards: OGC
(OGC, 2006) and PMI (PMI, 2008). The key point of each publication was translated from an
abstract level into a concrete evaluation criterion and then operationalized into questions –
understandable to managers in the domain of practice.
The outcome of this first stage was the initial design of the PPM evaluation artifact – presented in section four.
Stage 2: Building, intervention, and evaluation
Following the second stage of building, intervention, and evaluation, in December 2018 we tested
the initial artifact in one of the participating organizations – hereafter called Alfa. One of the co-
authors works as a PPM expert and manager in Alfa. We tested the artifact by using it as an
interview guide in two highly structured interviews regarding PPM in Alfa. The interviews were
recorded and lasted one hour each. On the one hand, the interviews had a deductive nature, taking as
their point of departure the theory-ingrained artifact. On the other hand, the interviews took an
abductive turn, as the interviewer and interviewee adapted the questions along the way to fit the
practice domain. In this way, the two interviews fostered mutually influential iterations between
the two domains: the theory-ingrained artifact and the organizational context (Sein et al., 2011a).
The process facilitated mutual learning and reflection in and among the participants (interviewer
and interviewee) working through the questions and implementing improvements during and after
each interview. In this way, decisions about designing, shaping, and re-shaping the artifact were
woven into the evaluation process. In that regard, our research process follows the principles of
stage 2: (3) reciprocal shaping, (4) mutually influential roles, and (5) authentic and concurrent
evaluation (Sein et al., 2011b).
The outcome of this second stage was the instantiation of the realized design of the PPM evaluation
artifact – shown in section five.
Stage 3: Reflection and learning
In the third stage of reflection and learning, we re-considered the findings from the stage of
developing and deploying the theory-ingrained artifact (Sein et al., 2011a) in Alfa. Like Giessmann
and Legner (2016), we consulted all our material – including the minutes from the first meetings,
slides from the one-day workshop, as well as recordings and notes from the two interviews –
summarized in the appendix. Following the sixth principle of guided emergence, we discussed our
findings in the research team and summarized them in four reflection and learning points that can
refine the PPM artifact and eventually guide the emergence of a set of preliminary design principles
(Sein et al., 2011a) for PPM evaluation.
The current outcome of this third stage is a set of learning and reflection points for PPM evaluation
– shown in section six.
Stage 4: Formalization of learning
In the fourth and final stage of formalization of learning, we will formalize our learning by
generalizing the outcomes of the prior stages. Before we do that, we plan to re-intervene in practice
and evaluate the use of the artifact in other organizational contexts. This will help us understand
the utility of the PPM evaluation framework in a broader sense and to generalize the specific artifact
as a solution to a broader class of problems (Sein et al., 2011b) – in this case, PPM evaluation.
Confronting the framework with other real-life settings will provide instant feedback on its design
features (Mathiassen, 2002) and allow us to further evaluate the robustness of the artifact. Thus,
we plan to follow the seventh principle and develop a set of refined design principles for how PPM
evaluation can be organized to improve the quality of PPM.
The outcome of this fourth stage will be a refined set of general design principles that can guide
PPM evaluation and improve PPM performance.
4 Developing a PPM evaluation framework
In this section, we present the developed PPM evaluation framework.
We adapt the project evaluation framework proposed by Rode and Svejvig (2018a), as shown in
figure 1 below, to structure our review of the PPM literature.
Figure 1: Project evaluation framework
After the PPM literature has been structured in the project evaluation framework, as shown in the
second column of table 2, the key point of each publication is translated into a concrete evaluation
criterion and then operationalized into a question shown in the third column to the right.
Table 2: Four approaches, 20 publications and 20 questions
Table 2 is based on recognized international peer-reviewed journals on project management-related
issues as well as practitioner-oriented literature developed by the two large project management
standards: OGC (OGC, 2006) and PMI (PMI, 2008).
The PPM evaluation framework consists of 20 double questions. The first part of each question
asks to what degree the organization follows the evaluation criteria – to assess the real state of
affairs. The second part asks how important the proposed concept is to the organization – to assess
the ideal state of affairs. The 20 questions cover the following topics:
Q1. Use of (traditional) PPM processes and techniques
Q2. How integrated are PPM processes
Q3. How formalized are PPM processes and practices
Q4. How tailored is the PPM design
Q5. PMO roles of coordinating, controlling, and supporting
Q6. Management quality, allocation quality, and cooperation quality in PPM
Q7. Use of maturity assessment
Q8. Assessment of PPM governance
Q9. External or internal comparison of the project portfolio balance
Q10. Data on project portfolio costs in internal and external benchmarking
Q11. Effectiveness of strategic attributes
Q12. Effectiveness of operational attributes
Q13. Measure value creation at portfolio level
Q14. Sustainability included in PPM
Q15. To what extent is the logic of effectuation used in PPM
Q16. Use of learning loops
Q17. Responsiveness and flexibility in PPM
Q18. Frequency of face-to-face interaction in PPM
Q19. Formative role of projects
Q20. Use of scaling learning from project level to portfolio level
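The double-question scoring can be sketched as a small data structure in which each question holds a real and an ideal score, and the difference between them indicates improvement potential. This is an illustrative sketch only: the 0–5 scale and all score values below are our assumptions, not values prescribed by the framework or taken from Alfa.

```python
# Hypothetical double-question scores on an assumed 0-5 scale; the paper
# does not prescribe a scale, and these values are not Alfa's actual data.
scores = {
    "Q1": {"real": 5, "ideal": 3},   # real above ideal: "over-performing"
    "Q7": {"real": 2, "ideal": 2},   # real and ideal states aligned
    "Q16": {"real": 1, "ideal": 5},  # large gap: improvement potential
}

def gap(entry):
    """Positive gap means the ideal state exceeds the real state."""
    return entry["ideal"] - entry["real"]

for question, entry in sorted(scores.items()):
    print(question, gap(entry))
```

Reading the gap per question this way mirrors the framework's intent: a positive gap flags improvement potential, a negative gap flags over-performance relative to what the organization values.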
5 Using the PPM evaluation framework
In this section, we present an instantiation of the PPM evaluation framework illustrating how it is
used to evaluate PPM in one organizational context – namely Alfa – which we describe first.
The case company, here anonymized as Alfa, is a large European manufacturer of fast-moving
consumer goods. Alfa has a global project portfolio with a traditional project management setup
utilizing a Stage-Gate model to structure its project process. However, the setup is currently
undergoing structural modifications at both portfolio and project management levels to enable
faster response to change. At project management level, modifications include the introduction of
an agile project approach in addition to the traditional plan-based approach – both to be conducted
within the frame of a global Stage-Gate project model that remains in place. At portfolio level, the
yearly portfolio process is replaced with a more rapid monthly process, and governance of projects
moves from one global process for all projects toward a centralized mandate for large, strategic
projects and local mandates and prioritization for small and medium-sized projects. Global
transparency through local tracking of key metrics on all projects remains. The instantiation below
is based on the current state of PPM in Alfa.
The scores from the evaluation are summarized in figure 2 below. The solid line
shows the score of “to what extent” the organization is in line with the 20 publications’ concepts
represented by the 20 questions – the real state of affairs. The dotted line shows the “relative
importance” of the proposed concept to the organization – the ideal state of affairs. Overall, the
results shown in the figure indicate that most of the suggested questions are very important to the
organization. Furthermore, the results indicate that there is room for improvement, as there are
many areas where the organization is not in line with the proposed concept – despite the fact that
the concept is important to the organization. In the following, we discuss the empirical findings
from each of the four areas of evaluation.
Figure 2: Instantiation of PPM evaluation framework
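The area-level reading of the evaluation can be sketched by aggregating question-level gaps (ideal minus real) into the four evaluation areas. The question-to-area mapping follows the section text (Q1–Q6 process, Q7–Q10 benchmarking, Q11–Q15 outcome, Q16–Q20 learning); the gap values are hypothetical, not Alfa's scores.

```python
# Map the 20 questions to the four evaluation areas, per the section text.
areas = {
    "process": [f"Q{i}" for i in range(1, 7)],
    "benchmarking": [f"Q{i}" for i in range(7, 11)],
    "outcome": [f"Q{i}" for i in range(11, 16)],
    "learning": [f"Q{i}" for i in range(16, 21)],
}

# Hypothetical question-level gaps (ideal minus real), Q1 through Q20.
gaps = {f"Q{i}": g for i, g in enumerate(
    [-2, 1, 0, 1, 2, 1, 0, 0, 1, 0, 1, 0, 3, 1, 2, 4, 3, 0, 2, 3], start=1)}

def area_gap(area):
    """Mean gap across an area's questions; larger means more potential."""
    questions = areas[area]
    return sum(gaps[q] for q in questions) / len(questions)

# The area with the largest mean gap has the most improvement potential.
worst = max(areas, key=area_gap)
print(worst)  # learning, with these hypothetical gaps
```

With these assumed values the learning area shows the largest mean gap, which is consistent with the kind of pattern the spider's web in figure 2 is meant to surface.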
The PPM process is covered in questions Q1-Q6. Interestingly, use of traditional PPM processes
and techniques was the only place where the organization followed the prescriptions of the concepts
to a very high degree, though these concepts were scored as less important to the organization. A
bit simplified, one could say that the organization over-performed in this area. One key reason is
the organization’s current focus on changing its plan-based PPM processes and techniques. In the
near future, new and agile-inspired processes will be implemented. The PMO is a key actor in
the structural modifications at portfolio and project management levels, which draw heavily on its
resources. Thus, the PMO mostly focuses on controlling the portfolio and currently down-prioritizes
its other roles, e.g. the supporting and coordinating roles in the organization.
Being in this state of introducing an agile alternative to traditional approaches explains the paradox
that in some regards the organization is described as very mature, and in other regards it is described
as very immature. This finding somewhat contrasts with the findings of earlier research, where maturity
and PPM processes tend to cluster and go together (Jeffery and Leliveld, 2004, Reyck et al., 2005).
Credit must be given to Alfa for allowing self-critical and authentic evaluation of its PPM. They
admit that many “on-the-surface-mature” PPM processes do not reflect the organizational reality.
For example, the organization has advanced dashboards with various traffic lights showing the KPI
status of projects and a detailed human resource management plan – which do not represent the
actual state of affairs.
The organization’s way of thinking about its PPM processes led to some interesting discussions
about how some of the concepts in the evaluation framework should be interpreted. One illustrative
example was our question about the degree of formalization, where the manager explained that
Alfa has a fine-grained notion of what should be formalized. Alfa wants specific things to be more
formalized, e.g. the vision of the projects, the availability of resources, the allocated budget. On
the other hand, Alfa intends to become less formalized with regard to how and when the projects
in the portfolio provide value.
Benchmarking is covered in questions Q7-Q10, and there is high alignment between the real and ideal
states in these results. Furthermore, in the light of the above discussion, it is not surprising that Alfa
finds traditional maturity assessment of PPM processes such as P3M3 (OGC, 2006) and OPM3
(PMI, 2008) less important. Instead, Alfa looks for models which originate from the so-called agile
mindset, e.g. SAFe (Leffingwell, 2007) and Scrum (Schwaber and Sutherland, 2011), but also
models by Gartner have been considered. Furthermore, Alfa has an urgent need for benchmarking
its PPM governance structures, but has not found any suitable frameworks. Therefore, Alfa
organized the aforementioned workshop on agile PPM governance with four other organizations.
Whereas Alfa needs tools and practices for benchmarking PPM processes and governance, it has
much experience and strong capabilities in external and internal comparison of project portfolio
costs (Verhoef, 2002, Verhoef, 2005), the balance of its assets (Weill and Aral, 2006) and the
balance of its strategic buckets (Chao and Kavadias, 2008). The manager explained that Alfa has
many practices for supporting portfolio balancing, as this has direct and instant impact on the
project selection process. Alfa’s position in a highly competitive global market demands constant
focus on how its development resources – materialized in projects – are distributed and balanced
across its strategic goals.
The manager brought up an important type of benchmarking not mentioned in our framework,
namely benchmarking of value provided by projects. Alfa was described as being advanced in its
practices for defining project costs and benefits. This is done in the “project initiation” processes
by utilizing a standardized format for project business cases. However, the organization lacks
tracking of benefits after project completion; the following section discusses this issue in further detail.
Outcome is covered in questions Q11-Q15. Effectiveness of strategic attributes involves strategic
alignment, adaptability, and delivering the expected value (Patanakul, 2015). The manager gave
the highest score to effectiveness in the planning process and explained the rigorous processes in
the planning process, with high transparency in decision-making around business case approval
and project tracking on key metrics. The project organization was described as having strong
capabilities in delivering transparency in terms of schedule, cost, and quality. However, the
manager emphasized a shortcoming when it comes to measuring realized value: “Our intentions are clear
… but what do we get in the end?”. The manager explained that this high transparency in schedule,
cost, and quality is becoming relatively less important compared to the question of how Alfa gets
the most value out of its projects. Our discussions on value seemed highly relevant to Alfa, and
value could not be reduced to commercial issues. Instead, the concept of sustainability encapsulated
deeply embedded and highly prioritized values in PPM, including environmental, social, health,
and safety values. For example, Alfa invests in becoming sustainable and CO2 neutral.
Furthermore, the company has made long-term investments in sustainable development.
Whereas the concept of effectiveness seems to capture the current focus areas in PPM, the concept
of effectuation seemed to point in the direction the organization is aiming for over a longer
time horizon. The organization was said to have large development potential in conforming to the
principles defined in effectuation. In particular, effectuation suggests more emphasis on using
available resources rather than following pre-defined goals to shape projects (Nguyen et al., 2018).
Indeed, the manager pointed to some current tendencies supporting this direction; the
organization is decentralizing its resource allocation to more autonomous units and stable teams.
These teams “pull” work assignments from a work backlog, and the manager emphasized that the
logic of effectuation is a key driver in this current development of PPM. Toward the end of our
discussions on outcome evaluation, the manager forcefully asked if/how adaptability can be
measured. As Alfa is a player in a global innovation economy, this environment provides many
“unknown-unknowns” (Teece et al., 2016). This challenges the organization, or at least requires it to
expand its traditional PPM practices and principles, and the next section will look into this discussion.
Learning is covered in questions Q16-Q20. As illustrated by the spider’s web in figure 2, there is
a significant gap between the high priority Alfa gives to the concepts in this area and the current
practices in the organization. During the interviews, the manager readily provided concrete
examples of how the suggested concepts already are, or were intended to be, utilized. Notably, the
manager saw the potential of implementing more organizational routines as learning loops, as
suggested by agile PPM (Stettina and Hörz, 2015). Similarly, the manager saw a potential in using
scaling learning from project retrospectives (Agile team learning/reflection meetings) on the
portfolio level, as suggested by Dingsøyr et al. (2018b). Nevertheless, Alfa already uses such a
high frequency of face-to-face interaction in PPM, as suggested by the PPM literature (Stettina and
Schoemaker, 2018). It has been argued that the “Scrum of Scrums” presents an opportunity to share
information across portfolios. However, it is not clear how such practices will scale beyond eight
to ten teams (Rautiainen et al., 2011, Sweetman and Conboy, 2018). Thus, the organization needs
more knowledge on how to utilize these face-to-face meetings as learning loops with a portfolio-
level impact, and how to rethink this coordination (Dingsøyr et al., 2018a).
To finalize our discussion on learning, we find limits to the role of strictly formal strategy for
guiding PPM decisions and actions, as this is insufficient in Alfa’s turbulent environments
(Kopmann et al., 2017). Instead, Alfa assigns higher priority to responsiveness and flexibility than
to visibility and predictability in formalized strategy. One way to foster emergent strategies is to
let projects play a formative role in the enactment of the portfolio (Sweetman and Conboy, 2018).
The manager finds this formative role very important and provided additional insightful comments.
Projects in Alfa play (and are intended to play) a formative role at the operational and tactical
levels, but not at the strategic level. For example, projects cannot make products that do not
follow the very strict rules of the brand.
6 Refining the PPM evaluation framework
In this section, we present a set of learning and reflection points on PPM evaluation. Based on the
design and application of the PPM evaluation framework in Alfa, we elicit learning and reflection
points which can further refine the framework and eventually be developed into a set of general
design principles guiding PPM evaluation. These points are summarized in the following four
sub-sections: questions, method, context, and use.
Questions: The 20 questions of the evaluation framework are extracted from 20 different
publications whose concepts are difficult to condense into single questions. Thus, our interviews
were a constant negotiation and trial-and-error process of agreeing on what could and should be
included in the questions. For instance, many of the concepts are multidimensional: management
quality consists of information, allocation, and cooperation quality (Jonas et al., 2013), and the
three PMO roles include controlling, coordinating, and supporting (Unger et al., 2012). We found
no useful way to merge these dimensions and therefore developed three evenly weighted
sub-questions to represent
the multiple dimensions. The abstract level of the concepts demands constant translation to the
concrete empirical setting and language of the organization. In this case, this process was aided by
the fact that the manager we interviewed has a research background and more than 10 years of
experience in PPM research and practice. This provided an opportunity to engage in mutually
influential roles and reciprocal shaping, as advised by Sein et al. (2011b). However, this is unlikely
to be the case in the majority of instances. As the target audience for our PPM evaluation
framework includes both researchers and practitioners wanting to evaluate PPM, it is important to
consider how the questions can be further refined to fit the language of these two domains.
Method: Applying the PPM evaluation framework led not only to a refinement of the content of
the questions, but also to a reconsideration of how the questions were asked, how the responses
were measured, and, more broadly, how the evaluation was conducted. During the interviews, we
discussed how the questions should be framed – including how the “ideal” and “real” states of
affairs should be assessed – and compared to what. We agreed that the manager should express his
or her subjective perception of PPM. However, this provided the challenge that different actors in
PPM may have different perceptions (Blichfeldt and Eskerod, 2008) and interests (Platje et al.,
1994). Our PPM evaluation in Alfa reflects the viewpoint of the manager who is a PPM expert.
One could argue that as PPM consists of ongoing activities and processes between the senior
management, PPM management, and project management (Stettina and Hörz, 2015), multiple
viewpoints should be taken into account.
We also discussed the scale on which the questions were scored. Most of the answers were placed
in the extreme categories “1” or “5” on our five-point Likert scale. The manager suggested that a
seven-point Likert scale would provide more fine-grained response options. Moreover, we could
potentially increase the reliability of the answers by adding explanatory text with concrete
examples of extreme scores – like Shenhar and Dvir's (2007) operationalization of the diamond
approach.
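The gap scoring underlying the paired a/b questions can be sketched in a few lines. All question labels and scores below are hypothetical illustrations, not Alfa's actual data, and `gap_scores` is a helper name introduced here, not part of the framework:

```python
# Minimal sketch of the framework's "ideal vs. real" gap scoring.
# Question labels and scores are hypothetical, not Alfa's data.

def gap_scores(responses, scale_max=5):
    """Gap = importance ('ideal', the b-question) minus current practice
    ('real', the a-question). A large positive gap flags an improvement
    area; a negative gap may signal over-investment."""
    gaps = {}
    for question, (practice, importance) in responses.items():
        # Both answers must stay on the chosen Likert scale (5- or 7-point).
        assert 1 <= practice <= scale_max and 1 <= importance <= scale_max
        gaps[question] = importance - practice
    return gaps

# Hypothetical answers: (Qa current practice, Qb importance), 1-5 Likert scale.
responses = {
    "Q16 learning loops": (2, 5),
    "Q17 responsiveness": (4, 5),
    "Q18 face-to-face": (5, 4),
}
print(gap_scores(responses))
# {'Q16 learning loops': 3, 'Q17 responsiveness': 1, 'Q18 face-to-face': -1}
```

Switching to the seven-point scale suggested by the manager only requires `scale_max=7`; the anchoring texts with concrete examples of extreme scores would accompany the questions themselves rather than the scoring.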
Context: As we tested the evaluation framework, we continuously discussed the specific contextual
settings of Alfa and how this impacted the results. One learning point was that the questions would
have different meanings depending on the organization’s current development focus in PPM, e.g.
the organization’s prioritization of adaptability versus reliability (Bernstein et al., 2016). Both the
vocabulary used by Alfa and the results of the PPM evaluation clearly showed that the focus of
Alfa is on improving adaptability. To do so, the organization aims to replace plan-based thinking
with a so-called agile mindset. The manager repeatedly noted that most of the questions in the
framework carry explicit or implicit plan-based (or waterfall) assumptions. This seems likely, as
the PPM discipline, despite ongoing change, is still dominated by top-down and linear thinking
(Sweetman and Conboy, 2018, Hansen and Svejvig, 2018, Hansen and Kræmmergaard, 2014).
Further refinement of the PPM evaluation framework should pay attention to the
implications of these assumptions.
Use: Overall, the four evaluation approaches captured important aspects of PPM in Alfa and
fostered insightful discussions and reflections. In particular, the graphical illustration of the results
in a spider’s web provided a comparable and easy-to-understand overview of current PPM practices
and improvement areas in the organization. The web clearly shows Alfa’s strengths and
weaknesses. Using the framework again in the same context at a later point in time can elicit insight
into the development and potential improvement of PPM in Alfa. Despite its great potential, the
PPM evaluation framework has limitations and premises. In our instantiation, the results of the
PPM evaluation rely on perception: they are based on how the manager perceives ideal and real
PPM practices. Thus, triangulating our findings with observations of PPM practice may yield
other results.
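The spider’s-web overview itself amounts to averaging the paired answers per evaluation area and drawing one “real” and one “ideal” radius per axis. In the sketch below, only the learning area (Q16-Q20) follows the paper; the other area-to-question groupings, and all scores, are illustrative assumptions:

```python
# Sketch: aggregating the 20 paired answers into the four axes of the
# spider's-web chart. Only the learning area (Q16-Q20) is taken from the
# paper; the other groupings and all scores are hypothetical.
from statistics import mean

AREAS = {
    "process": range(1, 6),     # assumed grouping
    "maturity": range(6, 11),   # assumed grouping
    "outcome": range(11, 16),   # assumed grouping
    "learning": range(16, 21),  # Q16-Q20, as in the paper
}

def area_profile(real, ideal):
    """One (mean real, mean ideal) pair per area, i.e. per chart axis."""
    return {area: (mean(real[q] for q in qs), mean(ideal[q] for q in qs))
            for area, qs in AREAS.items()}

# Hypothetical 1-5 scores for Q1..Q20, with a pronounced learning-area gap.
real = {q: 3 for q in range(1, 21)}
real.update({q: 2 for q in range(16, 21)})
ideal = {q: 4 for q in range(1, 21)}
ideal.update({q: 5 for q in range(16, 21)})

profile = area_profile(real, ideal)
print(profile["learning"])  # (2, 5): the learning axis shows the widest gap
```

Feeding the two resulting tuples of radii into any radar-chart routine (e.g. a polar plot) reproduces the overview; repeating the evaluation later and overlaying the profiles would visualize the development of PPM over time.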
In general, the PPM evaluation framework can be used in a variety of ways. For instance, the
evaluation can be conducted by an independent assessor observing PPM and/or interviewing
managers or by involving superiors and subordinates in the assessment process. The PPM
evaluation framework can also be used as a self-reflection tool for managers who wish to continue
to be reflective in and on their (PPM) practice (Rode et al., 2018). Most importantly, we
recommend using the framework to stimulate further reflection – in an inner or shared dialog about
current and future PPM. As such, the framework can facilitate a meta (re)consideration of the ideal
and real states of PPM affairs. It can be used to ask and answer questions regarding single- and
double-loop learning (Argyris, 1977) and to understand if project portfolio managers are doing
things right and doing the right things. In this case, informal discussions after the instantiation in
Alfa indicated that the evaluation results provided concrete inspiration for improving PPM. Such
improvements can be leveraged by giving legitimacy, grounded in academic knowledge, to
developing recurring organizational routines as learning loops at the portfolio level.
Further research and instantiations of the revised and improved PPM evaluation framework can
further refine the above learning and reflection points and eventually develop them into a set of
design principles for PPM evaluation.
7 Conclusion
By answering the research question: How can we develop a holistic and empirically validated PPM
evaluation framework? our paper addresses a gap in the literature on PPM evaluation, as we find
no empirically validated evaluation models integrating existing knowledge on PPM evaluation.
Inspired by a multidimensional evaluation framework, we structure contributions from 20 PPM
publications into four areas and develop a framework to facilitate a meta evaluation of PPM. The
PPM evaluation framework contributes to practice by enabling practitioners to evaluate their
current PPM efforts and identify improvement potentials. We contribute to research, as we respond
to calls for more research on how PPM is conducted in real-life settings – by providing a first step
toward developing and applying a theory-ingrained and -integrated PPM evaluation framework.
Further research can show if the PPM evaluation framework can improve PPM in practice and
apply the artifact in other organizational contexts to test its applicability and generalizability
beyond the specific instantiation provided in this paper.
Table 3: Data display from instantiation of PPM evaluation framework
Questions, scores, and comments
Q1a. To what degree does your organization use (traditional) PPM processes and techniques
(Reyck et al., 2005, Jeffery and Leliveld, 2004)? For example, NPV, ROI, Bubble charts, traffic
lights, Stage-Gate models, etc.
Q1b. How important are traditional PPM processes and techniques to the success of the portfolio?
Q2a. How integrated are your PPM processes? We understand “integrated” as the seamless
coordination and control across functions, units, and hierarchical levels (Hansen et al., 2017,
Hansen and Kræmmergaard, 2013). This covers the process from projects entering the pipeline to
follow-up on their completion (Ghasemzadeh et al., 1999, Archer and Ghasemzadeh, 2007).
Q2b. How important is the aforementioned “integration” of your PPM processes to the success of
the portfolio?
Q3a. How formalized are your PPM processes and practices (Teller et al., 2012)?
Q3b. How important is formalization of your PPM processes to the success of the portfolio?
Q4a. To which extent is the PPM design tailored to the organizational design (Aubry and Lavoie-Tremblay, 2018)?
Q4b. How important is tailoring of the PPM design to the organizational design to the success of
the portfolio?
Q5a. In what degree does the PMO exercise the roles of coordinating, controlling, and supporting
(Unger et al., 2012)?
Score: coordinating 2, controlling 3, and supporting 1
Q5b. How important is the PMO effort of coordinating, controlling, and supporting to the success
of the portfolio?
Score: coordinating 4, controlling 1 and supporting 5
Q6a. What are the levels of information quality, allocation quality, and cooperation quality in PPM
(Jonas et al., 2013)?
Score: information quality 3, allocation quality 2, and cooperation quality 4
Q6b. How important are information quality, allocation quality, and cooperation quality to the
success of the portfolio?
Score: information quality 5 (e.g. transparency of activities via use of backlogs), allocation quality 4
(though many teams are organized as dedicated resources, they still need coordination), and cooperation
Q7a. To which extent does the organization use maturity assessment of PPM processes (OGC,
2006, PMI, 2008)? E.g. by assessing the maturity of the portfolio by the use of assessment tools
such as the P3M3 tool developed by OGC (2006).
Q7b. How important is the use of maturity assessment of PPM processes to the success of the
portfolio?
Q8a. To which extent does the organization use assessment of PPM governance (Lappi et al., 2018)?
E.g. by benchmarking the organization’s governance models to other companies’ governance.
Q8b. How important is the use of governance assessment of PPM processes to the success of the
portfolio?
Q9a. To which extent does the organization use external or internal comparison of the project
portfolio balance? E.g. by benchmarking the portfolio’s distribution between: informational,
strategic, transactional, infrastructure (Weill and Aral, 2006), or balance of strategic buckets
(Chao and Kavadias, 2008).
Q9b. How important is the use of external or internal comparison to the success of the portfolio?
Q10a. To which extent does the organization use data on project portfolio costs in internal and
external benchmarking (Verhoef, 2002, 2005)?
Q10b. How important is the use of cost benchmarking to the success of the portfolio?
Q11a. How high is the effectiveness of strategic attributes? Effectiveness of strategic attributes
involves strategic alignment, adaptability, and delivering the expected value (Patanakul, 2015).
Score: in the planning process 5, in the retrospective process 1
Q11b. How important is the effectiveness of strategic attributes to the success of the portfolio?
Score: in the planning process 5, in the retrospective process 5
Q12a. How high is the PPM effectiveness of operational attributes? This involves project visibility,
transparency in decision-making, and predictability of project delivery (Patanakul, 2015).
Score: 5 (but lower in the process dealing with the retrospective)
Q12b. How important is the PPM effectiveness of operational attributes to the success of the
portfolio?
Score: in the planning process X, in the retrospective process X (no score can be given to this question;
see below comments to the question)
Q13a. In what degree does your organization measure value creation at portfolio level (Laursen
and Svejvig, 2016)?
Score: expected value 4, realized value 1
Q13b. How important is measuring the value creation at portfolio level (Laursen and Svejvig,
2016) to the success of the portfolio?
Score: expected value 5, realized value 5
Q14a. To which extent is sustainability included in PPM? This includes non-commercial issues
such as ecological, environmental, social, health, and safety values (Martinsuo and Killen, 2014).
Q14b. How important is sustainability in PPM to the success of the portfolio?
Q15a. To what extent is the logic of effectuation used in PPM? Effectuation is understood as
decision-making using available resources rather than pre-defined goals to shape projects, with an
emphasis on partnerships and networks over competitive analyses (Nguyen et al., 2018).
Q15b. How important is effectuation to the success of the portfolio?
Q16a. To what extent does your organization use reoccurring organizational routines as learning
loops (Stettina and Hörz, 2015)?
Q16b. How important is the use of reoccurring organizational routines (as learning loops) to the
success of the portfolio?
Q17a. What is the degree of responsiveness and flexibility in PPM (Kock and Georg Gemünden,
2016, Kopmann et al., 2017)?
Q17b. How important is responsiveness and flexibility to the success of the portfolio?
Q18a. In what degree does your organization use a high frequency of face-to-face interaction in
PPM (Stettina and Schoemaker, 2018)?
Q18b. How important is a high frequency of face-to-face interaction to the success of the portfolio?
Q19a. In what degree do projects play a formative role in the enactment of the portfolio (Sweetman
and Conboy, 2018)?
Q19b. How important is the formative role of projects to the success of the portfolio?
Q20a. In what degree does your organization use scaling learning from project retrospectives to the
portfolio level (Dingsøyr et al., 2018b)?
Q20b. How important is the use of scaling learning to the success of the portfolio?
ANDERSEN, E. S. & JESSEN, S. A. 2003. Project maturity in organisations. International Journal of Project
Management, 21, 457-461.
ARCHER, N. & GHASEMZADEH, F. 2007. Project portfolio selection and management. Morris, P./Pinto, JK
(2007), The Wiley Guide to Project, Program & Portfolio Management, 94-112.
ARCHER, N. P. & GHASEMZADEH, F. 1999. An integrated framework for project portfolio selection.
International Journal of Project Management, 17, 207-216.
ARGYRIS, C. 1977. Double loop learning in organizations. Harvard Business Review, 55, 115-125.
AUBRY, M. & LAVOIE-TREMBLAY, M. 2018. Rethinking organizational design for managing multiple
projects. International Journal of Project Management, 36, 12-26.
BAKER, N. R. & POUND, W. 1964. R&D Project Selection: Where We Stand. IEEE Transactions on
Engineering Management, EM-11, 124-134.
BERNSTEIN, E., BUNCH, J., CANNER, N. & LEE, M. 2016. Beyond the Holacracy Hype. Harvard
Business Review, 94, 38-49.
BLICHFELDT, B. S. & ESKEROD, P. 2008. Project portfolio management–There’s more to it than what
management enacts. International Journal of Project Management, 26, 357-365.
CHAO, R. O. & KAVADIAS, S. 2008. A theoretical framework for managing the new product development
portfolio: When and how to use strategic buckets. Management Science, 54, 907-921.
CHEN, H. T. 2015. Practical Program Evaluation: Theory-Driven Evaluation and the Integrated Evaluation
Perspective, Thousand Oaks, SAGE Publications Inc.
CLEGG, S., KILLEN, C. P., BIESENTHAL, C. & SANKARAN, S. 2018. Practices, projects and portfolios: Current
research trends and new directions. International Journal of Project Management, 36, 762-772.
COOPER, R. G., EDGETT, S. J. & KLEINSCHMIDT, E. J. 2000. New problems, new solutions: making portfolio
management more effective. Research-Technology Management, 43, 18-33.
DAHLER-LARSEN, P. 2013. Evaluering af projekter - og andre ting, som ikke er ting, Odense, Syddansk
Universitetsforlag.
DINGSØYR, T., BJØRNSON, F. O., MOE, N. B., ROLLAND, K. & SEIM, E. A. 2018a. Rethinking coordination in
large-scale software development. Proceedings of the 11th International Workshop on Cooperative
and Human Aspects of Software Engineering. ACM, 91-92.
DINGSØYR, T., MIKALSEN, M., SOLEM, A. & VESTUES, K. 2018b. Learning in the Large – An Exploratory
Study of Retrospectives in Large-Scale Agile Development. International Conference on Agile
Software Development. Springer, 191-198.
DRAZIN, R. & VAN DE VEN, A. H. 1985. Alternative forms of fit in contingency theory. Administrative
Science Quarterly, 30, 514-539.
ELONEN, S. & ARTTO, K. A. 2003. Problems in managing internal development projects in multi-project
environments. International Journal of Project Management, 21, 395-402.
ENGWALL, M. & JERBRANT, A. 2003. The resource allocation syndrome: the prime challenge of multi-
project management? International Journal of Project Management, 21, 403-409.
GERALDI, J. & SÖDERLUND, J. 2018. Project studies: What it is, where it is going. International Journal of
Project Management, 36, 55-70.
GHASEMZADEH, F., ARCHER, N. & IYOGUN, P. 1999. A Zero-One Model for Project Portfolio Selection and
Scheduling. The Journal of the Operational Research Society, 50, 745-755.
GIESSMANN, A. & LEGNER, C. 2016. Designing business models for cloud platforms. Information Systems
Journal, 26, 551-579.
GOLDKUHL, G. 2012. Pragmatism vs interpretivism in qualitative information systems research. European
Journal of Information Systems, 21, 135-146.
GREGOR, S. 2006. The nature of theory in information systems. MIS quarterly, 611-642.
HANSEN, L. K. & KRÆMMERGAARD, P. 2013. Transforming local government by project portfolio
management: Identifying and overcoming control problems. Transforming Government: People,
Process and Policy, 7, 50-75.
HANSEN, L. K. & KRÆMMERGAARD, P. 2014. Discourses and theoretical assumptions in IT project portfolio
management: A review of the literature. International Journal of IT Project Management, 5, 47.
HANSEN, L. K., KRÆMMERGAARD, P. & MATHIASSEN, L. 2017. IT project portfolio governance practice: An
investigation into work design problems. Journal of Information Technology Case and Application
Research, 19, 81-101.
HANSEN, L. K. & SVEJVIG, P. 2018. Towards rethinking Project portfolio management. European Academy
of Management (EURAM). Iceland: EURAM.
HITCHCOCK, L. 1963. Selection and Evaluation of R&D Projects. Research Management, 6, 231-244.
JEFFERY, M. & LELIVELD, I. 2004. Best practices in IT portfolio management. MIT Sloan Management
Review, 45, 41.
JENSEN, A., THUESEN, C. & GERALDI, J. 2016. The projectification of everything: projects as a human
condition. Project Management Journal, 47, 21-34.
JONAS, D., KOCK, A. & GEMÜNDEN, H. G. 2013. Predicting project portfolio success by measuring
management quality—a longitudinal study. IEEE Transactions on Engineering Management, 60,
KOCK, A. & GEORG GEMÜNDEN, H. 2016. Antecedents to Decision-Making Quality and Agility in Innovation
Portfolio Management. Journal of Product Innovation Management, 33, 670-686.
KOPMANN, J., KOCK, A., KILLEN, C. P. & GEMÜNDEN, H. G. 2017. The role of project portfolio management
in fostering both deliberate and emergent strategy. International Journal of Project Management,
LAPPI, T., KARVONEN, T., LWAKATARE, L. E., AALTONEN, K. & KUVAJA, P. 2018. Toward an Improved
Understanding of Agile Project Governance: A Systematic Literature Review. Project Management
Journal.
LAURSEN, M. & SVEJVIG, P. 2016. Taking stock of project value creation: A structured literature review
with future directions for research and practice. International Journal of Project Management, 34,
LEFFINGWELL, D. 2007. Scaling software agility: best practices for large enterprises, Pearson Education.
LINZALONE, R. & SCHIUMA, G. 2015. A review of program and project evaluation models. Measuring
Business Excellence, 19, 90-99.
MARTINSUO, M. & KILLEN, C. P. 2014. Value Management in Project Portfolios: Identifying and Assessing
Strategic Value. Project Management Journal, 45, 56-70.
MARTINSUO, M. & LEHTONEN, P. 2007. Role of single-project management in achieving portfolio
management efficiency. International Journal of Project Management, 25, 56-65.
MATHIASSEN, L. 2002. Collaborative practice research. Information Technology & People, 15, 321-345.
MERTENS, D. M. & WILSON, A. T. 2012. Program evaluation theory and practice : a comprehensive guide,
New York, N.Y., Guilford Press.
NGUYEN, N. M., KILLEN, C. P., KOCK, A. & GEMÜNDEN, H. G. 2018. The use of effectuation in projects: The
influence of business case control, portfolio monitoring intensity and project innovativeness.
International Journal of Project Management, 36, 1054-1067.
NIKKHOU, S., TAGHIZADEH, K. & HAJIYAKHCHALI, S. 2016. Designing a Portfolio Management Maturity
Model (Elena). Procedia - Social and Behavioral Sciences, 226, 318-325.
OGC 2006. Portfolio, programme and project management maturity model (P3M3). Office of Government
Commerce London, England.
PADOVANI, M. & CARVALHO, M. M. 2016. Integrated PPM Process: Scale Development and Validation.
International Journal of Project Management, 34, 627-642.
PATANAKUL, P. 2015. Key attributes of effectiveness in managing project portfolio. International Journal
of Project Management, 33, 1084-1097.
PAULK, M. C., CURTIS, B., CHRISSIS, M. B. & WEBER, C. V. 1993. The capability maturity model for software.
Software engineering project management, 10, 1-26.
PENNYPACKER, J. S. 2005. Project portfolio management maturity model. Pennsylvania, USA: Center for
Business Practices.
PLATJE, A., SEIDEL, H. & WADMAN, S. 1994. Project and portfolio planning cycle: project-based
management for the multiproject challenge. International Journal of Project Management, 12,
PMI 2008. Organizational Project Management Maturity Model (OPM3): Knowledge Foundation. Project
Management Institute.
PMI 2013. The Standard for Portfolio Management. Newtown Square, United States: Project Management
Institute.
RANK, J., UNGER, B. N. & GEMÜNDEN, H. G. 2015. Preparedness for the future in project portfolio
management: The roles of proactiveness, riskiness and willingness to cannibalize. International
Journal of Project Management, 33, 1730-1743.
RAUTIAINEN, K., VON SCHANTZ, J. & VAHANIITTY, J. 2011. Supporting scaling agile with portfolio
management: Case Paf.com. Proceedings of the 44th Hawaii International Conference on System
Sciences (HICSS).
REYCK, B. D., GRUSHKA-COCKAYNE, Y., LOCKETT, M., CALDERINI, S. R., MOURA, M. & SLOPER, A. 2005. The
impact of project portfolio management on information technology projects. International Journal
of Project Management, 23, 524-537.
RODE, A. L. G., FREDERIKSEN, S. H. & SVEJVIG, P. 2018. Project Half Double: training practitioners, working
with visuals, practice reflections and small and medium-sized enterprises, December 2018. Aarhus
RODE, A. L. G. & SVEJVIG, P. 2018a. Project evaluation: one framework - four approaches. Dansk Projekt
RODE, A. L. G. & SVEJVIG, P. 2018b. Project evaluation: one framework - four approaches. In: SVEJVIG, P.
& HANSEN, M. R. P. (eds.) The Danish Project Management Research Conference. Copenhagen:
ROSEN, B. H. 1956. How to pick best projects. Chemical Engineering, 63.
RUBENSTEIN, A. H. 1957. Setting Criteria for R&D. Harvard Business Review, 35, 95-104.
SCHIPPER, R. R. & SILVIUS, A. G. 2018. Towards a conceptual framework for sustainable project portfolio
management. International Journal of Project Organisation and Management, 10, 191-221.
SCHOPER, Y.-G., WALD, A., INGASON, H. T. & FRIDGEIRSSON, T. V. 2018. Projectification in Western
economies: A comparative study of Germany, Norway and Iceland. International Journal of Project
Management, 36, 71-82.
SCHWABER, K. & SUTHERLAND, J. 2011. The scrum guide. Scrum Alliance, 21.
SEIN, M. K., HENFRIDSSON, O., PURAO, S., ROSSI, M. & LINDGREN, R. 2011a. Action Design Research. MIS
Quarterly, 35, 37-56.
SEIN, M. K., HENFRIDSSON, O., PURAO, S., ROSSI, M. & LINDGREN, R. 2011b. Action design research. MIS
Quarterly, 35, 37-56.
SHADISH, W. R., COOK, T. D. & LEVITON, L. C. 1991. Foundations of program evaluation : theories of
practice, Newbury Park, Calif., SAGE.
SHENHAR, A. & DVIR, D. 2007. Reinventing project management: the diamond approach to successful
growth and innovation, Boston, Harvard Business Press.
STETTINA, C. J. & HÖRZ, J. 2015. Agile portfolio management: An empirical perspective on the practice in
use. International Journal of Project Management, 33, 140-152.
STETTINA, C. J. & SCHOEMAKER, L. Reporting in Agile Portfolio Management: Routines, Metrics and
Artefacts to Maintain an Effective Oversight. 2018 Cham. Springer International Publishing, 199-
STUFFLEBEAM, D. L. & CORYN, C. L. S. 2014. Evaluation theory, models, and applications, San Francisco,
Jossey-Bass & Pfeiffer Imprints, Wiley.
SWEETMAN, R. & CONBOY, K. 2018. Portfolios of Agile Projects: A Complex Adaptive Systems’ Agent
Perspective. Project Management Journal, 8756972818802712.
TEECE, D., PETERAF, M. & LEIH, S. 2016. Dynamic capabilities and organizational agility: Risk, uncertainty,
and strategy in the innovation economy. California Management Review, 58, 13-35.
TELLER, J., UNGER, B. N., KOCK, A. & GEMÜNDEN, H. G. 2012. Formalization of project portfolio
management: The moderating role of project portfolio complexity. International Journal of Project
Management, 30, 596-607.
UNGER, B. N., GEMÜNDEN, H. G. & AUBRY, M. 2012. The three roles of a project portfolio management
office: Their impact on portfolio management execution and success. International Journal of
Project Management, 30, 608-620.
VERHOEF, C. 2002. Quantitative IT portfolio management. Science of computer programming, 45, 1-96.
VERHOEF, C. 2005. Quantifying the value of IT-investments. Science of Computer Programming, 56, 315-
WEILL, P. & ARAL, S. 2006. Generating premium returns on your IT investments. MIT Sloan Management
Review, 47, 39.