A Method for Assessing Effects of Technological
Development on Military Capabilities
1st Vesa Kuikka
Information Technology Division
Finnish Defence Research Agency
Riihimäki, Finland
vesa.kuikka@mil.fi
2nd Sami Peltotalo
Information Technology Division
Finnish Defence Research Agency
Riihimäki, Finland
sami.peltotalo@mil.fi
Abstract—We present a method to assess the effects of new
technologies on the future development of military capabilities.
Assessing technologies without any connection to capabilities is
insufficient to assist the decision-making of selecting development
projects or planning future procurement programs. Instead, a
method to connect technologies to capabilities is needed. To this
end, we propose a novel evaluation model that helps carry out the evaluation process with minimal effort. The evaluation model describes
the effects of technological development on military capabilities
through the application of different systems. Although our use
cases consider military applications, the same questions are
present in other fields in the private and public sectors. The
focus of this study is to present the method while modelling
results are provided only for illustrative purposes.
Index Terms—evaluation method, evaluation model, technology
forecasting, military capability, decision-making
I. INTRODUCTION
Assessing future technological developments [1], [2] and the consequent impacts on socio-technical systems has become an increasingly important topic in defence forces. A plethora of methods has been used for forecasting future technological development and system capabilities, e.g. [3]–[7]. Many of these methods are useful in detailed analysis and
rigorous documenting of background factors and their impacts
on system capabilities. In practical situations, resources may
be limited regarding time or personnel available for the eval-
uation process. Here, we aim to present a minimal and simple
evaluation method that has all the necessary elements needed
in high-level conceptual assessments or limited technological
forecasting tasks. We require that the method can be confined
to a subset of technologies, systems and capabilities. However,
we require that the model is extendable to cover any number
of contributing factors, systems and capabilities. Also, other
detail level models, e.g. based on system engineering, can
be used to collect and produce input data to our overarching
model. Conversely, the simple model can be used as a check on the corresponding results from more detailed models.
We need only a limited number of concepts in our evaluation
methodology: a list of technologies, background factors and
capabilities together with present and future scenarios. Sce-
narios describe circumstantial settings where evaluations are
made. In non-military applications, scenarios can be included
in more general value-added networks [3]. Our experience is that a plain list of technologies and capabilities is not sufficient, because contributing factors and circumstantial settings remain undocumented. Not defining any background factors or scenarios, explicitly or implicitly, allows too many alternative personal views, leading to a biased or vague estimate of future capabilities.
Only in the rare case of a narrow application area, with evaluations conducted among subject area experts and technology specialists, can a method based on just the two concepts of technologies and capabilities be realised. In this category, we executed an evaluation project and evaluated a list of technologies generating the military capability of Command & Control (C2)
through information and communication systems. Technology
forecasting was performed in the year 2022 and the forecasting
period was 15 years. Our evaluation model of section III was
used in the project. In this case, both the list of technologies
and military capabilities were confined to the fields where
experts, including the authors, were available for evaluations.
Our research hypothesis is that, if we develop a method
based on a limited number of phases in the evaluation pro-
cess and flexible computer-aided modelling for selecting the
contributing factors, we can execute evaluation projects more
effectively.
A. Motivation
Forecasting major technological changes and their effects on
military technology is crucial for making appropriate changes
in weaponry, military preparations and defence budget priori-
ties. Planning and decisions need to be based on the analysis
of military technological developments that help to modify
tactics and operational plans in exploiting new opportunities.
We propose a method for assessing the effects of new
technologies on the future development of military capabil-
ities. Often resources like the time available for analyses or
skilled personnel are limited, or not enough information is
accessible for long term forecasting. We utilise the practical
experience that we have from different technology forecasting
efforts. Even more important than the lack of time may be the difficulty of learning complex methodologies. Therefore, our goal is to devise a basic method that still meets some minimal requirements. We conclude that a simple evaluation
of technology lists is hardly a satisfactory solution, but instead,
some kind of method to connect technologies to capabilities
is needed.
B. Military Capabilities
Traditionally, capability as a general concept consists of material, manpower, logistics and leadership viewpoints. A major issue is combining these aspects to understand the strength
of military forces against different adversaries and changing
environmental situations. The concept of capability in the
military context has been standardised in many countries. For
example, the U.S. definition for capability is: “The ability to
complete a task or execute a course of action under specified
conditions and level of performance. This can be achieved
through a combination of means and ways across doctrine,
organization, training, leadership and education, materiel,
personnel, facilities, and policy." [8] NATO’s definition [9] is
very similar to this. Also, a definition for capability gap helps
understand the big picture: “The inability to meet or exceed a
capability requirement, resulting in an associated operational
risk until closed or mitigated. The gap may be the result of no
fielded capability, lack of proficiency or sufficiency in a fielded
capability solution, or the need to replace a fielded capability
solution to prevent a future gap." [8]
In the U.S. the Department of Defence manages programs
of systems with Joint Capability Areas (JCAs). JCAs are
a standardised grouping of capabilities that enable decision-
makers to allocate resources based on a program’s contribution
to joint operations. JCAs are a set of standardised definitions
that cover the complete range of military activities. JCAs
consist of a hierarchical structure of functionalities under main
capability areas such as Battlespace Awareness, Command
and Control, and Protection [8]. We require the definitions
of capability areas and their sub-capabilities not to overlap
each other and to cover the whole range of capabilities under
consideration.
Based on the conceptual definitions of capability, we need
to define a mathematical quantity to be used in our evaluation
model. In this study, we define the concept of capability
consistently with the conceptual definitions as the probability
of a successful mission or operation.
The concept of military utility is closely related to the concept of military capability [10], [11]. Utility is characterised as the ability to satisfy a particular need; that is, usefulness.
The concept of military utility has been proposed for the
study of technological systems in military operations. The
concept helps support communication and effective decision-making within the defence community. By using the concept of military utility and a system approach, it is possible to explain how military capabilities are constituted and affected by developments in technology and by different uses of technology, and how military command levels are affected differently [10], [11].
C. Decision-making Processes
Models for describing process improvement approaches
have been developed in industrial engineering, systems en-
gineering and operations research. Based on Shewhart's work, Deming developed the Deming Cycle, or Deming Wheel, for managing quality processes [12]. The cycle consists of Plan-Do-Check-Act or Plan-Do-Study-Act. In the area of healthcare quality improvement, Donabedian has described quality with a model that has three main elements, structure, process and outcomes, to examine the quality of care delivered. Similarities to Donabedian's model can be found in system architectures [13].
The Deming Cycle, and many other similar characterisations
in literature, can be compared with the OODA (Observe-
Orient-Decide-Act) loop often used in the military domain
[14]. Four phases are needed to describe the process of the
OODA loop but only three phases are sufficient to describe ba-
sic evaluation methods due to their shorter chain of deduction.
The two phases in Decide-Act can be merged in evaluation
methods because the aim is to assess the potential benefits of
technologies, not to describe the decision-making in a specific
combat situation. However, the Decide phase is considered implicitly through the scenario, if one is used in the evaluation process.
Usually, scenarios are more general including multiple detailed
descriptions of defence functions like combat situations. The
Deming Cycle and the OODA loop are used as guidelines in
developing our evaluation process.
In technology forecasting, only a limited number of factors
are appropriate because of uncertainties of developments in
the long run. Models describing process improvement [12] relate to our study in determining how many different factors, at a minimum, are needed in the evaluation model for describing military capabilities.
D. Outline of This Paper
In section II, we summarise important forecasting methods
in the literature. In section III-A, we discuss the evaluation
process of our method. In section III-C, we demonstrate our
evaluation model of section III-B by using the Command &
Control (C2) capability area as a use case. Generic definitions
of the C2 capability area and its constituent six sub-capabilities
are used instead of any standardised specifications. Sub-
capabilities used in our examples are from the military domain
but similar concepts can be used in other fields. In section IV,
we highlight the main results and in section V we conclude
the paper.
II. RELATED WORK
We have identified the same fundamental phases in many
forecasting methods used in the literature [3], [4]. The method
proposed in [3] is designed for general use in evaluating
the effects of technologies on civilian sectors in society. The
recently published method in [4] is designed specifically for
the security and defence sectors. Scenarios are in a central role
in [4], [15] while they are more implicit in [3].
The Delphi method is a structured communication tech-
nique, widely used for business and technology forecasting.
It has been developed as a systematic, interactive forecasting
method that relies on a team of experts. Two structured
alternatives to traditional meetings, the Delphi method and
Prediction Markets, have been compared in [5].
III. METHOD
We have identified a practical problem of how to organ-
ise and control a technology forecasting project. Lack of
resources, time and expertise are the main impediments. Also,
a structured communication technique is needed to help the
communication process between subject area experts and tech-
nology specialists. To answer these questions, we propose a
novel iterative evaluation model in section III-B that is simple
and cheap to implement. Our main application areas of the
evaluation method are in technology forecasting and analysing
military capabilities. We explain in section III-A how the
technology evaluation process, as a part of the evaluation
method, can be organised. We demonstrate the evaluation
model by example in section III-C.
A. Evaluation Process
As a background task, organisations’ R&D divisions con-
tinuously monitor technologies and create technology watch
cards of the detected and selected technologies. A technology
watch card consists of the general description of the tech-
nology and also expected military applications and estimated
impacts on military capabilities in general. This background
task is the foundation of the technology forecasting process
to be presented in the following. An example of technology
watch cards can be found in [11]. In this work, we can also
refer to NATO Science and Technology Organization, which
actively pursues technology watch for the Alliance [1].
A capability owner conducts technology forecasting, for
example annually or in four-year periods. An appropriate long
term forecasting period can be, for example, fifteen years.
To carry out technology forecasting, the group of officers
representing subject area experts and the group of engineers
representing technology specialists are organised. The capabil-
ity owner appoints group members from the staff responsible
for building and utilising the military capability. Forecasting is
conducted in workshops, where the two groups can collaborate
effectively.
The capability owner has defined a scenario, which de-
scribes circumstantial settings, where evaluations are made.
The scenario can be described at a detailed level or more
generic level depending on the objectives for the forecasting
round.
The military capability is evaluated at the sub-capability level by evaluating the system and sub-system effects on the capability.
capability. The subject area experts group evaluates the present
situation of the military capability produced by the current
systems. This evaluation is a baseline, which is needed in the
following steps of the evaluation process.
The technology specialists group has a parallel task to
evaluate technological development and to select appropriate
technologies that in the future will have an impact on the sub-
capabilities under evaluation. Technologies are selected from
the technology watch card database.
The subject area experts group makes a second evaluation
round taking into account the technological forecast for the
future. This round is performed with the assistance of the tech-
nology specialists group. The technology specialists introduce
the selected technologies and the predicted new functionalities
that technologies can provide to the systems’ capabilities.
It is crucial to be able to map technologies to new system
functionalities that can be evaluated as sources of capability
enhancements. This may be the most challenging task in the
evaluation process because technology watch cards themselves
do not contain this detailed level of information. The actual
evaluation is still conducted by the subject area experts as only
the military perspective is evaluated, and not the hype of the
technology.
After the second evaluation round, the impact of tech-
nologies on the sub-capability level has been evaluated. The
capability owner chooses some or all of the most effective
technologies for further investigation to be utilised in capabil-
ity development programs.
Overall technological development could also be taken into account in the second evaluation round. By this we mean, for instance, normal iterative software development. However,
it is important to keep the different types of enablers in
different categories so that game-changer technologies are
detected.
A structured communication technique is needed to help
collaboration between subject area experts and technology
specialists. A spreadsheet application and the technology
watch cards are used for this purpose. Visual presentations of the spreadsheet application outputs are shown in real time to all participants, which enhances the usability of the technique considerably.
Subject area experts are primarily responsible for the evalu-
ation of sub-capabilities. However, if found more suitable, both
groups in the team can cooperate to reach an understanding in
the evaluation of the effects of systems on the capability level.
The evaluation process and our proposed model are iterative,
where the evaluation team members can adjust the input values
of the model, in any order, until the team reaches a consensus
or compromise.
B. Evaluation Model
In this study, we discuss practical problems in organising
and performing technology forecasting projects. We propose
a minimal – but extensible – model to simplify and shorten
the evaluation process. Preliminary results can be obtained in a
short time, which encourages the project team to continue with
more detailed or broader content: technologies, evaluation criteria, contributing factors and sub-capabilities.
The schematic presentation in Fig. 1 illustrates the flow of
technological effects in the evaluation model. The results can
be analysed and illustrated from all the three main viewpoints
of Fig. 1. A scenario can be given implicitly in some simple
and limited evaluations. Contributing and effective factors
are catalysts in producing the capabilities. We have identi-
fied several possible alternative factors that can be used for
this purpose, for example, information and communication
systems, organisation units, processes and functions. These
are just different perspectives for drawing the same conclusions
about changing capability values. For example, information
and communication systems are used by users in organisation
units and users are performing processes and functions.
Fig. 1. A schematic figure of the main factors in evaluating the effects of
technological developments through systems on capabilities.
In this study, the capability is defined as the probability
of a successful operation or mission [16]. This definition is
in agreement with the conventional definitions in [8] and [9].
The approach based on the probabilistic interpretation has been
discussed in more detail in our earlier studies [6], [16].
We have implemented the evaluation model as a spreadsheet
application as depicted in Fig. 2. In the figure, editable cells are
shown with green colour shades and calculated model results
with blue, orange, white and grey colours. Editable cells are
used to provide input data to the model and to adjust the effects
of technologies. The input values can be changed and adjusted
in any order. In the first phase, the input values may be easier
to evaluate in this order: A, B and C. Even the individual
cells in these three panels can be changed in any order. The
model supports the iterative working method and cooperation
between technology domain experts and military personnel.
Fig. 2. The evaluation model is implemented as a spreadsheet application.
As a practical method, we evaluate the sub-capability values by evaluating the sub-capability gaps, i.e. the deficit from the theoretical full capability value. This is only a matter of convenience, but in our experience capability gaps are easier to evaluate than the correspondingly high capability values in percentages. Because information and communication systems can have overlapping functionalities, and because the combined use of systems generates additional capabilities, systems are not completely independent. As a consequence, the formulas for total capability, as products of their sub-capabilities, are only approximations. However, if capability gaps are small, say less than 10%, the approximation is fairly good. If the systems that produce capabilities are interdependent or their use has significant side effects, the systems' functionalities should be considered on a more detailed level [6], [16].
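To illustrate the quality of this approximation, a minimal sketch in Python follows; the gap values are assumed for illustration and are not from our data.

```python
# A minimal sketch, with assumed gap values, comparing the sum of small
# sub-capability gaps to the product-rule total gap.
gaps = [0.05, 0.08, 0.04]            # hypothetical sub-capability gaps
sum_rule_gap = sum(gaps)             # 0.17
capability = 1.0
for g in gaps:
    capability *= (1.0 - g)          # product rule over sub-capabilities
product_rule_gap = 1.0 - capability  # ~0.161, close to the sum rule
print(sum_rule_gap, product_rule_gap)
```

With gaps below 10%, the two results differ by less than one percentage point, which is why the approximation is acceptable in practice.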
Evaluations are based on estimating sub-capability gaps.
Panels of the spreadsheet are labelled as follows: A) Per-
centages of sub-capability gaps (1 - sub-capability values);
B) Coarse-tuning; C) Fine-tuning and D) Calculated decom-
position of sub-capability gap values. Panels A, B and C
contain editable cells and the percentage values of Panel
D are calculated from the input data of Panels A, B and
C. Input values for Panel A are provided as percentages of
sub-capability gaps. Lists of sub-capabilities (horizontal, top),
current systems (vertical, top left) and emerging technologies
(vertical, bottom left) are shown with the brown colour in the
figure. Panel E is reserved for describing the effects of technological development as differences added to the sub-capability gap values: negative for decreasing sub-capability gaps and positive for increasing sub-capability gaps.
If needed, other mathematical models, such as factor analy-
sis, principal component analysis and linear regression, can
be used in aggregating the results of multiple rounds of
estimations [7].
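As a sketch of such aggregation, assuming hypothetical estimation rounds and using a plain mean as the baseline aggregate (factor analysis, PCA or regression per [7] would go further):

```python
import numpy as np

# Hypothetical data: each row is one estimation round of four sub-capability
# gap estimates; the column mean is a simple baseline aggregate.
rounds = np.array([[0.10, 0.12, 0.08, 0.20],
                   [0.12, 0.10, 0.09, 0.18],
                   [0.09, 0.14, 0.07, 0.22]])
aggregated_gaps = rounds.mean(axis=0)
print(aggregated_gaps)
```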
High-level military capabilities, like Command & Control
(C2), can be divided conceptually into sub-capabilities. In
step 1, sub-capability gaps are evaluated as percentage values. When filling in these values, we should also consider the value of the total capability gap, which is calculated from the sub-capability gap values of Panel A.
The percentage values of sub-capabilities describe probabil-
ities of having enough sub-capabilities to achieve successful
total capability when sub-capabilities are exploited together.
Here, we assume that each sub-capability is optimised to
achieve the best total capability [16]. However, we observe that
this assumption can be relaxed and allow sub-capabilities to
exceed or fall below the optimal percentage level. This change
generalises our basic definition of capability as the probability
of a successful operation or a mission. After step 1 we have the
values of vector $A$, whose elements we denote by $(A)_j = a_j$, where $j$ is the index of sub-capabilities.
In step 2, we evaluate how the use of different systems provides the capability. This is performed coarsely in Panel B in Fig. 2 on the capability level. Fine-tuning in step 3 can be performed by adjusting values in Panel C. In this phase, we evaluate the effects of systems on the sub-capability level. After steps 2 and 3, we have the values of vector $B$ and matrix $C$. We denote the elements of vector $B$ as $b_i$ and the elements of matrix $C$ as $(C)_{i,j} = c_{i,j}$, where $i$ is the index of the information/communication system in this use case and $j$ is the index of sub-capabilities. The values of the matrix elements $(D)_{i,j} = d_{i,j}$ are calculated automatically in real time from the values of $a_j$, $b_i$ and $c_{i,j}$. The absolute values of the model parameters $b_i$ and $c_{i,j}$ have no interpretation other than being proportional weights in our mathematical model. (In our example of Fig. 2, we have $\sum_i b_i = 9.5$ and $\sum_{i,j} c_{i,j} = 100$.)
In Eqs. (2) and (3), we make the simplifying assumption that capability gaps are small and that the systems generating the capabilities are approximately statistically independent. For non-mutually-exclusive events $X_m$, the probability $P$ of a union of events is then approximately the sum of the individual probabilities, $P(\bigcup_m X_m) \approx \sum_m P(X_m)$.
We calculate the matrix elements in Panel D as
$$d_{i,j} = \frac{a_j \, c_{i,j} \, b_i}{n_j}, \quad i = 1, \dots, T, \; j = 1, \dots, S, \qquad (1)$$
where the normalisation factor $n_j$ is
$$n_j = \sum_{i=1}^{T} c_{i,j} \, b_i, \quad j = 1, \dots, S. \qquad (2)$$
Because of the normalisation in Eq. (1), we have
$$N = \sum_{i=1}^{T} \sum_{j=1}^{S} d_{i,j} = \sum_{j=1}^{S} a_j. \qquad (3)$$
In Eqs. (1), (2) and (3), we have denoted the number of information and communication systems and the number of sub-capabilities as $T$ and $S$, respectively. According to the product rule, the total capability value is calculated as the product of the $(1 - d_{i,j})$ values, and therefore the corresponding capability gap is
$$N' = 1 - \prod_{i=1}^{T} \prod_{j=1}^{S} (1 - d_{i,j}). \qquad (4)$$
For example, in our demonstration of Fig. 2, $N = 0.5$ in Eq. (3) and $N' = 0.412$ in Eq. (4) on the total capability level. The values of $N$ and $N'$ are displayed with the orange colour in Fig. 2 (top right).
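The calculation chain of Eqs. (1)–(4) can be sketched in a few lines of Python; the Panel A/B/C inputs below are hypothetical and do not reproduce the values of Fig. 2.

```python
import numpy as np

# Hypothetical inputs for T = 2 systems and S = 3 sub-capabilities.
a = np.array([0.10, 0.05, 0.08])      # Panel A: sub-capability gaps
b = np.array([1.0, 2.0])              # Panel B: coarse system weights
c = np.array([[3.0, 1.0, 2.0],        # Panel C: fine-tuning weights
              [1.0, 4.0, 2.0]])

n = (c * b[:, None]).sum(axis=0)      # Eq. (2): n_j = sum_i c_ij * b_i
d = a[None, :] * c * b[:, None] / n   # Eq. (1): d_ij

N = d.sum()                           # Eq. (3): equals sum_j a_j by construction
N_prime = 1.0 - np.prod(1.0 - d)      # Eq. (4): product-rule total gap
assert np.isclose(N, a.sum())
print(N, N_prime)                     # the sum rule gives the slightly larger gap
```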
In the rightmost column of Fig. 2, the corresponding values from the product rule for non-mutually-exclusive events are shown with the grey colour. As we can see, the approximation is good. However, calculations on the sub-capability level reveal a difference between the gap values $N$ and $N'$. Next to the rightmost column, row sums of the calculated values are shown with the white colour.
On the total capability level, the difference between the two values is quite high, but on the detailed level the sum rule results are close to the product rule results. Later, we assume non-mutually-exclusive events corresponding to the matrix elements in Panels D and E in Fig. 2. As the gap values are higher for the sum rule, we can take the values corresponding to the sum rule as the safer alternative.
In technology forecasting, evaluation is first conducted for the current situation and, in the second round, changes are evaluated in the future scenario. A convenient method is to have a value less than one ($N < 1$) for the capability gap in a specific scenario that describes the current situation. Changes that are expected after the forecasting period are evaluated with respect to the present level. In the application of Fig. 2, this is easy to accomplish by adding a new panel, Panel E, under Panel D for the differences, which can be negative or positive. Negative values describe a decrease in the sub-capability gaps.
C. Application of the Evaluation Model
In this section, we demonstrate our evaluation model of
section III-B by applying it to technology forecasting on the
capability area of Command & Control (C2) for the next 15
year period. The model can be applied to other capability areas
and also to the analysis of existing systems and capabilities.
Notice that the numerical values in this article are fictitious.
Fig. 3 shows examples of possible visualisations of the
results. From matrix $D$, we obtain two different views as
$$D_{\cdot,j} = \sum_{i=1}^{T} d_{i,j}, \quad j = 1, \dots, S, \qquad (5)$$
and
$$D_{i,\cdot} = \sum_{j=1}^{S} d_{i,j}, \quad i = 1, \dots, T. \qquad (6)$$
Equation (5) describes the values of sub-capabilities and Eq. (6) refers to the systems that generate the capabilities.
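A minimal sketch of these two marginal views, assuming a hypothetical matrix of $d_{i,j}$ values of the kind produced by Eq. (1):

```python
import numpy as np

# Hypothetical T-by-S matrix of gap contributions d_ij from Eq. (1).
d = np.array([[0.02, 0.01, 0.03],
              [0.04, 0.02, 0.01]])
per_subcapability = d.sum(axis=0)  # Eq. (5): one value per sub-capability
per_system = d.sum(axis=1)         # Eq. (6): one value per system
print(per_subcapability, per_system)
```

These two vectors correspond to the complementary bar-chart views used in Fig. 3.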
Fig. 3. Example visualisations of the results. The left figure shows the present
values of sub-capabilities and effects of technological development after 15
years of development according to the product rule as explained in the text
($N' = 0.412$). The middle figure shows the individual contributions of three
technologies. The right figure shows the present C2 capability and the effect
of the technologies on the C2 capability after 15 years.
The left panel in Fig. 3 shows an example of using Eq.
(5) for the present year and the year 2037. As an example,
the middle panel in Fig. 3 shows the effect of technological
development in some representative technologies. Technolog-
ical changes have been calculated as row sums from Panel E
in Fig. 2. The right panel of Fig. 3 shows the change of the
total capability value induced by the three technologies in the
middle panel. The change is calculated by the approximate
sum rule as $\sum_{j=1}^{S} e_j$ from Eq. (7),
$$e_j = E_{\cdot,j} = \sum_{t=1}^{T_t} e_{t,j}, \quad j = 1, \dots, S. \qquad (7)$$
Sub-capability gap values after the forecasting period, calculated from Eq. (5) and Eq. (7), are $D_{\cdot,j} + E_{\cdot,j}$, and the total capability gap value is $\sum_j (D_{\cdot,j} + E_{\cdot,j})$. If we assume that all the sub-capabilities are necessary for a successful operation in the scenario, the product rule holds and we get for the total capability in our use case of Fig. 2
$$\prod_{i=1}^{T} \prod_{j=1}^{S} (1 - d_{i,j}) \prod_{t=1}^{T_t} \prod_{j=1}^{S} (1 - e_{t,j}) \approx 0.59 \cdot 1.10 = 0.66. \qquad (8)$$
The index of technologies is denoted by $t$ and the number of technologies by $T_t$.
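A sketch of this forecast update, with hypothetical $d$ and Panel E matrices; negative entries in $e$ decrease the sub-capability gaps.

```python
import numpy as np

# Hypothetical current gap contributions d_ij (Eq. (1)) and technology
# effects e_tj (Panel E); negative values decrease sub-capability gaps.
d = np.array([[0.02, 0.01, 0.03],
              [0.04, 0.02, 0.01]])
e = np.array([[-0.02,  0.00, -0.01],
              [ 0.00, -0.01,  0.01]])
e_j = e.sum(axis=0)                                      # Eq. (7)
future_capability = np.prod(1.0 - d) * np.prod(1.0 - e)  # Eq. (8)
print(e_j, future_capability)
```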
In a more detailed and accurate model, the effects of technologies are evaluated separately for each system. Equation (7) is then replaced by
$$e_j = E_{\cdot,j,\cdot} = \sum_{t=1}^{T_t} \sum_{i=1}^{T} e_{t,j,i}, \quad j = 1, \dots, S. \qquad (9)$$
The index of a system is denoted by $i$ and the number of systems by $T$. Adding dimensions to the model enables various new ways of analysing and visualising the results. In addition to the technological development, over time new functionalities are implemented in information and communication systems, and operations are rationalised and streamlined. All these factors should be considered by extending the model (by adding a panel below Panel E in Fig. 2) when all components of the total capability are evaluated and modelled. To avoid double-counting effects, functionalities enabled by new technologies should be listed apart from the normal automation of manual processes.
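A sketch of the per-system bookkeeping of Eq. (9), with assumed dimensions and illustrative values only:

```python
import numpy as np

# Hypothetical effects e3[t, j, i] of technology t on sub-capability j
# through system i; summing over t and i recovers e_j of Eq. (9).
Tt, S, T = 2, 3, 2                  # assumed numbers of technologies,
rng = np.random.default_rng(1)      # sub-capabilities and systems
e3 = rng.uniform(-0.02, 0.0, size=(Tt, S, T))
e_j = e3.sum(axis=(0, 2))           # Eq. (9)
print(e_j)
```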
IV. RESULTS
The main contribution of this study is to describe a tech-
nology evaluation method, discuss practical experiences and
propose a model for evaluating technological impacts on
capabilities. Here, we define capability as the probability of
a successful operation. The evaluation process is explained in
section III-A.
The evaluation model describes the impact of technologies
on capabilities through various systems. Our proposed evalu-
ation model is presented in section III-B. In section III-C we
demonstrate by the example of the Command & Control (C2)
capability area how the evaluation model can be used. The
method is general as it can be applied to both military and
non-military applications.
Our novel evaluation model together with the structured
communication process is an effective method to carry out
technology forecasting and capability evaluation projects. If
desired, the proposed evaluation model can be used as a tool
to get started in a larger evaluation project that uses dedicated
or more complex methods like the Delphi method [3]–[5]. The
general evaluation method is flexible for applications in small
or large projects that can also use other formal techniques.
V. CONCLUSIONS
Most emerging technologies represent incremental improve-
ments and enhance the competencies of the military. We have
presented a model for evaluating the impact of technological
developments on military capabilities. The model supports
an iterative working method in teams composed of subject
area experts and technology specialists. The method helps to
carry out technology forecasting in military and non-military
environments. We have demonstrated the evaluation method in
the real-world use case of the C2 capability area in the military
domain and developed an easy-to-use computer application to
support the method. Our method, by assessing technologies
with a strong and direct connection to military capabilities, can assist the decision-making of selecting development projects or planning future procurement programs. A strong and direct connection means that new technologies are mapped to new
system functionalities that can be evaluated as sources of
capability enhancements. The proposed evaluation model can
also be used as a tool to get started in a larger evaluation
project that uses dedicated or more complex methods.
REFERENCES
[1] NATO Science & Technology Organization, “Science & Technology
Trends, Exploring the S&T Edge 2020-2040.”, 2020.
[2] G. Kindvall, A. Lindberg, C. Trané and J. Westman, "Exploring future technology development." FOI Report 4196, 2017.
[3] R. Linturi and O. Kuusi, "Societal transformation 2018–2037: 100 anticipated radical technologies, 20 regimes, case Finland." Helsinki, Parliament of Finland, Committee for the Future, 485 p. Publication of the Committee for the Future 10/2018, 2019.
[4] NATO Science & Technology Organization, “Futures Assessed Along-
side Socio-Technical Evolutions.” Pre-Released STO Technical Report,
TR-SAS-123, 2021.
[5] K. C. Green, J. S. Armstrong and A. Graefe, "Methods to Elicit Forecasts from Groups: Delphi and Prediction Markets Compared." MPRA Paper No. 4663, 2007.
[6] V. Kuikka, J.–P. Nikkarila and M. Suojanen, “Dependency of Military
Capabilities on Technological Development.” Journal of Military Stud-
ies, 6(2), 2015, pp. 29–58.
[7] S. R. Walk, Quantitative Technology Forecasting Techniques, In: Teix-
eira, A., (ed.) Technological Change, InTech, 2012.
[8] U.S. Joint Chiefs of Staff, “Charter of the Joint Requirements Oversight
Council and Implementation of the Joint Capabilities Integration and
Development System”, CJCSI 5123.01I, 2021.
[9] NATO Standardization Office, “NATO Glossary of Terms and Defini-
tions.” AAP-06 Edition 2021.
[10] S. Silfverskiöld, K. Andersson and M. Lundmark, “Does the method
for Military Utility Assessment of Future Technologies provide utility?”
Technology in Society 67, 101736, 2021.
[11] K. Andersson, M. Lundmark and S. Silfverskiöld, "The Military Utility Assessment Method for Future Technologies." Försvarshögskolan,
Stockholm, Sweden, Report, 2019.
[12] W. A. Shewhart and W. E. Deming, “Statistical Method from the
Viewpoint of Quality Control.” Washington, The Graduate School, The
Dept. of Agriculture, 1939.
[13] A. Donabedian, “Evaluating the Quality of Medical Care.” The Milbank
memorial fund quarterly, vol. 44, no. 3, pp. 166–206, 1966.
[14] J. R. Boyd, "The Essence of Winning and Losing." Slide presentation, 1995.
[15] F. Geels, Multi-Level Perspective on System Innovation: Relevance
for Industrial Transformation. Olsthoorn X., Wieczorek A. (ed) Un-
derstanding Industrial Transformation. Environment and Policy, vol 44.
Dordrecht: Springer, 2006.
[16] V. Kuikka, “Number of system units optimizing the capability re-
quirements through multiple system capabilities.” Journal of Applied
Operational Research 8(1), 2016, pp. 26–41.