Structured Analysis of Methodologies for the Assessment of the Technological Capability of
RTOs Using a Method Engineering Approach
Fabian Hecklau1, Florian Kidschun1, Prof. Dr. Holger Kohl2, Sokol Tominaj1
1 Fraunhofer IPK, Division Corporate Management, Berlin, Germany
2 Technical University of Berlin, Department Sustainable Corporate Development, Berlin, Germany
fabian.hecklau@ipk.fraunhofer.de
florian.kidschun@ipk.fraunhofer.de
holger.kohl@tu-berlin.de
sokol.tominaj@ipk.fraunhofer.de
Abstract
New state-of-the-art technologies in evolving markets enable organizations to lead the competition and gain advantages over their competitors. Constantly changing technological developments and market demands affect not only industrial companies but also Research and Technology Organizations (RTOs). As RTOs are situated between universities and organizations that pursue fundamental research on the one hand, and business entities on the other, they are strongly influenced by changing economic environments. Hence, to generate value for their customers, RTOs need to use state-of-the-art technologies in order to be able to provide high-quality products and services. In consequence, the regular analysis and assessment of the technological capability of RTOs is an important strategy to ensure progress and success.
In previous research work, a generic process model for the structured analysis of methods used in the context of the technological capability of Research and Technology Organizations has been developed. For this purpose, method engineering approaches were used as a basis to create a holistic model that serves as a structured approach for the analysis of methodologies. Building on this, the present paper applies this generic process model of method engineering to analyze existing methodologies for the assessment of the technological capability of RTOs in a structured way.
To this end, a literature review is conducted to identify existing methodologies in the context of the technological capabilities of organizations, with a special interest in RTOs. The identified methodologies are then assessed using the method engineering approach. In a final step, limitations and further developments of methodologies for the assessment of technological capabilities are discussed.
Key Words: Method Engineering, Method Development, Capability Methodologies, Research and
Technology Organizations, RTO, Technological Capability
1. Introduction
In a continuously fluctuating, dynamic environment, organizations in all areas must adapt quickly to
changes, such as the digital transformation, in order to ensure their sustainable development and
growth. Furthermore, the capability of generating and using new technologies effectively and
efficiently has a direct impact on the organization’s competitiveness and long-term success.
Organizations with a high technological capability are able to achieve greater differentiation through
more innovative products and to increase efficiency or reduce costs through process innovation.
Moreover, an advancing liberalization of markets intensifies competition not only for regular
businesses, but also for Research and Technology Organizations (RTOs) that cooperate with these
businesses. RTOs are facing the challenge of shorter innovation cycles, while at the same time the
demand for advanced and market-ready product and process solutions is rising.
In order to be successful, RTOs have to pursue technical solutions that eventually lead to state-of-the-art products and services. It is therefore necessary for RTOs to use technological resources and thus to acquire or maintain technological capabilities that make it possible to achieve the aforementioned technical solutions, which ultimately result in high-quality and innovative products. Hence, sustainable success and a competitive edge are the result of technological capabilities that are optimally cultivated and used. (Wang et al. 2008)
Consequently, RTOs have to be able to analyze and assess their technological capability, given that an advanced technological capability is imperative for their competitiveness. A tool or method that assesses the technological capability of RTOs in an objective and practical way, while being suited to their special requirements, is needed. While several authors have previously developed models for the assessment of technological capability in organizations, a method that fully meets the specific requirements of RTOs has yet to be developed. Therefore, in this paper, existing methodologies will be analyzed and reviewed.
In a previous paper (Hecklau et al. 2020), the authors conducted a literature review that gives an overview of different method engineering approaches, which were used to create a generic process model. This method engineering process model is described in chapter 2 and is used in this paper for the structured analysis of technological capability methodologies.
Several authors or institutions, such as Javier Garcia-Arreola with the Technology Effectiveness Audit Model or Phaal et al. with the Technology Management Process Assessment, have developed models for the assessment of technological capability in organizations (Phaal et al. 2001; Garcia-Arreola 1996), yet none of these is specifically tailored to RTOs. As a basis for the development of a methodology for the assessment of the technological capability of RTOs, the aforementioned approaches will be analyzed in chapter 3.
2. Method Engineering as Structured Approach for the Analysis of Methods
In a previous paper (see Hecklau et al. 2020) a literature review has been conducted that gives an overview of different method engineering approaches, which mainly originate from their application in software development projects. As these identified approaches form the basis for a generic process model, which will be used for the structured analysis of RTO capability methodologies, this paper focuses on the specificities of RTOs and not on the requirements of software development projects.
According to Gutzwiller, methods can be classified according to the principles of method engineering
using the following five building blocks: meta model, results, techniques, activities and roles.
(Gutzwiller 1994)
The meta model describes the structuring of the results of a method using a holistic data model. A method defines a procedure in the form of activities. Activities are structured and can be broken down into several sub-activities (‘activity structure’) as well as put into a sequence that defines which activities take place before or after a certain activity. The structured arrangement of the activities and the determination of a certain sequence among them form the procedural model. Each activity generates or uses one or more results. Techniques support the generation of results with the help of various tools, such as databases. All results are summarized in a documentation model. Results capture concepts, represent decisions and illustrate information and its interrelations. They can be structured in a complex-forming way, i.e. they can be broken down into their components. The initiators or executing instances of activities are called roles and thus describe the persons or organizational units associated with the execution of the method. (Gutzwiller 1994)
Figure 1: Components of the method description in method engineering (Edited based on (Gutzwiller 1994))
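To make the interrelations between these five building blocks more tangible, the following minimal sketch represents them as simple data structures in Python. The class and field names are illustrative assumptions based on the description above, not part of Gutzwiller's original formalization.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Role:
    """Person or organizational unit that initiates or executes activities."""
    name: str


@dataclass
class Result:
    """Result document; can be complex-forming, i.e. broken down into components."""
    name: str
    components: List["Result"] = field(default_factory=list)


@dataclass
class Technique:
    """Supports the generation of results, e.g. with tools such as databases."""
    name: str
    tools: List[str] = field(default_factory=list)


@dataclass
class Activity:
    """Element of the procedural model: sub-activities, sequence, inputs and outputs."""
    name: str
    sub_activities: List["Activity"] = field(default_factory=list)
    predecessors: List["Activity"] = field(default_factory=list)  # defines the sequence
    uses: List[Result] = field(default_factory=list)
    produces: List[Result] = field(default_factory=list)
    techniques: List[Technique] = field(default_factory=list)
    roles: List[Role] = field(default_factory=list)


@dataclass
class Method:
    """A method defined by its activities; the meta model structures all results."""
    name: str
    activities: List[Activity] = field(default_factory=list)
```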
The elements of method description are suitable for the structured analysis of existing methods.
Therefore, the elements are used in the following chapter to analyze methodologies for the
assessment of the technological capability of organizations.
3. Analysis of Methodologies for the Assessment of the Technological Capability of RTOs
In this chapter, the described method engineering approach is used to analyze different
methodologies for the analysis and assessment of the technological capability of organizations.
For this purpose, the elements of method description are used to structure the analysis (see Figure 1 above):
Role
Activity
Technique
Result
Meta Model
The following methodologies will be analyzed:
Technology Management Process Assessment (Phaal et al. 2001)
Technology Effectiveness Audit Model (Garcia-Arreola 1996)
Model for Technological Capability Assessment in R&D Centers (Mohammad et al. 2010)
3.1 Technology Management Process Assessment (Phaal et al. 2001)
The Technology Management Process Assessment by Phaal et al. is a top-down assessment model that
analyzes and evaluates technology management practices in a business unit. (Phaal et al. 2001) The
methodology is based on the five-process model of Gregory: (1) Identification, (2) Selection, (3)
Acquisition, (4) Exploitation and (5) Protection. (Gregory 1995)
The methodology has been developed and tested in the context of an "action research approach",
which allows the investigation of business systems through a process of active intervention (i.e.
collaborative participation). The primary aim of this practical approach is to support communication,
decision-making processes and action in companies, which requires close collaboration with industry.
(Phaal et al. 2001) A further specification of the methodology, according to the method description
approach of Gutzwiller as described in chapter 2, is elaborated in the following.
3.1.1 Role
A variety of employees of the organization participate in the application of this methodology, which is carried out in three workshops. In the application example by Phaal et al., in which a high-volume manufacturer (approx. 50,000 different components) from the UK was evaluated, senior managers from different business divisions, such as product development, supply processes, marketing, quality and technology, took part in the workshops to support the assessment. In this way, knowledge from all areas of the organization is bundled. The workshops are led by a moderator and also supported by an internal assessment “champion”, whose role is not described in more detail. (Phaal et al. 2001)
3.1.2 Activity
The assessment procedure consists of the following three workshop-based stages:
1. Strategic overview;
2. Process overview;
3. Process investigation
The outcome of the methodology is the identification of improvement potentials in specific technology-business areas. Figure 2 displays the procedural steps of the methodology.
Figure 2: Technology Management Process Assessment - procedural steps of the methodology (Phaal et al. 2001)
In the first stage, the “Strategic overview”, the business is divided into business and technology areas.
In a session of discussion and brainstorming, the participants agree on the different business
segments. The segmentation process is intended to generate a healthy dialogue between the various
functions within the company and to create an interface between corporate strategy and technology
management (de Wet 1996). Thereafter, the impact of each technology area on each business area is
estimated by the participants, regarding value, effort and risk. These parameters (value, effort and
risk) are defined as:
Value: “What level of competitive advantage does each technology area provide for each
business area?”
Effort: “What level of effort is being directed at each technology area for the benefit of each
business area?“
Risk: “What level of risk is associated with realizing the competitive advantage of each
technology area for each business area?”
Each of the business-technology relationships is assessed on a 4-point Likert scale (Likert 1932): high (H), medium (M), low (L), or not significant (-), according to its estimated value, effort and risk. Naturally, these parameters correlate. If there is a mismatch in a business-technology relationship, a debate regarding the cause of the mismatch is started. Hence, following the discussion of the results of the strategic overview, points of interest are chosen that need further investigation in the next stage due to their poor outcome.
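As an illustration of this first stage, the following minimal sketch records the value, effort and risk ratings per business-technology relationship and flags mismatches that would trigger further discussion. The example data and the simple mismatch rule are illustrative assumptions; Phaal et al. leave the identification of mismatches to the workshop discussion.

```python
# Illustrative sketch of the strategic overview grid (assumed data and mismatch rule).
SCALE = {"H": 3, "M": 2, "L": 1, "-": 0}  # high, medium, low, not significant

# (business area, technology area) -> ratings for value, effort and risk
grid = {
    ("Business area A", "Technology area 1"): {"value": "H", "effort": "L", "risk": "M"},
    ("Business area A", "Technology area 2"): {"value": "M", "effort": "M", "risk": "M"},
    ("Business area B", "Technology area 1"): {"value": "L", "effort": "H", "risk": "-"},
}

def mismatch(ratings: dict, threshold: int = 2) -> bool:
    """Flag a relationship whose value, effort and risk ratings diverge strongly."""
    scores = [SCALE[rating] for rating in ratings.values()]
    return max(scores) - min(scores) >= threshold

# Points of interest that would be discussed and carried into the process overview stage
points_of_interest = [pair for pair, ratings in grid.items() if mismatch(ratings)]
print(points_of_interest)
```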
In the second stage, the “Process overview”, the critical technology area is decomposed into its key
technologies. Further, internal and external dependencies of these key technologies are elaborated,
with special regards to the business areas they are related to. Then, recent events and activities that
are associated with the identified technologies are listed and classified according to Gregory's five-process model (Gregory 1995). The activities are also ordered chronologically. This step is followed by an assessment of the activities within the five-process model. For this purpose, the elaborated activities are assessed using the three components of a generic systems model, namely inputs, process and outputs. Thus, the participants rank the activities on a scale from 1 (strongly disagree) to 5 (strongly agree) with respect to the following statements:
Inputs: “The requirement for this activity was always clearly defined”.
Process: “The activity was always well managed”.
Outputs: “The results for this activity were always exploited”.
Subsequently, the processes are further subdivided into sub-categories. Identification, selection and
protection processes are for example split into reactive and proactive types. Further subdivisions are
made to go into detail of the roots and causes of problems.
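The following short sketch illustrates how the activity assessment described above could be recorded, averaging the ratings of the three statements per activity and aggregating them per Gregory process. The activity names and the averaging are illustrative assumptions, as the published procedure relies on workshop discussion rather than a prescribed calculation.

```python
# Illustrative sketch of the activity assessment (assumed activities, simple averaging).
from collections import defaultdict
from statistics import mean

# (Gregory process, activity) -> 1-5 ratings for the inputs / process / outputs statements
activity_ratings = {
    ("Identification", "Patent monitoring"): {"inputs": 2, "process": 3, "outputs": 3},
    ("Selection", "Technology roadmapping"): {"inputs": 3, "process": 4, "outputs": 2},
    ("Acquisition", "Licensing negotiation"): {"inputs": 4, "process": 2, "outputs": 3},
}

per_process = defaultdict(list)
for (process, _activity), ratings in activity_ratings.items():
    per_process[process].append(mean(ratings.values()))

# Low averages hint at processes worth examining in the process investigation stage
for process, scores in per_process.items():
    print(process, round(mean(scores), 1))
```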
In the third stage, the “Process investigation”, participants go into detail on areas identified in the
second step. Hence, the processes are mapped and compared to other generic process models of the
same type. This induces a rich debate among participants, questioning the hows and whys of the
process activities. Strengths and weaknesses of the process are discovered and are the basis for
learnings and improvements.
3.1.3 Technique
The process includes a series of moderated workshops based on group discussions and open critical
debate. This allows the knowledge of several employees from different areas to be incorporated into
the process models. Gregory’s five-process model provides a framework for the workshops. A detailed
workbook, which contains various procedures and guidelines, serves as an aid. This includes
questionnaires and ranking activities as well as the creation of time-based activity diagrams and
process mapping.
It is also recommended to have feedback sessions after each workshop to link operational and
strategic views and to enable the transfer of results to other technology and business units.
3.1.4 Results
The primary goal of this methodology is to support communication, decision-making processes and
actions within a company. These are achieved through various discussions and debates between the
different business units in the workshops and generate value, since they help to combine the
organization’s strategy and technology management. (de Wet 1996)
After the "Strategic Overview" stage evaluates the impact of the technologies on the company, the
"Process Overview" and "Investigation" stages provide detailed insights into the activities of
technology management. In Phaal's application example, the acquisition processes of the selected
technologies were compared with a generic process model for technology acquisition. This included
the development of new products as well as the acquisition of a company for a recovery action. In
group discussions, the strengths and potential weaknesses of the individual areas were identified with
regard to the generic technology management processes. As a direct result, the functions of the
processes and the respective technology management, as well as the connection to the business units,
are better understood, so that decisions can be made on technological (and financial) feasibility.
3.1.5 Meta Model
In a broad sense, the meta model underlying the methodology is Gregory's five-process model, a framework used to structure the technology management activities according to the following five steps:
1. Identification of potentially relevant technologies
2. Selection of appropriate technologies that should be used in the organization
3. Acquisition and assimilation of the selected technologies
4. Exploitation of technologies to generate value
5. Protection of knowledge and expertise embedded in products and/or manufacturing systems
Gregory’s framework is a generic model that encompasses all activities in a company. It is often closely
linked to the processes of innovation and new product development.
A rough summary of the activities in the individual process steps is shown in the following figure:
Figure 3: Gregory's five process model (Gregory 1995)
3.1.6 Critical Analysis
The methodology is a rather generic approach that can be used in different organizations to assess technology management processes. It consists of a series of moderated workshops based on a detailed workbook with procedures and guidelines. However, it serves more as a self-assessment tool and is therefore only conditionally suitable for an objective assessment of technology management in an organization.
The Strategic Overview stage considers the impact of current technologies on current business units.
Nevertheless, to support strategic planning, it is also necessary to include a future perspective, which
is not considered here.
Moreover, this approach is completely qualitative. It does not analyze hard factors and technology
content, but rather the functions as well as strengths and weaknesses of technology management to
develop key technologies. It is therefore only of limited use for the assessment of technological
resources and capabilities.
3.2 Technology Effectiveness Audit Model (Garcia-Arreola 1996)
The Technology Effectiveness Audit Model by Garcia-Arreola provides an approach to evaluate the technological capabilities of a company. This strategic tool supports the user in identifying the gap between the existing and the desired technological situation. The results of applying this methodology can encourage further technological developments. Considering technologically relevant aspects such as approach, methods, strategies, plans, goals and policies, the methodology attempts to generate entrepreneurial and organizational knowledge. It aims to:
1. Determine the current technological status,
2. Stress areas of opportunity,
3. Take advantage of the organization’s strong capabilities. (Garcia-Arreola 1996)
3.2.1 Role
The roles in the audit are clearly divided. The participants consist of internal company representatives, such as employers and employees, who have to answer a specific questionnaire and classify their answers on a rating scale. The auditor initiates, leads and evaluates the survey. For this, it is essential that the performing evaluator has comprehensive knowledge of the processes, assets and other resources of the company. When conducting audits, the auditor may identify weak or non-existing linkages to business strategies, a narrow focus on innovation, the inability to measure efficiency and a lack of communication between layers. By exposing these deficiencies in corporate planning, further precautions can be taken by decision-makers. (Garcia-Arreola 1996; Dolinšek et al. 2007)
3.2.2 Activity
The methodology provides a quantitative scoring form to be filled in by participants of the company under investigation. Employers and employees answer a questionnaire covering twenty technology assessment areas. The respondents categorize their answers on a five-point Likert scale: a score of 5 is outstanding, 4 is good, 3 is average, 2 is below average, and 1 is poor (Khalil 2000). After the survey is completed, all answers are converted to their numerical counterparts and combined into a mean value that represents the general technology status of the company. The process of evaluating the score is led by an auditor and proceeds as follows (Khalil 2000; Garcia-Arreola 1996):
1. Analyze the internal technological products and processes to identify core competencies.
2. Pinpoint external and basic technologies.
3. Identify technological gaps and scope for action.
4. Investigate the science push and market pull.
5. Establish both factors (Science Push/Market Pull) in the innovation process.
6. Check the time to market/implementation time.
7. Control the R&D strategy. Is the strategy consistent with Science Push/Market Pull?
8. Check for overlaps in core technologies, R&D and marketing.
9. Monitor progress and improvement in production and execution.
10. Evaluate partnerships and cooperation. Do they fit into the overall strategy?
11. Analyze the knowledge management organization. How does the company ensure that
acquired knowledge is recorded and passed on?
12. Analyze the corporate structure. Is it flexible? How does the communication between all levels
take place?
3.2.3 Technique
The methodology can be described as a diagnostic tool with limitations in its applicability. Thus, it requires specific skills and techniques to ensure a reliable result as a basis for further strategic technology planning. In addition to the roles and activities (as described in 3.2.1 and 3.2.2), the methodology requires a defined procedure and application. The assessment process uses an evaluation form that is completed by internal experts or auditors using a five-level Likert scale. The aim of the questionnaire is to provide a quantitative measure that allows auditors to gain insights into the technological performance of the organization (Štrukelj and Dolinšek 2011). With the help of the checklist and the result of the survey, in which 20 assessment areas are reviewed, the auditor can draw comprehensive conclusions on the six main categories. Accordingly, it can provide indications for improvements and strategic adjustments. (Garcia-Arreola 1996)
3.2.4 Result
The methodology’s aim is to produce a quantitative technology assessment. The answers to the questionnaire are combined into a score from which information about the technological status of the company and its management capabilities can be obtained. Apart from a numerical result, the methodology does not provide explicit recommendations for action. It does, however, offer a starting point for highlighting the strengths and weaknesses of the organisation, which can lead to further knowledge and insights. In order to ensure consistent results, the assessment procedure can be repeated after a reasonable period of time, with the auditors focusing on the company’s performance results and progress. If these are not satisfactory, a strategic change should be considered. (Garcia-Arreola 1996)
The Technology Effectiveness Audit Model and its results could be an important diagnostic tool for monitoring the progress and effectiveness of technological development and implementation. In addition, it provides an overall assessment of the company’s competitive environment and enables the auditor to identify important options for action and to use them in a targeted way. Ideally, the self-assessment leads to continuous improvement and technological development. (Khalil 2000)
3.2.5 Meta Model
The Technology Effectiveness Audit Model comprises a three-level model. The first level is divided into six categories that are of interest in assessing the technological position of the organization: technological environment, technologies categorization, markets and competitors, innovation process, value-added functions, and acquisition and exploitation of technology. At the second level, 20 assessment areas provide a closer look at each category. Finally, the areas unfold into 43 assessment elements that form the third level. The methodology allows a more specific analysis by going deeper into each level and revealing the technological position of the chosen organization. The process model is shown in the following figure. (Garcia-Arreola 1996; Khalil 2000)
Figure 4: Process Model of the Technology Effectiveness Audit Model (Garcia-Arreola 1996; Khalil 2000)
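To illustrate the three-level structure and the scoring logic, the following sketch aggregates five-point ratings of hypothetical assessment elements upwards to areas, categories and an overall mean score. The element names and the simple averaging per level are assumptions; the description of the methodology only mentions an overall mean value explicitly.

```python
# Illustrative sketch of the three-level TEAM structure (assumed names, simple averaging).
from statistics import mean

# category -> assessment area -> ratings of its assessment elements (1-5 scale)
audit = {
    "Innovation process": {
        "Idea generation": [4, 3],
        "Time to market": [2],
    },
    "Acquisition and exploitation of technology": {
        "Technology transfer": [3, 5],
    },
}

# Second level: mean rating per assessment area
area_scores = {
    (category, area): mean(ratings)
    for category, areas in audit.items()
    for area, ratings in areas.items()
}

# First level: mean of the area scores per category
category_scores = {
    category: mean(mean(ratings) for ratings in areas.values())
    for category, areas in audit.items()
}

# Overall mean value representing the general technology status
overall = mean(r for areas in audit.values() for ratings in areas.values() for r in ratings)

print(area_scores, category_scores, round(overall, 2), sep="\n")
```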
3.2.6 Critical Analysis
The methodology provides a useful starting point for obtaining a general overview of the technological
status of the company. Nevertheless, it has some shortcomings in its applicability:
On the one hand, multiple categories (e.g. markets and competitors) as well as assessment areas and elements (e.g. market pull, entrepreneurship) are not directly related to technology. The methodology thus goes beyond a mere assessment of technological capability and also addresses strategic management issues, which could lead to an overly complex process in a large organization. On the other hand, the methodology does not define how exactly the auditors are required to rate the performance for the assessment areas or elements. Hence, the rating is highly subjective and could differ for a single organization if different auditors apply the methodology. Thus, it is not possible to compare the results of different auditors in different organizations.
The methodology also does not describe the practical measures resulting from the assessment. Consequently, the auditors do not receive any guidance on how to proceed once the assessment is completed. In addition, the assessment scale is not purely quantitative, as there is no detailed description of the numbers used for the assessment. Since 1 = poor and 5 = outstanding are qualitative judgments, different auditors could assign different numbers to the same qualitative determination. This could result in inconsistencies, as these index numbers are not quantitative measurements but rather numerical representations of qualitative determinations (Štrukelj and Dolinšek 2011).
Finally, the assessment model seems to be complex and therefore has limitations when applied in larger organizations, as the effort would be too high. Moreover, the methodological approach does not allow precise conclusions or instructions for action to be derived from it.
3.3 Model for Technological Capability Assessment in R&D Centers (Mohammad et al. 2010)
Mohammad et al. developed a model for the assessment of the technological performance of R&D
centers with the main focus on the development of technologies. Despite some limitations, this model
is considered to be the most promising approach for the purposes of this paper, as it is the only one
focusing on R&D organizations with their specific requirements.
3.3.1 Role
In the description of the model by Mohammad et al. it is not clear which roles are involved in the assessment process. However, it can be assumed that several roles exist, since the technological capabilities are evaluated separately. It can also be assumed that an expert is involved who uses the model to assess the technological capabilities of the R&D organization.
3.3.2 Activity
In this model, the four core capabilities (see 3.3.5, meta-model) are to be evaluated using macro and
micro indicators. Each technological capability should be assessed separately.
For the evaluation of the capabilities, Mohammad et al. suggest using a scoring table that allows each capability to be assessed via individual indicators. Each indicator can be rated with a number between 1 (very poor) and 5 (very good). Subsequently, the points achieved per indicator are added up and set in relation to the maximum number of points that could be achieved. In a weighting table, the indicators are then assigned relative weightings. As a result, a final score is calculated for each of the four capabilities by multiplying the results of the scoring and weighting tables. (Mohammad et al. 2010)
The final step is to define a scale for the final evaluations which, together with the assessments, is
needed to evaluate technological capability in a technological area. According to Mohammad et al.
these three tables enable the auditors to identify the causes of weak, mediocre or good capabilities.
(Mohammad et al. 2010)
3.3.3 Technique
Mohammad et al. describe two basic evaluation tools. The first tool is a scoring table, which serves to evaluate the technological capability in a technological area. (Mohammad et al. 2010) Although this is not explicitly described by Mohammad et al., it can be assumed that the scoring table contains four main aspects. The first column contains the technological capability to be assessed. The second column lists the indicators that need to be evaluated. After the evaluation is made, the third column contains the achieved score of the technological capability for each indicator. Finally, a column with the result follows; this is a relative value calculated from the achieved and the maximum score.
In addition to the scoring table, Mohammad et al. describe a weighting table in which relative weightings are assigned to the indicators to be assessed. (Mohammad et al. 2010)
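A minimal sketch of how the scoring and weighting tables could be combined into a final score per core capability is given below. The indicator values, the weights and the exact formula (a weighted relative score) are assumptions derived from the textual description, since Mohammad et al. do not publish the complete calculation in detail.

```python
# Illustrative sketch combining the scoring and weighting tables (assumed data and formula).
MAX_SCORE = 5  # ratings range from 1 (very poor) to 5 (very good)

# indicator group -> (achieved score, relative weight) for one core capability
internal_development = {
    "Human resources": (4, 0.3),
    "Equipment": (3, 0.2),
    "Knowledge management and communication": (2, 0.2),
    "Management": (4, 0.1),
    "Marketing and sales": (3, 0.1),
    "Achievements": (5, 0.1),
}

def capability_score(indicators: dict) -> float:
    """Weighted relative score: sum of weight * (achieved score / maximum score)."""
    return sum(weight * (score / MAX_SCORE) for score, weight in indicators.values())

final = capability_score(internal_development)
print(f"Capability of internal development of technologies: {final:.2f} (scale 0-1)")
```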
3.3.4 Results
The result of the methodology is a qualitative or semi-quantitative analysis of the organization’s
technological capability, operationalized in the four technological areas and their indicators. The
outputs of the process are the scoring table, the scoring table combined with weights for the indicators
and the scale for the final scores. (Mohammad et al. 2010) There is no explanation about further
results, such as hints for the next steps, action planning or a guideline for future improvements of the
technological capability of the R&D organization.
3.3.5 Meta Model
The model assesses technological capabilities on the macro and micro level. The macro level within
the methodology comprises criteria that are common among innovation organizations, such as:
The position of innovation in the organization,
Knowledge management and importance of knowledge acquisition,
The position of innovation in developing strategies,
Learning,
Team working,
Training.
These criteria are the basis for the analysis of “innovation culture in the organization”. (Mohammad
et al. 2010)
On the micro level the technological performance is assessed. For this purpose, Mohammad et al.
describe four core capabilities that need to be evaluated. Each of these core capabilities will be
assessed separately:
Capability of internal development of technologies,
Capability of technology development via cooperative R&D,
Capability of performing basic research,
Capability of presenting consultation services to industry. (Mohammad et al. 2010)
Six groups of indicators are identified for the evaluation of these core capabilities:
Human resource indicators,
Equipment,
Knowledge management and communication indicators,
Management indicators,
Marketing and sales indicators,
Achievements indicators. (Mohammad et al. 2010)
The following Figure 5 summarizes the meta model of the methodology of Mohammad et al. for the
assessment of the technological capability in R&D organizations.
Figure 5: Meta model of the model for the technological capability assessment in R&D organizations
3.3.6 Critical Analysis
The technology audit model of Mohammad et al. is not a general model for assessing the technological
performance of companies but focuses solely on technology development in R&D centers.
For the assessment of the technological performance of such R&D centers, the authors consider all
factors that can have a certain influence on an organization. There is no restriction to essential
elements of technological performance. This has the effect of avoiding the arbitrary selection or
exclusion of factors, but it also makes the model more complex, so that a reasonable cost-benefit ratio is more difficult to achieve. In addition, this contradicts the assumption that the model should
follow practical evaluation criteria, such as simplicity, speed and ease, rather than theoretical aspects
of technological performance.
Furthermore, the evaluation at the macro level does not aim at a direct assessment of technological
performance, but rather at the analysis of the innovation culture. The indicators at this level are very
abstract, as they assess issues common to all innovative organizations. However, this very generic
level contradicts the fact that the model was designed specifically for R&D organizations and not for
other (innovative) organizations. It also remains unclear what kind of questions can in principle be
derived to operationalize these abstract indicators.
Moreover, the model does not reveal the extent to which the macro level is linked to the micro level
and the relationship between the results of these levels. Therefore, the question arises as to why the
model consists of different levels, whereby the assessment of technological performance is only
carried out in one of these levels.
Furthermore, it is not clear whether the micro level only presupposes R&D centers involved in all four
main groups of activities mentioned above, or whether R&D centers involved in only some of them
can also be assessed. It is clear, however, that research organizations that only carry out basic research
cannot be assessed according to this model, as it requires organizational units that develop
technologies.
The micro-level indicators used to assess technological performance do not relate directly to the
technologies themselves. Here, a more general assessment of an organization’s performance (e.g.
managerial, educational, financial, equipment, communication, marketing, sales) is made. There is
also an inconsistency in the assessment of these indicators: qualitative determinations (from very
weak to very good) are evaluated quantitatively (from 1 to 5). However, the model does not indicate
how the evaluation process should be carried out. Furthermore, the calculation of the proposed total
score by weighting and numerous mathematical operations is prone to errors: on the one hand, it is
not clear according to which criteria the weighting is carried out. On the other hand, mathematical
operations with qualitative determinations or their index numbers (quantitative determinations) are
not possible or do not make sense. Moreover, the total number merely indicates that an R&D center
is weak or good, but not what the reasons for this are. (Štrukelj and Dolinšek 2011)
4. Discussion of Results
In a first step, an approach based on method engineering for the analysis of methodologies has been
described. Especially the elements of method description as explained in chapter 2 proved to be
suitable for the detailed and structured analysis of the assessment methodologies in chapter 3. It could
be shown that the method engineering approach allows the examination of the methodologies in a standardized way, with benefits for understanding the methods and their differences.
Based on the structured analysis of the three methods in chapter 3, it could be shown that most of the methods are valuable approaches for the assessment of the technological capability of organizations. However, weaknesses and shortcomings were also found in each method. The model of Garcia-Arreola (chapter 3.2) considers many aspects that are broader than technological capability, e.g. strategic questions concerning market demands. Therefore, this model is not practical, as it is very complex and the effort for its application in larger organizations would be too high. In comparison, the Technology Management Process Assessment by Phaal et al. (chapter 3.1) is completely qualitative and does not consider hard factors and technology content. With regard to the special requirements of research and technology organizations, only the method of Mohammad et al. (chapter 3.3) focuses on this special target group, which is of particular interest in this paper. Nevertheless, this model has several weaknesses, e.g. its focus on innovation culture rather than on technological capability.
As a result, it can be stated that all of the methodologies contribute to the discussion on how to analyze and assess the technological capability of RTOs, but none of the analyzed methodologies fully meets the specific requirements of RTOs.
5. Outlook
As described above, none of the mentioned methods is fully applicable for the analysis and assessment of the technological capability of RTOs. This paper was therefore able to point out the need for the creation of a new methodology for the analysis and assessment of the technological capability of RTOs. Further research work of the authors will focus on the creation of a methodology that fully meets these specific requirements and can be used for the assessment of the technological capability of RTOs.
A limitation of this paper is the limited number of considered approaches. In order to extend the base of considered methods, further approaches need to be identified and analyzed. The results of this further research work need to be integrated into the final approach of this paper. The research in this paper as well as the mentioned future research is partly supported by the European Commission through the H2020 project EPIC - Centre of Excellence in Production Informatics and Control under grant No. 739592.
6. Literature
de Wet, Gideon (1996): Corporate strategy and technology management: Creating the interface. In:
Management of technology V. Technology management in a changing world. Proceedings of the
Fifth International Conference on Management of Technology; Official conference of the
International Association for Management of Technology; February 27 - March 1, 1996, Miami,
Florida, USA. With assistance of Robert M. Mason, Louis A. Lefebvre, Tarek. M. Khalil. Oxford:
Elsevier Advanced Technology.
Dolinšek, S.; Janeš, A.; Čosić, P.; Ekinović, S. (2007): Development of the technology audit model. In
Proceedings of the 8th International Conference of the Faculty of Management Koper, 20th-24th
November 2007, Congress Centre Bernardin. Portorož, Slovenia, 2007.
Garcia-Arreola, J. (1996): Technology Effectiveness Audit Model (TEAM): A Framework for
Technology Auditing. Dissertation. University of Miami, Miami, FL.
Gregory, M. J. (1995): Technology management - a process approach. In Proceedings of the Institution of Mechanical Engineers (Vol. 209), pp. 347-356.
Gutzwiller, Thomas A. (1994): Das CC RIM-Referenzmodell für den Entwurf von betrieblichen,
transaktionsorientierten Informationssystemen. Heidelberg: Physica-Verl. (Betriebs- und
Wirtschaftsinformatik, 54).
Hecklau, Fabian; Kidschun, Florian; Kohl, Holger; Tominaj, Sokol (2020): Generic Process Model for
the Structured Analysis of Methods. A Method Engineering Approach for the Analysis of RTO
Capability Methodologies. In Manuel Au-Yong Oliveira, Carlos Costa (Eds.): Proceedings of the 19th
European Conference on Research Methodology for Business and Management Studies (ECRM):
Academic Conferences and Publishing International (ACPI).
Khalil, Tarek M. (2000): Management of Technology: The Key to Competitiveness and Wealth Creation. US: Irwin McGraw-Hill.
Likert, R. (1932): A Technique for the Measurement of Attitudes. In Archives of Psychology 140 (22), pp. 5-55.
Mohammad, A. P.; Razaee, S.; Shayegh, F.; Torabi, F. (2010): A Model for Technological Capability
Assessment in R&D Centers. Proceedings of the 14th international Oil, Gas & Petrochemical
Congress.
Phaal, R.; Farrukh, C. J. P.; Probert, D. R. (2001): Technology management process assessment. A case study. In International Journal of Operations & Production Management 21 (8), pp. 1116-1132. DOI: 10.1108/EUM0000000005588.
Štrukelj, Peter; Dolinšek, Slavko (2011): Towards A New Comprehensive Technology Audit Model. Edited by Faculty of Management, University of Primorska. Koper, Slovenia.
Wang, Chun-hsien; Lu, Iuan-yuan; Chen, Chie-bein (2008): Evaluating firm technological innovation capability under uncertainty. In Technovation 28 (6), pp. 349-363. DOI: 10.1016/j.technovation.2007.10.007.