Available online at www.sciencedirect.com
ScienceDirect
Procedia CIRP 104 (2021) 1179–1184
www.elsevier.com/locate/procedia
2212-8271 © 2021 The Authors. Published by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Peer-review under responsibility of the scientific committee of the 54th CIRP Conference on Manufacturing Systems
10.1016/j.procir.2021.11.198
54th CIRP Conference on Manufacturing Systems
Towards Identifying Data Analytics Use Cases in Product Planning
Maurice Meyer a,*, Melina Panzner a, Christian Koldewey a, Prof. Dr.-Ing. Roman Dumitrescu a,b
a Heinz Nixdorf Institute, University of Paderborn, Fürstenallee 11, 33102 Paderborn, Germany
b Fraunhofer Institute for Mechatronic Systems Design, Zukunftsmeile 1, 33102 Paderborn, Germany
* Corresponding author. Tel.: +49 5251 60 6227; E-mail address: Maurice.Meyer@hni.uni-paderborn.de
Abstract
Cyber-physical systems (CPS) generate huge amounts of data during the usage phase. By analyzing these data, CPS providers can systematically
uncover hidden product improvement potentials for future product generations. However, many companies face difficulties starting industrial data analytics projects, as they can neither rely on prior experience nor find orientation. Following the canonical action research methodology, this study aims
to investigate the definition and specification of data analytics use cases. The results show a clear need for supporting methods and tools for
defining and specifying use cases in usage data-driven product planning.
© 2021 The Authors. Published by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Peer-review under responsibility of the scientific committee of the 54th CIRP Conference on Manufacturing Systems
Keywords: Data-Driven Product Planning; Data Analytics Use Cases; Industrial Data Analytics; Strategic Product Planning
1. Introduction
Recent technical developments enable the systematic
collection and analysis of huge amounts of usage data from
cyber-physical systems. The insights generated from data
analyses can help manufacturers to uncover hidden customer
needs and product weaknesses [1].
As a general framework, standard models like CRISP-DM
show the relevant process steps of data analytics projects:
business understanding, data understanding, data preparation,
modeling, evaluation, and deployment [2]. Yet, these models assume that promising analytics use cases are already defined or can be defined easily. In reality, this step presents a major challenge for
manufacturers [3, 4]. Also, the translation and specification
process from a business-driven use case towards a concrete
analytics task is challenging [5]. Here, stakeholders involved
such as business or product managers and data scientists need
to work closely together and establish mutually comprehensible communication to ensure that the business
objectives are aligned with the analytics activities. This
intersection has proven to be difficult [6]. While product
managers have a clear understanding of their strategic goals
(e.g., increasing a product’s reliability), they often do not know
how analytics can help them achieve those goals. Therefore,
they have difficulties defining appropriate use cases. Similarly,
data scientists tend to find it difficult to relate their algorithms
and analysis results to the business goals and generate real
business value. Consequently, establishing well-functioning communication is essential to unlock the potential of data analytics in product planning [7].
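The CRISP-DM sequence described above can be sketched as an ordered pipeline. The following fragment is a minimal illustration only (the phase handling is a placeholder, CRISP-DM prescribes the phases, not an implementation), and it makes the gap discussed here visible: the pipeline presupposes an already defined use case as its input.

```python
# The six CRISP-DM phases [2] as an ordered pipeline sketch. The phase
# handling below is a deliberately minimal placeholder.

CRISP_DM_PHASES = [
    "business understanding",
    "data understanding",
    "data preparation",
    "modeling",
    "evaluation",
    "deployment",
]

def run_crisp_dm(use_case):
    """Walk an already-defined use case through the six phases.

    Real projects iterate and loop back between phases; here we only
    record the order in which the phases are visited.
    """
    log = []
    for phase in CRISP_DM_PHASES:
        log.append((phase, use_case))
    return log

log = run_crisp_dm("analyzing failures")
```

Note that `run_crisp_dm` takes the use case as a given argument; defining that argument is precisely the step that standard models leave open.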
As a first step, it is important to bridge the gap between product
managers and data scientists. Here, two research questions
arise: (1) How can promising data analytics use cases be
defined in product planning? (2) How can these be specified
and translated into concrete analytics tasks?
The paper at hand is structured as follows: In section 2, we
describe our research design. To ensure a comprehensible
presentation, the results for the definition of use cases are
displayed in section 3 while the results for the specification of
use cases are shown in section 4. Finally, section 5 presents the
discussion of the results and the conclusion.
2 Maurice Meyer et al. / Procedia CIRP 104 (2021) 1179–1184
2. Research Design
The identification of data analytics use cases in product
planning presents a major challenge for both manufacturing
companies and research. Therefore, both perspectives must be
considered in the research process. We chose the canonical
action research method (CAR) as it represents an iterative,
rigorous, and collaborative approach for the development of
organizations and the generation of scientific knowledge [8].
CAR consists of five stages: diagnosis, action planning, action taking, evaluation, and reflection (see Fig. 1), which are explained below [9].
Figure 1: Canonical Action Research Process Model [9]
The entrance into the CAR process is the diagnosis stage.
Here, the problem at hand is identified and analyzed. On this
basis, the intended actions are specified during action planning.
In action taking, the actions are implemented, and data are
collected. Subsequently, the evaluation of the data aims at
comparing the outcomes with the objectives. Within the
reflection stage, a decision to start a new CAR cycle or exit the
process needs to be made [9]. The implementation and results
of the CAR process are described in the next sections.
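The iterative CAR cycle can be sketched as a simple loop. The callables below stand in for the researchers' activities and judgment; they are placeholders for illustration, not part of the CAR method itself.

```python
# Minimal sketch of the canonical action research (CAR) cycle [9].
# The act/evaluate/reflect callables are hypothetical placeholders.

def car(problem, act, evaluate, reflect, max_cycles=10):
    """Iterate diagnosis -> action planning -> action taking ->
    evaluation -> reflection until reflection decides to exit."""
    history = []
    for cycle in range(1, max_cycles + 1):
        diagnosis = f"analyzed: {problem}"   # diagnosis stage
        plan = f"actions addressing {diagnosis}"  # action planning
        data = act(plan)                     # action taking: implement, collect data
        outcome = evaluate(data)             # compare outcomes with objectives
        history.append((cycle, outcome))
        if reflect(outcome):                 # reflection: exit or start a new cycle
            break
    return history

history = car(
    "use case definition",
    act=lambda plan: "workshop observations",
    evaluate=lambda data: "objectives partly met",
    reflect=lambda outcome: True,  # exit after the first cycle
)
```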
3. Definition of Data Analytics Use Cases
The definition of promising data analytics use cases for
product planning confronts manufacturing companies with lots
of challenges. This section comprises the results of the CAR
process for this task.
3.1. Diagnosis
The definition of promising data analytics use cases
comprises several aspects to be dealt with. The central aspect,
however, is the question or hypothesis to be investigated. In our
previous research, we used so-called product hypotheses to find
opportunities for data analyses [10, 11]. While this systematic
process has led to promising product hypotheses, the research
needs to be extended because open, more exploratory questions
seem to provide more valuable insights than hypotheses [12].
Consequently, in addition to our approach for the hypotheses,
another approach to formulate questions for data analysis (DA
questions) was required.
In a first quick test with four manufacturing companies, we
noticed that practitioners needed strong support when
formulating DA questions. By constructing a few examples
(e.g., “Which events precede a certain failure?”), we enabled
them to formulate a few of their own DA questions. From these
experiences, we concluded that practitioners need to be
provided with exemplary use cases and DA questions as well
as a methodical approach for defining both on their own.
3.2. Action planning
To address the problems and causes identified in the
diagnosis, we formulated the working hypothesis that
universally valid use cases and exemplary DA questions exist
and can be applied across different companies and products. Given the practical relevance of the topic, we chose an inductive approach to identify the questions. For the specification of our
study, we set up a conceptual framework which shows the
relevant variables of the study and their relationships (see
Fig. 2).
Figure 2: Conceptual Framework for the Use Case Definition
The goal of the study was a set of meaningful and promising DA questions for product planning. These questions were to be formulated by the participants for the products under consideration. To
improve the relevance of the questions, each study iteration
was framed by a typical product planning task, e.g.,
“identifying customer needs” or “analyzing failures”.
Participants should only formulate questions that contribute to
the specified task. To improve the diversity of the questions,
participants should formulate them in various data analytics
stages (see section 3.3). These DA stages were illustrated by
numerous generic exemplary questions.
3.3. Action taking
The planned actions were implemented in two workshops.
In the following, the implementation and the results of the
workshops are described.
3.3.1. Workshop implementation
Building on the conceptual framework in section 3.2, we
conducted two workshops with two iterations each. Within
these, we varied the framework’s variables and tested their
effects. In the first workshop, partners from four manufacturing
companies were asked to collect potential DA questions for
their own products (see Table 1).
Table 1: Overview of participants of the first workshop
Position | Employees | Product
Senior business analyst | > 1000 | Systems for banks and commerce
Head of product development | > 100 | Ventilation systems
Head of digital product management | > 5000 | Industr. electronics and connectivity
Head of virtual product development | > 500 | Solid & sheet metal forming machines
To structure the questions, we used the six DA stages
remember, understand, apply, analyze, evaluate, and create
[13]. Both iterations of the workshop were divided into five-minute time slots for each DA stage for the participants to
create questions. For each stage, we prepared five generic
exemplary questions. In the first iteration, we chose the product
planning task “analyzing user behavior” to frame the study.
The second iteration focused on the task “analyzing failures”.
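As an illustration, the six DA stages can be paired with one exemplary question each for the task “analyzing failures”. Except for the second question, which appeared as an example in the quick test (section 3.1), the questions below and all stage assignments are our own hypothetical constructions, not the actual workshop material.

```python
# The six DA stages [13] with one illustrative question each for the
# product planning task "analyzing failures". Only the "understand"
# question stems from the paper's quick test; the rest (and every
# stage assignment) are hypothetical examples.
DA_STAGES_SIX = {
    "remember":   "Which failures occurred during the last year?",
    "understand": "Which events precede a certain failure?",
    "apply":      "Does the failure pattern of product A also appear for product B?",
    "analyze":    "Which usage conditions correlate with this failure?",
    "evaluate":   "Which failure type causes the longest downtimes?",
    "create":     "How should the product change to avoid this failure?",
}

for stage, question in DA_STAGES_SIX.items():
    print(f"{stage:>10}: {question}")
```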
In the second workshop, four product planning experts (see
Table 2) were asked to generate DA questions for the four
representative products washing machine, car, smartphone,
and CNC machining center.
Table 2: Overview of participants of the second workshop
Position | Organization
Senior engineer | University
Team leader | University
Research assistant | University
Research assistant | University
This time, we only used the three DA stages detection, diagnosis, and prediction [14], testing a coarser division (see
Fig. 3). For each stage, seven minutes were reserved. The first
iteration was framed by the product planning task “identifying
customer needs”. The participants were not given exemplary
questions. In the second iteration, the study was framed by the
task “analyzing process quality”. The participants were given
five exemplary questions for each DA stage.
3.3.2. Results
During the workshops, we observed how the participants
dealt with the tasks. Afterwards, we asked all participants for
their impressions and opinions on the approach. The following
results show the aggregated observations and answers:
(1) DA Questions:
(R-D-1.1) Within one stage, questions were often very similar.
Often, only the object under investigation was different (e.g.,
motor was replaced by valve). Across the DA stages, questions
also regularly showed a high level of similarity as participants
often referred to earlier questions to derive new ones.
Figure 3: Setup for the Second Workshop (Excerpt)
(R-D-1.2) Especially for the highest DA stages, multiple
questions were not suited for data analysis.
(2) Product planning tasks:
(R-D-2.1) Many participants had difficulties focusing only on the given product planning task and felt restricted by the framing. They would rather have collected questions holistically.
(R-D-2.2) Some product planning tasks showed better results
than others. This is also supported by the participants' impression that some tasks were easier to deal with than others.
(3) Data analytics stages:
(R-D-3.1) The utilization of the three DA stages was
considered more intuitive and comprehensible compared to the
six DA stages. The latter led to confusion in the workshops.
(R-D-3.2) Participants found it harder to formulate promising
questions in higher DA stages. This impression is backed by
the qualitative and quantitative decline of the questions in
higher DA stages. Fittingly, the participants described the start
in lower DA stages as beneficial because questions seem to
become more complex in higher DA stages.
(4) Exemplary questions:
(R-D-4.1) The participants found the exemplary questions to be
indispensable. Especially in higher DA stages, they helped the
participants to understand the stages and formulate their own
questions. It was noted that specific examples would be a great
addition to the generic questions already available.
(R-D-4.2) As participants relied upon the exemplary questions,
they often formulated questions very similar to the examples.
(5) Product:
(R-D-5.1) For complex products, participants found it hard to formulate meaningful questions, especially if they were not experts.
(R-D-5.2) In contrast, for some product-task combinations
which are easy to understand (e.g., analyzing process quality
for smartphones), participants even felt inhibited as they
thought about too many questions simultaneously.
3.4. Evaluation
After the workshops, we evaluated the results of our study
and derived the following five findings:
(F-1) Specific DA questions for different products often
display high similarities (see R-D-1.1). These observations
imply that specific questions can easily be transferred to other
products. However, the high similarities observed also reveal
the need for creating a higher diversity of the questions.
(F-2) Practitioners need support to assess the quality of their questions, especially in higher DA stages (see R-D-3.2).
(F-3) Experts seem to ask better and more focused questions for the considered product and task (see R-D-5.1).
(F-4) Participants need to be provided with exemplary
questions (see R-D-4.1). However, the usage of the questions
must be methodically planned, as participants tend to copy the
exemplary questions and thus lose creativity (see R-D-4.2).
(F-5) For complex products, it could be necessary to only
consider certain parts like subsystems to facilitate the creation
of DA questions (see R-D-5.1). This could also help when
participants feel overwhelmed by too many potential questions
(see R-D-5.2).
3.5. Reflection
The results show that DA questions for different products
can be quite similar. This indicates that universally valid use
cases might exist (see working hypothesis in section 3.2).
Further research is necessary to find and leverage them to help
companies identify their own promising use cases.
4. Specification of Data Analytics Use Cases
Once a data analytics use case is defined, it needs to be
specified and translated into a concrete analytics problem. In
the following, the results of the CAR concerning the
specification of data analytics use cases are presented.
4.1. Diagnosis
In our previous work, we noticed that companies face major
challenges when asked to specify use cases and approach
analytics problems. Here, especially small and medium-sized
enterprises struggle with finding and developing adequate
analytics solutions for incoming DA questions. In our experience, this has two main reasons: First, the necessary competencies to conduct data analytics projects are only partially available in most companies. Second, outsourcing all analytics activities is not a satisfactory solution, as data analytics deeply depends on expert product knowledge, especially for complex
products. Hence, even though they lack competencies and
experience, companies need to work on solutions for
themselves. To identify a promising solution approach, several
factors need to be considered, e.g., algorithm choice and the
corresponding data analytics pipeline with the data preparation
steps. Here again, further factors must be considered such as
proficiency in the business domain, the algorithm’s ease of use,
the analytics goal as well as data complexity and structure [15].
To overcome these challenges, data scientists in manufacturing
companies need to be provided with a methodical approach and
tools.
4.2. Action planning
As a working hypothesis, we assume that similar DA
questions can be clustered and addressed by only one essential
solution approach. This generic solution approach helps the
data scientists to investigate the analytics problem. To test this
working hypothesis within our study, we again set up a
conceptual framework which displays the relevant variables
and the relationships between them (see Fig. 4).
Figure 4: Conceptual Framework for the Use Case Specification
The starting point was the set of DA questions previously defined by the product experts. Following our working hypothesis,
these should be grouped into analytics clusters based on the
underlying analytics problem. The result should be clusters of
various questions reflecting equal or at least similar analytics
problems. The analytics clusters should be structured by the DA
stages and the basic questions data analytics can answer (see section 4.3). Each analytics cluster should be addressed with one
essential solution approach helping the data scientists to
translate the DA questions into analytics problems which can
then be analyzed. Here, we also aimed at investigating how data
scientists approached the identified analytics problem.
4.3. Action taking
The proposed conceptual framework was tested in a
workshop. The implementation and the corresponding results
are presented in the following subsections.
4.3.1. Workshop implementation
The workshop was conducted with four data science experts
from a research institute, a university, and a large
manufacturing company (see Table 3).
Table 3: Overview of participants of the third workshop
Position | Department | Organization
Head of department | Industrial data science | Research institute
Research assistant | Industrial data science | Research institute
Research assistant | Machine tools | University
Machine Learning & Data Analysis Developer | Pre-Development | Manufacturing company
The data science experts worked with the DA questions
obtained in the workshops from section 3. First, they were
asked to sort the questions into clusters. The sorting criterion
was the presumed similarity of the solution approach for the
DA questions. Subsequently, each cluster was to be given a concise name or a representative question.
Second, the clusters were sorted into a portfolio based on the
three DA stages from section 3 and the five basic questions data
analytics can answer: (1) Is this A or B? (2) Is this weird? (3)
How much or how many? (4) How is it organized? (5) What
should I do next? [16]. Then, each cluster was assigned an
algorithm class. The procedure is illustrated in Fig. 5.
Figure 5: Schematic Procedure of the Specification Workshop
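This procedure can be sketched compactly in Python: a cluster of DA questions is placed in a portfolio cell (DA stage × basic question) and thereby mapped to an algorithm class. The question-to-algorithm-class mapping follows the cited Microsoft material [16]; the example cluster and its placement in the “diagnosis” row are our own illustrative assumptions.

```python
# Sketch of the specification step: a question cluster is placed in a
# portfolio cell (DA stage x basic DA question) and assigned an
# algorithm class. Mapping per the cited Microsoft material [16];
# the example cluster and its cell placement are illustrative only.

DA_STAGES = ("detection", "diagnosis", "prediction")

ALGORITHM_CLASS = {
    "Is this A or B?": "classification",
    "Is this weird?": "anomaly detection",
    "How much or how many?": "regression",
    "How is it organized?": "clustering",
    "What should I do next?": "reinforcement learning",
}

def place_cluster(questions, stage, basic_question):
    """Place a cluster in the portfolio and derive its algorithm class."""
    if stage not in DA_STAGES:
        raise ValueError(f"unknown DA stage: {stage}")
    return {
        "questions": questions,
        "cell": (stage, basic_question),
        "algorithm_class": ALGORITHM_CLASS[basic_question],
    }

cluster = place_cluster(
    ["What are critical failures?", "What conditions are critical?"],
    stage="diagnosis",
    basic_question="Is this weird?",
)
```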
Finally, the experts were asked to answer the following
questions based on exemplary DA questions:
(1) Which information is important besides the question to start
with the data analysis, e.g., to select an analytics pipeline?
(2) What steps do data scientists go through until they start with
a specific model?
4.3.2. Results
After the workshop, we asked the experts for their
impressions when working on the tasks. From their responses
and our observations during the workshop, the following
aggregated results are derived:
(1) DA questions:
(R-S-1.1) The data scientists found it hard to understand the
DA questions without any context. Often, there were multiple
ways for interpretation which would lead to different analytics
problems and consequently to different solution approaches.
(R-S-1.2) Also, the data scientists had difficulties extracting the
underlying common question of all elements of a cluster.
(2) Analytics clusters:
(R-S-2.1) The data scientists translated the DA questions into
analytics problems using algorithm classes for each cluster.
(R-S-2.2) To manage the algorithm assignment, they generated
precise sub-questions, e.g., from “what are critical failures?”,
questions like “what is the condition in case of a machine
failure?” and “what conditions are critical?” were derived.
(R-S-2.3) Even though they had difficulties working with the
clusters, they noted that the approach seemed promising.
(3) Basic DA questions and DA stages:
(R-S-3.1) The basic DA questions were not perceived as very
helpful and quickly re-translated into algorithm classes.
(R-S-3.2) The placement of clusters in the portfolio turned out
to be difficult. Lots of discussions were necessary to find a spot.
(4) Solution approaches:
(R-S-4.1) The data scientists lacked information (e.g., the
analysis goal) for assigning the algorithm class to the clusters.
(5) Procedure:
(R-S-5.1) The data scientists based their general procedure on
standard models such as CRISP-DM and stressed the
importance of such models for product planning.
(R-S-5.2) Yet they also emphasized that more concrete models
are missing for more efficient and flawless projects.
(R-S-5.3) They suggested that such models could be focused
on certain problem classes within the product creation process.
(R-S-5.4) Also, they noted that some type of checklist would
help them to overview all points to be clarified and all details
necessary before the start of the analytics project.
4.4. Evaluation
From these results, we derived the following findings:
(F-6) For the data scientists, the DA questions formulated by
the product experts do not provide enough information to
derive a solution approach (see R-S-1.1 and R-S-4.1). The
product experts need to provide further details like goals and
problems, e.g., by using a questionnaire or a checklist. A more
standardized question format could also be investigated to
facilitate the assignment of an algorithm class. Templates for
formulating questions could be helpful in this regard.
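One conceivable shape for such a standardized question format is sketched below. The fields are our assumption of what the “further details” could comprise; the paper itself does not prescribe a concrete format.

```python
# Illustrative sketch of a standardized DA question format (see F-6).
# All field names are hypothetical; the paper calls for such a
# checklist-style format without specifying one.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DAQuestionSpec:
    question: str                 # the DA question itself
    analysis_goal: str            # business goal behind the question
    product_context: str          # product or subsystem under investigation
    available_data: List[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        """Checklist-style gate before handing the question to a data scientist."""
        return all([self.question, self.analysis_goal,
                    self.product_context, self.available_data])

spec = DAQuestionSpec(
    question="What are critical failures?",
    analysis_goal="Increase the product's reliability",
    product_context="ventilation system",
    available_data=["error logs", "sensor time series"],
)
```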
(F-7) The creation of suitable, analysis-enabling sub-questions
seems to be a critical factor. The underlying mechanisms for
deriving such questions need to be investigated (see R-S-2.2).
(F-8) The basic DA questions were no help to data scientists as
they tend to focus on algorithm classes from the beginning (see
R-S-3.1). For them, a clustering focused on the underlying
analytics problems seems more promising (see R-S-2.1). Yet,
the basic DA questions could be useful for product experts to
generate use cases and specific DA questions.
(F-9) There is a need for a more concrete and focused
procedure model for conducting analytics projects in product
planning (see R-S-5.2). Data scientists could benefit from such
a model as it would address the unique characteristics more
than a general model like CRISP-DM, e.g., the requirements of
data acquisition.
(F-10) In combination with a comprehensible checklist, the
specific process model could increase the efficiency and
quality of analytics projects (see R-S-5.2 and R-S-5.4).
4.5. Reflection
The results and the findings suggest that our second working hypothesis might be true (see section 4.2). However, further
investigation of the translation and specification process of use
cases is necessary.
5. Discussion and Conclusion
The definition and the specification of use cases in usage
data-driven product planning are challenging tasks. Through
our study, we generated valuable knowledge about challenges,
requirements, and potential solutions. In the following, we
discuss the key insights and limitations of our study. Finally,
we conclude with implications for future research.
5.1. Key Insights
The definition of promising use cases for data-driven
product planning presents manufacturing companies with huge
challenges. Comprehensible examples are essential to stimulate
the practitioners’ creativity for creating their own use cases.
Likewise, a methodical approach for the definition of use cases
is necessary. For the data scientists to understand the use cases,
product experts need to provide contextual information for the questions to be answered; e.g., stating the reason for the analysis can support a common understanding.
strong connection between use case definition and specification
as well as the need to synchronize both. For the use case
specification, data scientists need a specialized process model.
Such a model could address the unique characteristics and
obstacles of product planning.
5.2. Limitations
There are several limitations to our study. First, as we chose
a qualitative research approach, the sample size of twelve
workshop participants is rather small and may result in a
limited generalizability and transferability of the results. The
acquired results need to be checked and validated with
additional practitioners. Second, by conducting workshops as
part of the CAR process, we put the participants into an artificially created situation that bears little resemblance to their daily work. The participants may have behaved differently
compared to their usual work environment. Third, as the
workshops were conducted virtually, the observation of the participants' behavior was restricted. An in-person workshop may
have provided even richer insights through observation.
5.3. Implications for Future Research
The results imply various future research needs. (1) A new
approach for the identification and utilization of universally
valid use cases needs to be developed. (2) Research should
focus on measuring the quality of questions (e.g., with a
checklist). (3) The development of a specialized procedure
model for conducting analytics projects in product planning is
necessary. (4) The translation of DA questions into analyzable
analytics problems requires further research.
Acknowledgements
This work is funded by the German Federal Ministry of
Education and Research (BMBF).
References
[1] Porter ME and Heppelmann JE. How Smart, Connected Products Are
Transforming Companies. Harvard Business Review, Vol. 93 (10), 2015,
p. 101, p. 112.
[2] Shearer C. The CRISP-DM Model: The new blueprint for data mining.
Journal of Data Warehousing, Vol. 5, 2000, pp. 13-22.
[3] Wilberg J, Schäfer F, Kandlbinder P, Hollauer C, Omer M and Lindemann
U. Data Analytics in Product Development: Implications from Expert
Interviews. 2017 IEEE International Conference on Industrial
Engineering and Engineering Management (IEEM), Singapore, December
10-13, 2017, p. 822.
[4] Wilberg J, Triep I, Hollauer C and Omer M. Big Data in Product
Development: Need for a data strategy. 2017 Portland International
Conference on Management of Engineering and Technology (PICMET),
Portland, Oregon, USA, July 9-13, 2017, p. 5-6.
[5] Kühn A, Joppen R, Reinhart F, Röltgen D, von Enzberg S & Dumitrescu
R. Analytics Canvas – A Framework for the Design and Specification of
Data Analytics Projects. Procedia CIRP, 70, 2018, p. 163.
[6] Nalchigar S, Yu E. Business-Driven Data Analytics: A Conceptual
Modeling Framework. In: Data & Knowledge Engineering 117, 2018, p.
359.
[7] Rogers B, Maguire E, Nishi A. Data & Advanced Analytics: High Stakes,
High Rewards. Forbes Insights, EY, 2017, p. 36.
[8] Baskerville R, Wood-Harper AT. Diversity in Information Systems
Action Research Methods. European Journal of Information Systems,
Vol. 7 (2), 2000, p. 97.
[9] Davison R, Martinsons MG, Kock N. Principles of Canonical Action
Research. Information Systems Journal, Vol. 14 (1), 2004, p. 72.
[10] Meyer M, Frank M, Massmann M, Wendt N, Dumitrescu R. Data-Driven
Product Generation and Retrofit Planning. Procedia CIRP, Vol. 93 , 2020,
pp. 965-970.
[11] Meyer M, Frank M, Massmann M, Wendt N, Dumitrescu R. Research and
Consulting in Data-Driven Strategic Product Planning. Journal of
Systemics, Cybernetics and Informatics. Vol 18 (2), 2020, pp. 55-61.
[12] Erevelles S, Fukawa N, Swayne L. Big Data consumer analytics and the
transformation of marketing. Journal of Business Research, Vol. 69 (2),
2016, pp. 900-902.
[13] Egorenkov A.: How to ask questions data science can solve. 2017.
Available at: https://towardsdatascience.com/how-to-ask-questions-data-
science-can-solve-e073d6a06236 (accessed 9 November 2020).
[14] Steenstrup K, Sallam R, Eriksen L, Jacobson S. Industrial Analytics
Revolutionizes Big Data in the Digital Business. Gartner Research, 2014.
Available at: https://www.gartner.com/en/documents/2826118/industrial-
analytics-revolutionizes-big-data-in-the-digi (accessed 21 November
2020).
[15] Moustafa RM, Nassef M, Salah A. Categorization of Factors Affecting
Classification Algorithms Selection. International Journal of Data Mining
& Knowledge Management Process (IJDKP), Vol 9, 2019.
[16] Gilley S, Rohrer B. Data Science for Beginners. Microsoft Azure Machine
Learning, 2019. Available at: https://docs.microsoft.com/de-
de/azure/machine-learning/classic/data-science-for-beginners-the-5-
questions-data-science-answers (accessed 7 November 2020).
... A successful means to support the definition of use cases is the utilization of generic use cases and examples. This has been shown, for example, by Meyer et al. in their paper [8]. There, the authors describe the results of a study in which they asked eight workshop participants to define use cases for analyzing use phase data in product planning. ...
... The results of the study show that without the examples, participants found it significantly more difficult to develop meaningful use cases on their own. With the generic examples, they achieved quantitatively and qualitatively better results [8]. ...
Article
Full-text available
Product planning is transforming. For decades, product managers searched for potentials for improvement of their products using methods such as interviews and workshops with customers. Since the information gained with these methods was predominantly qualitative and often also incomplete, product managers also had to rely on their own experience as well as on assumptions about potentials for improvement. However, as a result of the transformation from mechatronic products to cyber-physical systems, a product’s use phase can now be investigated in detail utilizing extensive use phase data. For example, the strengths and weaknesses of the product, as well as the behavior of customers and users, can be observed. The analysis of these data enables product planning based on facts. Currently, however, product managers struggle to identify potentials for improvement resulting from use phase data analyses. In addition to a lack of methods, they especially lack useful examples and use cases for the analysis of use phase data in product planning, which provide them with orientation in the sense of references to plan their analyses. To identify such use cases, we conducted two workshops with 17 product planning and data science experts. This paper presents the results of these workshops: 17 use cases for analyzing use phase data in product planning. Each use case includes exemplary questions which could be answered through data analytics and suggestions on the data required. These suggestions are based on five categories of use phase data that are also derived from the results of the two workshops. Furthermore, each use case is connected to specific elements of value to demonstrate its usefulness and its intended utilization. With these results, we present the first comprehensive overview of use cases for analyzing use phase data in product planning.
... Meyer et al. show that they have considerable difficulties in developing promising use cases for the analysis of data from the use phase. They need methodological support for planning data analyses (Meyer and Panzner et al., 2021). Therefore, in this paper, we address the research question of how to successfully plan the analysis of use phase data in product planning. ...
... In the process, the product managers involved are confronted with numerous difficulties, such as identifying promising questions. Based on these findings, the authors conclude that product managers need methodological support for planning the analysis of use phase data (Meyer and Panzner et al., 2021). To the best of our knowledge, no method addresses all sub-processes of the reference process model's planning main process. ...
Article
Full-text available
The ongoing digitalization of products offers product managers new potential to plan future product generations based on data from the use phase instead of assumptions. However, product managers often face difficulties in identifying promising opportunities for analyzing use phase data. In this paper, we propose a method for planning the analysis of use phase data in product planning. It leads product managers from the identification of promising investigation needs to the derivation of specific use cases. The application of the method is shown using the example of a manufacturing company.
... 2) Use Case Specification: For the use case specification, different specification methods such as the analytics canvas [23] or the specification method for use cases in product planning presented by Meyer et al. [24] can be used. An expert in specification methods who specifies the use case with the domain experts can be included optionally. ...
Conference Paper
It is often a problem to combine domain knowledge and data science knowledge in applications of industrial data analytics. Data scientists usually spend a lot of time to understand the domain to develop an application while domain experts lack the skills to interpret results of underlying mathematical models. This leads to difficulties when adapting to changes, handling issues and transfer to similar scenarios, and thus to a lack of acceptance of data analytics applications in industrial companies. Based on the Cross Industry Standard Process for Data Mining (CRISP-DM), we propose a novel process model which integrates training of domain experts to enable them to become citizen data scientists to independently develop and implement data analytics applications. We qualitatively evaluated our process model on a storage location assignment problem in the warehouse of a manufacturer of high-end domestic appliances.
... The fact that methodological support is necessary for this is shown, among other things, by the authors in section 2. We attempted to explore what this might look like in the context of product planning in a previous research. Here we tried to answer the research question "How can data analytics use cases in product planning be specified and translated into concrete analytics tasks?" [13]. The results of the study showed a clear need for supporting methods and tools for defining and specifying analytics use cases in product planning. ...
Article
Full-text available
Cyber-physical systems (CPS) generate huge amounts of data during the usage phase. By analyzing these data, CPS providers can systematically uncover hidden product improvement potentials for future product generations. The successful implementation of such analytics use cases depends to a large extent on whether the stakeholders involved succeed in coordinating their goals and procedures. In particular, product managers and data scientists must come to a common understanding in the context of defining and concretizing the use cases. A common vocabulary is necessary so that the data scientists or those responsible for analysis can determine target-oriented, analysis-capable use cases with which the processing of the data can start quickly and successfully. The research question that arises at this point is: How can business goals or use cases be translated into realizable analytics use cases or tasks? In this paper we present the Business-to-Analytics Canvas as a result of an action design research approach. It supports the translation of business use cases and goals into concrete data analytics tasks for product planning. By providing various information elements and guiding questions, the canvas helps data scientists translate the business goal into a data analytics approach, i.e., an algorithm class, and gather the necessary information to start processing data.
Article
Full-text available
Industry 4.0 and digitalization have transformed the industry. Many manufacturers create additional customer value by offering data-based services. However, companies can benefit from analyzing data themselves. Learning from product usage and behavior data enables them to systematically improve their products in future generations and retrofits. But using data in product planning is not trivial. Henceforth, we propose a methodology for data-driven product generation and retrofit planning. It includes all steps from data-based identification of optimization potentials to the implementation of improvements in future product generations and retrofits. The application of the methodology is demonstrated in a case study.
Article
Full-text available
Industry 4.0 and digitalization have transformed the industrial world. Many manufacturers create additional customer value by offering data-based services. However, companies can benefit from analyzing data themselves, too. Through data, companies can learn about product usage and behavior. This enables them to systematically improve their products. But finding improvements through data analysis is not trivial. Henceforth, we developed a method for the data-based identification of product improvements. This method was created in a joint research project with four companies from different industrial sectors. The paper at hand introduces our approach of combining research and consulting by means of a case study from our research project. The result is a research and consulting concept optimized for a two-day workshop. From our point of view, there is no other way to research methods for strategic product planning than working closely with companies. This is especially important as methods must be researched for practical usage. Simultaneously, it is essential to never forget that companies only participate in research projects if they clearly see a benefit. A benefit through consulting.
Article
Full-text available
In the process of selecting the most appropriate classification algorithm, there are two main tasks: determining the factors used in the selection process, and defining the methodology that makes use of these factors to decide upon the most appropriate algorithm for the problem. The first task is to characterize datasets, either by extracting their characteristics/meta-data or by adopting a reasonable characterization approach; the second is the learning and decision task based on that characterization. Choosing the most appropriate classification algorithm for a classification problem is becoming a strategically important task in the data-mining process. It is complicated by the availability of numerous classification algorithms for solving the same kind of problem, with little guidance on which algorithm yields the best results for the dataset at hand. To optimize the chances of recommending the most appropriate classification algorithm for a dataset, a two-step study was conducted: (1) a survey study of the factors considered by data miners and researchers when manually selecting classification algorithms, from which a categorization tree of the measurable factors was created; the tree groups and categorizes these factors so that they can be exploited by recommendation software tools; (2) an experimental study of an automated tool based on a pure collaborative filtering recommender system (User-Item approach) that recommends the most appropriate classification algorithm for a classification problem based on the meta-data of historical datasets. The tool follows a computable, step-by-step procedure to select the most appropriate classification algorithm automatically.
To give the tool flexibility and adaptability, the collaborative filtering recommendation approach was used, as it allows new entries to be added easily without rebuilding the model. The study also showed that no single factor or group of factors can be used alone to recommend a classification algorithm for a dataset. Although most studies of these factors rely crucially on dataset meta-data, other paths can be considered as well. The recommendation system learns by automatically rebuilding the similarity matrix with different datasets. Moreover, based on user feedback, the input dataset can be added to the main training data to enrich the tool. The experiments showed that the recommendations of the tool match the actual most accurate algorithms for more than 91% of the benchmark datasets used. These results demonstrate that the proposed recommendation tool can offer a reliable, accurate, flexible, and fast solution to the problem of selecting the most appropriate classification algorithm; with a sufficient amount of training data, even better recommendation results could be achieved.
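The User-Item recommendation idea described in this abstract can be sketched roughly as follows. This is a minimal illustration, not the study's actual tool: the meta-features (e.g. instance count, attribute count, class count), the algorithm names, and all accuracy values are invented for demonstration. The sketch finds the historical datasets most similar to a new dataset by cosine similarity over meta-features and recommends the algorithm with the highest similarity-weighted accuracy among those neighbours.

```python
import math

def cosine(a, b):
    # Cosine similarity between two meta-feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(meta, history, k=2):
    """Recommend a classification algorithm for a new dataset.

    meta    -- meta-feature vector of the new dataset
    history -- list of (meta_features, {algorithm: accuracy}) entries
    k       -- number of nearest historical datasets to consider
    """
    # Rank historical datasets by meta-feature similarity, keep the top k.
    neighbours = sorted(history, key=lambda h: cosine(meta, h[0]),
                        reverse=True)[:k]
    # Similarity-weighted accuracy per algorithm over the neighbours.
    scores, weights = {}, {}
    for feats, accs in neighbours:
        sim = cosine(meta, feats)
        for algo, acc in accs.items():
            scores[algo] = scores.get(algo, 0.0) + sim * acc
            weights[algo] = weights.get(algo, 0.0) + sim
    return max(scores, key=lambda a: scores[a] / weights[a])

# Invented history: (instances, attributes, classes) -> observed accuracies.
history = [
    ([100.0, 5.0, 2.0], {"tree": 0.81, "svm": 0.74}),
    ([90.0, 6.0, 2.0], {"tree": 0.79, "svm": 0.71}),
    ([5000.0, 40.0, 10.0], {"tree": 0.68, "svm": 0.88}),
]
print(recommend([95.0, 5.0, 2.0], history))  # → tree
```

Adding a new historical dataset is just appending an entry to `history`, which mirrors the flexibility argument above: the "model" is the similarity computation itself, so nothing has to be retrained.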
Conference Paper
Industry 4.0 and digitalization have transformed the industrial world. Many manufacturers create additional customer value by offering data-based services. However, companies can benefit from analyzing data themselves, too. Through data, companies can learn about product usage and behavior. This enables them to systematically improve their products. But finding improvements through data analysis is not trivial. Henceforth, we developed a method for the data-based identification of product improvements. This method was created in the joint research project DizRuPt with four companies from different industrial sectors. The paper at hand introduces our approach of combining research and consulting by means of a case study from our research project DizRuPt. The result is a research and consulting concept optimized for a two-day workshop. From our point of view, there is no other way to research methods for strategic product planning than working closely with companies. This is especially important as methods must be researched for practical usage. Simultaneously, it is essential to never forget that companies only participate in research projects if they clearly see a benefit. A benefit through consulting.
Article
The effective development of advanced data analytics solutions requires tackling challenges such as eliciting analytical requirements, designing the machine learning solution, and ensuring the alignment between analytics initiatives and business strategies, among others. The use of conceptual modeling methods and techniques is seen to be of considerable value in overcoming such challenges. This paper proposes a modeling framework (including a set of metamodels and a set of design catalogues) for requirements analysis and design of data analytics systems. It consists of three complementary modeling views: business view, analytics design view, and data preparation view. These views are linked together to connect enterprise strategies to analytics algorithms and to data preparation activities. The framework includes a set of design catalogues that codify and represent an organized body of business analytics design knowledge. As the first attempt to validate the framework, three real-world data analytics case studies are used to illustrate the expressiveness and usability of the framework. Findings suggest that the framework provides an adequate set of concepts to support the design and implementation of analytics solutions.