
LadderBot: A Requirements Self-Elicitation System

This is the author’s version of a work that was published in the following source
Rietz, T, and Maedche, A. (2019): LadderBot: A requirements self-elicitation system.
Proceedings of the 27th International Requirements Engineering Conference (2019).
Jeju Island, South Korea, September 23–27.
Please note: Copyright is owned by the author and / or the publisher.
Commercial use is not allowed.
Institute of Information Systems and Marketing (IISM)
Fritz-Erler-Strasse 23
76133 Karlsruhe - Germany
http://iism.kit.edu
Karlsruhe Service Research Institute (KSRI)
Kaiserstraße 89
76133 Karlsruhe – Germany
http://ksri.kit.edu
© 2019. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/
LadderBot:
A requirements self-elicitation system
Tim Rietz
Institute of Information Systems and Marketing (IISM)
Karlsruhe Institute of Technology (KIT)
Karlsruhe, Germany
tim.rietz@kit.edu
Alexander Maedche
Institute of Information Systems and Marketing (IISM)
Karlsruhe Institute of Technology (KIT)
Karlsruhe, Germany
alexander.maedche@kit.edu
Abstract—[Context] Digital transformation impacts an ever-increasing amount of everyone’s business and private life. It is imperative to incorporate user requirements in the development process to design successful information systems (IS). Hence, requirements elicitation (RE) is increasingly performed by users that are novices at contributing requirements to IS development projects. [Objective] We need to develop RE systems that are capable of assisting a wide audience of users in communicating their needs and requirements. Prominent methods, such as elicitation interviews, are challenging to apply in such a context, as time and location constraints limit potential audiences. [Research Method] We present the prototypical self-elicitation system “LadderBot”. A conversational agent (CA) enables end-users to articulate needs and requirements on the grounds of the laddering method. The CA mimics a human (expert) interviewer’s capability to rephrase questions and provide assistance in the process. An experimental study is proposed to evaluate LadderBot against an established questionnaire-based laddering approach. [Contribution] This work-in-progress introduces the chatbot LadderBot as a tool to guide novice users during requirements self-elicitation using the laddering technique. Furthermore, we present the design of an experimental study and outline the next steps and a vision for the future.
Index Terms: User, Requirements Elicitation, Wide Audience, Conversational Agent, Self-Elicitation, Laddering
I. INTRODUCTION
Digital transformation has brought a variety of information systems into everyone’s business and private life with a substantial
impact on business and society [1]. We observe a transformation towards a digital society, stressing the influence of the Internet on
many traditional services, which advocates a power shift towards the user [2]. In the face of persistently high failure rates of IS
development projects, it is imperative that an increasing number of users is involved in RE processes, with a varying degree of
technological and methodological expertise [3]. The scalable elicitation of user requirements is crucial for developing software that
meets needs and demands and to reduce project failure [4]. Consequently, RE needs to be performed with a wide range of users that
are novices at contributing requirements to development projects [1].
For requirements elicitation, interviews have been used most widely [5]. Especially the laddering interview is considered a very
effective technique for eliciting relevant information for articulating requirements [5]. Laddering produces comprehensive and struc-
tured insights due to the method’s hierarchical nature. In laddering, an interviewer identifies a seed attribute, an initial topic, and
asks a series of “why…?” questions to uncover and clarify needs and related attitudes [6]. While having its roots in personality
psychology, laddering has already seen usage for requirements elicitation [4] (e.g., to elicit Customer Attribute Hierarchies [7]).
Essentially, requirements are elicited as attribute-consequence-value (ACV) chains [6]. Since laddering interviews require highly
trained and experienced interviewers, the availability of suitable interviewers imposes a bottleneck onto elicitation interviews [6].
Tool support is necessary to enable requirements elicitation with a wide range and number of users [8].
Several tools to aid with wide audience elicitation have been proposed over the years. AnnotatePro allows users to submit require-
ments that can be drawn on their screens [9]. Given the common problems with requirements quality, such as completeness and
ambiguity, exploring natural-language (NL) based elicitation systems gained traction. Pérez and Valderas (2009) combine visualization-based RE with NL to reduce ambiguity and inconsistency in end-user RE [11]. Derrick et al. (2013) evaluated an embodied conversational agent to facilitate a group workshop that used prompts to guide and assist during user story formulation [10]. However, these
tools do not suffice in providing a solution to both challenges introduced: Annotation-based tools primarily enable RE for iteratively
improving existing systems; NL tools commonly require a requirements engineer to facilitate the process, hence retaining a bottleneck
for wide audience integration [11]; additionally, existing research rarely considers (methodological) guidance for novice end-users.
Tools such as FAME [12] and ASSERT [13] cater to novices, but only on the side of a novice analyst, not novice users, hence not
enabling self-elicitation. A literature gap remains in extending RE techniques to wide audiences. Guidance and assistance are neces-
sary to facilitate the elicitation of high-quality requirements from novice users [14], [15]. We utilize a conversational agent (CA) in
the form of a chatbot to mimic a human interviewer’s capability to guide an interview [10]. Chatbots allow us to include a wide
audience of users, independent of personal, time, or location restrictions and may guide novice users through laddering interviews.
Therein, we extend our previous research on (semi-)automated RE by explicitly focusing on the collection of unstructured data on the basis of self-elicitation interviews [16], [17].
II. CONCEPTUAL FOUNDATIONS
A. Common issues in user elicitation interviews
To understand the implications for a novice-centric self-elicitation system, we need to understand the characteristics of the requirements (self-)elicitation behavior of novices. In this article, we refer to self-elicitation of requirements rather than a self-service RE system. As the user is guided in uncovering their requirements, rather than being enabled to create a service with a direct benefit for themselves, we argue that self-elicitation serves as a better term to describe the process.
So far, RE literature rarely focuses on characteristics of the novice users to be supported in elicitation processes [1]. Commonly, novice RE analysts are the focus of supporting activities [18]. However, insights from analyzing the behavior of novice analysts in elicitation processes may serve as a guideline for how to provide appropriate support for requirements self-elicitation.
Notably, one of the most frequently observed pitfalls in elicitation performed with novice users or by novice analysts is a lack of structure [19]. A lack of structure results in interviewers not digging deep enough when conducting interviews, impacting requirements correctness [15]. Since especially novice users are not familiar with communicating requirements, which may be rooted in an incomplete understanding of their own needs, the task of uncovering the cause of a need or requirement falls to the interviewer. Otherwise, interviews lead to ambiguous user statements at the wrong level of abstraction [13]. Without uncovering the cause of, or foundation for, user needs, the development of disruptive solutions stagnates. We can avoid common mistakes of novice analysts that happen during interviews, such as question formulation, ordering, and question omission, through a pre-defined interview structure [18]. Furthermore, the analyst’s behavior, such as a lack of confidence, unprofessionalism, or inadequate time management, has a substantial impact on the results of an interview [21]. Hence, both structural and behavioral interview guidelines are necessary for eliciting high-quality requirements.
Analysts should be educated in thinking in relations, hence applying model-based reasoning rather than object-attribute reasoning, to increase the performance of requirements analysis [20]. We propose that by using an elicitation structure following the laddering technique, we can enable users to generate requirements in a relation-focused fashion, contributing to the quality of the requirements specification. Fig. 1 provides an overview of how the conceptual foundations feed into the development of LadderBot.
B. The laddering interview technique for RE
Laddering is a cognitive interview technique with its roots in personality psychology that utilizes a structured approach for data-gathering [6]. For RE, cognitive techniques, in comparison to traditional, collaborative, or contextual techniques, are commonly used to acquire knowledge. As such, requirements are not directly communicated but extracted from the structure and content of user knowledge, based on rich enough information [21]. Herein, cognitive techniques provide the most natural interaction with end-users [21].
Fig. 1. Overview of the conceptual foundations of LadderBot
Laddering was introduced as a method to elicit superordinate items from subordinate ones, to clarify the relations between items obtained using the repertory grid method, with its origin in personal construct theory. However, the laddering technique has primarily been used for knowledge-elicitation in marketing and advertising [22]. As such, the technique has become a tool for the means-end theory in marketing. The means-end theory distinguishes three levels of abstraction of the meaning that users obtain from a purchase or consumption [6]. These three levels are described as ACV chains: attributes, consequences, and values [23]. Attributes as the least abstract
level describe “concrete, physical, or observable characteristics” of products. Despite the notion initially describing physical products,
we may use the idea for digital products like software, too [24]. Consequences constitute the second level of abstraction. They describe
what a product provides a user with, either on the positive (benefits) or negative side (costs). A product can have functional or non-
functional, e.g., psychosocial, consequences. Values are the most abstract level. They represent a user’s wishes, goals, and needs and
are the end state a customer is trying to achieve through a purchase. An exemplary ACV chain in a software development context has
the following form: Providing default values (A) → No need to fill out data repeatedly (C) → Happiness (V) [24].
The laddering technique usually comprises three steps: elicitation of attributes, a laddering interview, and representing and analyzing the results. Attributes serve as the seed for the interview, in the form of lower-order characteristics with implications for higher-order cognitive processes, and determine the direction of the interview. As such, multiple methods of generating attributes have been used, depending on the purpose of the related study. The laddering interview itself follows a straightforward structure. Participants are asked why a particular attribute is important to them, using a series of “why…?” questions while navigating through the ACV chains. E.g., an interviewer might ask, “why is starting process X from the landing page important?”. A content-coding procedure initializes
the analysis process of laddering interviews. These codes are then used to build a summary matrix, visualizing each chain from each
participant, showing the included codes per chain. Subsequently, an aggregate implication matrix is formed, showing the aggregated
information across interviews. This matrix contains all direct and indirect relations between attributes, consequences, and values.
Finally, we can visualize the aggregate implication matrix as a hierarchy value map, a tree diagram showing either only direct or both
direct and indirect relations at a specified cut-off value (for examples, see [24]–[26]).
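The aggregation step described above can be sketched in a few lines of code. The following is a minimal illustration in Python (chosen for brevity; it is not part of LadderBot's actual node.js implementation), with invented content codes standing in for real coded interview data:

```python
from collections import Counter
from itertools import combinations

def aggregate_implication_matrix(coded_chains):
    """Count direct (adjacent) and indirect (non-adjacent, same order)
    relations between content codes across all participants' chains."""
    direct, indirect = Counter(), Counter()
    for chain in coded_chains:
        for i, j in combinations(range(len(chain)), 2):
            pair = (chain[i], chain[j])
            if j == i + 1:
                direct[pair] += 1   # adjacent codes: direct link
            else:
                indirect[pair] += 1  # codes with intermediaries: indirect link
    return direct, indirect

# Invented example: two participants' coded attribute-consequence-value chains.
chains = [
    ["default values", "less typing", "happiness"],
    ["default values", "less typing", "efficiency"],
]
direct, indirect = aggregate_implication_matrix(chains)
# ("default values", "less typing") occurs as a direct link in both chains.
```

A hierarchy value map can then be derived by keeping only the cells of these counters that reach the chosen cut-off value.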
C. Form and Function of Chatbots
The goal of CAs, as McTear (2002) puts it, is the “[…] effortless, spontaneous communication with a computer” [29]. Klopfenstein et al. (2017) conducted a systematic analysis of one of the instantiations of CAs, chatbots, categorizing advantages for users and developers [27]. They find instant availability, a gentle learning curve, and platform independence to be among the most prominent
benefits. Hence, we argue that chatbots serve as a promising form of CAs for approaching a large number of users. Instant availability and platform independence enable barrier-free interaction with the system. A gentle learning curve, resulting from an interaction mode that is familiar to novice users, texting, creates an effortless experience [27]. Multiple variants of chatbots have seen use
over the years, which can be differentiated according to form and function [28]. The form of a chatbot describes the arrangement of
aspects that do not primarily contribute to the utility of the bot (similar to non-functional requirements). For example, anthropomorphism comprises methods for making the appearance and behavior of a bot more human-like. Function describes aspects related to
general performance, such as the bot’s dialogue control strategy. A frame-based bot uses question templates to provide information
back to a user. These systems do not have pre-determined dialogue flows but adapt to user input, e.g., a software problem repair tool [29].
Despite a renewed research interest in chatbots, due to advances in artificial intelligence [30], the integration of CAs into RE remains sparse. Derrick et al. (2013) investigated the effect of a simple scripted agent in facilitating group elicitation sessions with
users [10] while other studies developed prototypes for frame-based agents in interview scenarios [31], [32]. While these studies
evaluated the general applicability of CAs as facilitators of elicitation processes, to the best of our knowledge, no evaluation of
chatbot-based requirements elicitation with a wide audience of end-users has been conducted, comparing the performance of a
system with established processes on the basis of measures such as performance and perception [5].
III. LADDERBOT
LadderBot uses a two-column visualization, with a graphical representation of ACV chains on the left and a frame-based chatbot on the right, as shown in Fig. 2. Initially, LadderBot welcomes users and provides a short explanation of the interface and the interview process. We adapted the subsequent laddering interview structure from Jung (2014) [25]. To begin the interview, LadderBot asks the user
to state the three most frequently used features of a system as seed attributes for each chain. The following process is then repeated until participants have constructed three chains. At the beginning of each chain, LadderBot asks an initial question to elicit the first consequence for the current attribute:
LadderBot: “As second example, you said Email. Why do you use Email? What do you obtain by using the function?”
User: “I need to know if someone needs something from me, and see if I got any updates from the services I signed up for.”
Rather than asking an initial default question, LadderBot integrates the specific attribute that users selected into the question formulation. The line of questioning for consequences and values is repeated until a value is identified, or the user is unable to provide a more precise answer. When asking why-questions repeatedly, the chatbot relies on four techniques for rephrasing questions to help and guide the user. We adapted these techniques from suggestions for human interviewers on how to conduct laddering interviews [6], as described in Table I. Fig. 3 depicts a visual overview of how the solution works in a laddering interview. For now, the four techniques are applied by LadderBot randomly. The rephrasing techniques primarily incorporate the seed attribute of the current ladder into the question formulation.
Fig. 2. The interface of LadderBot
User replies are used for rephrasing only in the form of quotes, to ensure that the resulting question makes sense. The visualization
of the current status of the interview on the left side updates for each elicited consequence. The graphical representation of ACV
chains may assist users in structuring their thoughts and uncovering new relations [20]. When asking a series of questions, a human
interviewer would need to identify if the user has described the value that they satisfy through an attribute to end the elicitation for a
specific attribute or to end the interview in general (e.g. [25]). As the current iteration of LadderBot is not capable of recognizing
whether a user has already described a final value on its own, the bot requires the user to indicate if they want to continue the laddering
process for the current attribute, or switch to the next chain. The user can make this indication with a predefined command (“stop”).
The questioning process for each of the three ladders is continued until the stop command is given and LadderBot concludes the
session.
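The interview flow just described amounts to a simple per-chain questioning loop. The sketch below illustrates it in Python rather than the bot's actual node.js implementation; the function name, message wording, and scripted replies are our own illustrative assumptions, with only the "stop" command taken from the paper:

```python
def laddering_session(seed_attributes, get_reply):
    """Run one laddering interview: for each seed attribute, keep asking
    why-questions until the user sends the predefined 'stop' command."""
    chains = []
    for attribute in seed_attributes:
        chain = [attribute]
        question = f"Why do you use {attribute}? What do you obtain by using it?"
        while True:
            reply = get_reply(question)
            if reply.strip().lower() == "stop":  # user ends this ladder
                break
            chain.append(reply)
            question = f"And why is that important to you, thinking about {attribute}?"
        chains.append(chain)
    return chains

# Scripted replies standing in for a real user across three ladders.
script = iter(["reach people", "stop", "updates", "stay informed", "stop", "stop"])
chains = laddering_session(["Email", "Instagram", "Maps"], lambda q: next(script))
# → [["Email", "reach people"], ["Instagram", "updates", "stay informed"], ["Maps"]]
```

In the real system, each appended reply would also trigger an update of the ACV-chain visualization on the left-hand side of the interface.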
The current implementation of LadderBot does not impose restrictions on the length of a user’s answer, to keep the interaction with the chatbot as natural as possible. Long replies pose a challenge for LadderBot in formulating an appropriate question as a response. As such, user replies are incorporated in questions only as complete references. Furthermore, LadderBot uses the three features provided by the users at the start of the interview to formulate more direct questions, as we identified these replies to be rather short. Users are currently not able to make changes to previous answers. However, we plan to include this functionality
in future iterations. As the technological foundation of LadderBot, we use the Microsoft Bot Framework on node.js. To visualize elicited ACV chains, we integrate the bot into a web application built on the frameworks d3.js and Bootstrap. This architecture allows for a straightforward reconfiguration of the artifact to change the laddering use case or the interview structure.
Fig. 3. Activity map of LadderBot
IV. EXPERIMENTAL STUDY DESIGN
To evaluate LadderBot, we will conduct an experimental study. The experiment procedure and the applied measurements will
partially build on previous studies that evaluated elicitation techniques [33], [34] or used the laddering technique as part of their
experiment design [26].
We will conduct the study with students from a large university in Germany in an experimental lab designed for conducting
scientific studies. As the laddering case, we recreate the laddering structure applied by Jung (2014) [25] to elicit the users’ goals for
smartphone use. Such results may be used to uncover requirements to develop or improve an IS for smartphones. Similar to the
original study, we will invite students as participants, while controlling for the participants’ experience with development projects
and laddering interviews. Around 200 students will be invited, randomly selected from a pool of potential participants.
TABLE I. QUESTION REPHRASING TECHNIQUES
Negative laddering: Ask the user why they do not do something or do not want to feel a certain way. Example: “What problems could be caused by Email? How would Email have to change to mitigate these problems?”
Exclusion: Ask the user to imagine a situation where an attribute or consequence does not exist. Example: “Imagine you could not use Instagram. What alternatives to Instagram would you use and why?”
Retrospective: Ask the user to imagine their behavior in the past and compare it to now. Example: “Has your perception of this changed compared to a couple of years ago? If so, why is that and what changed?”
Clarification: Repeat a reply back to the user and ask for clarification. Example: “Okay, you just said ‘I want a real-time newsfeed’, right? In the context of Instagram, could you explain that to me in more detail?”
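The random selection of a rephrasing technique can be sketched compactly. In this Python illustration, the templates paraphrase Table I; the exact strings and the template-filling mechanism LadderBot uses may differ:

```python
import random

# Question templates paraphrasing the four rephrasing techniques in Table I.
# {a} is the seed attribute of the current ladder; {r} is the user's last reply.
TEMPLATES = {
    "negative laddering": "What problems could be caused by {a}? How would {a} have to change to mitigate them?",
    "exclusion": "Imagine you could not use {a}. What alternatives to {a} would you use and why?",
    "retrospective": "Has your perception of {a} changed compared to a couple of years ago? If so, why?",
    "clarification": 'Okay, you just said "{r}", right? In the context of {a}, could you explain that in more detail?',
}

def rephrase(attribute, last_reply, rng=random):
    """Pick one of the four techniques at random (as the current prototype does)
    and fill in the seed attribute and, for clarification, the quoted reply."""
    technique = rng.choice(list(TEMPLATES))
    return technique, TEMPLATES[technique].format(a=attribute, r=last_reply)

technique, question = rephrase("Instagram", "I want a real-time newsfeed")
# Every template incorporates the seed attribute of the current ladder.
```

Replacing the random choice with a selection driven by interview measurements, as outlined in the roadmap, would only require swapping out the `rng.choice` line.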
The experimental study will use a between-subject design with three treatments. Across treatments, participants will be asked to
conduct a self-elicitation of their goals in smartphone use. Treatments will be characterized by the available interview tool and the
interview visualization. In treatment (1), participants will use an established version of a “pencil-and-paper” laddering questionnaire
[26]. However, a digital questionnaire will be used to increase comparability with other treatments. In treatment (2), participants will
use the same questionnaire as in treatment (1) but augmented with the visualization used in LadderBot to keep track of already elicited
ladders. In treatment (3), participants will use LadderBot to complete the laddering interview. As such, only one of either the visualization or the interview tool presented to participants is changed between treatments. Thereby, we aim to increase the comparability of results between treatments while being able to evaluate the visualization and chatbot interface features of LadderBot separately.
We will evaluate the treatments using a combination of quantitative measurements. Herein, we rely on the established procedure
for analyzing the results of the laddering interviews [6]. We will calculate abstractness and centrality based on an aggregate implication matrix, which represents direct and indirect linkages between attributes, consequences, and values. Abstractness indicates
whether constructs are predominantly at the beginning (attributes) or ends (values) of a chain. Constructs become increasingly abstract
from means to ends. As such, it is a measure of importance in the means-ends structure [6]. Centrality measures the extent to which
a concept is connected to all other concepts in the matrix and is used to evaluate the importance of a concept. Additionally, we will
use the number of direct/indirect links, the number of elicited consequences and values, and the time taken for comparing treatments [33]. Furthermore, after the treatments, we will apply a self-reporting questionnaire to collect the participants’ perceptions regarding
the following constructs on a 7-point Likert scale: Understandability, Learnability, Efficiency, Effectiveness, and Enjoyment [34], as
well as multiple constructs from the Big Five personality test. The self-reported measurements allow us to compare the perception of
LadderBot against the established computer-based laddering questionnaire. Finally, we will incorporate multiple control questions,
to evaluate the influence of experience, age, or gender, amongst others, on the experiment results.
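The abstractness and centrality measures can be computed directly from the aggregate implication matrix. The sketch below uses the definitions commonly found in the means-end literature (an assumption on our part, since the paper does not spell out the formulas); the toy matrix is invented:

```python
def abstractness_centrality(matrix):
    """Compute per-concept abstractness and centrality from an aggregate
    implication matrix given as {(source, target): link_count}.
    Assumed (common) definitions:
      abstractness = in-degree / (in-degree + out-degree)   # 1.0 = pure value/end
      centrality   = (in-degree + out-degree) / total links # share of all linkages
    """
    concepts = {c for pair in matrix for c in pair}
    total = sum(matrix.values())
    indeg = {c: sum(v for (s, t), v in matrix.items() if t == c) for c in concepts}
    outdeg = {c: sum(v for (s, t), v in matrix.items() if s == c) for c in concepts}
    return {
        c: {
            "abstractness": indeg[c] / (indeg[c] + outdeg[c]),
            "centrality": (indeg[c] + outdeg[c]) / total,
        }
        for c in concepts
    }

# Invented toy matrix: attribute -> consequence -> value.
m = {("default values", "less typing"): 2, ("less typing", "happiness"): 1}
scores = abstractness_centrality(m)
# "default values" has no incoming links, so its abstractness is 0.0 (pure attribute).
```

Under these definitions, concepts near the value end of chains score high on abstractness, while concepts involved in many linkages score high on centrality, matching the interpretation given above.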
V. ROADMAP AND CONCLUSION
This paper presents our work-in-progress for building LadderBot, a requirements self-elicitation system capable of guiding a novice user through a laddering interview to generate attribute-consequence-value chains, as follows: Elicitation guidance & assistance: the user is supported through randomized rephrasing of questions based on an established guideline for interviewers. Dynamic visualization: elicited attributes, consequences, and values are visualized for the user and continuously updated throughout the interview process.
We propose an experimental study design to evaluate LadderBot against the traditional approach of pencil-and-paper laddering
using a digital questionnaire. As we will use the proposed structure for the evaluation of LadderBot and its subsequent iterations, the
scenario and the generated dataset might be helpful for other researchers for comparing CA-driven tool support for RE. Several
comparisons of elicitation techniques have identified laddering as a very potent technique. However, only a limited amount of research
describes approaches to creating tool support for laddering, especially for tool-supported self-elicitation of user requirements. A
similar approach to our work-in-progress comes from Kassel and Malloy (2003), who attempt to automate requirements elicitation
through combining domain knowledge, a software requirements specification (SRS) template and user needs as XML in a tool-based
approach [35]. However, their focus lies on closed-ended questions, while the laddering tool proposed in our article relies on the
detail introduced by open-ended questions.
Overall, we expect LadderBot to allow the elicitation of requirements from users without the need for highly qualified interviewers. Furthermore, enabling users to self-elicit requirements creates the potential to come into contact with a broad range of users, hopefully improving software development projects through detailed insights. In the spirit of “RE for everyone” [1], tool support for users enables developers to get an idea of the expectations of society and supports the end-to-end value co-creation between an outer and an inner circle of systems development teams: between users and system engineers, analysts and developers. Additionally, with
LadderBot, we also wish to show a proof-of-concept for using chatbots for RE, which may inspire the usage of the technology with
elicitation techniques other than laddering in the future (e.g., 5W2H).
We are currently working on finalizing the LadderBot artifact and setting up a pre-test for the initial evaluation of the tool. Moving forward, we envision multiple adjustments to LadderBot, which will be evaluated in future studies: enabling the tool to select an interviewing technique (retrospective, …) not randomly but based on measurements from the interview process, such as the time since asking a question, or based on user characteristics, e.g., cognitive styles [36]. For example, should a user diverge by a specified amount from the average response time, the bot may provide additional assistance through question reformulation. Furthermore, future iterations of LadderBot will explore ways of generating content codes for the analysis of laddering interviews automatically. When dealing with a large number of self-elicitation interviews, it becomes necessary to provide requirements analysts with support in generating aggregate implication matrices and hierarchy value maps, ideally through an automated aggregation of results as well as an interactive visualization.
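The envisioned response-time trigger could look roughly like the following. This is an illustrative sketch only; the divergence factor and the comparison against a running average are our assumptions, not a specified design:

```python
def needs_assistance(response_times, latest, factor=2.0):
    """Flag a user for extra guidance when the latest response time diverges
    from their running average by more than `factor` (illustrative threshold)."""
    if not response_times:
        return False  # no baseline yet, nothing to compare against
    avg = sum(response_times) / len(response_times)
    return latest > factor * avg or latest < avg / factor

# A user who usually answers in ~10 s suddenly takes 45 s:
# the bot could then offer a rephrased question as assistance.
```

A very slow reply may signal that the user is stuck (triggering a rephrasing), while an unusually fast one may signal a low-effort answer worth a clarification prompt.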
REFERENCES
[1] K. Villela et al., “Towards ubiquitous RE: A perspective on requirements engineering in the era of digital transformation,” in 2018 IEEE 26th International Requirements Engineering Conference (RE’18), 2018, pp. 205–216.
[2] J. M. Leimeister, H. Österle, and S. Alter, “Digital services for consumers,” Electron. Mark., vol. 24, no. 4, pp. 255–258, 2014.
[3] J. Jia and L. F. Capretz, “Direct and mediating influences of user-developer perception gaps in requirements understanding on user participation,” Requir. Eng., vol. 23, no. 2, pp. 277–290, 2018.
[4] H. F. Hofmann and F. Lehner, “Requirements Engineering as a Success Factor in Software Projects,” IEEE Softw., vol. 18, no. 4, pp. 58–66, 2001.
[5] O. Dieste and N. Juristo, “Systematic review and aggregation of empirical studies on elicitation techniques,” IEEE Trans. Softw. Eng., vol. 37, no. 2, pp. 283–304, 2011.
[6] G. M. Breakwell, Doing Social Psychology Research, 1st ed. The British Psychological Society and Blackwell Publishing Ltd, 2004.
[7] C. H. Chen, L. P. Khoo, and W. Yan, “A strategy for acquiring customer requirement patterns using laddering technique and ART2 neural network,” Adv. Eng. Informatics, vol. 16, no. 3, pp. 229–240, 2002.
[8] O. Dieste, M. Lopez, and F. Ramos, “Updating a Systematic Review about Selection of Software Requirements Elicitation Techniques,” in 11th Workshop on Requirements Engineering, 2008.
[9] A. Rashid, D. Meder, J. Wiesenberger, and A. Behm, “Visual requirement specification in end-user participation,” in First International Workshop on Multimedia Requirements Engineering, MeRE’06, 2006.
[10] D. C. Derrick, A. Read, C. Nguyen, A. Callens, and G. J. De Vreede, “Automated group facilitation for gathering wide audience end-user requirements,” in Annual Hawaii International Conference on System Sciences (HICSS’13), 2013, pp. 195–204.
[11] F. Pérez and P. Valderas, “Allowing end-users to actively participate within the elicitation of pervasive system requirements through immediate visualization,” in 2009 4th Int. Work. Requir. Eng. Vis., 2009, pp. 31–40.
[12] M. Oriol et al., “FAME: Supporting continuous requirements elicitation by combining user feedback and monitoring,” in 2018 IEEE 26th International Requirements Engineering Conference (RE’18), 2018, pp. 217–227.
[13] A. Moitra et al., “Towards development of complete and conflict-free requirements,” in 2018 IEEE 26th International Requirements Engineering Conference (RE’18), 2018, pp. 286–296.
[14] I. Mohedas, S. R. Daly, and K. H. Sienko, “Requirements Development: Approaches and Behaviors of Novice Designers,” J. Mech. Des., vol. 137, no. 7, pp. 1–10, Jul. 2015.
[15] J. Kato et al., “A model for navigating interview processes in requirements elicitation,” in Proceedings of the Asia-Pacific Software Engineering Conference and International Computer Science Conference, APSEC and ICSC, 2001, pp. 141–148.
[16] H. Meth, M. Brhel, and A. Maedche, “The state of the art in automated requirements elicitation,” Inf. Softw. Technol., vol. 55, no. 10, pp. 1695–1709, 2013.
[17] H. Meth, B. Mueller, and A. Maedche, “Designing a requirement mining system,” J. Assoc. Inf. Syst., vol. 16, no. 9, pp. 799–837, 2015.
[18] M. Bano, D. Zowghi, A. Ferrari, P. Spoletini, and B. Donati, “Learning from mistakes: An empirical study of elicitation interviews performed by novices,” in 2018 IEEE 26th International Requirements Engineering Conference (RE’18), 2018, pp. 182–193.
[19] T. Yamanaka, H. Noguchi, S. Yato, and S. Komiya, “A proposal of a method to navigate interview-driven software requirements elicitation work,” WSEAS Trans. Inf. Sci. Appl., vol. 7, no. 6, pp. 784–798, 2010.
[20] I.-L. Huang and J. R. Burns, “A Cognitive Comparison of Modelling Behaviors Between Novice and Expert Information Analysts,” in Sixth Americas Conference on Information Systems (AMCIS 2000), 2000, pp. 1316–1322.
[21] T. Tuunanen, “A new perspective on requirements elicitation methods,” J. Inf. Technol. Theory Appl., vol. 5, no. 3, pp. 45–72, 2003.
[22] T. Tuunanen and M. Rossi, “Engineering a method for wide audience requirements elicitation and integrating it to software development,” in 2004 37th Annual Hawaii International Conference on System Sciences (HICSS’04), 2004, pp. 1–10.
[23] M. S. Mulvey, J. C. Olson, R. L. Celsi, and B. A. Walker, “Exploring the Relationships between Means End Knowledge and Involvement,” Adv. Consum. Res., vol. 21, pp. 51–57, 1994.
[24] C. M. Chiu, “Applying means-end chain theory to eliciting system requirements and understanding users’ perceptual orientations,” Inf. Manag., vol. 42, no. 3, pp. 455–468, 2005.
[25] Y. Jung, “What a smartphone is to me: Understanding user values in using smartphones,” Inf. Syst. J., vol. 24, no. 4, pp. 299–321, 2014.
[26] G. Botschen, E. M. Thelen, and R. Pieters, “Using means‐end structures for benefit segmentation,” Eur. J. Mark., vol. 33, no. 1/2, pp. 38–58, 2004.
[27] L. C. Klopfenstein, S. Delpriori, S. Malatini, and A. Bogliolo, “The Rise of Bots: A Survey of Conversational Interfaces, Patterns, and Paradigms,” in 2017 Conference on Designing Interactive Systems (DIS’17), 2017, pp. 555–565.
[28] T. Rietz, I. Benke, and A. Maedche, “The Impact of Anthropomorphic and Functional Chatbot Design Features in Enterprise Collaboration Systems on User Acceptance,” in 2019 14th International Conference on Wirtschaftsinformatik (WI’19), 2019, pp. 1656–1670.
[29] M. F. McTear, “Spoken dialogue technology: enabling the conversational user interface,” ACM Comput. Surv., vol. 34, no. 1, pp. 90–169, 2002.
[30] U. Gnewuch, S. Morana, and A. Maedche, “Towards Designing Cooperative and Social Conversational Agents for Customer Service,” in
2017 International Conference on Information Systems (ICIS’17), 2017, pp. 113.
[31] J. F. Nunamaker, D. C. Derrick, A. C. Elkins, J. K. Burgoon, and M. W. Patton, “Embodied Conversational Agent-Based Kiosk for Automated
Interviewing,” J. Manag. Inf. Syst., vol. 28, no. 1, pp. 1748, 2011.
[32] M. Pickard, R. M. Schuetzler, J. Valacich, and D. A. Wood, “Next-Generation Accounting Interviewing: A Comparison of Human and
Embodied Conversational Agents (ECAs) as Interviewers,” SSRN Electron. J., no. April, pp. 121, 2017.
[33] C. Corbridge, G. Rugg, N. P. Major, N. R. Shadbolt, and A. M. Burton, “Laddering: technique and tool use in knowledge acquisition,”
Knowledge Acquisition, vol. 6. pp. 315341, 1994.
[34] C. R. Coulin, “A Situational Approach and Intelligent Tool for Collaborative Requirements Elicitation,” University of Technology, Sydney,
2007.
[35] N. W. Kassel and B. A. Malloy, “An Approach to Automate Requirements Elicitation and Specification,” Proc. 7th Int. Conf. Softw. Eng.
Appl., pp. 544549, 2003.
[36] O. Blazhenkova and M. Kozhevnikov, “The new object-spatial-verbal cognitive style model: Theory and measurement,” Appl. Cogn.
Psychol., vol. 23, no. 5, pp. 638663, Jul. 2009.
We are now living in the era of digital transformation: Innovative and digital business models are transforming the global business world and society. However, the authors of this paper have perceived barriers that prevent requirements engineers from contributing properly to the development of the software systems that underpin the digital transformation. We also realized that breaking down each of these barriers would contribute to requirements engineering (RE) becoming ubiquitous in certain dimensions: RE everywhere, with everyone, for everything, automated, accepting openness, and cross-domain. In this paper, we analyze each dimension of ubiquity in the scope of the interaction between requirements engineers and end users. In particular, we point out the transformation that is required to break down each barrier, present the perspective of the scientific community and our own practical perspective, and discuss our vision on how to achieve this dimension of ubiquity. Our goal is to raise the interest of the research community in providing approaches to address the barriers and move towards ubiquitous RE.