Providing Foundation for User Feedback Concepts by
Extending a Communication Ontology
Itzel Morales-Ramirez1,2, Anna Perini1, and Renata Guizzardi3
1Software Engineering Research Unit. Fondazione Bruno Kessler - IRST
2International Doctoral School ICT- University of Trento, Italy
3Ontology and Conceptual Modeling Research Group, UFES, Brazil
Abstract. The term user feedback is becoming widely used in requirements en-
gineering (RE) research to refer to the comments and evaluations that users ex-
press upon having experienced the use of a software application or service. This
explicit feedback takes place in virtual spaces (e.g., issue tracking systems, app
stores), aiming, for instance, at reporting on discovered bugs or requesting new
features. Founding the notion of explicit user feedback with the use of an ontology
may support a deep understanding of the feedback nature, as well as contribute
to the development of tool-components for its analysis for use by requirements
analysts. In this paper, we present a user feedback ontology as an extension of
an existing communication ontology. We describe how we built it, along with a
set of competency questions, and illustrate its applicability on an example taken
from a collaborative communication related to RE for software evolution.
Keywords: User Feedback, Communication Ontology, Requirements Engineering
1 Introduction
More and more, software users make use of social media to express their comments
about a software service or to rate applications they often use. This information, which
is easily accessible via the Internet, is considered an invaluable asset for software de-
velopers and service providers, to help them understand how to improve their software
application or service, and get inspiration for new functionalities and products [1, 2].
This information deliberately provided by the users is generally called explicit user
feedback and can be used as a complement to implicit user feedback, i.e. information
collected by means of observing the user behaviour through logs.
In [3] it is stated, “Feedback is one of the primary results of introducing the im-
plemented software system into the real world. There is an immediate response to the
system from those affected by it”. This definition leads us to recognise user feedback
both as an artefact and as a process. Taking the perspective of user feedback as an arte-
fact, we revise the previous definition and propose the following one: “User feedback
is a reaction of the user upon her experience in using a software service or application.
Explicit user feedback could be based on multi-modal communication, such as natural
language text, images, emoticons, etc.”. Taking the perspective of the communication
process, we claim that the roles of sender and receiver are essential to make clear the
purpose of explicit feedback.
In our research, we focus on explicit user feedback, with the ultimate goal of defin-
ing methods and techniques to support software system maintenance, as well as col-
laborative RE tasks as described in a recent paper [4]. We consider issues related to
collecting and analysing user feedback. Indeed, different techniques may be needed for
collection and for analysis, depending on the feedback type, and on the volume of the
corresponding data. On one hand, structured feedback is collected according to a pre-
defined input template. Consequently, in this case the analysis is driven by the schema
underlying the template itself. On the other hand, unstructured user feedback is col-
lected freely, without the aid of any predefined structure. Thus, the analysis in this case
may require data mining and natural language processing (NLP) techniques to discover,
retrieve, and extract information and opinions from huge textual information [5].
In this paper we present an ontology of user feedback, which is key to a deep understanding
of explicit user feedback and its further exploitation. To build the user feedback
ontology, we adopt a goal-oriented methodology that guides us in specifying the main
stakeholders, including the users of our ontology (e.g. designers of feedback collector
tools and analysts), and a set of competency questions (CQs) that the resulting ontology
will answer [6]. For example, the following CQs emerged from the analysis of the goals
of feedback analysts: (a) what are the types of user feedback presentation formats?;
(b) how can user feedback be classified?; and (c) what are the speech acts commonly
expressed by users in their explicit feedback?
Since feedback collection is seen as a communication process, we develop our on-
tology as an extension of a well-founded existing communication ontology [7]. Besides
this, we consider users’ feedback expressed in natural language (NL), therefore we take
into account the speech act theory (SAT) of Searle [8, 9].
The rest of the paper is organised as follows. Section 2 presents the concepts we
borrow from pre-existing ontologies and SAT theory. Section 3 describes the user feed-
back ontology and the concepts that it involves. Related work is recalled in Section 4.
Finally we draw some conclusions and point out future work in Section 5.
2 Baseline
Collecting user feedback basically consists in a process of communication between the
developers and the users of existing software systems. We build on an existing Commu-
nication ontology [7] and extend it including user feedback concepts. This Communica-
tion ontology is especially attractive because it is grounded on a foundational ontology,
namely the Unified Foundational Ontology (UFO) [10], which has been successfully
applied to provide real-world semantics for ontologies in different fields. In the follow-
ing we briefly recall the concepts that we reuse from UFO and from the Communication ontology.
UFO distinguishes between endurants and perdurants. Endurants do not have tem-
poral parts, and persist in time while keeping their identity (e.g. a person and the colour
of an apple). A Perdurant (also referred to as event), conversely, is composed of tem-
poral parts (e.g. storm, heart attack, trip). Substantials are existentially independent
endurants (e.g. a person or a car). Moments, in contrast, are endurants that are ex-
istentially dependent on other endurants or events, inhering in these individuals (e.g.
someone’s headache and the cost of a trip). Moments can be intrinsic or relational. In-
trinsic Moments are those that depend on one single individual in which they inhere.
A Relator is a relational moment, i.e. a moment that inheres simultaneously in multi-
ple individuals. Agents are substantials that can perceive events, perform action con-
tributions and bear special kinds of intrinsic moments, named Intentional Moments
(examples of agents are person, student and software developer). Action contributions
are intentional participations of agents within an event (e.g. saying something to some-
one, writing a letter). An Intention is a type of intentional moment (other examples
of intentional moments are belief and desire) that represents an internal commitment
of the agent to act towards that goal and, therefore, causes the agent to perform action contributions.
The communication ontology considers sender and receiver as the central agents in
the communication process. The Sender is an agent that sends a message through a
communicative act. A Communicative Act is an action contribution that carries out
the information exchanged in the communication process. This communicative act cor-
responds to what Searle names illocutionary act [8]. The exchanged information is here
captured as a Message, which is the propositional content of the communicative act.
A Receiver is an agent that perceives the communicated message. Like the communicative
act, a Perception is an action contribution that consists in the reception of the
exchanged information (thus, also having a message as propositional content). A Com-
municative Interaction is a complex action composed of exactly one communicative
act and one or more perceptions. In other words, in a communicative interaction, there
is one sender agent and at least one receiver agent. Moreover, the communicative act
and the perceptions involved in a communicative interaction have the same message as
propositional content. Thus, this message is also said to be the propositional content of
the communicative interaction.
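As an illustration only, the structural constraints just described (exactly one communicative act, one or more perceptions, all sharing the same message) can be sketched as Python dataclasses. The class names mirror the ontology concepts, but the code below is a hypothetical rendering for this text, not part of the original ontology or its tooling:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Agent:
    name: str

@dataclass
class Message:
    content: str  # propositional content exchanged in the interaction

@dataclass
class CommunicativeAct:
    sender: Agent
    message: Message  # the act carries the message as propositional content

@dataclass
class Perception:
    receiver: Agent
    message: Message  # reception of the same exchanged information

@dataclass
class CommunicativeInteraction:
    act: CommunicativeAct          # exactly one communicative act
    perceptions: List[Perception]  # one or more perceptions

    def __post_init__(self):
        # A communicative interaction needs at least one receiver, and the
        # act and all perceptions must share the same propositional content.
        assert self.perceptions, "at least one perception is required"
        assert all(p.message is self.act.message for p in self.perceptions), \
            "act and perceptions must share the same message"

# Example: one sender and one receiver exchanging the same message
msg = Message("Is there any example code I could look at?")
alan = Agent("alan")
community = Agent("XWiki community")
interaction = CommunicativeInteraction(
    act=CommunicativeAct(sender=alan, message=msg),
    perceptions=[Perception(receiver=community, message=msg)],
)
```

The `__post_init__` check encodes the constraint that the message is the propositional content of both the communicative act and every perception in the interaction.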
In addition, since the gathering of feedback is seen as a communication process,
we extend the communication ontology with concepts coming from the literature about
the characterisation, elaboration and use of user feedback. Specifically, we look at the
feedback expressed in NL, hence, we rely on the linguistic theory of SAT [8, 9].
3 User Feedback Ontology
In this section we explain the concepts of our ontology. Let’s start with the concepts
of the baseline, which are illustrated with a dark grey colour in Fig. 1. We have dis-
tinguished three key concepts of the communication ontology to be extended. These
concepts are intention, message and communicative act.
We extend the concept of Intention into communicative intention and reflexive intention.
The Communicative Intention refers to the internal commitment of a sender
to convey information to a receiver or an audience, regardless of having this
information understood. A Reflexive Intention, in contrast, according to H.P. Grice, as quoted
in [9], refers to the sender’s intention that is formulated and transmitted with the purpose
of being recognised or understood by a receiver.
A message bears a given Topic; in UFO, a topic is an intrinsic moment (i.e. a property
of the message) that becomes the subject of conversation between agents. The last
Fig. 1. Concepts extending the communication ontology. Baseline concepts in dark grey, new
concepts extending the ontology in white.
concept that is added and central in our ontology is the concept Speech Act, which is
per se an action, i.e. an action contribution, and is the basic unit in a linguistic communication.
Communication ontology and SAT concepts. A speech act involves three commu-
nicative acts, namely, locutionary, illocutionary and perlocutionary act that we consider
specialisations of a communicative act. These acts are visualised in Fig. 2, in which we
refine the relation between speech act and communicative act and connect the speech
act directly to each one of the acts. A Locutionary Act is the act of “saying something”
(production of words), an Illocutionary Act makes reference to the way in which the
locutions are used and in which sense (intention to motivate the production of words),
and a Perlocutionary Act is the effect the sender wants to accomplish on the receiver or
audience. Let’s consider the utterance “Is there any example code I could look at?”: the
locutionary act corresponds to the utterance of this sentence, the illocutionary act
corresponds to the speaker’s intention to make the audience aware that she has a request,
and the effect, i.e. the perlocutionary act, is that the speaker got the audience to handle
her request. A speech act involves at least one act; that is, someone could
utter a senseless phrase (e.g., “one the snow”), accomplishing the locutionary act, but
the illocutionary and perlocutionary acts are not present. The Performative verb refers
to the verb that classifies the illocutionary act into five categories that were introduced
by Searle, and later revised by Bach and Harnish [9].
An intention inheres in the sender, in this case the reflexive intention that causes
the illocutionary act. Then, this illocutionary act triggers the execution of a locutionary
act through the utterance of specific words that will reify such a reflexive intention. The
consequence of the illocutionary act is the sender’s perlocutionary act that the receiver
will perceive as the overall effect of a speech act. We need to clarify that the relation
consequence of between the illocutionary and perlocutionary act is a type of indirect
causation, i.e. if the communication is successful, the receiver will perform the action
intended by the sender. Finally, a speech act is successful if the receiver recognises the
intention that the sender expresses.
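The success condition just stated can be rendered as a minimal Python sketch. The function name and the intention labels below are ours, introduced purely for illustration, not drawn from the ontology:

```python
def speech_act_successful(sender_intention: str, receiver_recognised: str) -> bool:
    """A speech act succeeds when the receiver recognises the very intention
    (the reflexive intention) that the sender expressed."""
    return sender_intention == receiver_recognised

# The sender's reflexive intention causes the illocutionary act, which
# triggers the locutionary act (the uttered words); only when communication
# succeeds does the perlocutionary act follow as an indirect consequence.
assert speech_act_successful("make a request", "make a request")
assert not speech_act_successful("make a request", "state a fact")
```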
Fig. 2. Communicative acts involved in each speech act.
SAT concepts. Our user feedback ontology builds on a revised taxonomy of speech
acts proposed in a previous work [11], which considers speech acts commonly used in
online discussions of an open source project. We grouped the selected speech acts in
three main categories: constative, directive, and expressive. We are currently consol-
idating the part of the ontology that considers concepts that are specific to feedback
collection and analysis. An excerpt may be found in [6]. For reasons of space we only
show the connection of the reflexive intention to the intention interpreted by a software
requirements analyst. Once the analysis of the speech acts is performed, different re-
sults can be presented. For example, the categorisation of the feedback according to the
user’s reflexive intention. As can be found in the recent literature on feedback from soft-
ware users, possible intentions for a user to send feedback are Clarification Request,
New Feature Request, Bug Report and Provide Rating. For instance, Bug Report
is inspired by corrective or negative feedback [12, 13], since in this case the user feed-
back refers to information that should be used to correct the software, which means that
this information has a negative connotation. The encouraging [14] and positive [13]
feedback have been turned into Rating (e.g., stars in the AppStore) [2]. Strategic be-
haviour [14] in our ontology refers to a New Feature Request while Clarification
means that the feedback contains questions or extra information (such as critical de-
tails), to make something clearer. These terms are indeed used in different works and
our feedback ontology attempts at unifying all the different concepts in a single classification.
3.1 Illustrative Example
The following example, see Fig. 3, illustrates how the ontology may support the anal-
ysis of user feedback. We take an excerpt of an e-mail sent by a user of a software
application called XWiki. Note that the ontological concepts are highlighted with a
different font type to facilitate the understanding. This e-mail represents an instance of
unstructured user feedback. The fields Subject: and From: are the concepts Topic and
Sender, respectively, of our ontology. In the body of the e-mail we can distinguish dif-
ferent types of speech acts. In this example we find that the first speech act expresses
the user’s reflexive intention of making questions to be answered. After some other
messages, we find the intention suppose, referring to the speech acts suppositives. The
followed two speech acts requirement and question are also expressed.
Subject: Latex html in editor for math formulas? (Topic)
From: alan (Sender)
Hi folks,
(1) Will there be an additional button in the editor, to implement formulas?
Or how can I use formulas, maybe you know a good latex html live editor.
it would be nice to have something, if there already isnt?
Fig. 3. Example of unstructured user feedback in the XWiki mailing list.
Now let’s see a detailed explanation of the analysis performed. Taking the previous
example, we first see that the Message (1) is the propositional content of a Communicative
Act. As highlighted before, this message bears the Topic “Latex html in editor for
math formulas?” and the Sender is alan. The Reflexive Intention Make a question
inheres in alan that causes the Illocutionary act Quest. This act produces an effect, i.e.
Perlocutionary act that is a consequence of the Illocutionary act, which together with
the Locutionary act Elaborate a question are the three acts involved in the Speech
Act Question. The ontology supports the understanding of such an intention expressed
by a sender that must be recognised by a receiver. In this example, recognising that
the sender is expressing a question produces the Perlocutionary act Answer: it has an
effect on the receiver, i.e. the XWiki community, who, through the Perception Reading
(triggered by the communicative act), will eventually answer the posed question.
However, the other speech acts (i.e. Suppositives, Requirements, and Questions) may
provide the analyst with indicators for identifying a Feature Request in this feedback.
3.2 Discussion
At this stage of development the ontology is intended to clarify concepts useful to un-
derstand the nature of explicit feedback elaborated by software users and to support
requirements analysts when performing feedback analysis. Due to space limits, we only
discuss one of the proposed CQs, namely (c) what are the speech acts commonly ex-
pressed by users in their explicit feedback? This CQ is answered by querying the part
of the ontology where the concept speech act is specialised into the different sub-kinds
(see Fig. 2, bottom-centre). Answers to this question are taken into account in a tool-
supported analysis technique for explicit unstructured user feedback, which exploits
NLP tools. This tool supports the classification of phrases as instances of concepts rep-
resenting speech acts that commonly appear in user feedback [11]. The ontology allows
us to identify the relation between the types of speech acts (or their combination) and the
type of user feedback (see Fig. 2, bottom-left corner). For example, if the speech
acts used in the feedback under analysis are classified as Suppositive, Requirement, and
Questions, this may be interpreted as indicators for the analyst towards identifying a
Feature Request.
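This interpretation step can be sketched as a simple rule over speech-act labels. The sketch below is ours: it assumes hypothetical labels produced by an upstream classifier and covers only the Feature Request combination discussed in the text; it is not the authors' implementation:

```python
def suggest_feedback_type(speech_acts):
    """Map the speech-act labels found in one feedback item to a candidate
    user-feedback category, as an indicator for the analyst."""
    acts = set(speech_acts)
    # Suppositive + Requirement + Question together hint at a Feature Request
    if {"Suppositive", "Requirement", "Question"} <= acts:
        return "Feature Request"
    # Any other combination is left to the analyst's judgement
    return "Unclassified"

print(suggest_feedback_type(["Suppositive", "Requirement", "Question"]))
# prints "Feature Request"
```

In a fuller rule set one would add analogous combinations for Bug Report, Clarification Request, and Provide Rating, mirroring the classification of reflexive intentions given in Section 3.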
Other capabilities of the proposed ontology, which cannot be illustrated here due to
space limits, concern the support for the identification of analysis techniques, based on
the presentation format of the feedback, i.e. visual, audio and textual, or as a composi-
tion of linguistic and non-linguistic acts (e.g. attachments, emoticons).
Concerning the scope of application of the proposed ontology, by construction we con-
sider software requirements analysis and design of feedback collection techniques as
application areas. Since we are building the user feedback ontology by extending a
communication ontology grounded in a foundational one, we believe that it can help
understand the nature of user feedback.
Regarding the current status and known limitations of the work, we are consolidating
the ontology, and a proper validation against all the CQs considered so far is still to be performed.
4 Related Work
We briefly recall work addressing the problem that motivates our research, namely how
to collect and analyse explicit user feedback for the purpose of collaborative RE in
software evolution and maintenance. Worth mentioning are tools that enable users
to give feedback in situ, based on semi-structured collection, e.g. [15, 16]. Focusing on
works that investigate how explicit, indirect user feedback analysis can support software
maintenance and evolution tasks, the following approaches are worth mentioning:
[17] presents an approach based on statistical analysis to exploit explicit, indirect feed-
back by power users, reporting about software defects through the open bug reporting
in the Mozilla project. Statistical analysis is applied by [2] to answer questions about
how and when users provide feedback in the AppStore.
There is a vast literature about the application of ontologies in RE, but it is out of the
scope of this paper to mention all that work. As examples in the case of requirements
elicitation, we can mention [18] that presents an approach to build a domain ontology
used to guide the analyst on domain concepts used for the elicitation of requirements.
Our work differs from it because our ontology aims at supporting designers of feedback
collector tools and requirements analysts to understand why and how user feedback is
provided, as a preliminary step before determining the requirements.
5 Conclusion
In this paper we introduced a user feedback ontology that we are developing, which
considers feedback as an artefact and as a special type of communication process. We
described how we are building it by extending and integrating existing ontologies, bor-
rowing concepts from different theories, including SAT. As future work, we intend to
validate it systematically against the whole set of competency questions that it is built
for. The ontology will provide foundation to our ongoing work on conversation analysis
based on the automatic extraction of speech acts [11].
References
1. Kienle, H., Distante, D.: Evolution of Web Systems. In Mens, T., Serebrenik, A., Cleve, A.,
eds.: Evolving Software Systems. Springer (2014) 201–228
2. Pagano, D., Maalej, W.: User Feedback in the Appstore: An Empirical Study. In: RE, IEEE
(2013) 125–134
3. Madhavji, N.H., Fernández-Ramil, J.C., Perry, D.E., eds.: Software Evolution and Feedback:
Theory and Practice. John Wiley and Sons Ltd (2006)
4. Morales-Ramirez, I., Vergne, M., Morandini, M., Siena, A., Perini, A., Susi, A.: Who is
the Expert? Combining Intention and Knowledge of Online Discussants in Collaborative RE
Tasks. In: ICSE Companion, ACM (2014) 452–455
5. Cambria, E., Schuller, B., Xia, Y., Havasi, C.: New Avenues in Opinion Mining and Senti-
ment Analysis. IEEE Intelligent Systems 28(2) (2013) 15–21
6. Guizzardi, R.S.S., Morales-Ramirez, I., Perini, A.: A Goal-oriented Analysis to Guide the
Development of a User Feedback Ontology. In: iStar. CEUR Workshop Proceedings (2014)
7. Oliveira, F.F., Antunes, J.C., Guizzardi, R.S.: Towards a Collaboration Ontology. In: Proc.
of the Workshop on Ontologies and Metamodels for Software and Data Engineering. (2007)
8. Searle, J.R.: Intentionality: An Essay in the Philosophy of Mind. Number 143. Cambridge
University Press (1983)
9. Bach, K., Harnish, R.M.: Linguistic Communication and Speech Acts. MIT Press, Cam-
bridge, MA (1979)
10. Guizzardi, G., de Almeida Falbo, R., Guizzardi, R.S.S.: Grounding Software Domain On-
tologies in the Unified Foundational Ontology (UFO): The case of the ODE Software Process
Ontology. In: CIbSE. (2008) 127–140
11. Morales-Ramirez, I., Perini, A.: Discovering Speech Acts in Online Discussions: A Tool-
supported method. In: CAiSE Forum. CEUR Workshop Proceedings (2014) 137–144
12. Hattie, J., Timperley, H.: The Power of Feedback. Review of Educational Research 77(1)
(March 2007) 81–112
13. Brun, Y., Di Marzo Serugendo, G., Gacek, C., Giese, H., Kienle, H., Litoiu, M., Müller, H.,
Pezzè, M., Shaw, M.: Engineering Self-Adaptive Systems through Feedback Loops. In: SE
for Self-Adaptive Syst. Springer (2009) 48–70
14. Mory, E.H.: Feedback Research Revisited. Handbook of Research on Educational Commu-
nications and Technology 45(1) (2004) 745–784
15. Seyff, N., Ollmann, G., Bortenschlager, M.: AppEcho: a User-Driven, In Situ Feedback
Approach for Mobile Platforms and Applications. In: MOBILESoft, ACM (2014) 99–108
16. Schneider, K.: Focusing Spontaneous Feedback to Support System Evolution. In: RE, IEEE
(2011) 165–174
17. Ko, A.J., Chilana, P.K.: How Power Users Help and Hinder Open Bug Reporting. In: Proc. of
the Conference on Human Factors in Computing Systems. CHI ’10, ACM (2010) 1665–1674
18. Omoronyia, I., Sindre, G., Stålhane, T., Biffl, S., Moser, T., Sunindyo, W.: A Domain Ontol-
ogy Building Process for Guiding Requirements Elicitation. In: REFSQ. Springer (2010)
... In order to analyze online discussions through mailing lists, as those used in OSS development, we apply the concepts related to SAT and a communication ontology that we have described in a previous work [9]. Specifically, we first identify which linguistic and non-linguistic acts can be used to model such online discussions, we define a suitable speech-act taxonomy and, based on it, a proposal for analyzing online discussions. ...
Conference Paper
Full-text available
Open-Source Software (OSS) community members report bugs, request features or clarifications by writing messages (in unstructured natural language) to mailing lists. Analysts examine them dealing with an effort demanding and error prone task, which requires reading huge threads of emails. Automated support for retrieving relevant information and particularly for recognizing discussants’ intentions (e.g., suggesting, complaining) can support analysts, and allow them to increase the performance of this task. Online discussions are almost synchronous written conversations that can be analyzed applying computational linguistic techniques that build on the speech act theory. Our approach builds on this observation. We propose to analyze OSS mailing-list discussions in terms of the linguistic and non-linguistic acts expressed by the participants, and provide a tool-supported speech-act analysis method. In this paper we describe this method and discuss how to empirically evaluate it. We discuss the results of the first execution of an empirical study that involved 20 subjects.
... In [4], four clusters of user types are presented by their attitude towards providing feedback (see Figure 1), differing in such factors as openness towards being asked or reminded to provide feedback; the extent to which privacy outweighs allowing (anonymized) data mining, and whether feedback is provided out of an intrinsic motivation or because of social factors. Finally, user feedback was characterized in [8], while [9] provides a user feedback ontology to clarify the concepts of this domain. ...
Conference Paper
Stakeholders who are highly distributed form a large, heterogeneous online group, the so-called “crowd”. The rise of mobile, social and cloud apps has led to a stark increase in crowd-based settings. Traditional requirements engineering (RE) techniques face scalability issues and require the co-presence of stakeholders and engineers, which cannot be realized in a crowd setting. While different approaches have recently been introduced to partially automate RE in this context, a multi-method approach to (semi-)automate all RE activities is still needed. We propose “Crowd-based Requirements Engineering” as an approach that integrates existing elicitation and analysis techniques and fills existing gaps by introducing new concepts. It collects feedback through direct interactions and social collaboration, and by deploying mining techniques. This paper describes the initial state of the art of our approach, and previews our plans for further research.
... Recent studies have highlighted how the use of new platforms , such as app stores, mobile phones, and social network increases developers' opportunities to connect with users and listen to their needs [25][26][27][28] . User feedback postrelease is a rich source of information for engineers involved in requirements elicitation [3][29][30][31] and many authors have focused their analysis on this aspect of app stores (e.g., [4][36]). Iacob and Harrison [32] report that 23.3% of the reviews they studied were found to be feature requests, further underscoring the importance of features in app store ecosystems. ...
Conference Paper
Full-text available
We introduce a theoretical characterisation of feature lifecycles in app stores, to help app developers to identify trends and to find undiscovered requirements. To illustrate and motivate app feature lifecycle analysis, we use our theory to empirically analyse the migratory and non-migratory behaviours of 4,053 non-free features from two App Stores (Samsung and BlackBerry). The results reveal that, in both stores, intransitive features (those that neither migrate nor die out) exhibit significantly different behaviours with regard to important properties, such as their price. Further correlation analysis also highlights differences between trends relating price, rating, and popularity. Our results indicate that feature lifecycle analysis can yield insights that may also help developers to understand feature behaviours and attribute relationships.
Specialized programming knowledge and further generic competencies are needed to be able to develop software adequately. Therefore, it is essential to foster communication skills in higher software engineering education. One aspect of communication – feedback – is used rather frequently in agile software development as well as in educational settings, but: How does this indirect feedback affect the attitude of students in higher software engineering education? This paper describes a research proposal in order to detect how and to what extent indirect feedback has an impact on the individual’s attitude.
Full-text available
User feedback is mainly defined as an information source for evaluating the customers’ satisfaction for a given goods, service or software application. Due to the wide diffusion of the Internet and to the proliferation of mobile devices, users can access a myriad of software services and applications, at any time and in any place. In this context users can provide feed- back upon their experience in using a software, through dedicated software applications or web forms. We call it online user feedback and we believe is a powerful source of information for improving the software service or application. Specifically, in software engineering user feedback is recognized as a source of requests for change in a system, so it can contribute to the evolution of software systems. Indeed, user feedback is gaining more attention from the requirements engineering research community, and dedicated buzzwords have been introduced to refer to research studies in RE, e.g. mass RE and crowd RE. Arguing on this premise, the possibility of exploiting user feedback is worth to be investi- gated in requirements engineering, by addressing open challenges in collection as well as in the analysis of online feedback. The research work described in this thesis starts with a state- of-the-art literature analysis that revealed that the definition of user feedback as an artifact, as well as the characterization and understanding of its process of elaboration and communication were still unexplored, especially from the requirements engineering perspective. We adopted a multidisciplinary approach by borrowing concepts and techniques from on- tologies, philosophy of language, natural language processing, requirements engineering and human computer interaction. 
The main research contributions are: an ontology of user feedback, the characterization of user feedback as speech acts for applying a semantic analysis, and the proposal of a new way of gathering and filtering user feedback by applying an argumentation framework.
Software product development companies are increasingly striving to become data-driven. With products increasingly connected to the Internet, access to customer feedback and product data has been democratized. Systematically collecting the feedback and efficiently using it in product development, however, are challenges that large-scale software development companies face today when confronted with large amounts of available data. In this thesis, we explore the collection, use, and impact of customer feedback on software product development. We base our work on a two-year longitudinal multiple-case study with case companies in the software-intensive domain, and complement it with a systematic review of the literature. In our work, we identify and confirm that large software companies today collect vast amounts of feedback data, yet struggle to use it effectively. Due to this situation, there is a risk of prioritizing the development of features that may not deliver value to customers. Our contribution to this problem is threefold. First, we present a comprehensive and systematic review of activities and techniques used to collect customer feedback and product data in software product development. Next, we show that the impact of customer feedback evolves over time, but that, due to the lack of sharing of the collected data, companies do not fully benefit from this feedback. Finally, we provide an improvement framework for practitioners and researchers to use the collected feedback data to differentiate between feature types and to model feature value over the lifecycle. With our contributions, we aim to bring software companies one step closer to data-driven decision making in software product development.
[Context and motivation] To remedy the lack of security expertise, industrial security risk assessment methods come with catalogues of threats and security controls. [Question/problem] We investigate, in both qualitative and quantitative terms, whether the use of catalogues of threats and security controls has an effect on the actual and perceived effectiveness of a security risk assessment method. In particular, we assessed the effect of using domain-specific versus domain-general catalogues on the actual and perceived efficacy of a security risk assessment method conducted by non-experts, and compared it with the effect of running the same method with security experts but without catalogues. [Principal ideas/results] The quantitative analysis shows that non-security experts who applied the method with catalogues identified threats and controls of the same quality as security experts without catalogues. The perceived ease of use was higher when participants used the method without catalogues, albeit only at the 10% significance level. The qualitative analysis indicates that security experts have different expectations of a catalogue than non-experts. Non-experts are mostly worried about the difficulty of navigating through the catalogue (the larger and less specific, the worse it was), while expert users found it mostly useful for establishing a common terminology and as a checklist that nothing was forgotten. [Contribution] This paper sheds light on the important features of catalogues and discusses how they contribute to the risk assessment process.
To deal with the increasing complexity of software systems and uncertainty of their environments, software engineers have turned to self-adaptivity. Self-adaptive systems are capable of dealing with a continuously changing environment and emerging requirements that may be unknown at design-time. However, building such systems cost-effectively and in a predictable manner is a major engineering challenge. In this paper, we explore the state-of-the-art in engineering self-adaptive systems and identify potential improvements in the design process. Our most important finding is that in designing self-adaptive systems, the feedback loops that control self-adaptation must become first-class entities. We explore feedback loops from the perspective of control engineering and within existing self-adaptive systems in nature and biology. Finally, we identify the critical challenges our community must address to enable systematic and well-organized engineering of self-adaptive and self-managing software systems.
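The abstract above argues that the feedback loops controlling self-adaptation should become first-class entities. A minimal sketch of that idea is a monitor/analyze/plan/execute cycle modelled as an object in its own right; all class names, metrics, and thresholds below are illustrative, not taken from the cited paper.

```python
# Sketch: a self-adaptation feedback loop as a first-class object
# (MAPE-style monitor/analyze/plan/execute; names and the toy
# "doubling replicas halves latency" model are illustrative only).
class FeedbackLoop:
    def __init__(self, target_latency_ms):
        self.target = target_latency_ms
        self.replicas = 1

    def monitor(self, system):
        # Observe the managed system's current state.
        return system["latency_ms"]

    def analyze(self, latency):
        # Decide whether an adaptation is needed.
        return latency > self.target

    def plan(self):
        # Plan a simple scale-up adaptation.
        return self.replicas + 1

    def execute(self, system, new_replicas):
        self.replicas = new_replicas
        # Toy assumption: adding a replica halves the latency.
        system["latency_ms"] /= 2

    def run_once(self, system):
        latency = self.monitor(system)
        if self.analyze(latency):
            self.execute(system, self.plan())
        return system

system = {"latency_ms": 400.0}
loop = FeedbackLoop(target_latency_ms=250)
loop.run_once(system)
print(system["latency_ms"], loop.replicas)  # → 200.0 2
```

Because the loop is an object rather than logic scattered through the system, it can itself be inspected, tested, and evolved, which is the engineering benefit the abstract points to.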
Large, distributed software development projects rely on the collaboration of culturally heterogeneous and geographically distributed stakeholders. Software requirements, as well as solution ideas, are elicited in distributed processes, which increasingly use online forums and mailing lists in which stakeholders mainly write free or semi-structured natural language text. The identification of contributors of key information about a given topic (called experts), in both the software domain and the code, and in particular automated support for retrieving information from available online resources, are becoming of crucial importance. In this paper, we address the problem of expert finding in mailing-list discussions and propose an approach that combines content- and intent-based information extraction for ranking online discussants with respect to their expertise in the discussed topics. We illustrate its application on an example.
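The combination of content- and intent-based extraction for ranking discussants can be sketched as follows. This is a toy illustration, assuming a content score from topic-term frequency and an intent score from lexical cues of "informing" messages; the discussant names, cue patterns, and weighting are all invented for the example, not the paper's actual method.

```python
import re
from collections import Counter

def expertise_score(messages, topic_terms, intent_patterns):
    """Combine a content score (topic-term frequency) with an
    intent score (count of messages showing 'informing' cues)."""
    content = 0
    intent = 0
    for msg in messages:
        words = Counter(re.findall(r"[a-z]+", msg.lower()))
        content += sum(words[t] for t in topic_terms)
        if any(re.search(p, msg, re.IGNORECASE) for p in intent_patterns):
            intent += 1
    return content + 2 * intent  # weight explanatory intent higher

# Illustrative mailing-list messages per discussant.
threads = {
    "alice": ["You can fix the parser bug by escaping the delimiter.",
              "The parser grammar is defined in parser.y."],
    "bob":   ["I have the same parser problem, any ideas?"],
}
topic_terms = ["parser", "grammar"]
intent_patterns = [r"\byou can\b", r"\bis defined\b"]  # 'informing' cues

ranking = sorted(threads,
                 key=lambda d: expertise_score(threads[d], topic_terms,
                                               intent_patterns),
                 reverse=True)
print(ranking)  # → ['alice', 'bob']
```

The point of the combination is visible even in this toy: bob mentions the topic too, but only alice's messages carry informing intent, so she ranks as the expert.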
The World Wide Web has led to a new kind of software, web systems, which are based on web technologies. Just like software in other domains, web systems have evolution challenges. This chapter discusses evolution of web systems on three dimensions: architecture, (conceptual) design, and technology. For each of these dimensions we introduce the state-of-the-art in the techniques and tools that are currently available. In order to place current evolution techniques into context, we also provide a survey of the different kinds of web systems as they have emerged, tracing the most important achievements of web systems evolution research from static web sites over dynamic web applications and web services to Ajax-based Rich Internet Applications.
Nowadays, developers and service providers put a lot of effort into collecting and analyzing user feedback with the purpose of improving their applications and services. This motivates the proposal of new tools to collect and analyze feedback. In our work, we develop a user feedback ontology aimed at clarifying the concepts of this domain. To this end, we follow a goal-oriented methodology to identify the competency questions that represent the ontology requirements. In this paper, we discuss an excerpt of the goal model used to guide the development of our ontology. Moreover, we present examples of competency questions identified through the analysis, and the corresponding fragment of the user feedback ontology.
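A competency question is a question the finished ontology must be able to answer; it therefore doubles as a test of the ontology's adequacy. The sketch below illustrates that role over a toy feedback model. The concepts (author, kind, topic) and the question itself are invented for illustration and are not the competency questions or ontology fragment presented in the cited paper.

```python
# Toy instance data over a minimal user-feedback vocabulary
# (fields are illustrative, not the paper's ontology).
feedback_items = [
    {"author": "user42", "kind": "bug report",      "topic": "login"},
    {"author": "user7",  "kind": "feature request", "topic": "export"},
    {"author": "user42", "kind": "feature request", "topic": "search"},
]

# Competency question: "Which authors reported bugs?"
# If the model cannot answer it, the ontology is missing a concept
# or relation -- that is how competency questions act as requirements.
def authors_reporting_bugs(items):
    return sorted({i["author"] for i in items if i["kind"] == "bug report"})

print(authors_reporting_bugs(feedback_items))  # → ['user42']
```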
The increasing participation of users of software applications in online discussions is attracting the attention of researchers in requirements elicitation, who look at this channel of communication as a potential source of requirements knowledge. From the perspective of software engineers who analyse online discussions, the task of identifying bugs and new features by reading huge threads of e-mails can become effort-demanding and error-prone. Recognising discussants' speech acts in an automated manner is important to reveal intentions, such as suggesting or complaining, which can provide indicators for bug isolation and requirements. This paper presents a tool-supported method for identifying speech acts, which may provide hints to software engineers to speed up the analysis of online discussions. It builds on speech act theory and on an adaptation of the GATE framework, which implements computational linguistic techniques.
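The core idea, mapping surface cues in a sentence to speech acts such as suggesting or complaining, can be sketched with simple pattern matching. The cue lists below are invented for illustration; the paper's actual method relies on an adaptation of the GATE framework with richer linguistic annotation, not on ad-hoc regular expressions.

```python
import re

# Illustrative lexical cues per speech act (not the paper's rules).
SPEECH_ACT_CUES = {
    "suggesting":  [r"\bwhat about\b", r"\bi suggest\b", r"\bcould we\b"],
    "complaining": [r"\bannoying\b", r"\bdoes not work\b", r"\bcrashes\b"],
    "questioning": [r"\?\s*$", r"\bhow do i\b"],
}

def classify_speech_acts(sentence):
    """Return the set of speech acts whose cues match the sentence."""
    acts = set()
    for act, patterns in SPEECH_ACT_CUES.items():
        if any(re.search(p, sentence, re.IGNORECASE) for p in patterns):
            acts.add(act)
    return acts

print(classify_speech_acts("The export dialog crashes every time."))
# → {'complaining'}
```

Even this crude classifier shows how intentions surface as hints: sentences tagged "complaining" point analysts toward bug isolation, while "suggesting" points toward candidate features.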
Mobile platforms and applications are an exciting and important phenomenon in today's software and business world. They are being woven into the fabric of daily life faster than expected. Continuous collection of user feedback, enabling the improvement of platforms and applications, becomes critical to support the continuous evolution of mobile systems. In particular, user feedback is needed to provide systems that best fit user needs. We have designed a mobile feedback approach that enables users to document individual feedback on mobile systems in situ. This information can then be evaluated and used as new requirements by developers. Based on this solution, we have developed a feedback app for two different mobile platforms. Furthermore, we have conducted a study with smartphone users applying this approach and communicating feedback on a mobile platform and pre-installed apps. The study revealed that users were able to give individual feedback and that a large amount of this feedback was considered useful for mobile system improvement by a platform developer.
The distillation of knowledge from the Web, also known as opinion mining and sentiment analysis, is a task that has recently raised growing interest for purposes such as customer service, predicting financial markets, monitoring public security, investigating elections, and measuring health-related quality of life. This article considers past, present, and future trends of sentiment analysis by delving into the evolution of different tools and techniques: from heuristics to discourse structure, from coarse- to fine-grained analysis, and from keyword- to concept-level opinion mining.
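The keyword-level end of that spectrum, the simplest technique the survey starts from, can be sketched in a few lines: sum polarity values from a sentiment lexicon over the words of a text. The tiny lexicon below is invented for illustration; real keyword-level systems use lexicons of thousands of entries, and concept- and discourse-level approaches go well beyond this word-counting scheme.

```python
# Toy polarity lexicon (illustrative; real lexicons are far larger).
LEXICON = {"great": 1, "love": 1, "useful": 1,
           "bad": -1, "crash": -1, "slow": -1}

def keyword_sentiment(text):
    """Keyword-level sentiment: sum of lexicon polarities over words."""
    words = text.lower().replace(",", " ").replace(".", " ").split()
    return sum(LEXICON.get(w, 0) for w in words)

print(keyword_sentiment("Great app, but slow and it tends to crash."))
# → -1
```

The example also shows why the field moved on: the word-counting score misses that "but" reverses the overall stance, which is exactly the kind of discourse structure the finer-grained techniques in the survey address.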