Contributions to HCI:
Introducing the CC-Model and its Pro Forma Abstracts
Marjolein Jacobs, Koen van Turnhout, Arthur Bennis, Sabine Craenmehr,
Ralph Niels, Lambert Zaad, René Bakker
HAN University of Applied Sciences, Academy of Information Technology and Communication,
Arnhem, The Netherlands
{marjolein.jacobs,koen.vanturnhout,arthur.bennis,sabine.craenmehr,ralph.niels,lambert.zaad,rene.bakker}@han.nl
Abstract. This chapter addresses the challenge of determining what constitutes a contribution to HCI’s
multidisciplinary research program. To improve the field’s understanding of contributions in HCI we
have (1) outlined the changing narratives about contributions to HCI in the past 30 years, (2) performed
an empirical literature study on a substantial set of abstracts of contemporary HCI papers to indicate the
interrelation between contributions, and (3) analyzed the relationship between research methodology
and contribution. The results of these three analyses have been consolidated in the Connected
Contribution-Model (CC-Model) for contributions to HCI and in pro forma abstracts. The model and
the pro forma abstracts help to understand and identify contributions and assist in writing clear paper
abstracts.
Key words: Methodology, Mixed-Method Research, Pro forma Abstracts, CHI-Contributions
1 Introduction
One of the key challenges for (novice) HCI researchers is to determine what constitutes a contribution to
the field. This may be especially difficult in HCI because of its multidisciplinary origins [23,24,26,30] and
its tempestuous growth since the 1980s [10,11,25,44]. This growth has left the field an almost
boundless domain [5,44], lacking mainstream themes, certainly in the past two decades [32,35].
As such, there is a strongly felt need among HCI researchers and educators [11] to revisit and rethink the
field's foundations. As part of this effort we present a reconsideration of the anatomy of HCI
contributions.
Contributions cannot be seen independently of the growth of knowledge in a field; they are, almost
by definition, the building blocks of the field's progression. Therefore, rather than merely the outcome
or product of HCI research [38], we define a contribution as the utility of those outcomes for the HCI
research community. A contribution is distinct from research methodology (the approach taken
in the study) and ‘topic’ (which merely limits the scope of the work). Thinking about contributions is
important because the projected results of a study in turn determine which methodology is suitable and
which quality standards apply.
HCI researchers, however, find it difficult to identify a contribution. The program committee of
one of the field’s flagship conferences: the ACM SIGCHI Conference on Human Factors in Computing
Systems (better known as CHI-international), for example, tries to aid its authors with a yearly updated list
of contribution types accompanied by a set of academic standards. However, their instruction to
prospective authors suggests this is of little avail. They write: "More than one contribution type may seem
plausible, or your paper may fall between contribution types, or it may offer its own unique contribution.
In spite of all this, choose the best match possible." [52]. The apparent inadequacy of the SIGCHI
contribution list need not be caused by its make-up, of course; a lack of consensus in the field about the
way the field progresses and about its core contributions almost certainly plays a role.
Considering these difficulties, it is surprising how little discourse about contributions we find in
the HCI literature nowadays. Landmark papers include meta-studies on contributions in HCI by Wulff and
Mahling [61] (published in 1990) and Newman [38] (published in 1994). But this literature received little
follow-up in recent years, despite the changes of the field since then [10, 25, 44]. Recent meta-studies on
the field as a whole focused on methodology [54], trends and diversity in HCI [5], and HCI in relation to
adjacent fields such as IS [27], not on contributions as such. A body of literature exists on how specific
approaches such as research through design [48,59,62], ethnography [17] or critical theory [3] can
contribute to the field, but these papers do not typically address HCI as a whole.
In this paper we offer a remedy by providing answers to three central questions about
contributions to HCI. First we ask: what are the contributions in our field? Or more specifically: can we
compose a comprehensive, yet parsimonious list of common contributions to HCI? Second we address the
interrelations between those contributions. We wonder: are these contributions in HCI related to each
other, and how? Finally we turn to methodology, asking: what are the relationships between the
contribution of the paper and the chosen methodology? Our approach to answering these questions has been
mainly empirical. We traced the history of the yearly updated CHI-contributions list and performed an
empirical literature study on a substantial set of abstracts of HCI papers. This led to findings which we
used to construct the Connected Contribution Model (CC-Model) and, following [38], in pro forma
abstracts for the core-contributions of the model. These pro formas are helpful in identifying and
understanding contributions and they can be an aid in writing papers. Taken together these efforts aim to
revitalize the debate about contributions in HCI.
This paper is structured as follows. We start with an outline of the evolution of the narrative on
contributions in HCI. We provide a review of the field’s early academic writing about contributions, in
particular the papers of Wulff and Mahling [61] and Newman [38], and we provide an analysis of the
history of the yearly updated list of contribution types provided by the ACM SIGCHI Conference on Human
Factors in Computing Systems in the period from 1997 to 2015. This provides the framing for our empirical
literature study, which we present in section 3. We scrutinized a substantial number of abstracts to improve
our understanding of contributions to the field and the way authors highlight those in their abstracts. In
section 4, we present an analysis of the connection between contributions and methodology, using the
DOT-Framework [55]: a cross disciplinary framework for research methodology in HCI. The findings of
these three studies are consolidated in the Connected Contribution-Model (CC-Model) and in pro forma
abstracts. The CC-Model, presented in section 5, aids in understanding the relation between the different
CHI contributions we identified. In section 6 we present pro forma abstracts for the CC-Model’s core
contribution-types which illustrate and support application of the model. We end this paper with
conclusions and a discussion.
2 A short history of writing about contributions to HCI
We first tried to answer the question of whether we can compose a parsimonious list of contributions in HCI.
To this end, we studied existing classifications of CHI-contributions throughout the history of the field.
The first to write academically about contributions in HCI were Wulff and Mahling [61]. They employed a
survey instrument consisting of 8 different categories based on content areas or subject matters and used it
to perform a systematic analysis of the kinds of HCI research published in the ACM SIGCHI proceedings
from 1982 to 1989. The categories they found were Science (theory based research), Cognitive Engineering
(heuristics, experience based research), Artifact Building (engineering research on solutions), Evaluation
(tests and validation in the broadest possible scope, on theory, methods and tools, and solutions), Design
Process (research on methodology), Visualization (research on interface appearance and techniques), Meta-
Comments (opinion, argument), and Documentation (help systems for UI builders). They considered the
first four (Science, Cognitive Engineering, Artifact Building and Evaluation) to be key areas of HCI.
Unsurprisingly, their list comes across as outdated to the contemporary HCI researcher, although several
contributions they identified (such as artifact building and evaluation) are still prevalent in the field. Their
list also appears to be rooted strongly in the different traditions present in the field at the time, which
causes a conflation of ‘topic’ and ‘contribution’. The disciplinary coloring of Wulff and Mahling’s work
may have been due to their interpretive, bottom-up approach.
Shortly after, Newman addressed the same question in a different way [38]. In essence he
proposed to model HCI as an engineering discipline, for which he derived three principal contributions:
Enhanced modelling techniques (EM): research originating from and improving theory and models;
Enhanced solutions (ES), research addressing limitations in existing solutions, and Enhanced tools and
methods (ET), research focusing on methods and tools for creating solutions. Newman saw the engineering
research discipline as a continual improvement of the (conceptual) artifacts [6] of engineering practitioners
and researchers. However, Newman found HCI research to fit this scheme badly, so he added two more
contribution types: Radical Solutions (RS), for distinctly novel research breaking new ground, and
Experience and/or Heuristics (XH), addressing experience-based research and case studies. Newman’s
work has been influential in the sense that it led to a debate about how HCI could be more like engineering
(e.g. [39,57]). Furthermore, his approach was adopted in IS [1] and his pro formas found didactic
employment [46]. In contrast to Wulff and Mahling, Newman achieved a separation of concerns between
contribution type and topic, which made it possible to discuss contributions independently from the
particular phenomena under scrutiny. However, Newman's choice to cast the HCI discipline as a whole as
an engineering discipline may not have done full justice to HCI’s diversity [23] and multidisciplinarity
[24,30], even at the time.
In recent years, academic debate about what constitutes a contribution in HCI has become sparse
and we have not found recent papers providing a systematic analysis of contributions to HCI. However, the
ACM SIGCHI Conference on Human Factors in Computing Systems - the field's flagship conference -
does produce a yearly updated list of contribution types on their website. Most of these are still online. We
managed to trace this list from 1997 onwards, which is depicted in Figure 1. For each year we compared
the contributions listed with those of the previous and next year. If, based on a comparison of the labeling
and descriptions of the contributions, it was safe to conclude there was continuity between the years, we
connected these with a line. Each dot represents an alteration in the labeling of the contribution type. In
some cases contributions appeared to have been split or joined, which is also visible in the chart.
Occasionally, a contribution type has been discontinued; this is marked with an X. As a whole, this
timeline of contribution types conveys continuity throughout the years for most types of contributions.
Figure 1. Overview of the evolution of the CHI contribution list from 1997-2015
Looking at the evolution of the list throughout the years, categories such as Understanding Users,
Systems, and Methodology have been persistent, although occasional splits and relabeling indicate modest
shifts in focus. At some points we found it warranted to place contributions under a single heading. We
have grouped the contributions Development or Refinement of Interface Artifacts or Techniques and
Systems, Tools, Architectures and Infrastructure, for example. Both types can be seen as 'solutions'
requiring, a "rigorous and convincing validation" in the former and an "appropriate and reasonable
validation" in the latter [52]. Similarly we grouped the contributions Theory, Validation & Refutation, and
Argument, as they all provide theories and models for HCI [52].
The contribution type Experience, focusing on case studies and/or design briefings, was
discontinued in 2011. Judging from the frequent relabeling of the category, the CHI community struggled
for a long time between a focus on 'real life' design cases versus 'innovative' design cases, thus alternating
between an emphasis on 'realism' and an 'interest to the academic community'. New contribution types
have been defined as well. From 2009 onward a new category, Innovation, Vision and Creativity, has been
added, referring to contributions of a significant innovation, vision or design concept. Looking at the
description of this category, it shares characteristics with Solution (in particular interaction techniques), as a
'demonstrator' is requested, and with Theory (in particular Opinion), because 'novel insights and directions of the
field' are requested by the conference organizers.
Focusing on contributions which are persistent over time and grouping closely related
contributions, we believe there are six core contribution types which capture the essence of the chart:
Understanding Users, Experience, Solution, Innovation, Methodology, and Theory (indicated by the grey-white
horizontal bands on the timeline). Although it is slightly more speculative than the analysis of the CHI
contribution list, we have also related the core contribution types to the work of Wulff and Mahling [61]
and Newman [38]. This is summarized in Table 1, which also includes the CHI 2015 contributions. We found
the six core types to provide a fair summary of all three sources, although differences in language and
emphasis persist. Innovation seems to be the only category without antecedents in the early
writings about contributions.
Table 1. The relationship of the shortlist of core-contribution types to the categories proposed by Wulff and Mahling
(1990), Newman (1994), and CHI (2015)

| Contribution streams | Wulff & Mahling (1990)                       | Newman (1994)                             | CHI (2015)                                                                                                          |
| Understanding Users  | cognitive engineering                        | X                                         | understanding users                                                                                                 |
| Experience           | X                                            | experience and/or heuristics              | X                                                                                                                   |
| Solution             | artifact building; visualization; evaluation | enhanced solutions; radical solutions     | systems, tools, architectures and infrastructures; development or refinement of interface artifacts or techniques   |
| Innovation           | X                                            | X                                         | innovation, vision and creativity                                                                                   |
| Methodology          | design process; documentation                | enhanced analytical modelling techniques  | methodology                                                                                                         |
| Theory               | science; metacomments                        | enhanced analytical modelling techniques  | theory; argument; validation and refutation                                                                         |
3 Empirical Literature Study
The second question of this paper is to what extent (and how) contributions to HCI are interrelated. We
answered this question with an empirical literature study. We took the paper abstract as our unit of
analysis. Residing at the top of the hierarchy of the paper narrative, the paper abstract gives concise entry
into the authors’ intent regarding the content, the research approach and the contribution of the research.
We analyzed the abstracts of 97 HCI research papers, divided into two subsets. The first 40
abstracts coincided with the abstracts of [54], forming a random sample of the NordiCHI'12 proceedings
[40]. These abstracts were assumed to give a fair representation of the contributions to mainstream HCI
and they had the added advantage that, because of the earlier study, detailed methodological information of
the papers was available. The second subset was somewhat larger: we selected 57 abstracts by taking the best
paper award winners of three recent SIGCHI conferences (17 (CHI2012) + 20 (CHI2013) + 20 (CHI2014))
[50,51,53]. Although this second set of ‘best of breed’ papers might arguably be less representative of the
full breadth of HCI research, it complemented the first set nicely. Six raters took part in the study. All were
staff members of our university.
Our approach was inspired by Newman’s earlier work on contributions in HCI. Newman [38]
proposed pro forma abstracts as an analytical instrument to infer the contribution from an abstract. A pro
forma abstract is an abstract in which key information is replaced by information slots, giving the author an
idea of the overall structure and content belonging to a certain contribution: Table 2 shows an example.
Table 2. Newman’s pro forma for Enhanced Solution (above) and an example (below) [38]
Studies of existing <artifact-type> have shown deficiencies in <property>. An enhanced design for
<artifact-type> is described, based on <solution strategy>. In comparison with existing solutions, it offers
enhanced levels of <property>, according to analyses based on <model-type>. These improvements have
been confirmed / demonstrated in tests of a working <artifact-type> based on the design.
Studies of existing automatic document layout schemes have shown deficiencies in ease of learning and
range of information handled. An enhanced design for a layout system is described, based on
morphological analysis to extract logical structure. In comparison with existing solutions, it offers enhanced
levels of accuracy in determining logical structure. These improvements have been demonstrated in tests of
a working layout system based on the design. (Iwai et al., CHI '89 Proc., pp369-374)
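Programmatically, a pro forma abstract can be thought of as a template whose information slots are filled in per paper. The sketch below illustrates this with a shortened, paraphrased version of Newman's Enhanced Solution pro forma; the template constant, slot names, and example values are ours, chosen for illustration.

```python
from string import Template

# Shortened Enhanced Solution pro forma; the $-placeholders stand in for
# Newman's information slots such as <artifact-type> and <solution strategy>.
ES_PRO_FORMA = Template(
    "Studies of existing $artifact have shown deficiencies in $property. "
    "An enhanced design for $artifact is described, based on $strategy."
)

# Filling the slots yields a concrete abstract fragment.
abstract = ES_PRO_FORMA.substitute(
    artifact="automatic document layout schemes",
    property="ease of learning",
    strategy="morphological analysis",
)
print(abstract)
```

Treating the abstract as a slot structure in this way is also what makes the reverse operation, classifying each bit of information in an existing abstract, tractable.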
Abstract classification with pro formas has been applied successfully in IS [1] but for our purpose
it has two major drawbacks. First, this top-down approach for classification requires a shortlist of
contributions and pro forma abstracts for each contribution, which we considered to be a desirable result of
our study rather than the starting point. Second, matching each abstract to a predefined pro forma would
teach us little about the interrelations between the different contributions in HCI. A bottom up approach,
such as in Wulff and Mahling’s study, however, has the possible disadvantage of disciplinary coloring of
the results. For example, a free interpretation of contributions in research papers could lead to the
classification of papers based on topic, rather than contribution. To circumvent this possible pitfall we
developed a semi-quantitative, interpretive approach. Inspired by the pro formas of Newman, we treated
abstracts as series of information slots which could be classified one-by-one. This led to uniformly codified
meta-information about the abstracts which could be subjected to various forms of clustering techniques
and principal component analyses, such as multidimensional scaling. By carefully choosing slots related to
contribution and methodology, we avoided surface features like topic to be of too much influence on the
outcomes. The list of information slots and information about their use also aided us in constructing pro
forma abstracts later in the study.
We codified the most important bits of information in the paper abstracts using slots. We set up a
list of slots by adapting Newman's pro formas about Theory, Method and Solution (such as <improved
theory> and <solution strategy>) and by creating slots to capture the methodology as it was analyzed in
[54] (on which we will elaborate in the next section; examples include <lab> and <field>). We tested this
slot list on a subset of the data and iteratively refined it until the set satisfactorily captured the key
information in the abstracts within the dataset (adding slots like <hypothesis> and <no_users>, for
example). The full list of slots can be found in [31]. Overall we used a list of 53 different slots. For each
abstract at least two independent raters assigned slots from the slot list to the abstract. All ratings were
discussed in a meeting with all raters present. On average 6.4 slots were assigned to each abstract. We
estimated the value of Cohen’s kappa (Fleiss’ kappa with Conger’s correction) as a measure of agreement
for this slot assignment task, resulting in a value of κ = 0.84, which can be considered good to excellent [28];
raters turned out not to disagree (much) about the information presented in the abstracts.
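To make the agreement measure concrete, the sketch below implements plain Fleiss' kappa for a subjects-by-categories table of rating counts. Note that this is a simplified illustration: the chapter uses Conger's correction, which is omitted here, and the function name and toy data are ours.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for a subjects x categories table of rating counts.

    counts[i, j] = number of raters assigning category j to subject i;
    every subject must be rated by the same number of raters.
    (Conger's correction, used in the chapter, is omitted here.)
    """
    counts = np.asarray(counts, dtype=float)
    n_subjects, _ = counts.shape
    n_raters = counts.sum(axis=1)[0]                  # raters per subject

    # Observed agreement: per-subject proportion of agreeing rater pairs.
    p_i = (np.sum(counts ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()

    # Chance agreement from the marginal category proportions.
    p_j = counts.sum(axis=0) / (n_subjects * n_raters)
    p_e = np.sum(p_j ** 2)

    return (p_bar - p_e) / (1 - p_e)

# Perfect agreement between two raters over three subjects gives kappa = 1.
print(fleiss_kappa([[2, 0], [0, 2], [2, 0]]))   # 1.0
```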
Next, this data was subjected to multidimensional scaling (MDS). MDS is a statistical technique
that takes a distance measure between all points of a dataset as input and reduces the complexity of the
dataset to a given set of dimensions (often two). As a result a 'map' of the data can be drawn in which data
points with high similarity will cluster. The dimensions of the plots can also be interpreted as meaningful
principal components of the data [14]. As a distance measure between each pair of abstracts we took the
number of slots raters agreed to be present in both abstracts. We used SPSS v22 (proxscal, distances
created from data) for the scaling. The resulting plots show 97 dots matching the 97 analyzed abstracts. We
interpreted the plots using a procedure largely in line with the grounded theory approach [20]. We
identified a subset of papers on the plot, carefully examined the abstracts, formed hypotheses about the
underlying structure of the plot and tested these assumptions with a new subset of papers. We iteratively
continued this process until a stable and shared understanding of the underlying structure of the plot
emerged.
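The scaling itself was done in SPSS (proxscal), but the same procedure can be sketched with scikit-learn's MDS: derive a dissimilarity from the number of shared slots and embed the abstracts in two dimensions. The binary toy slot matrix and the max-minus-similarity distance below are our assumptions for illustration, not the authors' exact preprocessing.

```python
import numpy as np
from sklearn.manifold import MDS

# Toy stand-in for the slot codings: an abstracts x slots binary matrix
# (1 = raters agreed the slot is present in that abstract).
rng = np.random.default_rng(0)
slots = rng.integers(0, 2, size=(10, 8))   # 10 abstracts, 8 slots

# Similarity = number of slots present in both abstracts;
# turn it into a dissimilarity by subtracting from the maximum.
sim = slots @ slots.T
dist = sim.max() - sim
np.fill_diagonal(dist, 0)                  # self-distance must be zero

# Two-dimensional embedding, analogous to the 'map' in Figure 2.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dist)
print(coords.shape)                        # (10, 2)
```

Each row of `coords` is the plotted position of one abstract; abstracts sharing many slots end up close together.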
The results of our bottom-up study are shown in Figure 2. One striking feature of the plot is the
absence of clear clusters. One would expect clusters to appear if the abstracts of different contribution
types were qualitatively different; for example, if they had contained markedly different
information. This was not the case. However, we were able to identify underlying dimensions in the plot.
Figure 2. Outcomes of the MDS plot and our interpretation. Blue dots indicate the positions of the abstracts, arrows
give the direction of the principal components. Dominant slots of the abstracts are listed in red, a characterization of
the underlying dimensions in black.
Papers at the left of the plot were best described with words such as factual, empirical,
data oriented, and (focus on) reality. We found empirical field studies and solution papers with a strong
emphasis on empirical evaluation here. Common slots were <field> and <lab>. At the top we found
papers which can be described with words such as analytical, abstraction, inductive, and theoretical. User and
methodological studies with a strong emphasis on theory were found here. At the right we found papers for
which terms such as hypothetical, exploration, vision, and ideation were most appropriate. Common slots
were <theory> and <solution>. We found design research papers in this corner and explorations of
innovative interaction styles. At the bottom, we found papers which could be described best with words
like validating, synthesis, deductive, concrete. Papers at the bottom focus on the creation and evaluation of
a (often fully implemented) system. Slots such as <solution> and <lab> are mostly found at the bottom of
the plot.
As a complement to our bottom-up study using slot classification, we also carried out a top-down
classification. For each abstract the raters were asked to identify one of the contributions of the CHI
contribution list of 2014 as being the most appropriate for the abstract. The results of this classification are
displayed in Table 3. We estimated the value of Cohen’s kappa (Fleiss’ kappa with Conger’s correction) as
a measure of agreement, resulting in a value of κ = 0.43, which is inadequate [28]. Table 3 lists the
disagreements between raters. These are common, but not evenly distributed. The contribution Systems,
Tools, Architectures and Infrastructures, for example, is often confused with Development of Interface
Artifacts and Techniques, and Understanding Users with Theory. Our raters were quite surprised by the
low value of kappa, though. They did not feel they disagreed strongly on the type of paper, but suggested
their disagreements were caused by assigning different weights to different bits of information in the paper
abstract or by subtle differences in interpretation.
Table 3. Confusion Matrix showing the number of abstracts assigned to each of the individual CHI 2014 contributions
(leading to κ = 0.43)

         U    T    A    I    D    S    M
    U   15    3    6    3    4
    T         6    2    2    1    1    6
    A              1    1
    I                   2    3    3
    D                        9   10    4
    S                             7    2
    M                                  6

U = Understanding Users; T = Theory; A = Argument; I = Innovation, Creativity and Vision;
D = Development of Interface Artifacts or Techniques; S = Systems, Tools, Architectures and
Infrastructures; M = Methodology
In Figure 3 we show a combination of the results of the top-down classification with the slot-
classification. The top-down classification is depicted by ellipses drawn around papers with a similar
contribution. Each ellipse captures the locations of about 90% of the papers for which at least one of the
raters assigned a certain contribution. The notable overlap between the ellipses mirrors the common
confusions as listed in the confusion matrix presented earlier (Table 3). We have chosen to omit the
contribution Methodology from this plot, to avoid too much clutter. Methodology papers were spread out
over the whole plot, although most could be found in the center. Methodology is also commonly confused
with all other contributions in the confusion matrix of Table 3. This suggests it is difficult to distinguish
Methodology from other contributions.
Figure 3. The MDS plot with overlay of the top-down classification: ellipses capturing about 90% of the papers
assigned to each contribution.
Our analysis of these plots suggests that contributions in HCI are, indeed, interrelated. It turns out
to be difficult to assign contributions to an abstract unequivocally. This is apparent in the low value of
kappa in the top-down classification and the absence of clear clusters in the MDS. This is not due to a
disagreement about the key information in the abstracts, as kappa for the slot-classification was good. The
spread in the plot suggests the common confusions of contributions cannot easily be resolved by using
different contribution-type categories. One reason for the lack of clusters and common confusions could be
that HCI researchers tend to combine multiple contributions in a single paper, which is supported by our
raters' observation that papers typically are compounds of multiple contributions. So a categorical list of
clearly distinguishable contributions may be the wrong way to think about
contributions. Rather, HCI papers may be compounds of multiple contributions, each of which can be of a
different contribution type. This is in line with our data if we consider that not all combinations of
contributions are equally likely. The two dimensions of the plot suggest the HCI research community is
subject to a tension field between the real and the possible and between the analysis of the current situation
and the synthesis of a new one. These underlying tensions also explain the common confusions in Table 3.
It is, for example, more likely for authors to present novel user insight and theory in a single paper than to
combine such insights with a grand vision of the future, as is done in innovation papers. As such we may
even take things a step further and abandon the idea of describing HCI contributions as items from a nominal
list altogether. Contributions may be seen as trajectories in the tension field of HCI, reconciling opposing
values in the field: a similar viewpoint has been expressed earlier by Fallman [18] and is an underlying
idea in the DOT-framework [55].
4 Contributions and Methodology
The third question of the paper is how contribution is related to the chosen research methodology.
Although contribution (the utility of the outcomes of a study to the research community) and methodology
(the approach taken to arrive at these outcomes) are different aspects of research, one would expect a
correlation between the two, as certain approaches lead to certain outcomes. We have chosen to use the
DOT-Framework [55] (see Figure 4a) for an analysis.
Figure 4. The DOT-Framework [55] (Figure 4a) and four mixed-method research design patterns identified via the
DOT-Framework [54]: Validated Solution (Figure 4b), Rigor Cycle (Figure 4c), Field Reframing (Figure 4d), and
Relevance Cycle (Figure 4e)
The DOT-Framework is a cross disciplinary framework for research methodology in HCI. The
DOT-Framework has three layers: an ontological layer (objects of HCI research), an epistemological layer
(approaches to HCI research) and an axiological layer (the underlying values). In this section we
focus on the epistemological layer. This layer maps out five research strategies divided over three domains in the
ontological layer: the domain of available work (on the right), the innovation space (in the middle), and the
application domain (on the left). Library research refers to an investigation of existing work;
Field is research on user behavior in the application domain; Lab research validates solutions in the context
of use; Showroom research verifies, refines, and/or improves a solution in relation to existing work; and
Workshop resides within the innovation space, where new ideas and solutions can be explored iteratively.
Van Turnhout et al. [54] identified mixed-method patterns within the DOT-Framework in which
research is described as a carefully orchestrated triangulation path employing several of the five strategies
(see Figures 4b to 4e). They have shown that most of the work in HCI can be captured in a handful of these
patterns. Three dominant patterns were identified: Validated Solution (using Library, Workshop, Lab) is
used in studies proposing new artifacts, infrastructures or interaction techniques (see Figure 4b); Rigor
Cycle (using Library, Workshop, Showroom) is used to explore solutions and to weed out problems in
existing work (see Figure 4c); and Field Reframing (using Library, Field, Showroom) is used when a
particular context of use is of interest but not yet studied from a particular point of view (see Figure 4d). In
addition three candidate patterns were found, of which the Relevance Cycle (using Field, Workshop,
Lab) captures the user-centered design cycle (see Figure 4e). Van Turnhout et al. [54] point out that
although the user-centered design cycle is ubiquitous in HCI textbooks, the closely corresponding
Relevance Cycle was not frequently found in current HCI research papers.
Figure 5. The MDS Plot with our interpretation of the dimensions, the dominant slots and the results of the top-down
classification. Acronyms in capitals indicate the dominant patterns as classified by [54].
We studied the relation between the mixed-method research patterns identified by [54] and the
contributions as examined in this paper, by highlighting the set of NordiCHI'12 papers in the plot (see
Figure 5). We decided to focus on the three dominant patterns since these cover most of our dataset: Field
Reframing, Rigor Cycle, and Validated Solution. By and large these three patterns form three partially
overlapping bands in the plot. Van Turnhout et al. [54] describe Field Reframing studies as studies that
examine a particular context of use from a novel point of view. Field Reframing (FR) forms a band at the side
of the plot which is characterized by analytical, abstraction, inductive and theoretical, stretching the full
spectrum between the factual and the hypothetical. Figure 5 shows that Field Reframing papers contribute user
insights (understanding users) or theories. We find Rigor Cycle (RC) papers in the middle of the plot. The
Rigor Cycle is used to explore solutions and methods in order to weed out problems in existing work [54].
Rigor Cycle papers focus both on solutions and on theory development, which may explain their central
placement. Almost all contribution types feature one or more Rigor Cycle papers. Validated Solution
studies test and/or validate new or improved solutions in the application domain. The general difference
between Field Reframing and Validated Solution is that the former comprises 'theorize' and 'justify'
research, and the latter 'build' and 'evaluate' research [37]. Validated Solution papers contribute to the design of
interactive systems and interaction techniques. These papers are found on the side of the plot described by
synthesis, validating, deductive and concrete, covering both ‘factual’ versions, focusing on
implementation, and more ‘hypothetical’ versions, focusing on innovation and exploration.
The plots show that methodology as defined in the DOT-Framework and contribution are strongly
related, but this holds mostly for the vertical dimension (abstraction versus synthesis) and less for the
horizontal dimension (factual versus hypothetical). Looking at the horizontal dimension: studies that
employ a Field Reframing pattern focus on <field> and <theory>. Typically these Understanding
Users and Theory studies examine reality as it is, to produce novel insights which could guide designers in the
future. Validated Solution papers, such as Systems and
Interaction Techniques papers, focus on <lab> and <solution> and aim at the creation and
implementation of new artifacts, increasing the body of existing solutions in HCI. This analysis in terms of
patterns shows no gradient in the dimension distinguishing between the 'possible' (grand vision of the
future) and the 'real' situation (reality as it is right now). Apparently the mixed-method research patterns do
not capture the differences in methodology along this dimension; capturing them would require a more
nuanced look at methodology.
5 The Connected Contribution-Model
In this section we present the Connected Contribution-Model (CC-Model), which offers an understanding
of contributions to HCI and explains the interconnectedness of the different CHI contributions (see Figure
6). The model is constructed through a synthesis of our findings from sections 3 and 4 and relevant literature.
The central idea of the model is taken from Newman [38]. He proposed to model HCI as an expanded
version of the engineering discipline. We follow his idea to model the discipline as a whole as an expanded
version of the practice which it supports, but we amend it by taking the user centered design (UCD) cycle
as a representation of our practice ([54] speak of the Relevance Cycle, see Figure 4e) rather than
engineering.
Loosely put, the UCD cycle runs from observing a context of use, through setting priorities and
exploring opportunities for improvement, to creating solutions which are tested in the
context of use. All of these steps can lead to intermediate results which could be contributions to the field.
This matches common practice in HCI: research projects typically cover a small part of the UCD cycle in a
much more rigorous way than practice would allow for. As such the HCI-community can be seen as a
group of researchers solving the generic user centered design problem of human computer interaction in
the future in a radically distributed way. Seen this way, contributions can be split into two elements. First,
they consist of a (topical) scope, indicating which 'part' of the generic problem of 'HCI in the future' is
addressed ('human-robot interaction', for example). Second, they consist of a contribution type, which is
defined by the outcome of the slice of the UCD cycle that is taken up ('solution', for example). Having
established this separation of concerns between scope and contribution type, the CC-Model focuses on the latter.
Central in the model is the arc which represents the UCD cycle. This is in part a theoretical
construct and in part empirical: a similar arc can be drawn across the MDS plots presented in sections 3
and 4, sequentially touching the contributions Understanding Users, Theory, Innovation and Solution. We
have placed the core-contributions of Figure 3 at the most indicative places on the arc. We placed the
contribution Methodology in the center of the model. Methodology plays an auxiliary role in both HCI
research and HCI practice, and as such it is extensively studied. Earlier we highlighted that we found
methodology papers to be scattered across the plot, and indeed these papers show the diversity of other
contributions with regard to methodology and contribution type. There are 'understanding designers'
papers which focus on HCI practitioners' practices and needs [19,33], there are 'theory of method' papers
which focus on a deeper understanding of methodology [3,29], there are innovation papers suggesting
novel avenues for methodologies [9,15] and ‘method solution’ papers which focus on proposing and
evaluating novel methods [34]. Papers which rigorously evaluate the application of methods by
practitioners were scarce in our dataset, although such evaluations do occur in the field [60]. This diversity
also means that all dominant patterns found by [54] can in principle be applied, but Rigor Cycle appears to
be a common approach. The placement in the middle of the plot underlines both its auxiliary role and its
affinity with other contribution types.
Figure 6. The CC-Model illuminating the interrelations between the different core-contributions in HCI.
In section 3 we said HCI research might best be conceived as a trajectory in a tension field, as has
been proposed by Fallman [18]. We consequently interpreted the dimensions of the plot as fundamental
tensions in HCI. For the final model we have chosen Abstracting versus Concrescence (following
Rauterberg [42]) and Reality versus Vision to characterize these tensions. Following the complete cycle
would resolve them all, but most papers only cover a part of the cycle. The arrows in Figure 6 further
clarify this finding for each core contribution. To give an example: a researcher with the scope 'human
robot interaction’, and Understanding Users as contribution type might follow this trajectory. She
investigates part of the reality (such as current attitudes of teachers towards teaching robots [45]), abstracts
away from it and delivers results in the form of prototheory (requirements for robots in the classroom)
which can be picked up by others in the field. A similar 'story' can be told for other core-contribution types.
Clearly, the trajectories belonging to the core contribution types are merely archetypes. Other
paths are possible and, as such, contributions can, as the SIGCHI author instructions suggest, fall in
between core-contributions. Researchers may find intermediate results in a trajectory to be of
‘independent’ value to the community or they may combine and stack multiple smaller contributions in a
single paper. However, it is not the case that anything goes. Papers tend to combine proximate
contributions, as is visible in the confusion matrix in Table 3. This is explained by the tension field of the
model: it is hard to have a paper grounded in reality-as-it-is (and thus be highly empirical) and
simultaneously present grand visions of a different future (and thus be 'visionary'). Similarly, papers
typically do not focus on both Abstracting (explaining the reality and capturing it in theories) and
Concrescence (turning these theories into concrete artifacts for a future reality). Moving across the central
arc of the model, the type of questions researchers tend to address changes. Roughly speaking, researchers
move from descriptive 'is' and 'why' questions, via prioritizing and normative 'ought' and 'should'
questions, through 'could' and 'can' questions about possibilities, back to 'is' questions about evidence
for results.
The CC-model raises several theoretical and empirical questions, which we will now briefly
address. First, modeling the HCI community as a group of researchers investigating human computer
interaction in the future, bound by a commitment to the user centered design cycle [13], raises the question
how HCI research differs from HCI practice. A concern to be relevant to interaction design
practice is deeply embedded in the HCI research community [16,21,22,36,44], and a form of continuity
between research and practice can be expected because of the shared educational background of both
communities. Nevertheless, the consensus in HCI research is that research and practice have marked differences in
problem setting, problem solving strategies and culture [2,12,22,47]. The CC-model suggests that both HCI
research and HCI practice face similar tensions in solving the problems they face, but it does not equate
HCI research with HCI practice. Rather it allows research practices to be radically different from the
approaches taken by practitioners, in particular when it comes to research pragmatics. Practitioners
typically need to address all tensions in a single project, while researchers can cover only a
part of the cycle. HCI researchers can therefore choose to address some, but not all,
tensions of the model, which leaves room for a much more innovative, theory-based and rigorous approach
[7].
A second, related, question is whether trying to solve the interaction design of the future is the
best way to support (current) HCI practitioners. Several authors have tried to explicate the relationship
between HCI research and practice. John Long [36] for example has proposed an elaborate, formal model
for the relationship between HCI and design practice. More recently Gray et al. [22] have studied such
relationships empirically. They focused on the contribution type Methodology, proposing a different and
simpler model. Although specifying relationships between HCI-research and practice is beyond the scope
of our paper, the CC-Model could be of value in these efforts. Current efforts to create such models do not
make a distinction between different contribution types. However, considering the diversity of the types of
questions that exist for different contributions, it is likely that the way in which a research contribution can
be relevant to practice also differs across contribution types. The CC-Model could thus be used to bring
more nuance and focus to academic efforts to understand and operationalize the relationship between HCI
research and practice.
A third question relates to build-up across multiple contributions. Obviously, the HCI
research community does not follow the UCD arc of the model in order: an understanding users paper
need not be followed by a theory paper, and theory papers are not always followed by innovation papers.
On the contrary, most HCI contributions probably receive follow-up from a paper of a similar type. As an
empirical question, studying such follow-up would give some answers to the question to what extent HCI
is a multi- or an interdisciplinary field [7,24]. In 'multidisciplinary HCI' several specialist
subcommunities, for example HCI-researchers striving for the direct implementation of a solution, would
pick those problems they could solve with little interaction with and influence from other subcommunities.
In 'interdisciplinary HCI', contributions would not only inspire similar contributions but would also
strongly influence other contribution types, at least the neighboring contributions in the CC-Model. The
absence of clear clusters in our plots suggests that the interdisciplinary model is more accurate. Recently,
other authors have also suggested a growth of the 'interdisciplinarity' of the field [7,24,35]. However, the proper
way to study this would be a bibliometric study mapping the lead-up and follow-up of papers of each
contribution type. To the best of our knowledge such a study, which would give interesting insights into the
way interdisciplinary knowledge build-up works in our field, has not been done yet. The CC-model could
provide the basis for such a study.
In summary, the CC-model explains most of our empirical findings and helps to understand the
continuity of contributions in HCI throughout its history: the scope of the field has expanded widely, while
its contribution types have remained largely stable (as has the discipline [16]). The value of the model is
that it not only identifies the core-contribution types, but that it also visualizes how each type relates to the
field as a whole and to other contribution types. This is important because it helps researchers to consider
how their work can be picked up by others. In particular it helps them to understand who those others, and
their research needs, might be. Each contribution has three possible audiences: it can be important
to others who share the scope and contribution type, it can be important to researchers with a similar
scope but a different contribution type, and it can be relevant for practitioners. Although we
have shown there are many open questions about the particular needs of these three audiences, the model
does give a starting point for thinking about those multiple audiences. HCI researchers often show their
sensitivity to others in the field by giving 'implications for design', which may be apt in some situations
but not in others [16,17,32,43]. Making explicit which follow-up contributions are invited by a particular
contribution may be a first step into developing a more sophisticated language for sharing findings across
our discipline.
6 Pro forma Abstracts
In this section we provide pro forma abstracts for the core-contributions of the CC-Model which aid in
understanding and applying the model and its core-contributions: Understanding Users, Theory,
Innovation, Solution, and Methodology. The pro formas can be used as an analytic device in the way
Newman [38] and others [1] have used them, although the procedure needs to be adapted to deal with the
presence of mixed contributions in our field. Our intention is, however, also didactic [8,46]. The expansion
of our field in the past few decades has given rise to a need for a (methodological) lingua franca for HCI
[11,23,54]. Following our finding that, despite the changes in the field, HCI's core-contributions have remained
stable over the years, the CC-model and its pro formas are a good candidate to be part of this shared
language. The pro formas in this chapter illustrate the core-contributions of the CC model and aid in
writing clear paper abstracts.
Abstracts are an important part of a paper and need to be independently readable. More often
than not, they are only "seen in isolation" by readers [38,49]. As such, many academic authors recommend
keeping abstracts short, simple and easy to understand [56]. The pro formas implement these guidelines.
They are based on the most commonly used slots [31] assigned to papers belonging to the corresponding
core-contribution type. For all contributions we recommend mixed-method research design patterns, for
which we refer to [54] for an in-depth discussion. We have given the pro formas a uniform structure
consisting of four elements (partly derived from [38]): first, a problem statement indicative of the scope of
the contribution and the hypothesized room for improvement; second, an indication of the actual
improvement; third, the evidence for the findings; fourth, a notion of the contribution. We have chosen to
provide one sentence for each of the
four elements in subsequent order; they are indicated by the numbers (1) problem statement, (2)
improvement, (3) evidence, (4) contribution. With the combination of slots and structure an 'ideal' pro
forma was drawn up. Finally, the pro forma abstracts were applied to three abstracts from our dataset
and edited for better applicability. In this last editing round, slots were rewritten to be more applicable and
superfluous slots were removed.
When the pro formas are used to write abstracts, their slots can be replaced by a detailed
description of the research itself. The use of these pro formas, however, is not a pure form-filling
exercise. They should be considered archetypes that need to be appropriated and adapted to capture
the specifics of a particular research effort, especially when papers contain multiple contributions. Their
uniform structure, with its connection to research methodology, gives guidance on what information is
important for readers and how it can be presented clearly.
The contribution Understanding Users aims to provide the community with a better understanding
(observation) of users and the use-context, and to set priorities for follow-up studies. Its complementing pattern is
Field Reframing. Table 4 shows the pro forma abstract and two fictional examples illustrating its use.
Table 4. Pro forma abstract for Understanding Users (above) and examples (below)
(1) Within <context of use> this <user problem> persists. Existing work is inadequate
because of <deficient theory properties>. (2) In this paper we report on <paper goal>. (3)
We executed a <field study> in <field-setting>. The study revealed <list of results>. (4)
Careful comparison with <existing theory> revealed the following <implications for
design/theory>.
(1) |Users of the trash bin in Microsoft Windows| often |have difficulties deciding when to
empty the bin|. Despite an abundance of theory about people's relationships with real and
virtual trash bins, |few have focused on long term usage patterns|. (2) In this paper we
report on a study |to identify such patterns|. (3) We executed a |large scale ethnographic
study| observing virtual trash bin usage |in three different multinationals|. The study
revealed that |users are often anxious about their trash bin becoming too full and they
display signs of regret when emptying it too early|. (4) Careful comparison |with work on
document management systems| suggests a similar emotional pattern was solved with |a
history function. This could be an avenue for the design of digital trash bins as well|.
(1) |One way to save energy is to avoid using electricity at peak times|. Many users,
however, are |unaware of this strategy and lack the information needed to regulate their
energy saving behavior|. (2) In this paper we |investigate how users might want to be
informed about the best time to use energy|. (3) We conducted |semi-structured interviews|
with |lead figures| in the sustainability movement indicating |that there is a need for
sophisticated and context aware notification systems|. (4) This is in line with earlier
findings |suggesting timing their energy usage is a difficult task for users| which |can be
supported with technology if it takes the intricacies of everyday life into account|.
The contribution Theory aims to improve existing theories about users and design or to propose
new ones; to find opportunities and set priorities. Its complementing patterns are Rigor Cycle and Field
Reframing. Note: if the theory is constructed through a user study (which is often the case), <approach> can
be expanded by inserting parts of the ‘understanding users’ pro forma. Table 5 shows the pro forma
abstract and two fictional examples illustrating its use.
Table 5. Pro forma abstract for Theory (above) and examples (below)
(1) <existing theory> in <context of existing work> has several <deficient theory
properties>. (2) We constructed <improved theory> (3) through <study approach>. The
<improved theory> has <list of theory highlights>. (4) The result has <theory utility>.
(1) |Clark's pragmatic theory of language use| has been |widely adopted|, but |turns out to
have more explanatory than predictive power because of its high level of abstraction|. (2)
We constructed |an alternative framework| by (3) |operationalizing the concept of
grounding so that it can be mathematically derived from a given set of parameters|. The
framework |incorporates a delicate mapping between subtle textual messages and
conversational success|. (4) The framework thus |allows chatbots and virtual agents to
establish to what extent they can assume common ground with their human interlocutors|.
(1) |Fitts' law| has been successfully applied to optimize the effectiveness |of a range of
physical input devices| but it |fails to account for user satisfaction|. (2) We extended Fitts'
law |with a set of parameters that predict satisfaction as well|. (3) A |meta-analysis of 45
recent Fitts' law studies| was used to arrive at a set of |critical parameters to be inserted in
Fitts' law| which |explained 12% of the variance| in the studies' measurement of user
satisfaction. (4) The new model is thus capable of predicting user satisfaction to some
extent which will greatly improve |the early design stages of new satisfactory input
devices|.
The contribution Innovation offers novel visions and concepts, which enable new styles of
interfaces; it identifies opportunities and provides solutions for a (possible or grand) future. As we had few
abstracts which fell unequivocally into this category, the abstract presented here is more tentative than the
other four. Also, we found a variety of methodological approaches, so we do not recommend a specific
pattern. Table 6 shows the pro forma abstract and two fictional examples illustrating its use.
Table 6. Pro forma abstract for Innovation (above) and examples (below)
(1) <context of existing work> highlights the potential of (2) <novel idea>. (3) In this
paper we explore the design space through <study approach>. We found that <list of
insights>. (4) We propose <research agenda> for <new application area>.
(1) |Although interactive games for humans and animals have been created|, no existing
interactive games focus on (2) |self-play by animals in the absence of human
interlocutors|. (3) We explored the design space of such self-practice games for rabbits
using |a Wizard of Oz setup in which rabbits were exposed to scripted sequences of
audiovisual cues and rabbit behaviors relating to joy and curiosity were observed|. We
found that |interactive self-play has potential in the case of rabbits, but care should be
taken not to exhaust the animals|. (4) We propose that |future studies should focus on further
exploration and addressing ethical dilemmas| in research on |animal self-play|.
(1) |Brain-computer interfaces| can bring the idea of (2) |seamless interaction for
immersive game experiences| to a new level. (3) We |created a version of the classic Pac-
Man game in which the character can be controlled with conscious thoughts but the ghosts
are controlled unconsciously|. While |users were unaware of controlling the ghosts| they
reported |thrilling experiences, including feelings of paranoia and schizophrenia|, which
normal users of Pac-Man do not have. (4) We propose |to repeat our study for other genres
in computer games| to explore the possible effect of subconscious control on the
|immersion of games| in general.
Research into the contribution Solution aims at solving an existing problem by delivering a
prototype that is tested in the context of use; providing a solution and putting it to the test. Its
complementing pattern is Validated Solution. Table 7 shows the pro forma abstract and two fictional
examples illustrating its use.
Table 7. Pro forma abstract for Solution (above) and examples (below)
(1) A <user problem> exists in <context of use>. (2) Existing solutions show <deficient
solution properties>. We present <solution description> with the following <solution
advantages>. (3) We developed <type of prototype> and performed a <lab study> with
the following results <lab results>. (4) The solution has <significance for users>.
(1) |Information overload| is a common problem |for users of social media|. (2) Although
automatic filtering solutions exist, these |cannot be tuned to users’ specific information
needs at a given moment|. We present Feedtuner, a |social media filter with an adjustable
tradeoff between false alarms and false rejections of the filter that allows users to adapt
their social media streams to their actual information needs at a given moment|. (3) We
developed |a fully working prototype for the most common social media streams| and
performed |longitudinal tests with 20 users employing the experience sampling method|.
The test showed that |users felt empowered by the system in managing their social media
streams, which increased the number of check-ins at social media services|. (4) User
controllable filtering |can thus greatly improve the usability and usage rates of social
media|.
(1) |Most users have difficulties creating and maintaining a diverse set of safe passwords|
in particular |for online services|. (2) Increasingly, web-developers use explicit rules and
nudges to encourage users to create safer passwords, but |these do not take passwords
created at other sites into consideration|. We present Pass-Carousel: |a browser plugin
which teaches the user to treat passwords as part of a password ecology and
simultaneously acts as a password safe| providing an |easy-to-use solution for safer
password creation|. (3) A |beta version of the plugin was tested among 40 students|, showing |a
70% decrease of users who use the same password on different sites and an 18% increase
in safe passwords in general|. (4) Installing the plugin can |make password creation less
tedious while increasing users’ safety on the web|.
The contribution Methodology focuses on improving methods or tools that can be applied in
future work to support the creation of design and analytical tools, or solutions. Although we find
occasional examples using Field Reframing and Validated Solution, for methodology Rigor Cycle appears
to be a common approach. Table 8 shows the pro forma abstract and two fictional examples illustrating its
use.
Table 8. Pro forma abstract for Methodology (above) and examples (below)
(1) <existing methods> show these <method weaknesses>. (2) We present an <improved
method>. (3) We constructed this method through <study approach>. The resulting method
<method description> has <method advantages>. (4) Our <method evaluation> led to
the following generic <methodological insights>.
(1) |Existing methods for usability testing| are underused in the practice of web design
because |they are considered to be too heavy by practitioners|. (2) We present a |usability
toolbox|, (3) which we created |by deconstructing existing methods into minimal viable
components|. The resulting |collection of usability evaluation atoms| |can be employed
much more flexibly by practitioners|. (4) A |pilot study in a large web development firm|
indicated that |method toolboxes can be a viable approach to usability testing, in particular
for experienced evaluators|.
(1) |User research methods in general| focus on how information can be elicited from
users, |implicitly framing the designer as a tabula rasa| who needs to learn everything from
the user. (2) We present |user-enhanced auto-ethnography|, (3) which we |developed and
tuned over the course of several case studies|. The method enables designers |to bring their
own experiences to the table and have them critically examined by end users|. This
|supports the designers’ intuitive decision making throughout the process|. (4) A
comparison with |personas and context mapping| showed the |personal experiences of the
designer play an important role in any design process, and ought to be guided through the
proper use of methods|.
7 Conclusion and Discussion
In this paper we have tried to fill a void in contemporary HCI literature by discussing contributions to
the field. Building on a historical analysis and an empirical literature study, we have presented a shortlist of
contributions to HCI, the CC-Model, which focuses on the interrelations between contributions, and
accompanying pro forma abstracts. Together these give a coherent outlook on contributions within the HCI
discipline as a whole. Although we managed to identify a list of core-contribution types, it appears that
papers which follow these archetypes are rare. More often than not, HCI-papers provide multiple
contributions. This created complexities in this study and presents authors and reviewers in HCI
with a challenge. Envisioning research in our field as paths in a tension field, as the CC-Model
proposes, may clarify our thinking about contributions while doing justice to the complexity of research in
our community. One approach to create more clarity in the abstracts of papers with mixed contributions is
to make an explicit distinction between primary and secondary contributions. We did find abstracts which
used this style and we do consider this good practice when it improves the clarity of the abstract. Such
compound abstracts can be constructed from our pro forma abstracts, so we did not write separate pro
formas for these combined contributions. When the pro forma abstracts of this paper are used as an analytic
instrument in the way Newman proposed [38], however, the prevalence of combinations of multiple
contributions in HCI papers ought to be incorporated in the study design.
Throughout the study we have been somewhat intrigued by the discontinuation of the experience
(or case study) category by the CHI program committee in 2010. Investigating this further was outside the
scope of this study; however, some speculation may be in order here. We found only a handful of papers
in our dataset which could potentially be submitted as a case study, and Van Turnhout et al. [54] claim to
have found a relevance cycle (corresponding to the UCD cycle) only once. The frequent relabeling of the
category in CHI history suggests a struggle between stressing the real-life value of a case study (thus
demonstrating UCD in reality) and its innovative power (thus serving the interest of the CHI community).
Executing a full user-centered design cycle in a single study comes at the cost of academic rigor and
innovative power, since all tensions of the model need to be addressed. As such, case studies often do not
touch the borders of the field. Nevertheless, as they demonstrate the power of UCD in practice, we feel
they should find their way into the HCI-literature.
The work presented in this paper is empirical in nature and as such it is subject to all difficulties of
empirical work. Our sample has been modest in size in order to afford a qualitative analysis in a reasonable
amount of time. Considering the wide range of HCI outlets, it may still have been too small, or too
focused on the mainstream, to give a valid and reliable picture of HCI as a whole. As such we are very open
to replications or reexaminations of our work from different points of view, and therefore we have published our
dataset online [31]. Another disadvantage of an empirical study like this is that it implicitly reproduces the
status quo. Even if our assessment of contemporary HCI is correct, this does not necessarily imply this is the
way HCI ought to be. As much as we invite authors to use the CC-model and pro forma abstracts to
improve their work, we invite them to critically re-examine the assumptions behind the model for truth
and desirability.
As a final remark: the viewpoints expressed in this paper have addressed the discipline of HCI as a whole. This has the disadvantage of painting a picture in broad brushstrokes, missing much of the detailed discussion about contributions within sub-disciplines of HCI, such as the design research community (for example [41,58]). We believe such specialist methodological literature is vitally important, and our work is not intended to replace it. However, if the field is indeed maturing and changing from a multidisciplinary to a more interdisciplinary one [24], we need to address the connections between 'the parts' and 'the whole' of HCI. This paper aspires to be a valuable anchor point for the 'whole' in that equation.
8 References
1. Andoh-Baidoo, F. K., Baker, E. W., Susarapu, S. R. et al.: A Review of IS Research Activities and Outputs using
Pro Forma Abstracts. Information Resources Management Journal (IRMJ), 20, 65-79 (2007)
2. Arnowitz, J., & Dykstra-Erickson, E.: CHI and the Practitioner Dilemma, Interactions, 12(4), 5-9 (2005)
3. Bardzell, J., & Bardzell, S.: What is Critical about Critical Design? In: Proceedings of the SIGCHI conference on
human factors in computing systems (CHI '13), 3297-3306. ACM (2013)
4. Bardzell, S., Bardzell, J., Forlizzi, J., Zimmerman, J., & Antanitis, J.: Critical Design and Critical Theory: The
Challenge of Designing for Provocation. In: Proceedings of the Designing Interactive Systems Conference (DIS
'12), 288-297. ACM (2012)
5. Barnard, P., May, J., Duke, D. et al.: Systems, Interactions, and Macrotheory. ACM Transactions on Computer-
Human Interaction (TOCHI), 7(2), 222-262 (2000)
6. Bereiter, C.: Education and mind in the knowledge age. Routledge (2005)
7. Blackwell, A. F.: HCI as an Inter-Discipline. In: Proceedings of the SIGCHI Conference on Human Factors in
Computing Systems (Extended Abstracts) (CHI EA '15), 503-516. ACM (2015)
8. Blythe, M.: Research through Design Fiction: Narrative in Real and Imaginary Abstracts. In: Proceedings of the
SIGCHI Conference on Human Factors in Computing Systems (CHI '14), 703-712. ACM (2014)
9. Brereton, M., Roe, P., Schroeter, R., & Lee Hong, A.: Beyond Ethnography: Engagement and Reciprocity as
Foundations for Design Research Out here. In: Proceedings of the SIGCHI Conference on Human Factors in
Computing Systems (CHI '14), 1183-1186. ACM (2014)
10. Carroll, J. M.: HCI models, theories, and frameworks: Toward a multidisciplinary science. Morgan Kaufmann
(2003)
11. Churchill, E., Bowser, A., Preece, J.: Teaching and Learning Human-Computer Interaction. Interactions, 20(2), 44-
53 (2013)
12. Clemmensen, T.: Community Knowledge in an Emerging Online Professional Community: The Case of SIGCHI.dk. Knowledge and Process Management, 12(1), 43-52 (2005)
13. Cockton, G.: Revisiting Usability's Three Key Principles. In: Proceedings of the SIGCHI Conference on Human
Factors in Computing Systems (Extended Abstracts) (CHI EA '08), 2473-2484. ACM (2008)
14. Cox, T. F., & Cox, M. A.: Multidimensional scaling. CRC Press (2010)
15. De Roeck, D., Slegers, K., Criel, J., Godon, M., Claeys, L., Kilpi, K., & Jacobs, A.: I would DiYSE for it!: A
Manifesto for do-it-Yourself Internet-of-Things Creation. In: Proceedings of the Nordic Conference on Human-
Computer Interaction (NordiCHI '12), 170-179. ACM (2012)
16. Dix, A.: Human–computer Interaction: A Stable Discipline, a Nascent Science, and the Growth of the Long Tail.
Interacting with Computers, 22(1), 13-27 (2010)
17. Dourish, P.: Implications for Design. In: Proceedings of the SIGCHI Conference on Human Factors in Computing
Systems (CHI '06), 541-550. ACM (2006)
18. Fallman, D.: The Interaction Design Research Triangle of Design Practice, Design Studies, and Design
Exploration. Design Issues, 24(3), 4-18 (2008)
19. Friess, E.: Personas and Decision Making in the Design Process: An Ethnographic Case Study. In: Proceedings of
the SIGCHI Conference on Human Factors in Computing Systems (CHI '12), 1209-1218. ACM (2012)
20. Glaser, B. G., & Strauss, A. L.: The discovery of grounded theory: Strategies for qualitative research. Transaction
Publishers (2009)
21. Goodman, E., Stolterman, E., Wakkary, R.: Understanding Interaction Design Practices. In: Proceedings of the
SIGCHI Conference on Human Factors in Computing Systems (CHI '11), 1061-1070. ACM (2011)
22. Gray, C. M., Stolterman, E., Siegel, M. A.: Reprioritizing the Relationship between HCI Research and Practice:
Bubble-Up and Trickle-Down Effects. In: Proceedings of the Designing Interactive Systems Conference (DIS '14),
725-734. ACM (2014)
23. Gross, T.: Human-Computer Interaction Education and Diversity. In: Human-Computer Interaction. Theories,
Methods, and Tools, 187-198. Springer International Publishing (2014)
24. Grudin, J.: Is HCI Homeless? In Search of Inter-Disciplinary Status. Interactions, 13(1), 54-59 (2006)
25. Grudin, J.: A Moving Target: The Evolution of HCI. The human-computer interaction handbook: Fundamentals,
evolving technologies, and emerging applications, 1-24 (2008)
26. Grudin, J.: Three Faces of Human-Computer Interaction. IEEE Annals of the History of Computing, 4, 46-62
(2005)
27. Guha, S., Steinhardt, S., Ahmed, S. I., & Lagoze, C.: Following Bibliometric Footprints: The ACM Digital Library
and the Evolution of Computer Science. In: Proceedings of the ACM/IEEE-CS joint conference on Digital libraries
(JCDL '13), 139-142. ACM (2013)
28. Gwet, K.: Handbook of Inter-Rater Reliability. Gaithersburg, MD: STATAXIS Publishing Company, 223-246
(2001)
29. Hansen, N. B., & Dalsgaard, P.: The Productive Role of Material Design Artefacts in Participatory Design Events.
In: Proceedings of the Nordic Conference on Human-Computer Interaction (NordiCHI '12), 665-674. ACM (2012)
30. Harrison, S., Tatar, D., Sengers, P.: The Three Paradigms of HCI. In: Proceedings of the SIGCHI Conference on
Human Factors in Computing Systems (Alt. Chi. Session) (CHI '07), 1-18 (2007)
31. Jacobs, M., Turnhout, K. van, Bennis, A., Craenmehr, S., Niels, R., Zaad, L., Bakker, R.: Appendix to “Contributions to HCI: Introducing the CC-Model and its Pro Forma Abstracts”. Available from: http://bit.ly/ContToHCI
32. Kostakos, V.: The big hole in HCI research. Interactions, 22(2), 48-51 (2015)
33. Kuusinen, K., & Väänänen-Vainio-Mattila, K.: How to make Agile UX Work More Efficient: Management and
Sales Perspectives. In: Proceedings of the Nordic Conference on Human-Computer Interaction (NordiCHI '12),
139-148. ACM (2012)
34. Laporte, L., Slegers, K., De Grooff, D.: Using Correspondence Analysis to Monitor the Persona Segmentation
Process. In: Proceedings of the Nordic Conference on Human-Computer Interaction (NordiCHI '12), 265-274.
ACM (2012)
35. Liu, Y., Goncalves, J., Ferreira, D., Xiao, B., Hosio, S., & Kostakos, V.: CHI 1994-2013: Mapping Two Decades
of Intellectual Progress through Co-Word Analysis. In: Proceedings of the SIGCHI Conference on Human Factors
in Computing Systems (CHI '14), 3553-3562. ACM (2014)
36. Long, J.: Specifying Relations between Research and the Design of Human-Computer Interactions. International
Journal of Human-Computer Studies, 44(6), 875-920 (1996)
37. March, S. T., & Smith, G. F.: Design and Natural Science Research on Information Technology. Decision support
systems, 15(4), 251-266 (1995)
38. Newman, W.: A Preliminary Analysis of the Products of HCI Research, using Pro Forma Abstracts. In: Conference
Companion on Human Factors in Computing Systems (CHI '94), 278-284 (1994)
39. Newman, W. M.: Better Or just Different? On the Benefits of Designing Interactive Systems in Terms of Critical
Parameters. In: Proceedings of the Designing Interactive Systems Conference (DIS '97), 239-245 (1997)
40. Nordic Conference on Human-Computer Interaction: NordiCHI 2012, Making Sense through Design. Copenhagen, Denmark. ACM (2012)
41. Pierce, J.: On the Presentation and Production of Design Research Artifacts in HCI. In: Proceedings of the
Designing Interactive Systems Conference (DIS '14), 735-744 (2014)
42. Rauterberg, G.: How to Characterize a Research Line for User-System Interaction. IPO annual progress report, 35,
66-86 (2000)
43. Reeves, S.: Locating the 'big hole' in HCI research, Interactions, 22(4), 53-56 (2015)
44. Rogers, Y.: New Theoretical Approaches for HCI. Annual Review of Information Science and Technology, 38, 87-
143 (2004)
45. Serholt, S., Barendregt, W., Leite, I., Hastie, H., Jones, A., Paiva, A., Vasalou, A., Castellano, G.: Teachers' Views
on the use of Empathic Robotic Tutors in the Classroom. In: The IEEE International Symposium on Robot and
Human Interactive Communication (IEEE RO-MAN '14), 955-960 (2014)
46. Shaw, M.: Writing Good Software Engineering Research Papers: Minitutorial. In: Proceedings of the International
Conference on Software Engineering (ICSE '03), 726-736. IEEE Computer Society (2003)
47. Stolterman, E., & Pierce, J.: Design Tools in Practice: Studying the Designer-Tool Relationship in Interaction
Design. In: Proceedings of the Designing Interactive Systems Conference (DIS '12), 25-28. ACM (2012)
48. Stolterman, E., & Wiberg, M.: Concept-Driven Interaction Design Research. Human–Computer Interaction, 25(2),
95-118 (2010)
49. Thrower, P. A.: Writing a Scientific Paper: I. Titles and Abstracts. Carbon, 45(11), 2143-2144 (2007)
50. SIGCHI Conference on Human Factors in Computing Systems: CHI 2012, It's the Experience. Austin, Texas, United States (2012) http://chi2012.acm.org/
51. SIGCHI Conference on Human Factors in Computing Systems: CHI 2013, Changing Perspectives. Paris, France (2013) http://chi2013.acm.org/program/best-of-chi/
52. SIGCHI Conference on Human Factors in Computing Systems: CHI 2014, One of a CHInd. Toronto, Canada (2014) http://chi2014.acm.org/authors/contribution-types
53. SIGCHI Conference on Human Factors in Computing Systems: CHI 2014, One of a CHInd. Toronto, Canada (2014) http://chi2014.acm.org/program/best-of-chi
54. Turnhout, K. van, Bennis, A., Craenmehr, S., Holwerda, R., Jacobs, M., Niels, R., Zaad, L., Hoppenbrouwers, S.,
Lenior, D., & Bakker, R.: Design Patterns for Mixed-Method Research in HCI. In: Proceedings of the Nordic
Conference on Human-Computer Interaction (NordiCHI '14), 361-370. ACM (2014)
55. Turnhout, K. van, Craenmehr, S., Holwerda, R., Menijn, M., Zwart, J.P., & Bakker, R.: Tradeoffs in Design
Research: Development Oriented Triangulation. In: Proceedings of the International BCS Human Computer
Interaction Conference (BCS-HCI '13). British Computer Society, 56 (2013)
56. Weinberger, C. J., Evans, J. A., Allesina, S.: Ten Simple (Empirical) Rules for Writing Science. PLoS Computational Biology, 11(4) (2015)
57. Whittaker, S., Terveen, L., Nardi, B. A.: Let's Stop Pushing the Envelope and Start Addressing it: A Reference
Task Agenda for HCI. Human–Computer Interaction, 15(2-3), 75-106. (2000)
58. Wiberg, M., & Stolterman, E.: What Makes a Prototype Novel?: A Knowledge Contribution Concern for
Interaction Design Research. In: Proceedings of the Nordic Conference on Human-Computer Interaction (NordiCHI
'14), 531-540. ACM (2014)
59. Wolf, T.V., Rode, J.A., Sussman, J., & Kellogg, W.A.: Dispelling Design as the Black Art of CHI. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '06), 521-530. ACM (2006)
60. Woolrych, A., Hornbæk, K., Frøkjær, E., & Cockton, G.: Ingredients and Meals rather than Recipes: A Proposal for
Research that does Not Treat Usability Evaluation Methods as Indivisible Wholes. International Journal of Human-
Computer Interaction 27(10), 940-970 (2011)
61. Wulff, W., & Mahling, D. E.: An Assessment of HCI: Issues and Implications. ACM SIGCHI Bulletin, 22(1), 80-
87 (1990)
62. Zimmerman, J., Forlizzi, J., & Evenson, S.: Research through design as a method for interaction design research in
HCI. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '07), 493-502.
ACM (2007)